Whether it’s routine check-ups, emergency care or managing chronic illness, everyone relies on healthcare at some point in their lives. AI is no longer a future prospect in medicine; it’s already being used in hospitals to analyse medical images, assist with reading scans and support diagnostic decisions as part of routine clinical practice.
Our research highlights how AI systems can operate at scale, flagging urgent abnormalities and enabling clinicians to make faster and, in some cases, more accurate diagnoses. In a sector as large and complex as healthcare, their adoption raises urgent questions not only for medicine, but for organisational design, ethics and trust.
Transforming diagnostic practice
Radiology and pathology are uniquely suited to AI adoption due to their reliance on pattern recognition and image interpretation. In radiology, our research shows that AI trained on millions of scans can detect conditions such as tuberculosis, stroke, lung nodules and breast cancer at a level comparable to specialist clinicians. In pathology, AI systems can analyse digitised tissue slides to identify cancer cells, quantify biomarkers and prioritise cases requiring urgent review.
In practice, these tools don’t replace clinicians but function as decision-support systems. AI models typically analyse images immediately after they’re taken, identifying potential abnormalities for review. A radiologist or pathologist remains responsible for issuing the final diagnostic report. This human-AI collaboration can significantly reduce reporting times and administrative burden, enabling clinicians to focus on complex decision-making and patient care.
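By way of illustration, a minimal sketch of this flag-and-review pattern might look like the following (the model interface, threshold and field names here are hypothetical assumptions, not taken from any particular product). The AI step only prioritises the worklist; the report remains with the clinician.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Study:
    study_id: str
    image: object            # placeholder for pixel data
    urgent: bool = False     # set by the AI triage step
    findings: List[str] = field(default_factory=list)

def ai_triage(study: Study, model, threshold: float = 0.7) -> Study:
    """Hypothetical triage step: the model only flags and prioritises;
    it never issues a diagnosis."""
    score, labels = model.predict(study.image)   # assumed model interface
    if score >= threshold:
        study.urgent = True
        study.findings = labels                  # e.g. ["suspected large-vessel occlusion"]
    return study

def build_worklist(studies: List[Study], model) -> List[Study]:
    """Urgent cases move to the top of the radiologist's queue;
    the final report is still written and signed by a clinician."""
    triaged = [ai_triage(s, model) for s in studies]
    return sorted(triaged, key=lambda s: not s.urgent)
```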
The impact is measurable. Our research shows that AI-assisted workflows have reduced stroke treatment delays from hours to minutes, accelerated tuberculosis detection in low-resource settings – such as high-density prison populations – and improved early cancer diagnosis. In health systems facing workforce shortages and rising demand, these efficiency gains aren’t merely convenient but are increasingly necessary.
Beyond efficiency: organisational change
Despite its technical capabilities, AI’s success in healthcare depends as much on organisational design as on the models themselves. Introducing AI into a hospital isn’t simply a procurement decision; it requires redesigning workflows, redefining professional roles and building trust among clinicians.
Research highlights how intelligent organisations aren’t merely defined by their ability to automate tasks, but by how effectively they combine the complementary strengths of humans and technology. AI excels at processing vast volumes of data quickly and consistently, while clinicians contribute contextual understanding, ethical judgment and accountability. When AI is positioned as an assistive tool, it can augment diagnostic accuracy and free professionals from repetitive tasks. When poorly integrated, it risks creating friction, mistrust and overreliance.
Clinician acceptance is therefore critical to successful adoption. Concerns about job displacement, de-skilling and opaque decision-making remain significant barriers. Even highly accurate tools may go underused if they are difficult to interpret, poorly integrated into existing systems or lacking endorsement from senior clinicians. Effective leadership by individuals trained in “AI literacy”, combined with transparent communication, is essential to bridge the gap between AI adoption and sustained use in clinical workflows.
Where AI falls short
While AI has demonstrated impressive performance under controlled conditions, real-world deployment exposes important limitations. One key risk is automation bias – the tendency for humans to over-trust algorithmic outputs. Incorrect AI suggestions can increase diagnostic errors, even among experienced clinicians, particularly when errors aren’t clearly signposted.
In pathology, AI systems can be vulnerable to factors such as tissue contamination. Unlike human experts, who can recognise and disregard irrelevant material, AI models may misinterpret contaminants as clinically significant features.
What’s more, many models are trained on historical medical data. When those datasets reflect long-standing inequalities in research and clinical practice, these disparities risk being reproduced at scale. Women, people from ethnic minority backgrounds and younger patients have historically been underrepresented in clinical trials, and patterns of bias in care have further shaped the data available for training AI systems. In practice, this can translate into underdiagnosis or delayed treatment for already underserved populations, amplifying existing health inequities.
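Such disparities are usually surfaced by checking performance separately for each patient group rather than relying on a single headline accuracy figure. The sketch below is a minimal, hypothetical example (the column names and toy numbers are illustrative, not drawn from any real evaluation) of computing sensitivity per group:

```python
import pandas as pd

def sensitivity_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Sensitivity (true-positive rate) per group: among patients who truly
    have the condition, what fraction does the model flag?"""
    positives = df[df["y_true"] == 1]
    return positives.groupby(group_col)["y_pred"].mean()

# Hypothetical held-out test set: one row per patient.
results = pd.DataFrame({
    "y_true": [1, 1, 1, 1, 1, 1, 0, 0],
    "y_pred": [1, 1, 1, 0, 0, 1, 0, 1],
    "sex":    ["M", "M", "M", "F", "F", "F", "M", "F"],
})

print(sensitivity_by_group(results, "sex"))
# If sensitivity is noticeably lower for one group, the model may be
# reproducing gaps in its training data rather than failing at random.
```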
Ethical and legal accountability
The integration of AI into diagnostics complicates traditional notions of responsibility. In medicine, clinicians are morally and legally accountable for patient outcomes. When AI contributes to clinical decisions, accountability becomes distributed across clinicians, developers, institutions and regulators.
Opacity in AI systems, often described as the “black box” problem, further complicates accountability. If clinicians can’t understand how an algorithm reached a conclusion, their ability to critically evaluate its output is diminished. This concern has influenced regulatory approaches. For example, the United States Food and Drug Administration has historically approved only so-called “locked” AI models that do not change after deployment, prioritising predictability over adaptability. While this stability supports safety and accountability, it also limits learning. Locked systems can’t adapt to new data or correct for emerging biases in real-world use.
The challenge for healthcare leaders is to balance the need for reliable, auditable systems with the potential benefits of carefully governed adaptive AI. Globally, regulatory strategies vary. The European Union’s AI Act introduces risk-based obligations for healthcare AI, while the United Kingdom has adopted a more flexible, innovation-friendly approach through regulatory sandboxes, which let firms test products or services that challenge existing legal frameworks under supervision. These frameworks reflect an ongoing effort to balance patient safety, innovation and ethical responsibility. However, legal systems continue to lag behind technological progress, particularly in addressing harms caused by bias or systemic design flaws.
The path forward
AI has already begun to reshape the landscape of healthcare diagnostics, and avoiding it due to fear of error or liability risks forfeiting substantial benefits. The solution lies in responsible implementation: rigorous validation on diverse populations, continuous monitoring after deployment, clear accountability frameworks, transparency with clinicians and patients, and sustained human oversight.
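As a rough sketch of what continuous monitoring could look like in practice (the thresholds and window size below are illustrative assumptions, not recommended values), a deployed system might compare the AI’s flags against clinicians’ final reports and trigger a review when sensitivity drifts below its validated baseline:

```python
from collections import deque

class PerformanceMonitor:
    """Minimal post-deployment monitor: tracks agreement between AI flags and
    the clinician's final report over a sliding window, and raises an alert
    when sensitivity drops below an agreed floor (values illustrative)."""

    def __init__(self, baseline_sensitivity=0.90, tolerance=0.05, window=500):
        self.floor = baseline_sensitivity - tolerance
        self.cases = deque(maxlen=window)   # (ai_flagged, clinician_confirmed)

    def record(self, ai_flagged: bool, clinician_confirmed: bool) -> None:
        self.cases.append((ai_flagged, clinician_confirmed))

    def sensitivity(self):
        confirmed = [ai for ai, truth in self.cases if truth]
        return sum(confirmed) / len(confirmed) if confirmed else None

    def needs_review(self) -> bool:
        s = self.sensitivity()
        return s is not None and s < self.floor
```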
Ultimately, AI should be understood not as a replacement for clinical expertise, but as an intelligent partner. When embedded thoughtfully within organisational structures and guided by ethical principles, it can help reduce diagnostic error and support clinicians in their work. Governed by adaptive regulation, it also has the potential to extend specialist care to underserved populations and support more sustainable healthcare systems.
The challenge for future healthcare leaders isn’t simply to adopt AI, but to design systems where humans and machines work together – safely, ethically and effectively – in the service of patient care.