
Leadership & Organisations

Unravelling the Deep Tensions of Human-AI Collaboration

What appears like AI hesitance is in fact a profound process of adaptation as the technology reshapes the role of human judgement.

Whenever new systems alter how information flows, they inevitably alter how people understand their responsibilities, their judgement and their place within the organisation. This is evident in how AI is reshaping the organisational and psychological foundations of work – an insight long emphasised in organisational sociology.

We saw this dynamic unfold repeatedly in clinical and diagnostic settings in a study of healthcare professionals across Danish hospitals using AI in clinical imaging. When an AI system becomes part of professional judgement – whether as a second reader, a triage assistant or a risk classifier – clinicians do not simply adopt or reject the technology. Instead, they reorganise their work: pausing at unexpected moments, re-sequencing steps, double-checking outputs or temporarily withholding acceptance until they can reconcile the AI model’s recommendation with what they see. 

Often misinterpreted as hesitation or resistance, these behaviours are in fact signs of a profound and intelligent process of adaptation. 

The deeper shifts beneath the surface

Before any visible change in workflows appears, the introduction of AI surfaces four deep forces that reshape how expert judgement is exercised. 

The first force is identity. Clinical professions derive part of their meaning from authorship, from interpreting an image, forming a diagnosis or making a call that matters to a patient. When AI begins to offer its own interpretation, clinicians’ identity becomes unsettled – not because they are territorial, but because expertise has always been anchored in interpretive ownership. AI introduces a new actor into this space. 

The second force is responsibility, which, in medicine, is not merely a bureaucratic construct; it is ethical and relational. If an AI system contributes to or overrides part of a diagnostic judgement, clinicians must decide who is accountable. The answer cannot be vague or shared in the abstract. Tension arises when accountability is expected to stay with humans even though part of the reasoning has shifted to the machine.

The third is truth. When AI systems surface patterns or assign confidence scores that diverge from a clinician’s reading, the question is not only “who is correct?” but “how do we integrate two modes of seeing?” 

Finally, there is trust. This becomes a source of tension not because people are sceptical of AI, but because they must decide when to rely on a system whose internal reasoning they cannot fully access. Trust, in clinical work, is less about comfort with technology and more about understanding its behaviour well enough to make prudent, high-stakes decisions.

The tensions felt in practice

Together, identity, responsibility, truth and trust reshape the psychological terrain of clinical work, giving rise to observable tensions that clinicians experience when AI becomes part of diagnosis and care.

  1. Tension between trust and expertise: When an AI system flags a suspicious region a radiologist would normally dismiss or misses something an experienced clinician would have seen immediately, a negotiation begins. It is a negotiation not between a human and a machine, but between two sources of “expertise”. This tension is not about whether clinicians believe in technology; it is about how to honour their professional obligation to judge carefully, especially when two interpretations compete.
  2. Tension around responsibility: When a decision is produced jointly, responsibility becomes ambiguous. Clinicians often respond by reordering tasks: they make their own assessment first and consult the AI afterwards, rather than the other way around. They do this not because they distrust the model but because they need to preserve what neuro-cognitive psychologists call the sense of agency: the internal assurance that “I am still the one in the driver’s seat”. In medicine, as in many other fields, this sense of agency is essential to responsibility.
  3. Tension around objective prioritisation: AI systems are typically optimised for throughput, speed or statistical accuracy. Clinicians, by contrast, prioritise patient safety, contextual nuance, learning and fairness. These objectives do not naturally align. When AI accelerates work but clinicians need to slow down, question or widen the diagnostic frame, friction emerges in the system. What may look like inefficiency is often a deliberate safeguard.

These three tensions manifest in every clinical environment where AI touches judgement. In response, clinicians have learnt to collaborate with AI in different ways, adopting one of four distinct models.

Four models of human-AI collaboration

The first model, parallel expertise or “dual-track mode”, is extremely common in radiology: clinicians read the scan while AI produces an independent interpretation. The outputs coexist but do not directly interact. This model allows clinicians to retain authorship while managing identity and truth tensions, making it a safe starting point for integrating AI into practice.

The second model is forwarded expertise, or “AI-as-decider mode.” This applies to contexts such as triage tools, automated prioritisation systems or structured decision-support protocols. Here, AI produces the operative decision and the human’s role is to relay or enact it. This is often a rational workflow choice but can generate significant responsibility tension if clinicians are accountable for decisions that they do not fully shape. 

A third model is augmented expertise, or “amplified judgement”. In this mode, clinicians remain in charge of the decision but use AI to widen perspectives or avoid diagnostic blind spots. The machine may highlight areas of interest or provide probability scores that prompt deeper scrutiny. This mode preserves human agency while leveraging AI’s perceptual capacity, thus reducing the three forms of tension rather than amplifying them.

Finally, there is collective expertise, or “co-created judgement.” Here, the clinician and AI contribute different pieces of insight that neither could produce alone. In risk stratification or complex ICU decision support, AI may spot subtle statistical or temporal patterns while clinicians integrate patient history, symptoms and lived context. This shared judgement model is the most demanding but also the most potent form of human-AI collaboration.

These four models do not represent stages of maturity. They are the system’s natural adaptations to the deeper forces of identity, responsibility, truth and trust – and the tensions clinicians experience when AI becomes part of the workflow.

A more confident way forward

AI may feel disruptive, fast and opaque, especially in clinical settings where the stakes are high. But clinicians already know how to use AI responsibly. They pause when they should pause, question what needs questioning, protect the integrity of their role, and adapt in ways that preserve safety and quality.

The real opportunity for leaders – whether in clinical or organisational environments – is not AI adoption but design: redesigning workflows, accountability structures and decision rights in ways that allow clinicians to work with AI confidently, safely and professionally.

In the clinical settings we observed, the organisations that succeeded did not mandate compliance. They created, more or less formally, space for clinicians to test the system, compare human and machine interpretations, articulate disagreements and raise uncertainties. In one radiology department, AI was introduced as a second reader while clinicians were encouraged to annotate differences and reflect collectively on patterns of disagreement. 

As trust and understanding grew, so did responsible use – not because of forced adoption, but because clinicians on the ground maintained authorship and preserved a sense of agency while learning how the system behaved.

Designing work for the AI era means beginning with “work as done” instead of “work as imagined”. If leaders focus on resolving the deep forces of identity, responsibility, truth and trust, the visible tensions diminish and a new, more capable form of clinical judgement becomes possible. Agency can be protected by defining when and how clinicians may override the model. This calls for aligning responsibility with actual decision-making power, and for performance evaluations that reward the quality of judgement – including prudent overrides – rather than output.

When leaders build environments that support these conditions, AI does not erode expertise. It strengthens it.

Edited by:

Geraldine Ee


About the series

AI: Disruption and Adaptation
Summary
Delve deeper into how artificial intelligence is disrupting and enhancing sectors – including business consulting, education and the media – and learn more about the associated regulatory and ethical issues.