The physician's aura is not vanishing. It is moving.

A response to John Lantos’s JAMA Perspective on AI and medicine.

Editorial — April 2026

Claudio S. Cinà is a vascular surgeon and the author of Medical Ethics: The Surgeon's Perspective (Springer, 2025).

In a Perspective published in JAMA on 2 March 2026, the bioethicist John Lantos argues that medicine is approaching a Benjamin-esque moment: as artificial intelligence absorbs more of what has long been considered the physician's distinctive work (diagnosis, image interpretation, triage, counselling, even peer review), the professional “aura” that surrounded the doctor is quietly draining away. Borrowing from Walter Benjamin’s 1936 essay on mechanical reproduction, Lantos suggests that when clinical competencies become reproducible at scale, physicians will be recast as supervisors of semi-autonomous systems: still accountable, but with diminished singular authority.(1)

Lantos is careful to note that AI did not begin this shift. The clinical gaze, anaesthesia’s silent body, evidence-based guidelines, and templated electronic health records (EHRs) had already taught medicine to think in machine-compatible forms long before the first large language model entered a hospital. AI, on his reading, does not initiate the transformation; it perfects it. What is new, he argues, is that AI is both interactive and widely available outside institutional walls, and so it simultaneously accelerates de-skilling and simulates warmth and empathy at scale.

Lantos is right about the direction of travel. He may be wrong about where it leads.

1. Efficiency technologies can increase presence, not replace it

If “aura” is partly the felt experience of attention — of a clinician who is with the patient rather than next to a screen — then the near-term effect of well-governed AI may be the opposite of aura-loss. It may restore the conditions under which presence becomes possible at all.

Documentation burden is the best-evidenced driver of eroded attention in the consulting room. In a time-and-motion study across four specialties, Sinsky and colleagues reported that for every hour of direct clinical face time, ambulatory physicians spent nearly two additional hours on EHR and desk work during the clinic day, with a further one to two hours after hours.(2) Against that baseline, the first rigorous randomised evidence on ambient AI scribes is now in. In a three-arm pragmatic trial of 238 outpatient physicians across fourteen specialties at UCLA Health, Lukac and colleagues found that ambient scribe use reduced time-in-note and improved measures of burnout, task load and work exhaustion, without evidence of degraded note quality.(3)

Those are modest numbers in absolute terms, but the direction matters philosophically. Presence is not merely a virtue; it is a resource with operational prerequisites. If AI reduces cognitive switching and returns the clinician’s gaze to the patient, it increases the very human signal that patients interpret as “this doctor is with me”. The dominant class of clinical AI over the next three to seven years will not be the consumer-facing, independent-reasoning system Lantos invites us to fear. It will be workflow AI (documentation, inbox triage, order-drafting, patient instructions), because that is where the return on investment is immediate and the authority claim is modest. Workflow AI is more likely to re-humanise the encounter than to supplant it, provided privacy, consent and accuracy safeguards keep pace.

2. What becomes reproducible is not the same as what becomes trusted

Lantos suggests that as “caring” becomes reproducible, the physician’s aura diminishes. But in medicine the scarce commodity has never been fluent language or even pattern recognition. It is warranted trust under uncertainty.

Even the most optimistic reviews of large language models in medicine converge on the same structural hazards: hallucination, brittleness under distribution shift, biased outputs from biased data, and difficulty ensuring clinically safe behaviour across contexts and languages. These are not implementation bugs awaiting the next model release; they are features of probabilistic systems interacting with open-ended clinical reality.

Trust, in turn, is anchored by accountability and moral agency. Someone must take responsibility when tradeoffs collide: beneficence against autonomy, risk against dignity, guideline against person. This is precisely why modern regulation continues to insist on human oversight and lifecycle governance for higher-risk medical AI. In the European Union, medical-purpose AI is treated as high-risk under the AI Act, with requirements for risk management, data governance, transparency and human oversight; the U.S. Food and Drug Administration has explicitly moved toward lifecycle management, including predetermined change control plans for adaptive AI-enabled device functions. Adaptive systems require ongoing governance rather than one-time approval, and that governance requires clinicians in the loop.

The centre of gravity therefore shifts. The physician’s authority becomes less about being the sole source of knowledge, and more about being the warrantor of a decision process — how evidence was selected, how values were weighed, how uncertainty was communicated, and how responsibility is held when something goes wrong. The aura that survives will attach to clinicians and teams who can audit AI outputs, explain uncertainty clearly, and demonstrate ethical steadiness when the algorithm’s recommendation conflicts with the person in front of them.

3. Benjamin can be read differently: the original becomes the relationship, not the output

Benjamin’s argument is not simply that aura disappears. It is that society stops mistaking fidelity of reproduction for the whole meaning of the work. Applied to medicine, the clinical output — a differential, a risk score, a discharge letter — can indeed be replicated. But the relationship in which that output is received, interpreted and acted upon is not a commodity in the same way.

An AI can draft an eloquent explanation of anticoagulation. It cannot genuinely participate in the patient’s social world: who will care for them if they bleed, what they fear losing, what tradeoffs they accept, how earlier harms shape their consent, how family dynamics govern adherence. These are not soft extras bolted onto the clinical encounter. They are the causal drivers of outcomes.

Here is the inversion that Lantos’s framing risks missing. In an AI-saturated environment, the distinctly human work becomes more visible, not less, because everything else becomes cheap. When explanations are abundant, what patients pay attention to is whether the clinician can situate an explanation inside their life, and whether that clinician will still be there when the plan fails.

4. A new definition of excellence is available now, and it is teachable

Lantos correctly predicts controversy about standards. The constructive response is to define excellence in a way that incorporates AI without surrendering the moral core of the profession:

• Epistemic excellence: the ability to validate AI claims against primary evidence and clinical context.

• Relational excellence: sustained attention, narrative competence, clarity under emotion.

• Ethical excellence: transparent reasoning about tradeoffs; respect for autonomy; active resistance to automation bias.

• Systems excellence: designing workflows in which AI reduces burden without erasing responsibility.

These are competencies. They can be taught, assessed, and credentialed. The physician’s future role is not supervisor-of-machines in any diminished sense; it is steward of a sociotechnical covenant — the alignment of tools, institutions and values so that care remains both effective and humane.

5. The real threat to the physician’s aura is not AI. It is misaligned incentives.

If AI is deployed primarily to increase throughput, intensify coding and compress visits further, aura will indeed wither — and deservedly so. But that is a governance choice, not a technological destiny. Regulatory frameworks already push toward transparency, oversight and risk management. What remains is aligning payment and institutional metrics so that time reclaimed from documentation becomes patient time, not simply additional tasks.

The decisive battleground is organisational rather than technological. Health systems that treat AI as a tool to expand relational bandwidth — longer eye contact, clearer counselling, better continuity — are likely to see rising patient trust and improved clinician retention. Systems that treat AI as a tool for volume and surveillance will provoke backlash, moral injury, and a measurable erosion of trust.

Bottom line

Lantos diagnoses a real risk: when clinical competencies become reproducible, professional identity must change. But the best evidence available today supports a more hopeful counter-claim. Properly governed AI is more likely to shift the physician’s aura — from knowledge monopoly toward accountable presence, ethical judgment and trustworthy stewardship — than to dissolve it. It may even restore the time and attention that modern systems had already stripped away, long before the first algorithm entered the clinic.

The question Lantos leaves open, and the one I think we must now answer, is not whether the aura survives. It is what, in an AI-saturated medicine, we now choose to make of it.


References

1.  Lantos JD. The Lost Aura of the Physician in the Age of Artificial Intelligence. JAMA. Published online 2 March 2026. doi:10.1001/jama.2026.0946

2.  Sinsky C, Colligan L, Li L, et al. Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties. Ann Intern Med. 2016;165(11):753–760. doi:10.7326/M16-0961

3.  Lukac PJ, Turner W, Vangala S, et al. Ambient AI Scribes in Clinical Practice: A Randomized Trial. NEJM AI. 2025. doi:10.1056/AIoa2501000
