Reclaiming care in the age of AI
October 15, 2025
Sixty years ago, the patient-doctor interaction was, at its best, about human beings connecting, engaging, listening, observing and caring.
Notes and letters were written by hand or typed afterwards, placed in an envelope, and sent by post. Later, notes were dictated into handheld tape recorders and typed up by medical secretaries, then by patchy automated dictation tools. Emails were introduced and patient data was entered into a multitude of poorly integrated systems.
These developments occurred alongside important movements for patient safety, an increasingly litigious environment, the industrialisation and commercialisation of most healthcare interactions and the proliferation of medical evidence. The leviathan that is clinical administrative work was born. Although its purpose is crucial, the sheer scale of this work has had a blunting effect on the human, caring part of healthcare.
The tension between human attention and bureaucracy has become one of the defining challenges of modern medicine. Some primary health-care workers in the US spend more than 60% of their time on non-patient-facing activities, and excessive administrative work is the most common reason clinicians in North America, Europe, and South America feel they cannot give top-quality care. The cognitive burden and relentless pace of this work have driven record burnout levels and eroded joy.
For patients, these distractions might result in poorer communication and less empathy from healthcare providers. For many years, technology in healthcare has promised to return time to clinicians, but often has done the opposite. The arrival of generative artificial intelligence, however, marks a possible turning point. From ambient artificial intelligence scribes that document everything in a consultation and generate a patient note and letter, to AI-enabled scheduling tools, clinical administrative work is being transformed.
Qualitative studies show AI-facilitated documentation reduces time spent on notes, eases healthcare workers' worries about holding details in mind until notes can be made, and enables more meaningful patient encounters. There is also evidence that it is associated with reductions in burnout and an increased sense of well-being. Seventy percent of clinicians surveyed globally in the Elsevier Clinician of the Future 2025 report think AI will save them time over the next two to three years. But between the many benefits AI might bring to administrative tasks and the importance of human-to-human interactions and oversight in medicine, there are a multitude of questions.
Not least, where is this all heading? Will the increasing number of disembodied digital and AI interventions mean healthcare professionals become obsolete? Not yet. Surveys of patients and clinicians indicate that as the clinical stakes rise, patient comfort with AI drops – transcribing a consultation is a low-risk, high-value intervention; an AI chatbot interacting with patients is not. In a survey of more than 2000 Americans, more than 65% reported low trust in healthcare systems to use AI responsibly, and 57% reported low trust that their healthcare system would make sure a healthcare AI would not harm them.
The global boom in telemedicine spurred by COVID-19 continues – in a survey of 1000 Chinese adults, 44% reported being very likely to seek it, and the UK National Health Service is introducing its first digital trust, where specialists will consult purely online. But such services still do not answer the question: who will deliver my baby? And even though some AI chatbots can provide empathetic responses to patient questions (one study judged that a chatbot was empathetic or very empathetic in 45% of responses to a range of health-related questions, compared with just 5% for physicians), the decision-making process during a consultation is complex.
One study of paediatric cardiologists indicated that only 3% of their decisions were based on a published study – rather, they were using their professional expertise and the individual factors of the patient in front of them to make challenging decisions. Large language models can perform well in medical licensing exams and in answering questions, but have persistent limitations in updating their judgments on the basis of new information amid uncertainty – a key skill of an expert human physician dealing with an individual patient.
Care of an individual begins and ends with a human being. AI can help that person be seen more clearly – free from the noise of paperwork, distraction and exhaustion. The next phase of progress in healthcare will depend less on technical capacity and more on ethical stewardship and the healthcare community’s ability to keep humans at the centre of design and deployment. If done properly, AI will not replace care; rather, it could help us rediscover it.
Republished from The Lancet, 11 October 2025
The views expressed in this article may or may not reflect those of Pearls and Irritations.