Dr Muiris Houston: An Untrammelled AI Could Lead to a Significant Risk to Patients' Health
Artificial Intelligence (AI) is everywhere, and medicine is no exception. For example, it has been shown to help radiologists more accurately interpret X-rays and scans. And there are potential applications for artificial neural networks in research, information transfer and retrieval, as well as in clinical decision systems.
But does it have a role in helping patients get more out of healthcare?
The jury is still very much out on this, although one area that is showing promise is the use of AI as a medical scribe.
Ambient artificial intelligence scribe programs use a microphone to listen to conversations between clinicians and patients. They then summarise the visit into a structured medical note that the doctor can share with other health professionals and that becomes part of the patient's medical file. One big disadvantage is that an AI scribe tool can't pick up everything that goes on during the appointment, such as how a patient appears or acts. So if someone is upset and starts crying, this does not automatically appear in the generated medical notes, although the doctor can add it to the record later.
"I can spend quality time" with patients and "be a lot more empathic," she says.
"I believe that this technology has cracked a nut which has eluded everyone working in healthcare up until now - that of fast, accurate, efficient, secure and cost-effective documentation of medical encounters," he says. "It has transformed our practice since we started using it two months ago."
"I see AI in health becoming part of the standard way that we take care of patients, and I think that it won't be long in the future where patients themselves are wanting to receive care that's AI informed," he told the Canadian Broadcasting Corporation.
I was glad to see him strike a note of caution, however: "I don't think that patients or clinicians are ready to take humans out of the loop or let AI make critical decisions or be, you know, driving those decisions without human oversight."
"Humans want human judgments ... AI can be an assistant, but diagnosis or rationing decisions should not be left solely to AI."
I remain sceptical of AI's supposed future ability to empathise with patients. For those of us who practise narrative-based medicine (NBM), a key question is: Will AI have the capacity to listen closely to patients and, even more importantly, respond in a natural, empathetic way to the issues they raise?
Close listening and close reading are essential elements of NBM. By realigning consultations to place the patient more at the centre of clinical interactions, we value active listening and reflective summarising statements focusing on people's emotions.
I'm afraid I just cannot see AI being able to undertake these complex tasks. In fact, I reckon an untrammelled AI could lead to a significant risk to patients' health.
AI is not deus ex machina; it's just machina. And it's a machine I believe doctors and patients must engage with very carefully indeed.