Medicine, AI and what it means to be human

Taureef Mohammed

STORIES MAKE up the day-to-day work of a doctor. To make a diagnosis, we need a story. Even radiologists – doctors who spend their days interpreting computer-generated images of the human body – need a story to go with the pictures. Pathologists, doctors who examine the dead at macroscopic and microscopic levels to trace back the cause of death, tell stories that may seem straight out of a thriller. Geriatricians listen to stories from the best storytellers, older people.

So, it is not surprising that books, TV shows and movies about the medical profession are widespread. Over the last three years, in particular, stories from the front lines of the covid19 pandemic told by healthcare workers and patients have dominated many spaces.

On the academic side, narrative medicine – a field that harnesses these stories to improve healthcare for everyone involved – has also gained popularity.

In making her case for narrative medicine, Rita Charon, physician and literary scholar at Columbia University, wrote in a 2001 article: “The effective practice of medicine requires narrative competence, that is, the ability to acknowledge, absorb, interpret, and act on the stories and plights of others.”

In medical schools that incorporate narrative medicine into their curriculums, it is not unusual for medical students to go on a trip to the museum or theatre to hone these skills.

When I was a medical student, I remember being moved by a surgeon who had a Paul Keens-Douglas way about him. “In medicine we get to see and hear things that nobody else gets to see or hear,” he said.

As a student, I had also developed an interest in books by physician and surgeon writers. I had enjoyed Every Patient Tells a Story by Lisa Sanders – her New York Times columns had inspired the creators of the TV show House MD – and Complications by Atul Gawande. These stories from the bedside showed how medicine was as much an art as it was a science.

The bedside – that place where a doctor spoke to a patient, got the story, performed an examination, asked probing questions, answered questions, made a diagnosis, broke bad news, observed reactions, covered up frustration and exhaustion, taught students – was a place where our humanity was laid bare. And a physician or surgeon who was able to bring out this humanity through his or her use of language was one whom patients and students invariably admired.

Technology, though, has been creeping in at the bedside, and it has, inevitably, complicated the doctor-patient relationship, as it has done to all other relationships.

Technological advancements over the last century in the areas of diagnostics, therapeutics and hospital administration have undeniably led to better healthcare delivery. Artificial intelligence (AI) has been a cornerstone of some of these advancements. But in 2022, with the launch of ChatGPT – an AI chatbot that provided impressive responses to questions, mimicking natural human conversation – it seemed that AI had reached another level, and its implications for everyday life have been a talking point throughout society.

Is there a place for a chatbot at the bedside? And if there is, would it take away from the art of medicine? Would conversations at the bedside suffer, just as our everyday conversations have suffered because of smartphones? Could complicated bedside conversations even be replicated by machines?

Two weeks ago, the New England Journal of Medicine launched a series, “AI in Medicine.” The first articles in the series described the potential of AI in healthcare and its limits. One of the articles noted that GPT-4 – which I guess is a relative of ChatGPT – stumbled when posed with a scenario that had no right or wrong answer. It struggled with ambiguity.

Where AI fails, we find what it means to be human.

Lewis Thomas, the late American physician and writer, whose clear and lyrical essays described the steep ascent of 20th-century medicine, wrote in his book, The Fragile Species: “I can think of nothing to match…human language. When we speak to each other…it contains the two most characteristic and accommodating of all human traits, ambiguity and amiability.”

It has been almost 30 years since Lewis Thomas’s death and, it seems, human language still cannot be matched.

What does this mean for us in healthcare? That until we start treating robots in the hospital, the bedside will always be a human experience, a place where ambiguity is explored through human language. A place where stories – and diagnoses – are made.

Taureef Mohammed is a graduate of UWI and a geriatric medicine fellow at Western University, Canada
