
Building Trust in Medical AI Models

Published May 12, 2022

How likely are people to trust a diagnosis from a digital doctor? That’s one of the pressing questions being asked by Russ Greiner, Fellow and Canada CIFAR AI Chair at Amii, as artificial intelligence becomes increasingly integral to healthcare.

Technology has reached a point where machines can outperform human physicians at certain specific tasks. Still, research shows that patients and clinicians hesitate to trust medical AI systems. One reason for that lack of trust, according to Greiner, is that people often aren’t comfortable receiving a diagnosis on its own. Instead, we naturally want to know how a doctor reached their conclusion.

“When we’re talking to a doctor, they give us a story, an explanation: what they think and why,” Greiner says.

“And we’re much less trusting of computers not explaining things than we are of people.”

The first part of this series examined some of the specific technical hurdles that come with integrating artificial intelligence into healthcare, based on recent work published by Greiner and many colleagues in the Canadian Medical Association Journal. This second article looks at a different set of challenges in accepting medical AI: trust and explainability.

“People assume the world is simple, with just simple combinations of a few features. If it were, we wouldn’t need complex machine learning tools."

Russ Greiner

Explainable AI

An explainable AI is an artificial intelligence that is transparent in how it came to a decision. It means that people can see the machine’s calculations and, just as importantly, understand why they were made. An explainable AI model that predicts a patient’s risk of heart disease, for example, might show exactly how much weight it gives to a patient’s blood pressure, family history and lifestyle habits. Greiner says explainability is an incredibly important concept in artificial intelligence. When used correctly, it can help us be sure an AI model reaches the right conclusions for the right reasons.
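As a loose illustration of what that kind of transparency can look like (this sketch is not from the article; the feature names and data are invented), a simple model such as a logistic regression exposes its reasoning directly through the weights it learns for each feature:

```python
# Minimal sketch: a logistic regression whose learned weights show how much
# it leans on each (hypothetical) feature when estimating heart-disease risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["blood_pressure", "family_history", "lifestyle_score"]

# Synthetic patient data: 200 patients, 3 standardized features (invented for illustration).
X = rng.normal(size=(200, 3))
# Synthetic labels loosely driven by the first two features.
y = (0.9 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a model this simple, the learned coefficients are the explanation:
# the weight the model places on each feature.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")
```

Deep, highly accurate models rarely offer anything this legible, which is exactly the tension the rest of the article explores.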

It seems, then, that the simple answer is to design medical AI to have perfect explainability. Not exactly, Greiner argues. Full transparency comes with its own problems.

For one, it is a much higher standard than we demand of human physicians. Medical professionals obviously use hard data and tests to inform their diagnoses, but experience and intuition are also essential parts of medicine. That’s why doctors and nurses complete residencies and internships: to gain that vital experience.

A psychiatrist might diagnose a patient with depression using criteria such as a flat affect, where someone doesn’t show emotions in expected ways, drawing on the hundreds of other cases they’ve seen throughout their career. A future AI model might make a similar diagnosis based on patterns it has identified after being trained on millions of other cases of depression. It can tell that specific patient features indicate depression, but it might not be able to describe precisely why those features matter.

“Apparently it is ok for human clinicians to use these intuitions because they are human. But, with the program, no, you have to tell me a story, you have to describe precisely why the readings were high,” he says.

“By demanding more explainability from the machine diagnostician than we do from a human physician, we might be missing out on the main advantages that machine learning provides.”

Like building better microscopes

Just because a system isn’t explainable doesn’t mean it is wrong. Greiner says that if we focus too much on explainability, we could lose effectiveness, leading to worse outcomes for patients.

Even when an AI medical model can explain how it came to a diagnosis, is that explanation always one that people can understand? And would that even be ideal? One of the main advantages of machine-learned models is that they may be able to do things that human beings can’t. While a person might weigh a handful of variables to make a decision, AI systems can consider hundreds or thousands of them.

“People assume the world is simple, with just simple combinations of a few features. If it were, we wouldn’t need complex machine learning tools. But the world is not that simple – some decisions may inherently involve complex combinations of dozens, hundreds or thousands of factors. I don’t know about you, but I can’t keep track of a thousand things in my head,” he says.

Greiner thinks that if AI is going to be integrated into healthcare successfully, it will require a balance between explainability and taking full advantage of the power that artificial intelligence provides.

“I think that there will eventually be … a slow, gradual acceptance that they can be like better microscopes, that they can give good advice to doctors,” Greiner says.

Striking that balance is no easy task. The final part of this series will look at the lessons learned in an attempt to integrate AI models to assist real-world physicians at St. Michael’s Hospital in Toronto.

Discover some of the other technical challenges in implementing artificial intelligence in healthcare, as well as some potential solutions, in the first part of this series – Overcoming challenges in healthcare AI: successfully deploying machine-learned models in medicine
