Learn more about the research and work of Amii Fellow and Canada CIFAR AI Chair Ross Mitchell. Ross is a professor in the Faculty of Medicine & Dentistry and an adjunct professor in Computing Science at the University of Alberta. He is also a Chair in Artificial Intelligence in Health at Alberta Health Services, as well as their Senior Program Director, Artificial Intelligence Adoption.
Check out his conversation with machine learning scientist Jubair Sheikh. They'll talk about Ross' work on medical imaging, using large language models to make better use of patient data, and the massive impact AI will have on our medical system.
[This transcript has been edited for length and clarity. To see the expanded conversation, watch the video above.]
Jubair Sheikh:
Thanks, Ross, for joining us here today at Amii. We are very proud to have you as one of our Fellows, and I'm especially honoured to interview you here.
Ross Mitchell:
Thank you.
Jubair Sheikh:
What got you interested in AI?
Ross Mitchell:
It really started when I began my PhD studies.
So my background is in medical imaging — specifically, my PhD is in medical biophysics, and I got it at Western University in Ontario, Canada.
I was in an imaging lab where we were studying, for example, how to use radiation to treat cancer and how to program MRI scanners to pull new information out of the body.
And I got interested not so much in that, but in what we do with the images once the information is collected: how do we extract information from them that will affect care? That very quickly led me to machine learning.
So my PhD ended up being about pulling information out of medical images to improve care for patients.
Jubair Sheikh:
Can you give me an elevator pitch of your current research program?
Ross Mitchell:
In my role at the University of Alberta and AHS, we're interested in pretty much any type of AI or data science that can improve outcomes or reduce the cost of care. So it's very broad. We are looking at things like electronic medical records, medical text, medical images, and combinations of all of those.
And we're applying them to things like Alzheimer's, cancer, inflammatory bowel disease, and more.
Jubair Sheikh:
Which of these areas interests you the most?
Ross Mitchell:
You know, a few years ago I would have said I was really interested in medical imaging, and then I got really interested in medical text and the use of large language models to extract information.
But now it's a multimodal world. The models that we're interested in, you train on everything, because the information is complex and highly correlated. You can't just look at an image, you can't just look at the radiology report, and you can't just look at what was done to the patient and what drugs they were treated with. You need to look at all of it.
So that's really where we're heading in the future. We have projects in each area, but the goal over the next few years is to develop models that cross-train on everything.
Jubair Sheikh:
With ChatGPT, we're seeing lots of applications emerge. So, how do you see large language models being applied in medical research and in medicine?
Ross Mitchell:
Sure, so [it's] massive. It's going to have a real effect.
So I'll give you a practical example. I'm working with emergency physicians at Alberta Health Services to automate the process of doing what's called a chart review.
In medicine, a chart review happens when a physician or a researcher is interested in extracting information from the medical charts of a large number of patients. It has traditionally been a highly manual process: you hire an expert, like a nurse or a research fellow, and they read through all of these charts and jot down information, for example in an Excel spreadsheet: "Is this mentioned in the chart, yes or no?"
A large chart review can take months to a year. And these physicians were interested in using AI to look at the utilization of a particular type of imaging exam used to diagnose what's called a pulmonary embolism, which is a clot in the lung.
It can be very serious. It needs to be detected and treated immediately. It's like a heart attack, but for the lung — a very serious condition. The test for it, a CT scan of the lungs, is very effective, but it's expensive, so it's worth optimizing its use.
So this is what the project was. It's not to predict who has one or not; it's retrospective: are we ordering too few, too many, or just the right number of tests? So it's about quality.
They have 10,000 of these reports. And so they asked: do you think we could use some AI techniques to extract the information — did the radiologist say yes, no, or indeterminate in the report?
And so we did the training, we built the system, and then we compared it to what the doctors had labelled. In the end, we found that it was about 98.99% accurate at predicting what the radiologist said — at least as accurate as the medical students who provided the ground truth. But instead of taking months to read 10,000 reports, it took 18 minutes.
And in addition to getting very high accuracy, I also asked the model to justify or explain its rationale, so it also provides a paragraph explaining why it came up with its conclusion. That's a form of explainable AI, and it's extremely important in healthcare to explain how you arrive at your decisions.
It's also very useful in the prompt engineering process to see what the model was "thinking," if you will, when it came up with its answer.
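The workflow Ross describes — prompting a model to label each report as yes, no, or indeterminate, and to return a rationale paragraph — can be sketched roughly as follows. This is a minimal, hypothetical illustration: the model, the prompt wording, the JSON schema, and the `query_llm` function are all assumptions for the sketch, not details from the actual AHS project.

```python
import json

LABELS = {"yes", "no", "indeterminate"}

def build_prompt(report_text: str) -> str:
    """Ask the model for a structured verdict plus a rationale paragraph."""
    return (
        "You are reviewing a chest CT radiology report for pulmonary embolism.\n"
        'Reply with JSON: {"finding": "yes|no|indeterminate", "rationale": "..."}\n\n'
        f"Report:\n{report_text}"
    )

def parse_response(raw: str) -> tuple[str, str]:
    """Validate the model's JSON reply; fall back to 'indeterminate'."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return "indeterminate", "unparseable model output"
    if not isinstance(data, dict):
        return "indeterminate", "unparseable model output"
    finding = str(data.get("finding", "")).lower()
    rationale = str(data.get("rationale", ""))
    if finding not in LABELS:
        finding = "indeterminate"
    return finding, rationale

def review_charts(reports, query_llm):
    """Run the automated chart review over many reports."""
    results = []
    for report in reports:
        finding, rationale = parse_response(query_llm(build_prompt(report)))
        results.append({"report": report, "finding": finding, "rationale": rationale})
    return results

# A trivial keyword-based stub stands in for a real LLM call here,
# so the sketch runs without any external service:
def fake_llm(prompt: str) -> str:
    positive = "no evidence" not in prompt.lower()
    return json.dumps({
        "finding": "yes" if positive else "no",
        "rationale": "keyword match (stub)",
    })

results = review_charts(
    ["Acute pulmonary embolism in the right lower lobe.",
     "No evidence of pulmonary embolism."],
    fake_llm,
)
print([r["finding"] for r in results])  # ['yes', 'no']
```

Returning a constrained label alongside a free-text rationale is one common way to get both a machine-readable answer and the kind of explainability Ross mentions; the exact schema used in practice would depend on the project.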
So the reaction from my clinical colleagues was that this is going to radically change chart reviews. In the future, chart reviews will be done first by an AI system, and then specific subcases, special exceptions, will be reviewed by humans. Right now, it's done first by humans and takes months.
So we're interested now in building a chart bot — not a chatbot, but a chartbot — that will allow physicians without any special AI training or experience (they don't know how to program in Python) to use a tool to extract information from charts themselves.