Doctor visits augmented by artificial intelligence are nothing to fear, so long as the health industry comes to understand and accept the technology’s benefits and blind spots.
The algorithm will see you now
You’ve had splitting headaches for weeks and your doctor recommends a scan. Just to rule out, you know, anything nasty.
The imaging process is relatively quick and painless. But it’s not a person interpreting whether your brain appears normal or not. It’s a computer program.
Sarah Keenihan is an Adelaide-based freelance science and technology writer. She works with the Australian Institute for Machine Learning on a casual basis.
Algorithms look over the curves and spaces and structures inside your head. A report is printed. “No abnormalities,” it reads. You take the printout in your hand, unsure of how confident to feel in the diagnosis.
Relax – this is not yet our reality.
Though the field of artificial intelligence, or AI, is moving fast, we’re not yet at the point where the technology is commonplace in our hospitals.
But there are AI tools in development for interpreting X-rays and endoscopy images, for reviewing skin lumps and bumps, and for looking at pathology samples in laboratories.
And so it’s with some urgency that Luke Oakden-Rayner is working to make sure medical AI is safe and reliable before it comes anywhere near you or your loved ones.
Luke is a radiologist – an expert in the field of medicine concerned with imaging tests such as X-rays, ultrasounds and MRIs.
He is Director of Research, Medical Imaging at the Royal Adelaide Hospital, and he’s just finishing up his PhD in medical imaging and diagnostic AI through the Australian Institute for Machine Learning at the University of Adelaide.
“One of my major agendas is safety in medical AI,” Luke says.
“A large part of my research has involved looking at the gap between performance testing of technology, and whether it actually delivers reliable outcomes and is safe for patients.”
Luke is an invited expert at workshops and conferences all over the world, valued for his high-level knowledge at the intersection of medical care, AI and machine learning, research and health policy.
Alongside 47 other professionals from around the world, he is the only Australian author of 2020 guidelines for researchers looking to conduct clinical trials on AI. The clinical trial is the gold standard required to ensure a new diagnostic tool or treatment is effective for its intended purpose – and safe.
“So right now in the USA, there are about 100 medical AI tools that have theoretically been cleared for use, but not one has been tested under clinical trial conditions,” Luke says.
“Instead, testing involves comparing AI to human capability at a task, like cancer diagnosis on an X-ray. If the AI performs roughly as well as a human, it is standard practice to assume it will be as safe.”
The tricky thing about AI is it can do a good job at imitating human performance – most of the time.
“It’s tempting to make the assumption that when AI performs similarly to humans, it’s making decisions in the same way a person does,” Luke says.
“That’s absolutely not true – the way AI makes decisions is partially like what humans do, but not the same.”
It’s when rare issues show up on images that this becomes a problem.
“When we train as humans, we focus a lot of our effort on the rare, dangerous things – because of course we want to be able to identify and treat them early,” Luke says.
“AI systems don’t know the difference between dangerous and safe, so they tend to learn to recognise the things they see most often.
“This means they get good at identifying common findings, but can fail at rare or unusual cases, and unfortunately these often tend to be the most dangerous.”
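The trap Luke describes can be sketched with a toy example (mine, not from the article, with made-up numbers): a system that simply learns the most common finding scores impressively on overall accuracy while missing every rare, dangerous case.

```python
# Toy illustration of class imbalance: 990 normal scans, 10 rare tumours.
cases = ["normal"] * 990 + ["rare_tumour"] * 10

def naive_model(case):
    # A model that has learned only what it sees most often:
    # it always answers "normal".
    return "normal"

predictions = [naive_model(c) for c in cases]

# Overall accuracy looks excellent...
accuracy = sum(p == c for p, c in zip(predictions, cases)) / len(cases)

# ...but not a single rare case is caught.
rare_caught = sum(
    p == "rare_tumour" for p, c in zip(predictions, cases) if c == "rare_tumour"
)

print(f"accuracy: {accuracy:.0%}")              # 99%
print(f"rare cases caught: {rare_caught} of 10")  # 0 of 10
```

This is why comparing an AI's headline accuracy to a human's, as current testing practice does, can hide exactly the failures that matter most.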
Luke is a proponent of humans and technology working together to improve healthcare.
“Yes, technology will continue to improve, and AI has a huge potential to improve medical practice,” Luke says, “but equally we need to put a lot of effort into working out where problems are going to occur, and then keep humans in the loop to avoid issues happening.”
A few years ago, Luke received emails every day from medical students asking whether they should bother doing specialist training in radiology.
“There was this idea around that AI and machine learning meant radiology was a profession on the way to extinction,” he says.
“Now talking to students, they really want to do radiology because it’s seen as a really exciting part of medicine, where technology is making huge inroads. They are excited about working with AI.”
Maybe machines aren’t going to take away medical jobs – they’re just going to make healthcare different. Hopefully for the better.