
Making a generative AI program “hallucinate” is not difficult. Sometimes, all you need to do is ask it a question. While AI chatbots spouting nonsense can range from “charming” to “obnoxious” in everyday life, the stakes are much higher in medicine, where patients’ lives can hang in the balance. To keep medical AIs from hallucinating, developers might have to do something counterintuitive: limit what the programs can learn.
Simon Arkell, CEO and cofounder of Synthetica Bio, touched on this issue during Techonomy’s recent Accelerate Health: Pioneering Solutions for a Healthier Society conference. Arkell joined Wensheng Fan, CEO of Spectral AI, and Solomon Wilcots, sports health leader at Russo Partners, for a discussion called “AI’s Medical Odyssey: How Technology Is Rewriting the Healthcare Story.” Naturally, generative AI’s penchant for misinformation was a salient topic.
To start, Arkell gave a brief refresher on what an AI hallucination is:
“As you may have seen, this technology, if you ask it something about yourself or something you know a lot about, it’s going to come back very confidently and give you the answer,” he said. “And it may be wrong, but it’s going to act really confident, like it’s 100% right.” He then asked the audience to imagine what might happen if a medical AI recommended a “confident but wrong” course of treatment for a lung cancer patient.