Even AI is not perfect.
Healthcare professionals are increasingly using artificial intelligence (AI) as part of their workflows. The technology can increase efficiency, but it can also increase the risk of errors.
In July, CNN reported that ELSA, the FDA's AI tool created to accelerate drug and medical device approvals, fabricated research studies in its citations. And in early August, The Verge reported that Google's healthcare AI model Med-Gemini referred to a nonexistent body part in a 2024 research paper.
Furthermore, a study from the Mount Sinai Health System in New York, released on August 2, found that AI chatbots are “extremely vulnerable” to attacks that promote false medical information.
Given the growing use of AI in healthcare, we asked executives across the industry how leaders should respond when errors occur and what safeguards can be put in place.
Hospitals
Matthew Decamp, an internal medicine physician and health services researcher at the University of Colorado's Center for Bioethics and Humanities, said there is “a lot of uncertainty” about who is responsible when AI errors occur in healthcare, since these questions have not been thoroughly tested in the legal system.
However, he said the industry could turn to existing protocols for sharing responsibility among a variety of stakeholders, including AI developers and end users.
“Even before AI was involved, we found ways to divide responsibility among the manufacturers of CT scanners, the health systems that purchase them, the radiologists who use them, and primary care physicians like me who might read the final report,” Decamp said. “I think that should give us some comfort, and we will probably take a similar approach.”
Pharmaceuticals
Thomas Fuchs, SVP and Chief AI Officer at drug company Eli Lilly, said that for healthcare providers using AI, the priority is “always the patient and maintaining confidence.”
He echoed Decamp's sentiment that a group effort would be needed to mitigate AI errors, adding that technology developers “need to design systems with rigorous verification, transparency and continuous surveillance.”
Meanwhile, organizations using the technology must set up processes to quickly “detect, evaluate, and respond to” errors, and adhere to guidelines like the National Institute of Standards and Technology's AI Risk Management Framework, which the federal agency designed to help organizations manage the risks associated with the use of AI.
Health Technology
Ethan Berke, Chief Medical Officer and SVP of Integrated Care at virtual care company Teladoc Health, said that just as healthcare organizations have systems in place to address other types of medical errors, they need to create systems to “track, classify, and investigate” safety threats that may arise from using AI. That data can then be used to prevent future errors.
Teladoc has more than 60 unique AI models, and the company has created a “strict” process to test and evaluate them for “accuracy, bias, and safety” before deployment.
“We are all responsible for ensuring that our AI solutions are reliable, ethical and safe,” he said. “By investing in patient safety and quality, we have built a program that can analyze potential safety events more quickly, address the root cause, and catch errors before they reach the patient.”