From building consensus on the definition of responsible artificial intelligence to building consensus on use cases for semi-autonomous medical AI, industry and government appear to be on a path toward shared values as we look to 2025, even in these polarized times. "It looks like we're moving toward that," said Brian Anderson, CEO of the Coalition for Health AI.
"Policymakers and regulators need to understand and embrace this kind of framework that is being developed in the private sector for what responsible AI looks like in health," Anderson told Healthcare IT News. "We need to develop a regulatory framework around this."
Aligned so far
The technology industry and federal health regulators have given much thought to AI model cards, or "AI nutrition labels": easy-to-understand summaries that communicate the important aspects of an AI model's development to its users.
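In practice, a model card is little more than structured metadata about a model. As a rough sketch in Python, assuming hypothetical field names and an invented example product (this is neither CHAI's nor ONC's actual schema), it might look like this:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Hypothetical model card: structured metadata summarizing how an
    AI model was built, how it performs and how it should be used."""
    model_name: str
    developer: str
    intended_use: str                      # the clinical task the model supports
    out_of_scope_uses: list[str]           # uses the developer warns against
    training_data_summary: str             # population, time period, data sources
    performance_metrics: dict[str, float]  # headline evaluation results
    known_limitations: list[str]           # biases, failure modes, caveats

# Invented example for illustration only
card = ModelCard(
    model_name="SepsisRisk-v2",
    developer="Example Health AI",
    intended_use="Early warning of sepsis risk for adult inpatients",
    out_of_scope_uses=["pediatric patients", "outpatient triage"],
    training_data_summary="De-identified EHR data, 2018-2023, four health systems",
    performance_metrics={"AUROC": 0.91, "sensitivity": 0.85},
    known_limitations=["not validated on pediatric populations"],
)
```

The "nutrition label" framing reflects that, like a food label, the card puts a fixed set of disclosures in a predictable place for the reader.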
On Thursday, as CHAI announced the open source version of its draft AI model card, we spoke with Anderson about the coalition's recent experience and what's coming next in the development of public-private frameworks for safely deploying healthcare AI.
“It's great to see that the direction of the private sector innovation community is aligned with the direction of the public sector regulatory community,” he said.
This month, CHAI is seeking feedback on the open source draft model card, which aligns with the Office of the National Coordinator for Health IT's Health Data, Technology, and Interoperability: Certification Program Updates, Algorithmic Transparency, and Information Sharing (HTI-1) rule. With "streamlined" updates planned over the next six months, Anderson said he expects the various regulators have moved, and will continue to move, in the same direction.
Specifically, he cited some coordination with medical device regulators regarding AI requirements.
Earlier this week, the U.S. Food and Drug Administration included an example of a voluntary AI model card in its draft total product lifecycle recommendations (design, development, maintenance and documentation) for AI-enabled devices.
"One of the interesting things about looking at FDA's sample model card and then looking at ONC's HTI-1 rule is that CHAI's model card aligns very well with both rules," Anderson said.
According to the FDA, model cards for AI-enabled medical devices can address the challenge of communicating critical information to users, including patients, clinicians, regulators, researchers and the public.
“Research has demonstrated that the use of model cards increases user trust and understanding,” the FDA said in its draft guidance. “They are a way to coherently summarize important aspects of AI-enabled devices and can be used to succinctly describe their characteristics, performance, and limitations.”
It is one example of where FDA regulators are heading in their work to build trust around the use of AI in healthcare.
“But stakeholder groups in the private and public sectors need to work together, informing each other,” Anderson said.
When asked about the incoming administration, he pointed out that "all the leaders I've talked to in the Senate and the House are very interested in understanding how to form public-private partnerships with organizations like CHAI."
The door is open for more rigorous healthcare AI work to continue, such as government-industry collaboration on AI assurance labs and their capabilities, but "work still needs to be done," Anderson said.
"We need time to do that, and we would appreciate the incoming administration working with us, collaborating and hopefully providing that time," he said. "It's very refreshing and, I think, very exciting."
Annual label updates and IRL usage
Anderson said CHAI's model card is intended to be "a living document as new capabilities emerge, especially in the field of generative AI."
“The metrics and methodologies used to evaluate these new capabilities will most likely need to be changed or created,” he said.
Even before issuing the draft total product lifecycle guidance, the FDA had finalized its guidance on predetermined change control plans, which allow AI and machine learning-enabled devices to be updated without triggering the need for new marketing submissions.
“When you think about the different sections of the model card, you need to include different data: different evaluation results, different metrics, different types of use cases,” Anderson said.
"That kind of flexibility is going to be very important," he added, noting that AI model and system nutrition labels will need to be updated regularly, "at least at an annual level."
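To make the living-document idea concrete, here is a hedged sketch continuing the hypothetical ModelCard above: each review appends a dated, versioned revision instead of overwriting the card, so readers can see how reported performance has changed over time. The versioning scheme and field names are assumptions, not part of CHAI's draft:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CardRevision:
    """One dated revision of a model card's evaluation section (illustrative)."""
    version: str
    reviewed_on: date
    evaluation_results: dict[str, float]
    notes: str

revisions = [
    CardRevision("1.0", date(2025, 1, 15), {"AUROC": 0.91}, "initial release"),
]

def record_review(revisions: list[CardRevision],
                  results: dict[str, float], notes: str) -> None:
    """Append a new revision, e.g. during the at-least-annual update
    Anderson describes. A major-version bump per review is an assumption."""
    major = int(revisions[-1].version.split(".")[0])
    revisions.append(CardRevision(f"{major + 1}.0", date.today(), results, notes))

# Annual review: re-evaluate on current data and log the new numbers
record_review(revisions, {"AUROC": 0.89}, "performance drift on newer data")
```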
For healthcare providers, there are significant complexities to consider in using AI-enabled clinical decision support tools while minimizing errors and oversights.
"We will struggle with incomplete transparency, and we need to come to grips with it," he said.
Whether a model was trained on a specific set of attributes relevant to a particular patient, for example, may not be included in a user-friendly model card.
"These model cards can contain any information, but the vendor community is at risk of [intellectual property] disclosure. So how do you protect vendor IP, but give the customer, in this case the physician, the information they need to make the right decision about whether or not they should use that model for the patient in front of them? That balance is important," Anderson said.
“The causality of how it affects the specific outcome for the patient in front of you is very deep,” he acknowledged.
Inviting others to the AI evaluation table
While HTI-1's 31 category areas for electronic health records and other certified health IT are "a very good starting point," they are not enough for the many different use cases for AI, especially in the direct-to-consumer space, Anderson said.
"The model cards we are developing are intended to be used very broadly across a variety of use cases, and I'd guess we'll see a lot of new use cases emerging in the next year in the consumer space, especially in generative AI," he explained.
However, over the next two to five years, evaluating healthcare AI models will become more complex, raising questions about how we define “human flourishing.”
Anderson believes those use cases will be closely tied to medical AI agents, and that building trust frameworks around them will need the support of "ethicists, philosophers, social scientists and spiritual leaders" who can advise engineers and AI experts as they consider evaluation frameworks for those tools.
“Developing something like a framework for evaluation is going to be a big challenge in the future of agent-based AI,” he said. “This is a very intimate and personal space. How do we build trust with our models? How do we evaluate them?”
Anderson said that starting next year, CHAI will spearhead a "very intentional effort to bring together community members and stakeholders that you might not necessarily first think of including in an effort like this."
"We really need to make sure these models are aligned with our values, but there's no rubric for how to evaluate models for that," he said. "I don't know how to do that yet. I don't think anyone understands it yet."
Andrea Fox is a senior editor at Healthcare IT News.
Email: afox@himss.org
Healthcare IT News is a publication of HIMSS Media.