Dive Brief:
Risks from products powered by artificial intelligence are the most significant technology hazard in healthcare, according to a report released Thursday by research nonprofit ECRI. While AI has the potential to improve care, bias, inaccurate or misleading responses, and degrading performance over time can harm patients, the analysis said. Healthcare organizations should think carefully when deploying AI tools, clearly define their goals, validate and monitor performance, and insist on transparency from model developers, the safety and quality research firm said.
Dive Insight:
The report ranks the top healthcare technology risks that require “urgent attention” in the coming year, and picks one of the sector's most exciting emerging technologies for the top spot.
Healthcare industry leaders say AI has the potential to ease some of the industry's most pernicious labor problems, including provider burnout and staff shortages. The tools can be applied to a wide range of medical tasks, such as triaging critical imaging results, assisting clinicians with note-taking and helping schedule patient appointments.
However, unless healthcare organizations carefully assess and manage the risks posed by AI, quality of care and patient safety could be compromised, the ECRI report says.
For example, AI can perpetuate bias in the underlying data used to train models, exacerbating existing health disparities. Additionally, model performance can degrade and produce inaccurate or inappropriate responses when used with patient populations that do not reflect the model's training data.
“When implementing an AI model, you need to consider whether the population the model was trained on actually matches the characteristics of the population you want to use the model with at your facility,” Francisco Rodríguez Campos, principal project officer at ECRI, said during a webinar on the report.
Hallucinations, in which AI systems provide inaccurate or misleading information, are another risk for healthcare providers. Model performance can also degrade over time, especially for AI that continually incorporates new information or when clinical circumstances change.
Risks also vary depending on how an organization implements AI, according to ECRI. Inadequate monitoring, poor data management practices and overreliance on models can all endanger patient care.
The regulatory environment for AI in healthcare remains a patchwork, with the federal government still working on a comprehensive strategy. Some AI systems used for tasks such as creating clinical documentation and scheduling appointments may not be regulated as medical devices by the Food and Drug Administration, even though they can have a significant impact on patient care, according to ECRI.
As organizations implement AI, they must establish effective governance structures and ensure staff are trained on a model's capabilities and limitations. They should also validate model performance, ideally using external data sources, and continue to monitor systems over time.
When purchasing an AI product, organizations should demand transparency from the technology company, according to ECRI, including what data the system was trained on, a clear explanation of how the AI works and metrics showing how it performs under real-world conditions.
AI is not the only technology threat to healthcare organizations, however. Cybersecurity incidents at vendors, such as electronic health record companies or third parties that provide scheduling and billing services, can seriously disrupt patient care. ECRI ranked cyber threats to vendors third on its list of hazards.
The healthcare industry saw heightened concern about cyberattacks against major vendors earlier this year, after an incident at Change Healthcare, a major claims processor, delayed payments to providers for weeks and exposed the data of 100 million Americans.
To mitigate potential damage, organizations should conduct vendor risk reviews, ensure redundancy for critical systems, and develop incident response and recovery plans.
Callie Smith, vice president and head of information security at ECRI, said response training should not be limited to cybersecurity teams.
“We need to be able to work with health care providers in the specific settings in which they are providing care to identify what an outage means for the health care workers who are actually providing care,” Smith said during the webinar.