Hospitals, clinics, and health systems across the United States are feeling the strain: workforces are understaffed and overworked, operating costs are rising, and demand for services often exceeds capacity, limiting access to care.
Enter artificial intelligence. In the nearly two years since the launch of ChatGPT thrust AI into the spotlight, investors, technology companies, and healthcare organizations have poured money into AI, issued countless press releases, launched pilot project after pilot project, and sometimes painted breathtaking visions of AI saving healthcare.
But AI’s real impact on healthcare has so far been limited. Are we expecting too much, too soon?
Expectations and reality
Outside of healthcare and across the broader economy, the ranks of AI doomsayers are growing as the gap between expectations and reality widens. Many companies now use AI to draft emails, generate images, and produce marketing materials, but no “killer application” has emerged to justify AI’s high costs.
Compared with other industries, AI-driven transformation may be even more challenging in healthcare, where the stakes are high, organizations are complex, and regulations are uncertain.
First, there are technical challenges: predictive algorithms often fail to generalize across contexts. For example, implementing an “out-of-the-box” sepsis algorithm in a hospital, without retraining it on local data, produced many false alarms while still missing cases of sepsis. Moreover, generative AI remains too unreliable for high-stakes tasks such as performing triage, diagnosing, and recommending treatments. “Generative AI systems like GPT-4 are both smarter and dumber than anyone we’ve ever met,” explains Peter Lee, president of Microsoft Research. “We’re both overestimating and underestimating their potential in healthcare.”
Additionally, many doctors, nurses, and healthcare consumers are skeptical of AI, worried that it will jeopardize privacy, exacerbate bias, and erode the doctor-patient relationship. Having watched electronic health records fall short of expectations and contribute to burnout, they are wary of claims that AI will necessarily improve healthcare.
Finally, implementing AI in the real world is complex: it involves many stakeholders, requires significant resources, and is fraught with potential pitfalls. And unlike previous digital initiatives, such as electronic health record adoption (spurred by more than $34 billion in Meaningful Use payments) or the temporary pivot to virtual care during the COVID-19 pandemic, provider organizations have little incentive to adopt AI products that increase costs, force changes to workflows, and typically do not directly increase reimbursement.
Surviving a long transition
This is not to say that AI has no place in healthcare. Some organizations are already using AI solutions to achieve meaningful benefits, such as preventing hospital readmissions and reducing physicians’ documentation burden. As the technology advances, it will improve many aspects of clinical care, operations, and research.
Still, we need to temper our expectations: history has shown that it takes years, not months, to build useful AI products, integrate them into workflows, and ultimately enable new and better ways to deliver care.
During this transition period, healthcare delivery organizations should take the following actions to maximize the current and future net benefits of AI:
1. Experiment and evaluate safely
No matter how compelling the technology, healthcare professionals must remember that healthcare is about people, not products, and follow the foundational principles of evidence-based medicine. Organizations such as the Coalition for Health AI are developing standards for implementing healthcare AI models and establishing assurance labs to evaluate them. Providers must pilot solutions to meaningful problems and establish the governance and evaluation needed to ensure that effective AI tools are used safely and equitably.
2. Improve the care system
Healthcare is a complex adaptive system whose performance is determined by many dynamically interacting components. Organizations implementing AI must take a holistic systems approach that goes beyond technology to also encompass people, systems, and design.
First, before rushing to automate any medical task, we should ask whether it is worth doing at all. As Peter Drucker taught us, “There is surely nothing quite so useless as doing with great efficiency what should not be done at all.”
Second, a constraint (the point at which demand exceeds capacity) dictates the pace of an entire process, so downstream bottlenecks must be identified and eliminated before AI is applied to make upstream steps more efficient. For example, automating patient scheduling will do little good if physicians’ schedules are already full. And early identification of sepsis is futile if nurses and clinicians cannot act on the information.
3. Embrace incremental gains
Organizations need to resist bold claims and stay grounded in reality: AI will not magically solve all of healthcare’s woes, and because large language models cannot truly reason or understand, many healthcare problems will likely require new hybrid approaches that combine machine learning with traditional symbolic AI.
But organizations can still leverage AI today to reap modest benefits (e.g., reducing some tedious tasks, customizing patient education and engagement content) while positioning themselves for future success. The key is not to overlook opportunities beyond AI: above all, organizations need to think about who they are, what they do, and how they can improve, with or without AI.