In what seems like the blink of an eye, mentions of AI have become ubiquitous in the healthcare industry. From deep learning algorithms that can read computed tomography (CT) scans faster than humans to natural language processing tools that can comb through unstructured data in EHRs, the applications for AI in healthcare seem endless.
But like any technology at the peak of its hype curve, AI faces criticism from skeptics alongside enthusiasm from die-hard evangelists. Despite its potential to unlock new insights and streamline how providers and patients interact with healthcare data, AI carries considerable risks, including privacy problems, ethical concerns and medical errors.
Balancing the risks and rewards of AI in healthcare will require a collaborative effort between healthcare providers, technology developers, regulators and consumers. To facilitate this collaborative effort, the industry must first address the highly divisive discussion points commonly raised when considering the adoption of some of the more complex technologies available in the healthcare world.
AI will challenge the status quo in healthcare
As the healthcare industry adapts to new technologies, AI will challenge the status quo. Patient-provider relationships will be forever changed, and it is worth considering how far AI will alter the role of human workers.
Pro: AI could reduce providers’ administrative burdens, mitigate burnout
Research shows that AI automation could ease clinician workflow, which could, in turn, help curb projected workforce shortages across the industry. Burnout resulting from increased workload, administrative burden and lack of organizational support can push clinicians out of the healthcare field early.
A study published in the Feb. 2025 issue of JAMA Network Open reveals that ambient AI can reduce the cognitive burden associated with clinical documentation. The study, which combined objective metrics with physician feedback, found that using the ambient scribing tool was associated with 20.4% less time in notes per appointment and a 30% reduction in after-hours work per workday.
Additionally, a Nov. 2024 survey by the AMA found that 68% of physicians see definite or some advantage to using AI technology. More than half (57%) noted addressing administrative burdens as the top area of opportunity for AI.
Automating routine tasks that take up a physician’s time, such as EHR documentation, administrative reporting and triaging CT scans, can free human clinicians to focus on the complicated challenges of patients with rare or serious conditions. According to research published in Aug. 2024, AI alleviating the administrative workload could give clinicians more time with their patients without distractions.
“AI can contribute to restoring a sense of purpose, fulfillment and efficacy among healthcare workers, reinforcing their essential role as caregivers, rather than distancing them from these core professional attributes,” wrote researchers from Yale University School of Medicine.
Pro: AI tools continue to find valuable new use cases
Most AI experts believe that the blend of human clinical experience and digital augmentation will be the natural settling point for AI in healthcare. Each type of intelligence will bring something to the table, and both will work together to improve care delivery. As a result, many health systems are deploying these tools across a range of use cases.
For instance, the University of Iowa (UI) Health Care uses an ambient AI tool to ease clinical documentation burdens. The tool offers clinicians draft documentation in seconds, enabling them to turn around clinical notes in hours. Clinicians at UI Health Care reported a 26% decrease in burnout after a five-week pilot of the tool.
AI and other data analytics tools can also be key in population health management. A comprehensive analytics strategy to manage population health requires health systems to use a combination of data integration, risk stratification and predictive analytics tools.
Care teams at Intermountain Health use risk stratification tools to identify and address behavioral health needs within its primary care population. Meanwhile, Parkland Health & Hospital System and Parkland Center for Clinical Innovation developed a risk stratification infrastructure to support suicide prevention efforts.
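The risk stratification tools mentioned above typically assign patients to tiers based on clinical and utilization data. As a minimal sketch of the idea, the rule-based scorer below uses entirely hypothetical risk factors, weights and thresholds; it is not Intermountain's or Parkland's actual logic, which relies on far richer models and data.

```python
# Minimal illustration of rule-based risk stratification.
# All factors, point values and cutoffs are hypothetical.

def risk_score(patient: dict) -> int:
    """Sum hypothetical point values for common behavioral health risk factors."""
    score = 0
    if patient.get("prior_ed_visits", 0) >= 2:
        score += 2  # frequent emergency department use
    if patient.get("phq9", 0) >= 10:
        score += 3  # moderate-or-worse depression screening score
    if patient.get("missed_appointments", 0) >= 3:
        score += 1  # possible disengagement from care
    return score

def stratify(patient: dict) -> str:
    """Map a score to a follow-up tier a care team might review."""
    score = risk_score(patient)
    if score >= 4:
        return "high"    # proactive outreach
    if score >= 2:
        return "medium"  # flag at next visit
    return "low"         # routine care

print(stratify({"prior_ed_visits": 3, "phq9": 12}))  # high
```

Production systems replace hand-set weights like these with statistical or machine learning models validated on the health system's own population, but the output is the same: a tier that routes a patient to the right level of outreach.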
Despite the potential for AI in healthcare, implementing the technology while protecting privacy and security is not easy.
Con: Radiologists and pathologists could face job disruption
Despite these promising projections, fears about AI and the workforce might not be entirely unfounded. AI tools that consistently exceed human performance thresholds are constantly in the headlines, and the pace of innovation is only accelerating.
A June 2024 survey conducted by consultancy firm Customertimes found that 57% of the 2,000 Americans polled believe that AI adoption will result in layoffs within the healthcare industry, while 13% think it will save jobs in the sector. Radiologists and pathologists might be especially vulnerable, as many of the most impressive breakthroughs are occurring in imaging analytics and medical diagnostics.
In recent years, AI-based imaging technologies have advanced rapidly. For instance, in 2024, University of California, Los Angeles researchers developed an AI tool that uses 3D biomedical imaging data to predict disease risk factors. The Slice Integration by Vision Transformer (SLIViT) model uses artificial neural networks to address the lack of large data sets — a common hurdle to developing AI models for 3D imaging analysis.
Further, a 2024 study conducted by Stanford researchers shows that large language models (LLMs), a type of AI, could improve diagnostic accuracy. The researchers fed a series of cases based on actual patients to GPT-4 and asked for a diagnosis. The LLM achieved a median accuracy score of 92. These technologies are finding a home in healthcare, leaving some physicians concerned that AI is about to evict them from their offices and clinics.
Con: Automation bias and anti-AI bias issues persist
Some have raised concerns that clinicians might become overly reliant on these technologies as they become more common in healthcare settings. However, physicians continue to maintain a healthy skepticism around AI, and challenges like automation bias aren’t new: Strategies exist to prevent it.
Research shows that most patients don’t trust their health systems to use AI responsibly. In fact, patient trust in AI varies by use case, with 44% pointing to automation use cases and 40% stating they’d like AI to be used to provide diagnostic support. Patient trust in AI also varies across generations, with only 12% of baby boomers trusting the use of AI in healthcare, compared with 32% of millennials.
But whether patients and providers are comfortable with the technology or not, AI is advancing in healthcare.
AI privacy and security challenges
AI in healthcare presents a whole new set of challenges around data privacy and security. These challenges are compounded by the fact that most algorithms require access to massive data sets for training and validation.
Con: AI increases the risks of data breaches and hacks
Shuffling gigabytes of data between disparate systems is uncharted territory for most healthcare organizations, and stakeholders no longer underestimate the financial and reputational perils of a high-profile data breach. Most organizations are advised to keep their data assets closely guarded in highly secure, HIPAA-compliant systems. In light of an epidemic of ransomware and knock-out punches from cyberattacks of all kinds, chief information security officers have every right to be reluctant to lower their drawbridges and allow data to move freely into and out of their organizations.
Storing large data sets in a single location makes that repository a very attractive target for hackers. AI models are also at risk for adversarial attacks, in which bad actors aim to alter images or other data points to make AI models draw incorrect conclusions. These attacks can cause models to make a wrong diagnosis, opening up the potential for major patient safety issues.
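To make the adversarial attack idea concrete, the toy below perturbs the inputs to a simple linear classifier just enough to flip its decision. The weights and features are made up, and real attacks target deep imaging models with far subtler, gradient-computed perturbations, but the mechanism, nudging each input in the direction that moves the model's score across its threshold, is the same.

```python
# Toy adversarial perturbation against a linear classifier.
# Hypothetical weights and features; real attacks hit deep imaging models.

weights = [0.9, -0.4, 0.3]
bias = -0.5

def predict(features):
    """Return 'abnormal' if the weighted sum crosses the decision threshold."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return "abnormal" if z > 0 else "normal"

clean = [1.0, 0.5, 0.2]
print(predict(clean))  # "abnormal" (score 0.26)

# Nudge each feature in the direction that lowers the score
# (opposite the sign of its weight), mimicking a gradient-based attack.
eps = 0.4
adversarial = [x - eps * (1 if w > 0 else -1) for w, x in zip(weights, clean)]
print(predict(adversarial))  # "normal" (score -0.38): same patient, flipped call
```

In imaging, the equivalent perturbation can be invisible to the human eye, which is why a flipped diagnosis is such a serious patient safety concern.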
Con: AI regulation and data privacy laws still lag
In addition to AI's appeal as a target for threat actors, there is a pressing need for regulations governing AI and the protection of patient data. Experts caution that ensuring healthcare data privacy will require updating existing data privacy laws and regulations to cover information used in AI and machine learning systems, as these technologies can re-identify patients if data is not properly de-identified. However, AI falls into a regulatory gray area, making it difficult to ensure that every user is bound to protect patient privacy and will face consequences for failing to do so.
Con: Patient privacy could become more at risk
Patient privacy can also be at risk in health systems that engage in electronic phenotyping using algorithms integrated into EHRs. The process is designed to flag patients with certain clinical characteristics to gain better insights into their health and provide clinical decision support. However, electronic phenotyping can lead to a series of ethical pitfalls around patient privacy, including unintentionally revealing nondisclosed information about a patient.
Privacy-enhancing technologies (PETs) can help protect patient privacy and provide additional protection to clinical data. Algorithmic, architectural and augmentation PETs can all be leveraged to secure healthcare data.
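As a small, hedged sketch of what an algorithmic PET can look like in practice, the snippet below applies two common de-identification steps: replacing a direct identifier with a keyed, irreversible token and generalizing a date of birth to year only. The field names, key handling and date format are illustrative assumptions, not a complete HIPAA Safe Harbor implementation.

```python
# Minimal de-identification sketch (an "algorithmic" PET).
# Field names and key management are hypothetical; not a full HIPAA workflow.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: key stored outside the data set

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, deterministic token."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify(record: dict) -> dict:
    """Return a copy safer for analytics: no name, tokenized ID, year-only DOB."""
    out = dict(record)
    out.pop("name", None)
    out["patient_id"] = pseudonymize(out["patient_id"])
    out["birth_date"] = out["birth_date"][:4]  # assumes ISO dates; keeps year only
    return out

record = {"patient_id": "MRN-0042", "name": "Jane Doe",
          "birth_date": "1984-06-01", "dx": "E11.9"}
print(deidentify(record))
```

Because the token is deterministic, records for the same patient still link together for analysis, which is exactly the balance these techniques aim for: analytic utility without exposing direct identifiers.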
Security and privacy will always be paramount, but as stakeholders become more familiar with the challenges and opportunities of data sharing, this shift in perspective is vital for allowing AI to flourish in a health IT ecosystem where data would otherwise remain siloed.
Ethics, responsibility and oversight
The thorniest issues in the AI debate are philosophical ones. In addition to the theoretical quandaries about who gets the ultimate blame for a life-threatening mistake, there are tangible legal and financial consequences when the term malpractice enters the equation.
Con: AI makes malpractice and bias issues more challenging to define
AI algorithms are complex by nature. The more advanced the technology becomes, the harder it will be for the average human to dissect the decision-making processes of these tools. Organizations are already struggling with the issue of trust when heeding recommendations flashing on a computer screen, and providers are caught in the difficult situation of having access to large volumes of data but not feeling confident in the tools to help them parse through it.
While some might assume AI is completely free of human biases, these algorithms learn patterns and generate outputs based on the data they were trained on. If this data is biased, the model will be, too. There are currently few reliable mechanisms to flag such biases. Black box AI tools that give little rationale for their decisions only complicate the problem — and make it more difficult to assign responsibility to an individual when something goes awry.
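One of the few simple mechanisms that does exist for flagging bias is comparing a model's positive-prediction rate across demographic groups, sometimes called the demographic parity difference. The sketch below uses made-up predictions to show the calculation; real audits use many metrics and real cohort data.

```python
# Hedged sketch of one basic bias check: the gap in positive-prediction
# rates across groups (demographic parity difference). Data is illustrative.

def positive_rate(preds):
    """Fraction of predictions that are positive (e.g., flagged for follow-up)."""
    return sum(preds) / len(preds)

def parity_gap(preds_by_group: dict) -> float:
    """Max difference in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

preds_by_group = {
    "group_a": [1, 1, 0, 1],  # 75% flagged for follow-up
    "group_b": [0, 1, 0, 0],  # 25% flagged
}
print(f"parity gap: {parity_gap(preds_by_group):.2f}")  # 0.50
```

A large gap does not prove the model is unfair, since the groups may genuinely differ in need, but it is exactly the kind of signal that should trigger a closer look before a tool influences care decisions.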
When providers are legally responsible for any negative consequences that could have been identified from data in their possession, they need to be certain that the algorithms they use present all of the relevant information in a way that enables optimal decision-making.
Pro: Healthcare stakeholders are already combating ethical concerns
Stakeholders are working to establish guidelines to address algorithmic bias. For instance, a panel of experts convened by the Agency for Healthcare Research and Quality and the National Institute on Minority Health and Health Disparities created a framework to address algorithmic bias with five key principles:
Promote health equity in all phases of the algorithm lifecycle.
Ensure algorithms and their use are both explainable and transparent.
Engage patients and communities during all phases of the model’s lifecycle to build trust.
Explicitly identify fairness issues and tradeoffs from an algorithm’s use.
Direct users to establish accountability for fairness and equity in the outcomes of the algorithm.
The Coalition for Health AI’s draft framework for responsible health AI and the National Academy of Medicine’s draft AI Code of Conduct have recently provided some guidance for developing and deploying trustworthy AI. But these frameworks can only go so far.
Developers could unknowingly introduce biases into AI algorithms or train them on incomplete data sets. Regardless of how it happens, users must be aware of the potential biases and work to manage them. Bias in AI is a significant challenge that developers, clinicians and regulators are actively working to address.
Ensuring AI develops ethically, safely and meaningfully in healthcare will be the responsibility of all stakeholders: providers, patients, payers, developers and everyone in between. There are more questions to answer than anyone can even fathom. But unanswered questions are the reason to keep exploring — not to hang back.
The healthcare ecosystem has to start somewhere, and starting from scratch is as good a place as any. It's an exciting, confusing, frustrating, optimistic time to be in healthcare, and the continuing maturity of AI will only add to the mixed emotions of these ongoing debates. There might not be any clear answers to these fundamental challenges at the moment, but humans still have the opportunity to take the reins, make the hard choices and shape the future of patient care.
Editor’s note: This article was updated in March 2025 to include new research data and to improve the reader experience.
Anuja Vaidya has covered the healthcare industry since 2012. She currently covers the virtual healthcare landscape, including telehealth, remote patient monitoring and digital therapeutics.
Hannah Nelson has been covering news related to health information technology and health data interoperability since 2020.