As consumers, we tend to give away our health information for free on the Internet, like when we ask Dr. Google “how to treat a broken toe.” But a Pew Research Center survey found that many people are uncomfortable with the idea of doctors using artificial intelligence (AI) to make diagnoses based on analysis of medical data.
So how concerned would consumers be if they learned that, in the name of innovation, vast amounts of medical data are being uploaded to AI-powered models for analysis?
This is a question healthcare leaders may want to ask themselves, especially given the complexity, hassle, and liability involved in uploading patient data to these models.
What is the problem?
As the use of AI in healthcare and healthcare research becomes more mainstream, the risks associated with AI-powered analytics grow, and with them the potential for loss of consumer trust.
A recent survey by Fierce Health and physician social network Sermo found that 76% of physician respondents use general-purpose large language models (LLMs), such as ChatGPT, for clinical decision-making. These publicly available tools provide access to information such as potential drug side effects, diagnostic support, and treatment plan recommendations. They can also capture physician notes from patient interviews in real time via ambient listening, an increasingly popular approach to reducing administrative burden so physicians can focus on treatment. In both cases, mature practices for incorporating AI technologies are essential, including using LLMs as fact-checking and exploration tools rather than relying on them to provide answers to complex care questions.
However, there are signs that the risks of leveraging LLMs for care and research require further attention.
For example, there are significant concerns regarding the quality and integrity of patient data fed into AI models for analysis. Most healthcare data is unstructured, captured in open note fields in electronic health records (EHRs), patient messages, images, and even scanned handwritten text. In fact, half of healthcare organizations say less than 30% of their unstructured data is available for analysis. There are also discrepancies in which types of data fall into the "unstructured data" bucket. These factors limit the overall picture of patient and population health. They also increase the likelihood that AI analyses will be biased, reflecting data that is incomplete or underrepresents certain segments of the population.
Additionally, regulations surrounding the use of protected health information (PHI) have kept some researchers and analysts from using all available data, while the significant costs of data storage and information sharing have compounded the problem. These factors are a major reason why medical data is underutilized compared with data in other industries. So, too, are the complexities involved in applying advanced data analytics to healthcare data while maintaining compliance with healthcare regulations, including those related to PHI.
Currently, medical leaders, clinicians, and researchers find themselves at a unique inflection point. AI has great potential to drive innovation by leveraging clinical data for analysis in ways the industry could only imagine just two years ago. The power of AI in healthcare is proving to go beyond "Dr. Google," with one in six adults using an AI chatbot at least once a month for health information and advice. Essential to ensuring consumer trust in these efforts is protecting what matters most to patients, such as privacy and the integrity of their health data. The challenge is to be creative in approaches to AI-powered data analysis and utilization while maintaining compliance with regulations surrounding health data.
Taking the right steps toward AI analysis
As the use of AI increases in the healthcare sector, a new approach to data management is required: one with sophisticated, consumer-centered data protections that meet the fundamental principles of effective data compliance in an evolving regulatory environment.
As AI innovation accelerates, here are the top three considerations for leaders and researchers in protecting patient privacy, compliance, and ultimately consumer trust.
1. Keep consumer trust in mind. Rather than simply reacting to data privacy and protection regulations, consider the impact of your efforts on the patients your organization serves. When patients trust your organization's ability to safely and securely leverage their data for AI innovation, they become more willing to share that data, which is essential for building AI-powered individualized care plans. Currently, 45% of healthcare industry executives surveyed by Deloitte are prioritizing efforts to build consumer trust so that patients feel more comfortable sharing their data and making it available for AI analysis.
One important step to consider in protecting consumer trust is implementing robust controls over who accesses data and how it is used. This core principle of effective data protection helps ensure compliance with all applicable regulations. It also strengthens an organization's ability to generate the insights needed to achieve better health outcomes while securing consumer buy-in.
2. Establish a data governance committee for AI innovation. The successful use of AI in a business context depends on a variety of factors, from the risks involved to the maturity of data practices and customer relationships. Therefore, data governance committees should include not only health IT experts but also clinicians and experts from a variety of disciplines, from nurses to population health experts and revenue cycle team members. This helps ensure that the right data innovation projects are executed at the right time and that your organization's resources can best support them. It also involves all key stakeholders in determining the risks and benefits of using AI-powered analytics and how to establish adequate data protection without unnecessarily hindering innovation. Rather than "scoring your own work," consider whether outside experts can add value in determining whether adequate protections are in place.
3. Reduce the risks associated with re-identifying sensitive patient information. It is a myth that simple anonymization techniques, such as removing names and addresses, are sufficient to protect patient privacy. In reality, data that was supposed to be anonymized can often be re-identified by combining datasets through sophisticated techniques deployed by bad actors. This calls for more advanced approaches to protect data from the risk of re-identification while it is at rest. It is an area where a generalized approach to data governance is no longer appropriate. The key strategic questions for organizations are: How does our organization deal with re-identification risks, and how can we continually assess those risks?
While healthcare organizations face some of the biggest hurdles to effectively implementing AI, they are also poised to implement some of the technology's most life-changing applications. By addressing the risks associated with AI-powered data analysis, healthcare clinicians and researchers can better utilize available data and ensure consumer trust.
Photo: steved_np3, Getty Images

Timothy Nobles is Integral's Chief Commercial Officer. Prior to joining Integral, Nobles served as chief product officer at Trilliant Health and head of product at Embold Health, where he developed advanced analytics solutions for healthcare providers and payers. He has over 20 years of experience in data and analytics and has held leadership roles at innovative companies across multiple industries.
This post is published through the MedCity Influencers program. Anyone can share their perspectives on business and innovation in healthcare on MedCity News through MedCity Influencers.