Where does AI stand in terms of research and patient care?
Overall, AI is actively used and developed for both research and care. The technology has proven most useful in data-rich areas, namely imaging and the general functions of hospital operations. AI can often help the nuts and bolts of healthcare work better. Broader clinical integration, however, is still uneven today.
In terms of care, it brings potentially new insights in several ways. One is the early identification, through AI-assisted monitoring, of patients at risk of deterioration. There are also tools that help patients and providers connect while keeping providers in the loop. This is not a situation where an AI pretends to be your doctor. Rather, these are systems, currently being studied in pilots, that help providers engage with patients more effectively. With physicians involved at every step, they can respond more quickly, efficiently, and accurately.
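To make this concrete, here is a minimal sketch in Python of the kind of signal such monitoring systems build on: a simple early-warning score computed from vital signs. The thresholds and weights are illustrative placeholders, not a validated clinical score.

```python
# Illustrative sketch: flag patients whose vital signs suggest rising risk
# of deterioration. Thresholds and weights are hypothetical examples, not
# a validated clinical score.

def warning_score(heart_rate: float, resp_rate: float, spo2: float) -> int:
    """Return a crude risk score; higher means more concerning."""
    score = 0
    if heart_rate > 110 or heart_rate < 50:
        score += 2
    if resp_rate > 24:
        score += 2
    if spo2 < 92:
        score += 3
    return score

patients = [
    {"id": "A", "heart_rate": 85, "resp_rate": 16, "spo2": 98},
    {"id": "B", "heart_rate": 118, "resp_rate": 26, "spo2": 90},
]

for p in patients:
    s = warning_score(p["heart_rate"], p["resp_rate"], p["spo2"])
    if s >= 4:
        # In a real system this would route an alert to the care team,
        # keeping clinicians in the loop rather than acting autonomously.
        print(f"Patient {p['id']}: score {s} -- notify care team")
```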
On the research front, new possibilities are emerging explosively, especially as barriers to entry have dropped dramatically thanks to improved technologies. One example of AI-accelerated research is AlphaFold, an AI tool that excels at predicting protein structures. These structures are useful in areas such as drug discovery, where they allow efficient computational screening of potential drugs.
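As a toy illustration of what computational screening means in practice, the sketch below ranks hypothetical compounds with a placeholder scoring function; in a real pipeline, `binding_score` would be a physics-based docking calculation or a learned affinity model.

```python
# Illustrative sketch of computational screening: rank candidate compounds
# against a target using a scoring function. `binding_score` is a stand-in
# for real docking or ML-based affinity prediction.

import random

def binding_score(compound: str) -> float:
    """Placeholder for a physics- or AI-based affinity estimate."""
    random.seed(compound)          # deterministic toy score per compound
    return random.uniform(0.0, 1.0)

candidates = ["compound_001", "compound_002", "compound_003", "compound_004"]

# Score every candidate, then keep the most promising for lab follow-up.
ranked = sorted(candidates, key=binding_score, reverse=True)
print("Top candidates for experimental validation:", ranked[:2])
```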
In research environments in particular, we are seeing AI accelerate variant effect prediction: examining the points at which individuals' genomes differ and predicting the biochemical consequences. In the long run, imagine trying to identify which genomic variants contribute to a rare disease. Predicting the effect of each variant may reveal important clues. Ultimately, this could lead to better diagnosis and treatment of these conditions.
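A minimal sketch of that prioritization idea, with made-up variant identifiers and scores standing in for the output of a real variant-effect model:

```python
# Illustrative sketch: prioritize genomic variants by a predicted effect
# score. The scores below are made-up stand-ins for the output of a
# variant-effect model.

variants = {
    "GENE1:c.123A>G": 0.12,   # predicted benign
    "GENE2:c.456C>T": 0.91,   # predicted damaging
    "GENE3:c.789G>A": 0.47,
}

# Rank variants so the most likely disease-contributing ones are reviewed first.
for variant, score in sorted(variants.items(), key=lambda kv: kv[1], reverse=True):
    flag = "review first" if score > 0.8 else "lower priority"
    print(f"{variant}\tpredicted effect {score:.2f}\t{flag}")
```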
What are some examples of what you see coming to this space, both near term and long term?
In the near term, I see a lot of growth in what I would call “quiet tools.” They sit within existing systems, helping to relieve pressure on clinicians, reduce friction, and improve relationships between providers and their patients.
AI scribes are one example. Instead of the provider having to look at the computer while taking notes, wouldn't it help if they could focus on, and look at, the patient during the visit? What if AI could draft the summary, so the provider only needs to verify that the most important elements are included?
Another area is inbox management: helping providers respond to patients more quickly and more comprehensively. It may not be flashy, but it improves human connection. For such systems, it is critical that the AI-generated first draft be a solid starting point. Otherwise, we have simply added the work of correcting the computer on top of connecting with the patient. Yanjun Gao, a faculty member in our department, has worked on improving such systems, making them even better for patients and healthcare providers.
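The design principle behind these tools, draft first and human review always, can be sketched in a few lines. The `llm_draft` function here is a hypothetical placeholder, not a real model API:

```python
# Illustrative sketch of a draft-then-review inbox workflow. `llm_draft`
# is a hypothetical placeholder for a call to a language model; the key
# design point is that the provider always edits and approves the draft.

def llm_draft(patient_message: str) -> str:
    """Hypothetical model call; a real system would invoke an LLM here."""
    return f"Thank you for your message about: {patient_message!r}. ..."

def provider_review(draft: str) -> str:
    # Stand-in for the human step: the provider edits and signs off.
    return draft + "\n-- reviewed and edited by your care team"

def handle_inbox_message(patient_message: str) -> str:
    draft = llm_draft(patient_message)
    # The draft is only a starting point: it is surfaced to the provider,
    # never sent automatically.
    return provider_review(draft)

print(handle_inbox_message("refill request for my blood pressure medication"))
```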
Medical imaging has been fundamentally altered over the past decade and a half thanks to the advent of deep learning algorithms. These algorithms are beginning to reach the point where they can meaningfully support providers in image interpretation.
In the long run, I think we will see tools that help with earlier identification of particular diseases, including conditions that are difficult to detect early but where interventions work best at the earliest stages. Intervening in these diseases sooner could help extend quality years of life. Nevertheless, these approaches need to be proven through clinical trials.
AI is great at combining complex data from multiple sources. So far we have mainly discussed data from patient encounters, electronic health records, and imaging, but there are also opportunities to use data from other systems. Think of wearables such as watches and phones, along with genomics.
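As a small illustration of what combining sources looks like at the data level, here is a sketch that joins hypothetical encounter and wearable tables on a shared patient identifier:

```python
# Illustrative sketch: combining encounter data with wearable-derived
# features on a shared patient identifier. Column names and values are
# hypothetical examples.

import pandas as pd

ehr = pd.DataFrame({
    "patient_id": [1, 2],
    "diagnosis": ["hypertension", "diabetes"],
})

wearables = pd.DataFrame({
    "patient_id": [1, 2],
    "avg_daily_steps": [6500, 3200],
    "avg_resting_hr": [68, 81],
})

# A simple join gives a richer, multi-source view of each patient that a
# downstream model could learn from.
combined = ehr.merge(wearables, on="patient_id", how="inner")
print(combined)
```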
How do health sciences and healthcare AI differ from other fields?
It comes down to the approach. In the health sciences, AI needs to be evaluated rigorously. That kind of evaluation doesn't generate headlines the way hype does. Rather than steamrolling the concerns of potential users, such systems are developed together with patients and healthcare providers, and are ultimately grounded in an understanding of human physiology and disease.
Tell Bennett's work on the development of new pediatric sepsis criteria is a good example. Yes, it uses cutting-edge technology, but in a careful and thoughtful way. It is also designed to work not only in the most data-rich settings of cutting-edge children's hospitals, but wherever children receive care.
How do you think about managing the hype around AI?
We are trying to make CU Anschutz a place where the rubber meets the road. We don't get caught up in overpromising or hype. We must constantly ground ourselves and ask: “Is this advancing research and a better understanding of health and new treatments?” Healthcare poses a fundamentally different challenge. If there is an error in a generated photo of a cat, the impact is low. If there is an error in patient care, that is another ball game.
What are the challenges involved in building AI datasets in healthcare?
Health data is fragmented, and standardization requires real effort. For example, the sepsis project compiled over 3.5 million pediatric records across five countries and three continents. That is not a trivial thing.
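A tiny example of what that standardization effort involves: two hypothetical sites record the same vital sign under different names and units, and harmonization maps both onto one schema.

```python
# Illustrative sketch of the standardization work: mapping site-specific
# column names and units onto one shared schema. The site conventions
# shown are hypothetical.

import pandas as pd

# Site A records temperature in Celsius with its own column names.
site_a = pd.DataFrame({"pt_id": [101], "temp_c": [39.2]})

# Site B records temperature in Fahrenheit under different names.
site_b = pd.DataFrame({"patient": [202], "temperature_f": [102.5]})

def harmonize_a(df: pd.DataFrame) -> pd.DataFrame:
    return df.rename(columns={"pt_id": "patient_id", "temp_c": "temp_celsius"})

def harmonize_b(df: pd.DataFrame) -> pd.DataFrame:
    out = df.rename(columns={"patient": "patient_id"})
    out["temp_celsius"] = (out.pop("temperature_f") - 32) * 5 / 9
    return out

# One table, one schema. Multiplied across millions of records and many
# sites, this is where the real effort goes.
combined = pd.concat([harmonize_a(site_a), harmonize_b(site_b)], ignore_index=True)
print(combined)
```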
CU Anschutz has built Health Data Compass, the first research data warehouse on Google Cloud. This and its associated infrastructure allow large, high-quality, well-governed datasets to support research and care innovation.
At the same time, there is a balance between pursuing innovation and locking data down completely. These technologies require us to establish norms and expectations. Each academic medical center has to find a way to say, “Yes, we can make this technology available on the front lines without putting patients and students at risk.”
How does CU Anschutz handle privacy and data concerns?
Privacy is non-negotiable, and the stewardship here is substantial. The federal Health Insurance Portability and Accountability Act (HIPAA), research regulations, de-identification, and consent all shape this conversation. Data that identifies patients is protected – and we treat it that way. At CU Anschutz, Health Data Compass manages access based on use case, with tiers of control ranging from de-identified datasets to limited data sets to identified personal health information.
Our research is conducted within secure environments with monitoring and role-based access. For us, it's not just about compliance; it's about earning trust. For example, no single person can access all available data. Access requires buy-in from the people responsible for oversight.
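The tiering idea can be sketched in code, with the caveat that the tier names and role mappings below are hypothetical, and that real governance runs through review boards and data use agreements, not just software:

```python
# Illustrative sketch of tiered, use-case-based access control. Tier names
# and roles are hypothetical; real governance involves review boards and
# data use agreements, not just code.

TIER_ORDER = ["deidentified", "limited", "identified"]

# Maximum tier each role may request (hypothetical mapping).
ROLE_MAX_TIER = {
    "student": "deidentified",
    "researcher": "limited",
    "approved_clinical_study": "identified",
}

def may_access(role: str, requested_tier: str) -> bool:
    allowed = ROLE_MAX_TIER.get(role)
    if allowed is None:
        return False
    return TIER_ORDER.index(requested_tier) <= TIER_ORDER.index(allowed)

print(may_access("student", "identified"))    # False: no single role sees everything
print(may_access("researcher", "limited"))    # True, for an approved use case
```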
How do you keep AI a tool and keep it from becoming a crutch?
That is absolutely important. We want to reduce friction, but not critical thinking. There are areas where humans excel, and we want to support that, not replace it.
The sepsis criteria I mentioned before are a great example. They do not replace the clinician's judgment. Clinicians always know their patients best; the criteria provide additional, integrated information that can support their decision-making.
It is also important for medical students to approach these technologies with a critical eye. Last year I taught a class with Shawn Davis, MD, PhD, a professor in the Department of Biomedical Informatics, and our approach was to teach students to question the AI models being used: How are they trained? How are they evaluated? What data do they use? When should you trust them? Those questions should be front of mind. We tried to remind them that things can look good in a technology demonstration, but as a future provider, you should always think about the rigor of deployment in a care environment.
What worries you most in this field? What excites you most?
One size fits all does not work in healthcare. You need to pay attention not only to model performance on day one, but also to bias, transparency, and long-term usability. Importantly, this extends to our data: you need to make sure it is representative. For example, if you build an AI system solely on data collected at an academic health center and never bring in data from community hospitals, you should worry about the tool you are building. In practice, it may be inaccurate in ways that are difficult to detect.
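One concrete guard against that failure mode is to report performance per site rather than as a single overall number. A minimal sketch with made-up predictions:

```python
# Illustrative sketch: checking a model's accuracy separately at each site
# so that a gap between academic and community settings is visible instead
# of hidden in one overall number. Data here is made up.

from collections import defaultdict

# (site, model_prediction, true_label) for a handful of hypothetical cases
results = [
    ("academic", 1, 1), ("academic", 0, 0), ("academic", 1, 1),
    ("community", 1, 0), ("community", 0, 1), ("community", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for site, pred, truth in results:
    correct[site] += int(pred == truth)
    total[site] += 1

for site in total:
    print(f"{site}: accuracy {correct[site] / total[site]:.2f}")
# A large gap between sites is the kind of hard-to-detect inaccuracy to
# watch for before deployment.
```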
Meanwhile, I'm excited to see AI go from the lab to the bedside, especially when it leads to earlier diagnoses, less burnout, and fewer adverse events. It is an exciting time to be in this field, working with clinicians to bring more attention to patients and to realize that vision.