Morgan Ribeiro: Welcome to Counsel That Cares. This is Morgan Ribeiro, the host of the podcast and a director in the firm’s healthcare practice. On today’s episode, we are exploring the quickly evolving regulatory landscape for the use of artificial intelligence (AI) across various healthcare settings. Over the last year, we’ve seen a number of states enact regulations that create guardrails around how AI is implemented by healthcare organizations. And most recently, the One Big Beautiful Bill Act, or H.R. 1, was passed and signed into law by President Trump. An earlier version of the act included a moratorium that would have prohibited state AI regulations for 10 years, but that provision was stripped before passage. And this follows a number of federal moves to better define the best approach to ensuring innovation continues without harming or discriminating against patients. So, with that introduction, I would love to share our guests for today’s episode. John Vaughan is a healthcare policy attorney who sits in the firm’s Century City office, Dan Silverboard is a healthcare regulatory attorney who sits in the firm’s Atlanta office, and Sarah Starling Crossan is a public affairs advisor in the firm’s Washington, D.C., office. So, a lot of different perspectives that we’re going to be able to provide in this discussion, and welcome to the show, everyone.
Dan Silverboard: Thank you, Morgan.
Morgan Ribeiro: Great, well, I would love to, before we jump into the meat of our discussion, have everyone quickly introduce themselves and share with our audience a little bit more about your expertise as it relates to the evolution of artificial intelligence in the healthcare space. So John, I will start with you.
John Vaughan: So yes, I’m John Vaughan. I’m a partner in the Century City office, and I joined the firm from industry, where I worked on a number of frontier models with the manufacturers of AI platforms, particularly on their uses in healthcare. Prior to that, I worked in medical device companies and pharmaceutical companies. So my practice focuses on helping manufacturers determine what the regulatory requirements will be, what their go-to-market strategy is going to be and how they can stay on the right side of federal, state and other regulations as they bring their products to market.
Morgan Ribeiro: Wonderful. Dan?
Dan Silverboard: Yeah, thank you, Morgan. So I’m Dan Silverboard, and I am a partner in the healthcare practice group at Holland & Knight in the Atlanta office. My practice is a more traditional compliance and transactional practice in which I assist healthcare providers in business transactions and regulatory compliance. In the context of artificial intelligence, my work includes assisting providers, insurance companies and digital health companies in complying with laws pertaining to the use of AI, which includes helping to design AI governance plans and implement AI guardrails. I’m also helping to lead the firm’s efforts to track all of these laws in our Health AI Navigator tool. Thank you, Morgan.
Morgan Ribeiro: Yeah, absolutely. And Sarah?
Sarah Starling Crossan: Hello. Yes. Happy to introduce myself. As Morgan mentioned, my name is Sarah Starling Crossan, and I am a public affairs advisor here at Holland & Knight. I have been at the firm for a little bit over a year and a half. I specialize in federal health policies, both legislative and regulatory, as they pertain to care coordination, healthcare system transformation, rare disease and digital health, which have been the primary areas of focus since the start of my career. So, I have been acutely familiar with the role that artificial intelligence and machine learning have played in those areas. I come from the physician trade world, where I spent most of my time in CMS land, and I’ve been involved in patient advocacy for over 10 years. I also served on the Senate Health, Education, Labor and Pensions Committee, which has jurisdiction over several of Health and Human Services’ agencies, including FDA, CDC and the NIH.
Morgan Ribeiro: Awesome, thank you all for those introductions. I think that’s helpful to know the perspective that each of you is coming at this discussion from. So let’s jump in. As we’ve talked about in the introduction, AI offers incredible potential for revolutionizing the healthcare industry, from enhancing diagnostics and treatment to streamlining administrative tasks. So really thinking about it from the clinical side, as well as the operational and administrative side, and all of the ways it could improve some of the issues that we’ve continued to struggle with as an industry for a long, long time. The growing field of AI in healthcare has prompted both state and federal governments to begin enacting regulations aimed at ensuring patient safety, promoting fairness and protecting privacy. With all of this opportunity, the rapid pace of AI innovation also presents some unique challenges for regulators trying to strike a balance between fostering progress and maintaining patient-centered ethics. So I would love to turn to you first, Dan. You mentioned that you’re tracking a lot of this activity. Can you provide our listeners with an overview of what we’ve seen over the last, say, two to three years as it relates to regulation of AI in healthcare?
Dan Silverboard: I could try. I mean, this is probably a 40-minute answer that I’ll try to condense into a couple of minutes. But let’s start with the one-sentence answer, which is that the federal government really is still figuring it out. It’s figuring out how to regulate artificial intelligence in healthcare while the states have moved forward to regulate areas where they foresee risks from AI in healthcare delivery and financing. So let’s drill down on that a little bit.
So starting at the executive branch of the federal government, going back a few years, we can start with President Biden’s executive order in October of 2023 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” That executive order required federal agencies to make recommendations on how to promote AI innovation, but also how to mitigate potential risks, including risks to such things as consumer privacy and civil rights, among other things; think “algorithmic discrimination,” as that is a key term. And in healthcare, the EO had some very specific action items for healthcare agencies to undertake. So for example, it required HHS to develop a strategic plan to determine and make recommendations for legislation on how artificial intelligence can be responsibly used in the coverage determination process, quality improvement, benefits administration and all other areas of healthcare delivery and financing. And HHS actually adopted a strategic plan in January of 2025, but it really is probably gathering dust on the shelf right now, because President Trump, on the day he took office, rescinded that executive order and issued his own executive order asking federal agencies to take a fresh look at how to facilitate U.S. leadership in AI. The result of that is this artificial intelligence action plan and corresponding executive orders, which I believe Sarah will discuss in further detail. And it is worth noting that, although Biden’s executive order was rescinded, there are still some Biden-era regulations about the use of AI in healthcare that are on the books. So for example, there are still CMS requirements for how Medicare Advantage organizations can use AI in the claims review process.
So with respect to Congress, in the last session of Congress, there were 50 hearings held on artificial intelligence. There were numerous bills introduced in both chambers that would have impacted the use of AI in healthcare. But in the end, Congress did not enact any legislation, and it still has not. I think the consensus, and this is bipartisan, is that Congress is still studying artificial intelligence to understand the benefits and risks involved in the healthcare industry. Senate Finance Committee Chairman Mike Crapo had a good quote on this where he said, “Congress needs to get over its learning curve before trying to change or create laws.” So that’s where we are with Congress.
And then we get to the states. I think in 2024, something like 31 states adopted legislation impacting AI. And we can get into this later in the podcast, but I would note that the states are mainly enacting legislation in areas of healthcare that they have traditionally had jurisdiction over: the operations of commercial insurance and the standards and practices of healthcare providers, think the board of medicine regulating doctors and physician assistants, for example. That’s where it’s coming up. And again, Sarah will discuss this further, but Congress, of course, considered this moratorium on state AI legislation in the One Big Beautiful Bill Act, and that ultimately did not survive. So state legislation on artificial intelligence in healthcare, as we sit today, is relatively safe. I think that’s a pretty healthy summary to get us started here.
Morgan Ribeiro: That is awesome. So while there’s no single overarching federal AI regulation for healthcare, several agencies play a role in overseeing the technology’s use, and that of course continues to evolve, too. But John, can you fill us in on the federal agencies and the role they are currently playing in governing the use of AI in healthcare?
John Vaughan: Certainly. I think there are three categories of federal agencies to think about. First is HHS, the Department of Health and Human Services, which finalized rules under Section 1557 of the Affordable Care Act last year that prohibit discrimination in healthcare programs. Those rules do still apply to AI-based tools, so if a hospital uses an algorithm to support clinical decisions, it has to make sure that the system complies with civil rights protections and doesn’t introduce or perpetuate bias based on race, sex, age, disability or other protected classes. In January of this year, the president issued an executive order rescinding the Biden Administration’s interpretation of that rule. So while the rule stands, the interpretation that the Biden Administration issued in guidance has been rescinded. But it’s still a factor that anyone who manufactures or uses these products needs to consider when using AI tools to make clinical decisions or clinical decision support (CDS) tools.
The second is that HIPAA, which most folks are familiar with, still applies. HIPAA is the Health Insurance Portability and Accountability Act, and it protects PHI, including the use of PHI in AI-based systems that access or generate patient data. So folks who are in this space need to think about de-identification, tokenization, data security and auditability under HIPAA, because that still remains good law.
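For teams wiring patient data into AI systems, here is a minimal Python sketch of the tokenization idea: swapping direct identifiers for keyed tokens before records reach a model. The field names, key handling and record layout are illustrative assumptions, and actual HIPAA de-identification must satisfy the Safe Harbor or Expert Determination method; this is a sketch of the mechanics, not a compliance recipe.

```python
import hashlib
import hmac
import os

# Illustrative only: HIPAA de-identification must follow the Safe Harbor
# or Expert Determination method; this shows just the mechanics of
# swapping direct identifiers for keyed tokens before AI processing.

# Key kept outside the AI system; "demo-key-only" is a placeholder.
SECRET_KEY = os.environ.get("TOKENIZATION_KEY", "demo-key-only").encode()

# Hypothetical set of direct-identifier fields to tokenize.
DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address"}

def tokenize(value: str) -> str:
    """Deterministic keyed token: the same patient maps to the same token,
    so records can still be linked without exposing the identifier."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def deidentify(record: dict) -> dict:
    """Replace direct identifiers with tokens; pass clinical fields through."""
    return {
        field: tokenize(str(value)) if field in DIRECT_IDENTIFIERS else value
        for field, value in record.items()
    }

patient = {"name": "Jane Doe", "mrn": "12345", "age": 47, "dx": "E11.9"}
print(deidentify(patient))  # identifiers become opaque tokens; clinical data intact
```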
And the third federal agency that’s interested in regulating AI is the ONC, the Office of the National Coordinator for Health IT. They issued guidance emphasizing transparency, interoperability and data set quality, and I think we should talk about data set quality a little later in this podcast. But each of those three pillars of ONC regulatory oversight still remains. And regardless of administration, AI is dynamic and constantly changing. There is going to be a lot of change in this space in the next couple of years, and different jurisdictions are going to make different decisions about how they will regulate healthcare uses of AI. So it’s going to be an interesting five to 10 years to come as AI ramps up in the healthcare space and regulators start to regulate at the state, federal and international levels.
Morgan Ribeiro: Great. OK, so Sarah, the recently passed budget reconciliation bill, or One Big Beautiful Bill Act, was signed by President Trump on July 4. There’s a lot of attention on that bill in the healthcare space, in particular around the Medicaid components of it, and obviously a number of other things are getting much of the attention. But there was a controversial provision of the bill passed by the House that would have imposed a 10-year moratorium on state and local regulation of AI systems. This moratorium was removed in the Senate version, which was ultimately signed, and therefore it is not in effect. Can you tell us more about the removal of that provision and what it means? I think our clients would be interested to know where we go from here and what this means for them as they implement various AI software capabilities into their systems.
Sarah Starling Crossan: Yeah, absolutely. I’d be more than happy to. I think a good place to start is by breaking down the political and social underpinnings of the bill. And I also want to note that certain iterations of the proposal included a $500 million investment to modernize federal health IT infrastructure. Supporters of this provision argued that it would prevent what we’re beginning to see now: a patchwork of state-level regulations that could ultimately stifle innovation and create compliance challenges for businesses operating in multiple jurisdictions. Critics, including several lawmakers, whom I’ll talk about in a bit, expressed concerns that the moratorium would hinder an individual state’s ability to address harms from AI, particularly issues relevant to the state, such as deepfakes, discrimination and prior authorization. The example I’ll use is Senator Marsha Blackburn, who is a second-term Republican senator from Tennessee. For context, Tennessee just passed the state’s first AI legislation last year, which addressed deepfakes as they apply to musicians, relevant for Music City. That bill protects individuals’ digital likeness from unauthorized AI use. So Senator Blackburn was really at the helm of the efforts to strike the provision from the reconciliation bill. The provision would have, practically speaking, restricted funding for states that chose to defy the moratorium and move forward with implementing AI policies at the state level. There was some back and forth on reducing the 10-year moratorium down to five years, conversations mostly had between Senator Blackburn and Senator Cruz, but ultimately Senator Blackburn’s efforts, alongside Democratic Senators Cantwell and Markey and Senator Collins, a Republican from Maine, were successful. They were able to strike the moratorium from the reconciliation bill in a 99-1 vote.
Morgan Ribeiro: So just in follow-up to that, Sarah, we’ve seen the White House release an AI action plan. There’s not a ton that’s specifically called out for healthcare, but is there anything noteworthy from that plan that you would share with our listeners?
Sarah Starling Crossan: Sure. I’ll just say, and it is very recent, so I, as well as other members of our team, am still digging through it, but this is a broad indication of the direction of the administration on several priorities, including healthcare. Healthcare is only mentioned, I think, three times explicitly, but the plan does talk a lot about science, so there are implications for research. A lot remains to be seen. There are no explicit deadlines given in this action plan, so the timeline, and when directions to establish certain governance structures or otherwise will come, remains to be seen. So I think this is a bit of a wait and see, and we will be able to determine the true impacts at a future date. This coincides with the release of three executive orders, issued subsequent to the AI action plan, which focus on exports, infrastructure and the removal of what is deemed DEI as it relates to AI/ML model development and deployment. So when we’re considering this action plan and the slew of executive orders, it’s really essential that developers, deployers and ultimately end users of these technologies (that’s patients, clinicians, researchers) engage with their local and state lawmakers as proactively as possible, because that’s really where we’re going to see the action here.
Morgan Ribeiro: Awesome. Dan, back to you. In the absence of a comprehensive federal framework, states are increasingly enacting their own laws to regulate AI in healthcare, addressing various aspects of the technology’s use. I was curious if you might be able to bucket or categorize some of the key areas that these laws are focused on.
Dan Silverboard: Sure. So as I mentioned at the outset in the introduction, states are legislating in areas where they have typically had jurisdiction. So the buckets are really like the regulation of insurance companies and the clinical practice of healthcare professionals. I mean, there’s some exceptions to that. Colorado, for example. But that’s where the bulk of the legislation is.
So take insurance, for example, where we have seen really the most activity and where the states actually seem the most aligned: how insurance companies can use AI in the claims review process, in particular prior authorization requests. Red states and blue states alike (not all, but a lot) have adopted laws that effectively prohibit insurance plans from using AI alone to reject or deny a prior authorization request. A healthcare professional has to be the sole decision maker in that process. So for example, California, Illinois, Texas, Arizona, Maryland, Nebraska, Oklahoma and my great state of Georgia have all enacted laws with those restrictions in place. Where there’s some variation, some states like Colorado, California and Maryland have the added component of requiring insurance companies to ensure that their use of AI does not result in algorithmic bias or discrimination. And really, that places a duty of care on the insurers to fulfill that mandate.
So with respect to providers and the use of AI in clinical practice, there are not as many laws. But the laws that are there focus on disclosure requirements and, to some extent, set a standard of care for the use of AI. So for example, this new Texas law, the Texas Responsible Artificial Intelligence Governance Act, which was signed by Governor Abbott at the end of June, requires providers to disclose their use of AI to patients at the time of treatment. And New Mexico has a similar disclosure requirement. Mississippi actually goes a step further and requires a physician to obtain the patient’s informed consent prior to using artificial intelligence in clinical practice. By contrast, California doesn’t have any kind of disclosure requirement like that. Its requirement is much thinner, I would say: It just requires providers to disclose when generative AI is used in patient communications. But by no means are there laws that prohibit providers from utilizing certain kinds of artificial intelligence. Having said that, there is a standard of care that emerges when looking at these laws, which is that a provider can use artificial intelligence to treat a patient, provided that they review the recommendations of the artificial intelligence and exercise their own professional discretion in following those recommendations. And again, a common theme is that the healthcare provider has to be the ultimate decision maker.
Morgan Ribeiro: Dan, are there any other states besides Texas that are worth calling out just based on the legislation that we’ve seen passed over the last year or so?
Dan Silverboard: I think it varies based on the stakeholder. If we were trying to classify states by which is easiest or which is hardest, clearly the states that haven’t adopted legislation on AI, like Florida, for example, would be more flexible. But when I think about states where there’s regulatory burden, let’s call it that, on the use of artificial intelligence, I think a little bit about Colorado, which requires developers and deployers of artificial intelligence to ensure that their use does not result in algorithmic discrimination and, as part of that duty, requires them to undertake periodic impact assessments to ensure that result. So in terms of regulatory burden, I would probably point to Colorado as the, quote, “strictest” at this time.
Morgan Ribeiro: I’m getting a little bit in the weeds here. Yeah, John, go ahead.
John Vaughan: Because I sit in California, I just wanted to also note two implications or trends that you can see with the lack of federal preemption. Number one, California has passed, I think, five or six laws that pertain to healthcare uses of AI in some way, shape or form, and the California attorney general has given guidance on how the state is looking to enforce existing law against AI. So with federal preemption off the table for the next decade or so, and the proliferation of state regulation of AI, you’re going to see larger jurisdictions such as California, Florida, Texas, New York, Georgia and Illinois each make different policy decisions, and that is something particularly providers are going to need to think carefully about when they adopt new AI technologies into their networks, because regulations in California are likely to be different from regulations in Texas, which are likely to be different from regulations in Florida. So folks need to be thinking, as Dan discussed with these state laws, not only about what has been passed, but about what is coming down the road in each of these jurisdictions, because there’s a lot going on, states are going to be very eager to regulate this, and it’s going to be a bit of a challenge for providers that have practices in a number of different states.
Dan Silverboard: I kind of wanted to open up a question to the group that I was thinking about, which is that, let’s pretend for a moment that the moratorium in the One Big Beautiful Bill Act had passed. What of these healthcare laws at the state level would have been preempted? And I come at this from the perspective of, the states are regulating things that they have traditionally regulated that the federal government just does not, like the practice of medicine and requirements around that and then the operations of insurance companies. What are your guys’ thoughts about what would be preempted?
John Vaughan: There were a couple of lines in one of the draft bills, and I don’t have them at hand here, Sarah, that would have prohibited states from making any regulatory decisions with respect to AI. Full stop, for the next decade, right? So no new regulation. And that would have had the effect, I think, of ensuring that the agencies we talked about, like HHS and ONC, would take a more leading role in interpreting what regulations were required for healthcare applications of AI. Without that, it’s hard to know exactly what would have been preempted. But if you take a step back, the interesting thing for lawyers, and I think the painful thing for a lot of providers and healthcare systems, is that in the absence of any federal preemption regime for the next decade, states are very free to make their own policy decisions, and bigger states are probably going to make policy decisions that bring smaller states along with them. So it’s going to create an interesting patchwork of regulatory decision-making, and I think that is going to be interesting and a real challenge for most of our clients over the next decade.
Morgan Ribeiro: Sarah, anything you’d add to that?
Sarah Starling Crossan: Sure. I mean, I’ll just reiterate what I spoke to earlier, which is that under the provision in its final iteration, before it was voted out of the Big Beautiful Bill, states deemed defiant of the moratorium, those implementing AI policies and legislation at the state level, would potentially have been restricted from funding. So I think that’s really the important part there: In theory, a state could create legislation, and John, correct me if you have a different read of this, but enforcing or implementing that legislation is where it would risk being restricted from funding.
John Vaughan: Yeah, that’s right. If a state, broadly speaking, passed a regulation that had an effect on interstate commerce, and the federal government had preempted that category, then yes, you would see that state regulation either paused or overturned. So it would have been just a different environment than we’re in right now. And as you mentioned, Sarah, the administration put out a guidance document, I would call it directional, about what the administration’s priorities are for AI over the term of this administration. When you put that all together, it’s just a very different environment post-Big Beautiful Bill than it would have been had there been a moratorium. So you’ll see a lot of innovation, but you’re also going to see a lot of regulation of very discrete things. In California, for instance, there were laws passed about AI in clinical communications, the Physicians Make Decisions Act, neural data privacy exemptions. When you look at some of these little minor categories, that’s where this can get really complicated very, very quickly for clients, because California is a very big state. You can’t ignore that law. But there’s an administrative burden to implementing all these regs. And as a California lawyer, traditionally, California has relied on the fact that it is the largest state in the union to drive policy for the rest of the country. Now you see other states that are becoming just as populous as California, like Florida and Texas, with their own regulatory regimes and their own priorities. So they’re less willing to say, OK, California requires this, therefore we’re going to go along with it. So it is going to be a very interesting couple of years. After this administration, the next administration is going to basically inherit a patchwork of some states with heavy regulation and some states that have not gotten around to regulating as heavily. And you will see AI getting developed and regulated outside of the country. So all these different trends are going to have unknown effects on the rates of adoption, enforcement and just the practical compliance burden of adopting some of these platforms for a lot of our clients.
Morgan Ribeiro: OK, so John, you just mentioned something, which is a great segue into my next question. We’ve been talking a lot about what’s happening at the federal and state level in the U.S., but how is what’s happening in the EU and other countries going to dovetail with what’s happening here?
John Vaughan: It’s such a great segue, Morgan, right? Because as the United States has taken more of a laissez-faire attitude toward these platforms, it remains difficult for any developer to build multiple different platforms for the same purpose. The EU is driving regulation and AI policy very much the same way it has become the world leader in privacy; whether you like it or dislike it, the EU has a lot of power. The EU passed an AI Act in March 2024, which I believe is the first comprehensive legal framework for artificial intelligence. It classified, or stratified, AI systems by risk, and any AI system used in healthcare is automatically deemed high-risk under the EU AI Act. That triggers a bunch of requirements for manufacturers or for users, depending on who you’re talking to in the EU. Broadly speaking, manufacturers are required to conduct conformity assessments. A lot of folks would know what that looks like; it’s like a CE marking type thing, so the EU wants to see certification of conformity before there’s market access. The AI Act also imposes transparency and explainability mandates on manufacturers as they adopt things in healthcare. And human oversight: There has to be human oversight over any AI system. You see that reflected, by the way, in the California approach to a lot of these issues. We in California adopted a very GDPR-like privacy law out here, and similarly, we’re also making sure that a lot of these laws require human oversight, so in California, we talked about the Physicians Make Decisions Act, a physician, a healthcare provider, needs to be the one making decisions with respect to patient care on AI platforms. I think there is going to be a directional difference in that the EU is very interested in bias monitoring and mitigation, and this administration has made clear it is not as interested in an expansive definition of what bias monitoring and mitigation might be. And the final difference between the European Union and the U.S. federal or state approaches is that there is mandatory registration in an AI database for all manufacturers in the European Union. The most important thing for a U.S. audience, and I imagine that most of the folks listening to this podcast are going to be U.S.-domiciled companies or U.S. citizens, is that EU law applies extraterritorially. EU AI law applies no matter where you are in the world: If you’re directing a product into Europe, you must be in compliance with these laws and regulations. And so again, that’s why it’s an interesting time for folks, because manufacturers are likely to be guided by more conservative jurisdictions, like a California or the European Union, as they make decisions. In the absence of an overarching federal policy, let’s call it a vision for uses of healthcare AI, you’re going to see manufacturers adopt some of these more conservative standards, which, paradoxically, may throttle some of the innovation that the administration clearly hopes will occur here in the United States. But we’ll just have to wait and see how this all works out.
Morgan Ribeiro: Yeah, I mean, it’s complex. Think about a client, say a healthcare system, that has physical locations or facilities in multiple states and has to track all of that. The three of you have taken the lead, as Dan mentioned in his introductory comments, with the support of many others across the firm, and will continue to track this with the launch of our Health AI Navigator, which is a resource for our clients and others to stay up to date on the latest federal and state legislation and regulation. For healthcare organizations and developers, understanding and adhering to this evolving landscape is crucial for responsible AI adoption. To me, as, you know, not an expert in this field, it seems rather overwhelming to keep up to date and up to speed with each state and its different regulatory requirements and the guardrails that have been put around this. But given the absence of a federal moratorium and the fact that we could potentially see 50 or so different sets of regulations, I’d be curious to hear from each of you your tips for our healthcare clients. And again, as several of you have mentioned, that looks different depending on who the stakeholder is. Are you a provider, a payer, a digital health company, an investor who has invested in some of these digital health platforms that have AI components? Are you a pharma or device company? It really varies based on your business line. But I would love to get some practical guidance from you all for clients. It’s one thing to follow the policy, but how does that apply to their actual operations and their business? So John, I’ll start with you.
John Vaughan: Sure. I’ve got about three or four things that I think anyone who’s involved in this space should be thinking about. Number one, build an AI governance program early. Regardless of whatever turn you see in the regulatory environment, or whatever litigation you see on any of these issues, the most important thing for someone utilizing AI tools in healthcare is to have a point of view on compliance, a point of view on governance, and stick with it until the law tells you otherwise, right? I think that’s really important. A lot of folks dive into this space without thinking about the appropriate level of governance they need.
The second thing is that you should be running bias audits continuously on all of these programs. And by bias, I encourage folks to think expansively, not just about the civil rights-type issues, bias against, you know, gender or race or sexual orientation, which are all important. I’m also thinking about bias in general: What is your platform doing? What is it coded to do? What do your engineers think it’s doing? And is it getting that particular job done? If you have a platform like this, you don’t want to exclude communities that have distinct healthcare profiles or distinct healthcare interests, because otherwise you’re developing a tool that’s not going to serve all of your patients.
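To make that kind of continuous bias audit concrete, here is a minimal Python sketch that compares a model’s positive-outcome rates across patient subgroups and flags large gaps. The group and outcome field names and the four-fifths-style threshold are assumptions chosen for illustration, not a legal or clinical standard.

```python
from collections import defaultdict

def audit_outcomes(records, group_field="group", outcome_field="approved",
                   min_ratio=0.8):
    """Compare positive-outcome rates across subgroups and flag gaps.

    The 0.8 default echoes the common "four-fifths" heuristic, used here
    purely as an illustrative tolerance, not a legal standard.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_field]] += 1
        positives[record[group_field]] += bool(record[outcome_field])

    rates = {group: positives[group] / totals[group] for group in totals}
    best = max(rates.values())
    flagged = [g for g, rate in rates.items() if best > 0 and rate / best < min_ratio]
    return rates, flagged

# Toy run with made-up records; a real audit would pull from logged model outputs.
records = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "B", "approved": True}, {"group": "B", "approved": False},
]
rates, flagged = audit_outcomes(records)
print(rates)    # {'A': 1.0, 'B': 0.5}
print(flagged)  # ['B'] -- disparity beyond tolerance; investigate before relying on it
```

Running a check like this on every scoring cycle, rather than once at launch, is what makes the audit continuous in the sense John describes.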
Third, I would prioritize transparency with patients and providers. I saw some data recently suggesting that roughly half the country is excited about AI adoption and roughly half is quite concerned about it. And that 50-50 split holds even within parties: You see even amounts of Democrats, even amounts of Republicans, even amounts of independents on either side of this, right? So this issue right now is not right- or left-coded. Folks are either concerned about these technologies or they’re embracing them. And so it’s really important to prioritize transparency, particularly with patients, so folks understand what you’re doing, what the program does, what its limitations are and how their data will be used.
And the last thing that I think everyone should be thinking about is investing in data quality. I’ll give you an example. If you are developing an AI program for drug discovery where you’re trying to figure out whether a particular molecule can work in different patient populations that haven’t been studied, well, you need to have all patient populations in that database. Otherwise, you’re going to miss things, right? So it’s really important to understand where your data comes from. Is it high-quality data? And is it the full data set that you need in order to draw good conclusions? Because the AI is not going to be aware if there are other data sitting out there that might have changed its assessment. AIs excel at pattern recognition, and to allow an AI to excel at pattern recognition, you need to give it as much relevant data as possible so that it can detect those patterns.
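One lightweight way to act on that data-quality point is to check subgroup representation in a training set against a reference population before building anything on top of it. A hedged Python sketch, with all counts, group names and shares invented for illustration:

```python
def coverage_report(dataset_counts, reference_shares, tolerance=0.05):
    """Flag subgroups whose share of the training data deviates from a
    reference population by more than the tolerance. All figures here
    are illustrative placeholders."""
    total = sum(dataset_counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = dataset_counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            "flag": abs(observed - expected) > tolerance,
        }
    return report

# Hypothetical counts from a training set vs. made-up population shares.
counts = {"group_1": 650, "group_2": 320, "group_3": 30}
reference = {"group_1": 0.60, "group_2": 0.30, "group_3": 0.10}
for group, row in coverage_report(counts, reference).items():
    print(group, row)
# Only group_3 is flagged: it is under-represented, so patterns specific
# to that population are likely to be missed by the model.
```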
Morgan Ribeiro: That’s awesome, thank you. Dan?
Dan Silverboard: John hit the nail on the head with having an AI governance plan in place and really making sure, through periodic auditing, that the artificial intelligence platform being utilized is functioning as intended. I think that’s critical. I think it’s also important to remember we’re in the early days of artificial intelligence laws and regulation, right? I mean, the federal government has not even adopted any comprehensive legislation on AI. So continuing to track the laws and regulations, at least periodically, is imperative, because the law is quickly evolving: If you’re a provider, stay on top of what’s happening in your state; if you’re a digital health developer, make sure you understand the limitations at the state level and the federal level. So I’d say monitoring the law, making sure your artificial intelligence product is working as intended, and, if you’re outsourcing AI, making sure your vendor contracts have reps and warranties around compliance with law and require auditing are some of the key guardrails. I would also add that the National Institute of Standards and Technology has a pretty good AI Risk Management Framework on its website that can be used to help formulate an AI governance plan and policies. I think that’s a pretty good tool to start from. Those are my recommendations.
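For teams starting from the NIST AI Risk Management Framework, a sketch of how its four core functions, Govern, Map, Measure and Manage, might seed an internal checklist could look like the following Python; the individual checklist items are assumptions for illustration, not text from the framework.

```python
# Skeleton checklist keyed to the NIST AI RMF's four core functions
# (Govern, Map, Measure, Manage). The items under each function are
# illustrative starting points, not language from the framework itself.
AI_GOVERNANCE_CHECKLIST = {
    "govern": [
        "Designate an accountable owner for AI governance",
        "Inventory every AI tool in clinical and administrative use",
    ],
    "map": [
        "Document intended use and patient populations for each tool",
        "Identify applicable state disclosure and consent laws",
    ],
    "measure": [
        "Schedule periodic bias and performance audits",
        "Log model inputs and outputs for auditability",
    ],
    "manage": [
        "Require vendor reps and warranties on compliance with law",
        "Define escalation and rollback steps when an audit fails",
    ],
}

def open_items(completed):
    """Return every checklist item not yet marked complete."""
    return [
        item
        for items in AI_GOVERNANCE_CHECKLIST.values()
        for item in items
        if item not in completed
    ]

print(len(open_items(set())))  # 8 items open at the start
```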
Morgan Ribeiro: Awesome. Sarah, anything to add to that?
Sarah Starling Crossan: Yeah, I think John and Dan covered a lot of it, but what I would reemphasize is that in the development and deployment of these technologies, we think about the end user. Depending on what the technology is, that will vary, right? But at all stages of this process, we should be considering the needs, comfort levels, trust and all of those important things when we’re developing and deploying these technologies. And then, echoing and adding to what John said about data: Not all data is good data. Just because something is sitting in a database somewhere doesn’t mean that it’s accurate, trustworthy or useful. So make sure your data is clean, that you’re using it for what it is intended for and that it applies to what you are trying to achieve.
Morgan Ribeiro: Awesome. Well, I think all of that is really helpful guidance for our clients. I know this will not be the end of this conversation. I imagine we will have subsequent podcasts and other pieces of information that we push out to clients just as this landscape continues to evolve, both at the federal and state level and even maybe internationally. Appreciate everyone’s time today. Anything else you guys want to add?
John Vaughan: I would just remind everyone that Dan led the charge in this effort, but the Health AI Navigator is a live, constantly updated, cross-disciplinary resource, and it helps our clients keep up with all of this.
Morgan Ribeiro: Thank you very much. Great to chat with you all today.
Dan Silverboard: Thanks everybody.