
False Claims Act Insights – An FCA Perspective on Artificial Intelligence in the Healthcare Industry

By admin | August 26, 2025


This transcript has been auto generated

00;00;00;00 – 00;00;21;29

Jonathan Porter

Welcome to another episode of Husch Blackwell's False Claims Act Insights podcast. I'm your host, Jonathan Porter. All right, so AI is all the rage right now. Some people have predicted a lot about how AI will change our world. Some are predicting a revolution, some are predicting a bubble, and the jury's still out on exactly what the lasting impact of AI will be.

00;00;21;29 – 00;00;49;15

Jonathan Porter

But I think it’s safe to say that the promises of AI are tremendous. And of all the industries that could be revolutionized by AI, I think healthcare really could be number one. AI could significantly improve the practice of medicine. Early studies are just fascinating to read. In theory, if you give AI full access to all the patient data and build a system around it, patients could receive like, amazingly better outcomes.

00;00;49;18 – 00;01;12;05

Jonathan Porter

AI could spot things that doctors missed. It could prompt doctors with helpful next steps. It could take some of the more burdensome parts of the practice of medicine off of doctors' shoulders. This could be really big, but it could also lead to some real enforcement problems for the health care industry. For every promise of AI in healthcare, there's a past cautionary tale that could help show the pitfalls that we're supposed to be looking out for.

00;01;12;06 – 00;01;39;24

Jonathan Porter

And so on today's episode of the podcast, we're talking about AI in healthcare, both the promise of AI and the enforcement risks. Joining me to talk about AI in healthcare is my friend Andy Sobczyk. Andy is a vice president with the Coker Group, a well-known advisory firm for health care providers. Andy is called in by physician groups and health systems when they need a consult on strategic, operational, or other issues.

00;01;39;26 – 00;01;58;29

Jonathan Porter

And so we're calling on Andy today to help us out with this topic of AI in healthcare. And Andy, we should start by alerting our listeners that there's a bit of, like, mutually assured destruction going on in the background of today's episode because, people may not know, Andy and I are college buddies.

00;01;59;06 – 00;02;21;01

Jonathan Porter

And so, Andy, I know you’ve got stories about me, and I know I’ve got stories about you. And our deal here is that neither of us is going to tell those stories on the air today. And so I think, Andy, what we should do instead, I think we’ve agreed that we’re going to tell our listeners that we used to spend our Friday nights in college leading robust discussions about health care policy at the fraternity house.

00;02;21;01 – 00;02;30;25

Jonathan Porter

So, Andy, I don't know that people are going to buy that, but let's try to stick with that story for now. But seriously, Andy, thank you for joining the podcast and telling our listeners about this really interesting topic, AI and healthcare.

00;02;31;02 – 00;02;47;18

Andy Sobczyk

Thanks for having me, Jonathan. Agreed, we'll keep those stories in the past and focus on the topic today. The Friday evening chats about health care policy and the Saturday afternoon gatherings to talk about legislation, yeah, I remember those fondly.

00;02;47;20 – 00;03;02;08

Jonathan Porter

Absolutely. No one's buying that, but let's see if we can pull it off. So Andy, let's start with the big picture. One of the big things that I previewed in the open is that AI could be just awesome if you turn over a bunch of information to it and you let AI guide the practice of medicine.

00;03;02;08 – 00;03;27;13

Jonathan Porter

People call this clinical decision support, among other things. There have been a bunch of really interesting studies that show the amazing outcomes that patients can have. One of the big areas where it can have really cool outcomes is radiology, where if you add an AI radiology element to clinical decision support, there could be all these new ways that patients can receive better diagnoses or treatment plans.

00;03;27;13 – 00;03;35;01

Jonathan Porter

And so, Andy, start us out by telling us about how AI could be used by health care providers in a clinical decision support way.

00;03;35;04 – 00;03;59;05

Andy Sobczyk

Absolutely. And I think even just taking a quick step back in terms of the adoption curve here, there are surveys from the AMA and other sources that are trying to figure out how systems and practices are adopting this. Just last year, the AMA survey had about two-thirds of physicians saying that they're using AI in a meaningful way, which is pretty crazy because before that it was a pretty low number.

00;03;59;05 – 00;04;19;25

Andy Sobczyk

And then you've got hospitals and health systems in the 25% to 30% range using AI in a systematic way. Health care tends to lag pretty far behind, but I'd say these last few years, the speed of adoption and just the interest has gone from something on CEOs' radar to almost an imperative for certain areas.

00;04;19;25 – 00;04;43;04

Andy Sobczyk

And so it's very fascinating, I think, on this particular topic with the radiology piece. It's certainly the biggest one in terms of FDA-approved platforms and devices; I saw something where about 70% of what they've approved is in the radiology space. And the benefits apply not just here but I'd say kind of across the board with these use cases.

00;04;43;04 – 00;05;21;26

Andy Sobczyk

A lot of these are tied to saving the physician or the clinician time, making a decision faster, ensuring that they can meet the demands of the job. There's a physician shortage in the country right now, and a lot of hospitals and health systems are grappling with that. And so not only can this technology enhance a radiologist in terms of finding things that they may not have found, but if you take a radiologist and the AI platform working together, they can almost do more in certain situations, they can do just as much as, say, two radiologists, and it can help them prioritize urgent cases a lot better than just somebody sitting

00;05;21;26 – 00;05;38;26

Andy Sobczyk

in a room reading scans and finding like that critical one after about 100 or 200 of them. And so I think a lot of this is you’re saving time on the clinician side. There may be situations where you’re finding something that they otherwise wouldn’t. So like an average radiologist is going to benefit tremendously, you know, from something like this.

00;05;38;26 – 00;06;01;08

Andy Sobczyk

It may not quite improve the very best in the field, but if you bring everybody up to that level, I think the hope and the promise for patients is pretty impressive and pretty exciting. On the other end, if you get a mistake in there, the algorithms can have certain biases that show up, maybe patterns that they're detecting that aren't real patterns, and things of that nature.

00;06;01;08 – 00;06;11;08

Andy Sobczyk

And so I'm curious about your thoughts on what you would want to worry about or be focused on to protect yourself as a health system or a practice when you're pursuing something like this?

00;06;11;11 – 00;06;47;23

Jonathan Porter

Yeah, good question, Andy. I think the top enforcement risk when it comes to clinical decision support in health care is the Practice Fusion example from a few years ago. This was a big $145 million global resolution with DOJ in 2020. Practice Fusion ran an EHR system that offered clinical decision support for physicians. The issue there is they were taking what was later deemed to be kickbacks from pharmaceutical companies in exchange for extra alerts or prompts to physicians to prescribe or administer more of the company's products.

00;06;47;23 – 00;07;13;16

Jonathan Porter

And the reason that this got on enforcers' radars was that the pharmaceutical product was opioids. So opioid manufacturers were paying this clinical decision support model to do extra pings, extra pings reminding doctors, hey, give your patient more opioids. I think DOJ's allegations went as far as saying they even let the pharmaceutical company craft the language that was being used in the alerts.

00;07;13;16 – 00;07;36;27

Jonathan Porter

And so I think the big high-level enforcement risk there is: don't take money from people that could gain from the alerts that you're going to have. That's a big no-no. But in terms of other enforcement risks in this area, I think in general, letting AI analyze everything and suggest potential diagnoses, tests to drive a diagnosis, treatment plans, that's all fantastic for patients.

00;07;36;27 – 00;07;57;25

Jonathan Porter

I love all of that. But you can't let dollars get in the way of how the treatment goes. We're going to talk more about optimization in a little bit, but the big thing is you can't game the system; it has to be about better treatment plans. And then when it comes to radiology in particular, I think it's one thing to let AI help guide a radiologist's decisions.

00;07;57;25 – 00;08;16;15

Jonathan Porter

And then the radiologist bills for that. But you can't just phone it in. You can't just turn it all over to AI and then go to Bermuda and hang out and just keep billing Medicare. That would be a problem as well. So there's a bunch to be aware of, but the promise is amazing. When I'm guiding clients, I mean, number one is patient outcomes could be phenomenal.

00;08;16;15 – 00;08;25;27

Jonathan Porter

And so let's keep that going. Andy, given the sort of risks and rewards here, what should health systems be thinking about when it comes to integrating AI for clinical decision support?

00;08;26;00 – 00;08;43;01

Andy Sobczyk

I think you hit a lot of the pertinent concerns. Takeaway-wise, I think a physician on the other end of a platform will want somebody reviewing, the same way that in certain parts of the country, in certain states, physicians are reviewing all the notes for, like, an APP. You want to have that backing

00;08;43;01 – 00;09;01;06

Andy Sobczyk

because there's liability for the entity and the doctors themselves. And so I think that's very important. But then also, in terms of operationalizing this, involve those stakeholders in the process and don't just, say, drop it on them, where they're logging in one day and all of a sudden they've got a digital partner that they didn't know about.

00;09;01;08 – 00;09;07;25

Andy Sobczyk

And so I think all of those things will help entities that are interested in this, do it the right way and make sure they’re covered.

00;09;08;00 – 00;09;32;04

Jonathan Porter

Thanks, Andy, good wisdom there. So next let's talk about patient-facing care, like chatbots. I think there are certain areas, like behavioral health, where the promise here could be phenomenal, where patients who need support in sort of non-doctor hours could get someone to talk with. And so I think there's a lot of promise when it comes to patient-facing care communication systems run by AI.

00;09;32;06 – 00;09;38;12

Jonathan Porter

But Andy, tell us about patient-facing care with AI and what potential adopters should be thinking about in the healthcare realm.

00;09;38;15 – 00;10;13;12

Andy Sobczyk

Yeah, this is an interesting one. With that two-thirds of physicians adopting the technology recently, I think this is one of the areas where there are a lot of use cases and a lot of people are trying it out. Relative to a risk spectrum, you've got the clinical decision-making front end, which is probably on the higher end. Certain applications of chatbots, using AI to address patient concerns that don't need a doctor or don't need a provider, I think that's a little lower on the risk scale and can do a lot to save organizations time and waste, you know, messages that might fall through the cracks if

00;10;13;12 – 00;10;36;08

Andy Sobczyk

you're just dealing with it with a pool of nurses or MAs or whoever. And so applying AI in those cases, from the clients we work with, has been very positive. We've come a long way from Clippy in the Word document. When you talk to these technologies, the AI can actually look at a patient chart when there's a question and provide answers to certain questions.

00;10;36;10 – 00;10;57;18

Andy Sobczyk

You know, in terms of my considerations, you certainly want to make sure the guardrails are in there: you don't want the technology to be addressing things that escalate a little too far up the decision-making scale. You want to make sure there are support staff, nurses, and whoever else involved, kind of overseeing that. You certainly don't want it to just freewheel in a lot of cases.

00;10;57;18 – 00;11;19;16

Andy Sobczyk

But in terms of saving time and providing a better experience, think of patients who call and just need a simple answer and have to hold on a phone tree, or have to wait 2 to 3 days because somebody's in-basket is really full and they can't get around to it. I think this makes for a better experience across the board, and in some cases, it could get patients to their appointment and into the care that they need a lot quicker.

00;11;19;19 – 00;11;43;26

Andy Sobczyk

And so, you know, to the theme of the day, the promise of it is tremendous. In terms of patient information, how much is accessed by the technology, and how much organizations allow the technology to speak to that and make decisions or guide patients, I think it'd be interesting to hear, on your end of the fence, where organizations need to be wary or be aware of potential pitfalls.

00;11;43;29 – 00;12;07;26

Jonathan Porter

Yeah, I'm with you, Andy. The promise here is phenomenal. I'm really excited about the promise, especially in behavioral health, where there are particular needs at particular hours. And I read a study a year or two ago that said that AI's bedside manner is actually better on average than most doctors', which is fascinating. You wouldn't think that AI would have better bedside manner than doctors, but apparently, you know, at least one study said that it does.

00;12;07;28 – 00;12;31;16

Jonathan Porter

That said, I can't help but think of the stories that we've heard over the last few years, just horror stories of chatbots gone wrong. What comes to mind is Air Canada: they had a chatbot that just made up a bereavement policy out of thin air and told someone they could rely on it. There were stories out of New York City, where the city government's chatbot last year was giving advice to people that didn't comply with the law.

00;12;31;18 – 00;12;51;28

Jonathan Porter

There was a story recently about the National Eating Disorders Association, which removed its chatbot after the bot started giving people bad advice that would have been harmful to their health, harmful to their eating disorders. And so there are just too many horror stories here to ignore. I'm sure AI is going to get better at some point, but do you really want to be the next headline?

00;12;51;28 – 00;13;09;28

Jonathan Porter

I mean, think of the catastrophic things that could happen for a hospital. I don't think any of my clients want to be in the news for some chatbot gone wrong. And that's not even an enforcement thing, just an unfortunate thing. But I think any time you get bad PR, you're increasing your enforcement risk, and there are two real ways that could happen.

00;13;09;28 – 00;13;30;13

Jonathan Porter

One, if you do get investigated, DOJ a lot of times assumes health systems and physician groups have really good processes in place, and you're going to get sort of a presumption. I think if you're viewed as a clown from a PR standpoint, you're going to lose a lot of that sort of goodwill. And the other way is, you know, it's not unheard of for bad media to lead to investigations.

00;13;30;15 – 00;13;54;27

Jonathan Porter

U.S. attorneys read the newspaper, and then they go to a line AUSA and say, this hospital down the road sounds like it's run by people who don't know what they're doing, go check them out. That happens. And so even though an embarrassing chatbot incident doesn't instantly equal False Claims Act problems, it's just not a good idea to me to roll out some system that you aren't very sure is going to be really good.

00;13;54;27 – 00;14;02;14

Jonathan Porter

So, Andy, I assume since all these people are pushing these systems out, that these problems that we're hearing about in the news have been resolved. You think that's right?

00;14;02;14 – 00;14;30;08

Andy Sobczyk

In terms of whether they're mitigated going forward and we've gotten past it, you know, probably not fully. I think for the organizations, practices, hospitals that are considering this, it'd be good to be mindful of what they're going to feed, data-wise, into the systems. I think they would want to have their house in order in terms of, okay, we have escalation guidelines, we have clinical information or guidance that's validated by our top physicians.

00;14;30;08 – 00;14;51;11

Andy Sobczyk

And, you know, that stuff is helping to feed the algorithm, so to speak, as opposed to just dropping it in as a plug-and-play solution. I think the governance of these things is very important to consider. And so I think you can probably run into trouble if somebody just tries to plug and play and lets the algorithm or the large language model kind of run wild with it.

00;14;51;11 – 00;15;12;13

Andy Sobczyk

And so I think it's about being mindful of how you can tailor these things and police them to a certain extent. One of my colleagues who works pretty closely in this space is seeing other large language models policing the main large language model, which is kind of wild if you think about it, AI auditing AI. In this case, I don't know that we need to go that extreme in every situation.

00;15;12;13 – 00;15;28;12

Andy Sobczyk

But I think the concept is that you still need to monitor it. You need to make sure it's grounded on good policies and procedures and the guidance that you give to your staff today, and it's not just an executive making the decision and thinking they can walk away from it and the problem is solved, because that's certainly not the case.
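
To make the monitoring idea concrete, here is a minimal sketch of the kind of guardrail Andy describes: an AI-drafted patient reply is checked against organization-defined escalation rules before anything is sent, and anything that trips a rule is routed to a nurse instead. The function names, keyword lists, and routing labels are illustrative assumptions, not any specific vendor's API.

```python
from dataclasses import dataclass

# Illustrative escalation rules; a real deployment would rely on clinically
# validated guidance, not a hard-coded keyword list.
ESCALATION_KEYWORDS = {"chest pain", "suicidal", "overdose", "bleeding", "stroke"}
MEDICATION_HINTS = {"mg", "dose", "refill", "prescription"}

@dataclass
class DraftReply:
    patient_message: str
    ai_draft: str

def review_draft(draft: DraftReply) -> str:
    """Return 'send', 'nurse_review', or 'escalate_now' for an AI-drafted reply."""
    text = (draft.patient_message + " " + draft.ai_draft).lower()
    if any(k in text for k in ESCALATION_KEYWORDS):
        return "escalate_now"      # urgent symptoms: the bot never answers these
    if any(k in text for k in MEDICATION_HINTS):
        return "nurse_review"      # medication questions get a human in the loop
    return "send"                  # routine logistics can go out automatically

if __name__ == "__main__":
    routine = DraftReply("What time is my appointment?", "You are scheduled for 2 pm Tuesday.")
    risky = DraftReply("I've had chest pain since last night.", "Try resting and drinking water.")
    print(review_draft(routine))   # send
    print(review_draft(risky))     # escalate_now
```

The same pattern extends to the "model policing a model" idea: the rule check is supplemented or replaced by a second reviewer model, with the routing logic unchanged.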

00;15;28;15 – 00;15;53;16

Jonathan Porter

Yeah, all good points, and I hope you're right, because again, the promise is phenomenal, and hopefully there are good ways for us to make sure that it's being implemented the right way. So Andy, one of the things that I think is probably of most interest to the healthcare community, to healthcare providers, is AI as the drafter of clinical notes for physicians. I know approximately zero physicians who went to med school so that they could spend all their time drafting notes.

00;15;53;18 – 00;16;13;18

Jonathan Porter

Doctors hate sitting there after a consult, sitting in a hallway of a health system, typing out notes as quickly as they can. None of them like it, but AI seems to be a potential solution, where they can use AI to help craft the notes that they're required to do. So Andy, tell us a little bit about AI as a drafter of notes.

00;16;13;21 – 00;16;36;05

Andy Sobczyk

Yeah, I think this is the ambient listening world: a device in the room, somewhere between the clinician and the patient, that can understand the conversation and provide input on the clinical note, either populating a template or free-forming what's happening. And then there are some cases where it can even turn it into a bill and suggest coding levels for it.

00;16;36;05 – 00;17;00;18

Andy Sobczyk

And so again, a lot of promise here. Among physicians that we've discussed this with and organizations that have piloted it, I think there's more optimism and more positive results than not. There's a lot of data out there; in certain cases about 80 to 100% of physicians that do this cite that their cognitive load, or the amount of time they spend outside of the exam room on notes, has gone down significantly.

00;17;00;23 – 00;17;18;25

Andy Sobczyk

That helps work-life balance. I think just being a doctor today is a lot harder than it has been in the past, probably since the dawn of the electronic medical record; there's just been a lot piled on. Documenting and having all this transparency is great, but it comes with risk, and it comes with more burden on the provider.

00;17;18;25 – 00;17;42;18

Andy Sobczyk

And so for a physician that is trying to see 20 patients a day now versus before, it's just harder, and these types of technologies are very well received on that side. In terms of implementing it, again, I think it's how you do it. There are some downsides: if you're in your doctor's office, there's a lot that you don't say; there are questions where your physician will ask you something and you'll just nod your head.

00;17;42;20 – 00;18;15;24

Andy Sobczyk

And so you almost need to turn into the flight attendant for the emergency exit row: I need a verbal yes or no to this question. There's a lot that we don't appreciate about those conversations that maybe these algorithms aren't always picking up. So again, it's not the end-all be-all. But in terms of saving time for physicians and their support staff, and potentially efficiency on the back end in terms of teeing up a claim to submit to a payer down the road, I think a lot of waste can be removed from the system with this, and I think our providers could maybe have

00;18;15;24 – 00;18;34;28

Andy Sobczyk

less of a burnout problem, which has been very well documented and published in recent years, and I think this could be something that helps with it. Very promising. But then again, a lot more information is being fed into these algorithms, into these models, as a result, so probably more risk on the back end, too.
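
As a concrete sketch of the kind of back-end check this implies, here is a minimal example that refuses to queue a claim until an AI-drafted note contains the basic elements that would support the billed level and has been signed off by the clinician. The required-section list and word-count thresholds are illustrative assumptions, not CMS rules or any vendor's logic.

```python
from dataclasses import dataclass, field

# Illustrative only: real E/M level support depends on CMS and payer rules,
# not on a simple word count or section checklist.
REQUIRED_SECTIONS = ("history", "exam", "assessment", "plan")
MIN_WORDS_FOR_LEVEL = {3: 80, 4: 150, 5: 250}

@dataclass
class DraftNote:
    text: str
    sections: set[str] = field(default_factory=set)
    clinician_signed_off: bool = False

def ready_to_bill(note: DraftNote, level: int) -> bool:
    """Queue a claim only when the note is complete, supports the level, and a human reviewed it."""
    has_sections = all(s in note.sections for s in REQUIRED_SECTIONS)
    long_enough = len(note.text.split()) >= MIN_WORDS_FOR_LEVEL.get(level, 0)
    return has_sections and long_enough and note.clinician_signed_off

if __name__ == "__main__":
    note = DraftNote(text="word " * 200, sections={"history", "exam", "assessment", "plan"})
    print(ready_to_bill(note, level=4))   # False: the clinician has not signed off yet
    note.clinician_signed_off = True
    print(ready_to_bill(note, level=4))   # True
    print(ready_to_bill(note, level=5))   # False: 200 words is under the illustrative level-5 bar
```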

00;18;35;00 – 00;18;56;13

Jonathan Porter

Yeah, I think that's right. And what's interesting to me is this is very similar to the radiology point that we made earlier. You can't just set this up and then not pay attention to it. You've got to put eyes on the note after AI drafts it, because, as you said, there's a chance that it misses something. There's a chance that there's some nonverbal communication that happens and it's not going to understand the answer.

00;18;56;15 – 00;19;14;19

Jonathan Porter

And then you've got this weird thing in your note that it's not going to know what to do with, or it's going to interpret some silence as something that neither the physician nor the patient intended it to be. So you've just got to look at it, just make sure it's accurate. And that's where I think CMS is starting to voice some concerns about this being implemented improperly.

00;19;14;22 – 00;19;35;07

Jonathan Porter

They put out something last year, maybe it was an alert, that said, hey, just a reminder, for claims to be valid, the provider's record has to have sufficient documentation to verify the services performed, and you've got to show that they were compliant with CMS policies and support the level of care billed. And so they also talked about insufficient documentation.

00;19;35;12 – 00;19;58;11

Jonathan Porter

That means you didn't do your job. And so the question to me is: at what point is whatever AI generates sufficient? I think that turns on accuracy. It's one thing to say, yeah, just get it all right. But if you tweak that a little bit to say, and by the way, add all these things so that I can bill a level five, I think that's where you get into some problems. But I think there are good ways to do it, and it's going to save time.

00;19;58;14 – 00;20;24;05

Jonathan Porter

And to your point about how we have a physician shortage right now, this is a big solution to that, because you're going to make doctors way, way more efficient. So Andy, turning now to one of the other big problems with healthcare, which is just patient flow: there have been a lot of studies about how we can improve patient flow, and AI could actually be very useful in predicting patient flow and helping healthcare deal with capacity management and the like.

00;20;24;05 – 00;20;27;04

Jonathan Porter

So Andy, tell us about how AI could help in that area.

00;20;27;07 – 00;20;48;08

Andy Sobczyk

Absolutely. I think this has a patient experience impact, an impact on scheduling staff, and even on providers. Mistakes get made, like a patient gets put on the schedule for the wrong reason and it's a wasted visit; the provider's unhappy, the patient's unhappy. And so some of the big platforms have their own tools for this.

00;20;48;08 – 00;21;22;28

Andy Sobczyk

There are add-on tools that can do things like predict no-show rates based on the history of a patient, or a type of patient, and a schedule template. And there are situations like infusion centers, where the schedules are very complicated and rely on a lot of different pieces of a process. An infusion schedule could depend on what an oncologist says is the treatment protocol; that patient may need to get scheduled, and it needs to be connected to a lab visit, or to the physician visit that's going to interpret the lab, then see the patient, then they can start, and then they have to layer that

00;21;22;28 – 00;22;03;03

Andy Sobczyk

into dozens of other infusions that happen during the day. And so for a long time, those working in infusion centers had to play some pretty complicated Tetris games to get that schedule right and shift things around to make sure it flowed well. There are certain programs and capabilities now that can take data from previous schedules and start to suggest templates in a dynamic fashion that would be efficient, that align with resources, not just the physicians or the labs but also the pharmacists who have to mix the medications, to be sure to get that right and allow for the maximum appointment capacity, because those barriers prevented infusion centers in a

00;22;03;03 – 00;22;25;03

Andy Sobczyk

lot of cases from seeing as many patients as possible. So some patients have to wait longer to get in, or they're getting in but spending a 6 or 7 hour day for that appointment. And so these types of predictive technology solutions for the schedules can really help on the patient experience side, and then also on the provider side, where they know that their appointments are going to be made correctly.

00;22;25;05 – 00;22;50;04

Andy Sobczyk

The staff aren't going to make a mistake if it's done the right way, and you're going to get the right appointment at the right time. And so I think this is another one where the risk isn't as heavy on the patient information or the clinical decision-making side, but it matters in terms of efficiency for a health system, rooting out the waste, and making sure they're optimizing the practice, the medical group, and the services they're providing.

00;22;50;04 – 00;23;10;02

Andy Sobczyk

I think these types of solutions are really important and will be very prevalent in the future, because it's a problem that's been around for a long time and has been solved in a very rudimentary way by practices and health systems. I think this could really take things to the next level and help with that physician shortage we were talking about; you just kind of do more with less as a theme across the industry.

00;23;10;02 – 00;23;13;19

Andy Sobczyk

And I think things like this can help you even that playing field.
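
As a concrete illustration of the kind of add-on tool Andy mentions, here is a minimal sketch that estimates a patient's no-show probability from visit history and flags slots that a scheduler might safely overbook. The data fields, the smoothing constants, and the 30% threshold are illustrative assumptions, not anything from a specific scheduling platform.

```python
from dataclasses import dataclass

@dataclass
class VisitHistory:
    patient_id: str
    completed: int   # visits the patient showed up for
    no_shows: int    # visits the patient missed

def no_show_probability(history: VisitHistory, prior: float = 0.10, prior_weight: int = 5) -> float:
    """Smoothed historical no-show rate; the prior keeps new patients near a baseline."""
    total = history.completed + history.no_shows
    return (history.no_shows + prior * prior_weight) / (total + prior_weight)

def suggest_overbook(schedule: list[VisitHistory], threshold: float = 0.30) -> list[str]:
    """Return patient IDs whose predicted no-show risk is high enough to justify double-booking."""
    return [h.patient_id for h in schedule if no_show_probability(h) >= threshold]

if __name__ == "__main__":
    day = [
        VisitHistory("pt-001", completed=20, no_shows=1),   # reliable
        VisitHistory("pt-002", completed=4, no_shows=5),    # frequent no-shows
        VisitHistory("pt-003", completed=0, no_shows=0),    # new patient, falls back to prior
    ]
    for h in day:
        print(h.patient_id, round(no_show_probability(h), 2))
    print("overbook candidates:", suggest_overbook(day))
```

A real system would also fold in the dependency logic Andy describes, lab before physician visit before infusion chair, plus pharmacy mixing time, but the prediction-plus-template idea is the same.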

00;23;13;21 – 00;23;29;08

Jonathan Porter

Yeah, absolutely, Andy, there's a ton of positives that could come out of this. And in general, when we talk about capacity management, no one's going to say that you submitted a false claim because you didn't manage capacity properly; that's just not a real enforcement risk. Where you could have some problems is if you implement this the wrong way.

00;23;29;08 – 00;23;56;11

Jonathan Porter

And what comes to mind is a $22 million settlement that the University of Miami entered into with DOJ in 2021, where they had preset tests. If you came in to the University of Miami for this particular thing, you automatically got certain tests run. And the issue there was there were coverage determinations that said, we don't pay for tests unless a physician orders them and says that they're medically necessary.

00;23;56;11 – 00;24;15;13

Jonathan Porter

And they said, you can't just make that determination across the board. So when you're thinking about turning over patient flow and next steps for patients to AI, you've got to have some physician involvement, because there are certain things where LCDs and NCDs really do say you've got to have a physician ordering them. And so you've just got to be aware of what those things are.

00;24;15;13 – 00;24;32;04

Jonathan Porter

But I think there are ways for you to craft your system around that to mitigate against that risk. In general, though, one of the things that I think could create another type of enforcement risk here, Andy, is let's say you've got an imaging center that's really slow, and all of a sudden you're saying, well, let's push more people into our imaging center.

00;24;32;06 – 00;24;51;14

Jonathan Porter

I think DOJ and HHS-OIG would look at that and say, well, you're tweaking what you think is medically necessary based on the capacity of your imaging center. I think they could have a big concern with overutilization if you're not letting medical necessity drive it, if instead you're saying, we've got too much capacity here, let's push things over.

00;24;51;16 – 00;24;53;07

Jonathan Porter

So what do you think about that, Andy?

00;24;53;09 – 00;25;19;19

Andy Sobczyk

Yeah, it sounds like you wouldn't want it to turn into a situation where there's steerage that otherwise wasn't occurring. It would be great implemented in a situation where you're a big health system with a hospital in one place and another one 40 minutes away that's not too difficult to get to if you're a patient, and you're managing the capacity: one hospital has an imaging center that's really full, but the other one's available, and the algorithm or the platform helps find that quicker appointment and makes for a better patient experience.

00;25;19;19 – 00;25;42;29

Andy Sobczyk

You're then capturing volume that your physicians determined was necessary and was being sent to those sites of care, but that you otherwise weren't capturing. That seems like a positive. But if the large language model or whatever technology solution you have is all of a sudden going to determine what is medically necessary for an imaging study, and it becomes overutilization.

00;25;43;02 – 00;25;49;10

Andy Sobczyk

That seems like a bad pathway to go down from a risk standpoint. So yeah, I’d agree. And that’s a great consideration. Yeah.

00;25;49;10 – 00;26;02;21

Jonathan Porter

And I think that's just about the way that you roll it out. There are ways for you to avoid that, but I guarantee you we'll see someone mess that up at some point in the next decade. That's why you've got to think about these things before you roll them out. So, Andy, the last topic that I want us to cover is billing optimization.

00;26;02;21 – 00;26;19;19

Jonathan Porter

There are 11,000 CPT codes out there, and no one's an expert in all of them. And so there's a lot of discussion about how we could use AI to help figure out what can be billed and what should be billed. Andy, tell our listeners a little bit about how AI could help with billing optimization.

00;26;19;21 – 00;26;45;22

Andy Sobczyk

I think this is a very interesting one, particularly for providers, as it relates to the dynamic with insurance. In the last 5 to 10 years, the rate of denials that providers are experiencing has gone up significantly; depending on the source, you'll see it in the 12 to 15% range now. In the past, we used to work with clients and say you want to be at 5% or less, and so the bar has been raised for

00;26;45;22 – 00;27;13;15

Andy Sobczyk

how much providers can now expect to be denied. And until now, I think the insurance companies have had a lot of these AI tools; they're the ones that have been using them to deny claims or review claims more efficiently and faster. Now I think there's a lot more in the hands of providers, where they can bill and submit claims more efficiently, even more effectively, if the tools are programmed with the most recent rules.

00;27;13;15 – 00;27;41;12

Andy Sobczyk

You know, because the rules are always changing: every year the fee schedule comes out and there are new considerations for what's billable and what's not, and the payers have their own guidelines for that. So if the technology solutions can keep up with that and maintain it, and it can be automated, this could potentially address the offshoring issue. There's a lot of offshoring right now of these functions, because health systems and practices just can't afford the number of staff that they need onshore to get this done.

00;27;41;12 – 00;28;06;22

Andy Sobczyk

And so there's been a lot of offshoring of these functions. This type of technology could potentially bring a little more control back into providers' hands, where it's automated, it's being done faster, but it's not somewhere so distant; it's something that they can control and implement themselves. I think we'll see a lot of adoption of this on the provider side, in terms of: I want to get paid what I'm owed from the insurance companies, Medicare, and others.

00;28;06;24 – 00;28;32;07

Andy Sobczyk

I want it done faster. I want to be able to appeal denials faster, because that process can be very cumbersome for a staff person to handle; just submitting an appeal, following up, following up again, appealing again takes a lot of hours. And so AI in this area is helping provider organizations offset that dynamic with insurance and the reimbursement environment, where they're kind of on the losing end of it.

00;28;32;08 – 00;28;53;24

Andy Sobczyk

The pool of money has been getting smaller, and it has been getting harder to capture those dollars that you were owed. So I think this technology in the hands of providers can help balance that and potentially keep some organizations afloat longer than they otherwise would have been. Then, on the flip side: okay, what do we need to consider risk-wise?

00;28;53;27 – 00;29;28;25

Andy Sobczyk

There are situations where these platforms can predict propensity to pay, so they can flag patients who may have a lower propensity to pay a bill. That may then trigger processes to try to collect more aggressively, or even to deny care, which would be the worst-case scenario. And so situations could arise where flagging the patients with a lower propensity to pay could be viewed as targeting or redlining, or doing something to exclude a population of patients based on what an algorithm or a technology platform said.

00;29;29;03 – 00;29;49;20

Andy Sobczyk

And so that may run counter to, say, the mission of the organization, and that could get some folks in trouble, both from, as you mentioned, a bad-headline perspective, but also with more serious consequences. And so just being aware of the paths the technology can take you down, and having a strategy to mitigate, is very important in this area as well.
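
To make that risk concrete, here is a minimal sketch of a propensity-to-pay flag used the cautious way Andy is pointing toward: the score can route an account to financial counseling outreach, but it is deliberately kept out of any scheduling or care decision. The scoring inputs and the 0.5 cutoff are illustrative assumptions, not a real platform's model.

```python
from dataclasses import dataclass

@dataclass
class Account:
    patient_id: str
    paid_bills: int
    unpaid_bills: int

def propensity_to_pay(acct: Account) -> float:
    """Crude score in [0, 1] based only on billing history; real models use many more inputs."""
    total = acct.paid_bills + acct.unpaid_bills
    return 1.0 if total == 0 else acct.paid_bills / total

def route_account(acct: Account, cutoff: float = 0.5) -> str:
    """Low-propensity accounts get financial counseling outreach.

    Deliberately NOT used for scheduling or clinical decisions: feeding this score
    into who gets seen, or when, is exactly the redlining risk discussed above.
    """
    return "financial_counseling" if propensity_to_pay(acct) < cutoff else "standard_billing"

if __name__ == "__main__":
    print(route_account(Account("pt-101", paid_bills=8, unpaid_bills=1)))  # standard_billing
    print(route_account(Account("pt-102", paid_bills=1, unpaid_bills=4)))  # financial_counseling
```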

00;29;49;25 – 00;30;15;06

Jonathan Porter

Yeah, and I think the timeline there that you gave is right. The payers have been using this for a long time, so a lot of this on the healthcare provider side is reactive: healthcare providers are reacting to what payers have been doing and sort of getting ahead of their issues. I think that's just a smart thing to do. Where I think you could get into trouble in this area is if you're using it to say, hey, whatever AI model we're using, while we're helping patients, why don't you also keep in mind that we're a business.

00;30;15;06 – 00;30;42;26

Jonathan Porter

And so if there's something lucrative that we could be doing for patients that you think is medically necessary, go ahead and prompt our physicians and say, this is a thing that you could do. I think that would be a really interesting result, if you use it not just to help patients but to maximize your profits. There are a lot of very smart people operating health systems out there who I'm sure could think about that and say, we could build a profit-generator model into our overall AI.

00;30;42;29 – 00;30;44;11

Jonathan Porter

I think that could be interesting, Andy.

00;30;44;14 – 00;31;05;15

Andy Sobczyk

Yeah. And I think part of the key there is ensuring that you have transparency into the rules, and those rules evolve and change on an annual basis, sometimes even more frequently on the payer side. That would need to be programmed, or at least kept up with, on the technology side, because it can be easy to get that wrong.

00;31;05;18 – 00;31;15;24

Andy Sobczyk

And then, of course, you have the temptation to either upcode or to force or offer services that otherwise wouldn't be approved. I think there are a lot of rabbit holes on that front.

00;31;15;29 – 00;31;32;11

Jonathan Porter

Yeah. Well, Andy, thanks for joining the episode today. Again, AI in healthcare is awesome. I think there's so much potential here, all the ways that AI could improve healthcare in our country, and there are risks and providers should be thinking about those risks. So, Andy, thanks for coming on the podcast and telling our listeners all about this.

00;31;32;13 – 00;31;48;18

Andy Sobczyk

Appreciate the opportunity, it was a great discussion. I don't think anything is slowing down in this space, so stuff that we talk about today may have a totally different angle in, you know, a couple of months or a couple of years. So it's really important, I think, to stay on top of this and continue the dialogue. So thanks for having me

00;31;48;18 – 00;31;49;21

Andy Sobczyk

and appreciate it.

00;31;49;23 – 00;32;09;07

Jonathan Porter

To close: the healthcare industry, which is the main industry that receives False Claims Act scrutiny, is at a pivotal moment in terms of enforcement. DOJ moved a bunch of attorneys into healthcare roles earlier this year, there were a record number of qui tams filed last year, and there have been some pretty critical FCA trials in recent months.

00;32;09;07 – 00;32;32;15

Jonathan Porter

And so enforcement is on the rise, and some of that enforcement is different than what health care enforcement used to look like. I'm seeing new types of investigations into healthcare providers, new Anti-Kickback Statute theories being pushed. So I can't help but feel like we're at a boiling point for enforcement in the health care industry. Buckle up, because we're going to keep talking about these issues on this podcast.

00;32;32;15 – 00;32;51;15

Jonathan Porter

We'll continue to take a hard look at enforcement issues impacting health care and other industries regulated by the False Claims Act. So subscribe or follow us, and we'll keep bringing you our thoughts on this boiling point to come. Until then, thanks for listening, and we'll see you next time.


