As more home health care providers rely on AI tools to both advance clinical practices and streamline operations, the importance of strong safety measures is becoming increasingly apparent.
Companies like First Choice Home Health & Hospice and Team Select Home Care have developed AI policies that address concerns such as patient data privacy, cybersecurity and HIPAA compliance. Without such policies, developed with state-specific regulations in mind, organizations risk financial penalties.
Angelo Spinola, chair of home health, home care and hospice at law firm Polsinelli, has seen an increase in home care clients of all types seeking consultation when developing AI policies.
“Whether it's a franchise system, an independent (company) or part of a larger strategy, this is something that everyone is grappling with,” he told Home Health Care News.
Team Select Home Care has implemented several AI-related policies. CEO Fred Johnson said many of these policies focus on patient data, privacy and HIPAA.
“AI companies will in some cases request unfettered access to the EMR, but this is risky and could potentially expose highly sensitive patient information,” he told HHCN. “Our policy is to never allow unrestricted EMR access. We employ data minimization principles to ensure (patient health information) remains protected based on HIPAA and industry best practices. Patient names are never shared (to external partners) and our tools run entirely outside of the EMR.”
Phoenix-based Team Select Home Care serves medically complex children and homebound seniors. The company operates in 16 states and is one of the nation's largest pediatric long-term care providers.
Cybersecurity is another major focus of Team Select Home Care's various policies, as implementing AI tools can make businesses more vulnerable and expand their attack surface.
“There are so many risks in cyber, and this is one of the few that could disrupt our business,” Johnson said. “When developing AI policy, data is really like the first hurdle to get through in terms of availability, who can see it, what can they see, how well is it protected, etc. If you can't get past that hurdle, you can't take the next step and do anything.”
Another key element of Team Select Home Care's policy is the guarantee that the company will never use AI tools to replace a clinician's medical judgment or decision-making.
As at Team Select Home Care, protecting patient health information and maintaining HIPAA compliance were top priorities for First Choice Home Health & Hospice when implementing AI technology. To that end, Beau Sorensen, chief operating officer at First Choice Home Health & Hospice, said the company avoids using popular large language models (LLMs) like ChatGPT when dealing with sensitive patient information.
Founded in 1996 and based in Orem, Utah, First Choice Home Health & Hospice serves the Wasatch Front region. In addition to core home health and hospice services, the provider offers a variety of Medicare Part B services, including outpatient care and Clinical Laboratory Improvement Amendments (CLIA) testing services.
The company also has a policy that all AI tools related to its core business of patient care must be scrutinized.
“AI needs to go through a vetting process where it is tested against existing solutions to determine if it is of high enough quality, if it is delivering good results, and if it is ultimately capable of serving patient populations and serving clinicians better than existing solutions,” Sorensen told HHCN.
Sorensen emphasized the importance of having policies that are constantly evolving.
“Policy manuals should not be written in stone from the hand of God on Mount Sinai,” he said. “It has to be something we constantly look at and evaluate and say, ‘Is this policy still working for us?’ If not, what changes do we need to make?”
Thorough compliance
Spinola said that home care companies developing an AI policy will need to ensure it complies with disclosure regulations.
“Different states have different requirements,” he said. “If you are using an AI monitoring device or someone is working with an AI agent, you may need to notify clients and caregivers so they understand that it is an AI agent and consent to the use of these programs.”
Spinola also explained that some states have laws governing what entities are allowed to do with AI tools.
“In some cases, human oversight and review may be necessary,” he said. “In some cases, it may simply be that an AI tool cannot do something that a human could not do from a discrimination perspective, which could have a negative impact on certain protected categories of individuals.”
Without a thorough AI policy, there are several ways home-based providers could find themselves in trouble.
“There could be HIPAA violations, privacy violations and cybersecurity issues,” Spinola said. “Failure to comply with these rules, failure to notify or obtain consent, may result in various penalties associated with several state laws.”
When President Donald Trump took office, he rescinded Joe Biden's 2023 executive order aimed at addressing AI-related risks. As a result, AI oversight is primarily driven by state rather than federal law.
States frequently update their laws regarding the use of AI. This means that providers must closely follow rapidly evolving regulations.
Spinola noted that some providers that operate in multiple states model their entire compliance programs after states with more stringent laws and regulations.
Ultimately, Johnson said, AI policies must be a top priority to maintain compliance and patient safety.
“AI should not be a shadow IT function,” he said. “We need to maintain very strong, structured governance similar to what you would expect for clinical security and cybersecurity.”
