Healthcare organizations feel vulnerable to AI-induced cybersecurity incidents.
More specifically, organizations are encountering problems due to a lack of guidelines and controls over AI use.
According to a report released last month by IBM, the healthcare industry is seeing the most expensive breaches of any sector, with the average healthcare data breach costing $7.42 million.
The report also noted a rise in breaches tied to “shadow AI,” where employees use AI tools that are not approved by their organization. Many organizations do not track such use.
Limor Kessem, global leader in cyber crisis management at IBM Consulting, told Chief Healthcare Executive® that organizations need to pay more attention to shadow AI.
Organizations need to guard against employees using unvetted tools in uncontrolled ways, she says, and she says this is a problem in the healthcare sector.
“I think this shadow AI is a big deal, because people tend to do it without thinking. They just want to speed up their work, just get things done, and that’s what happens,” Kessem says.
“Whether you type it or paste it, you’ve just uploaded a lot of company data,” she says. “It’s gone off to train AI systems somewhere, without asking the employer. The security team doesn’t know about it, and the employee doesn’t really realize what they just did. Those actions have led to very vast data breaches.”
Such actions are understandable and do not come from a malicious place, Kessem acknowledges. But they can lead to trouble.
“People are trying to be more productive, to work faster and become more efficient,” she says. “And if you don’t have guardrails in terms of what you can use and how you can use it, it can be very dangerous.”
The IBM report found that one in five organizations (20%) had experienced a breach related to shadow AI. And those breaches can be more expensive.
Organizations with higher levels of shadow AI saw an additional $670,000 in breach costs.
Furthermore, breaches involving shadow AI increased exposure of personal information by 65% and of intellectual property by 40%.
“These seemingly almost innocent actions that someone can take can be a very expensive mistake for an organization,” Kessem says.
So, what should health systems do? Kessem says they need to inform workers about which AI tools are approved for use.
“The way to solve that is to do a risk assessment and then start designating tools that the company will allow or ban,” Kessem says.
However, organizations need to drive that message home with their employees.
“You can’t over-communicate that. Remind people every other day what they’re allowed to use,” she says. “Encourage employees to use the approved tools and make sure everyone knows about them.”