Garg adds that the leap from theoretical AI to operational implementation is huge. In other words, moving beyond AI demos to systems that deliver real ROI, for finance, staff, and patients alike, requires understanding how well the tools work, whether there is sufficient human oversight, whether escalation paths are built in, and how to manage what the AI itself cannot address.
Possible side effects
Garg said one drawback of AI governance committees is that they may evaluate AI only at the point of adoption. Healthcare systems and models, however, are dynamic: because everything is constantly changing, monitoring mechanisms need to remain continuously in place.
Another drawback often lies in the multidisciplinary approach itself. The benefits of diverse perspectives are immense, but those added voices can slow innovation. “What you also don’t want is for it to be so interdisciplinary that everyone has 100 reasons to say no, but no one is actually responsible for saying yes,” he said. This is where a deeply rooted AI governance framework comes into play. “We need a little bit of clarity to say we need to understand the value, the workflow impact, the safety, and make decisions moving forward based on those use cases,” he added.
