Summary: As artificial intelligence (AI) becomes more integrated into health care, ensuring its safe implementation and use is critical to protecting patients and improving outcomes. Dean Sittig, PhD, and Hardeep Singh, MD, MPH, have published guidance in the Journal of the American Medical Association offering a framework for health care organizations to monitor and manage AI systems. The recommendations emphasize governance, clinician training, transparency with patients, and processes for assessing and mitigating risks.
Key Takeaways:
- Robust Governance Is Crucial: New guidance says health care organizations should establish committees with multidisciplinary experts to oversee AI system deployment, ensure adherence to safety protocols, and monitor system performance.
- Transparency Builds Trust: Clinicians should receive formal training on AI use and risks, while patients should be informed when AI contributes to their care decisions to foster confidence and trust in the technology.
- Proactive Risk Management: Organizations must rigorously test AI systems in real-world settings, maintain inventories of AI tools, and implement processes to safely deactivate systems in the event of malfunctions.
As artificial intelligence (AI) becomes more prevalent in health care, organizations and clinicians must take steps to ensure its safe implementation and use in real-world clinical settings, according to an article co-written by Dean Sittig, PhD, professor at the McWilliams School of Biomedical Informatics at UTHealth Houston, and Hardeep Singh, MD, MPH, professor at Baylor College of Medicine.
The guidance was published in the Journal of the American Medical Association.
“We often hear about the need for AI to be built safely but not about how to use it safely in health care settings,” Sittig says in a release. “It is a tool that has the potential to revolutionize medical care, but without safeguards in place, AI could generate false or misleading outputs that could potentially harm patients if left unchecked.”
Drawing from expert opinion, literature reviews, and experiences with health IT use and safety assessment, Sittig and Singh developed a pragmatic approach for health care organizations and clinicians to monitor and manage AI systems.
“Health care delivery organizations will need to implement robust governance systems and testing processes locally to ensure safe AI and safe use of AI so that ultimately AI can be used to improve the safety of health care and patient outcomes,” Singh says in a release. “All health care delivery organizations should check out these recommendations and start proactively preparing for AI now.”
Recommended actions for health care organizations include:
- Review guidance published in high-quality, peer-reviewed journals and conduct rigorous real-world testing to confirm AI’s safety and effectiveness.
- Establish dedicated committees with multidisciplinary experts to oversee AI system deployment and ensure adherence to safety protocols. Committee members should meet regularly to review requests for new AI applications, consider their safety and effectiveness before implementing them, and develop processes to monitor their performance.
- Formally train clinicians on AI usage and risk, but also be transparent with patients when AI is part of their care decisions. This transparency is key to building trust and confidence in AI’s role in health care.
- Maintain a detailed inventory of AI systems and regularly evaluate them to identify and mitigate any risks.
- Develop procedures to turn off AI systems should they malfunction, ensuring smooth transitions back to manual processes.
“Implementing AI into clinical settings should be a shared responsibility among health care providers, AI developers, and electronic health record vendors to protect patients,” Sittig says in a release. “By working together, we can build trust and promote the safe adoption of AI in health care.”