Following OpenAI and more than a dozen other AI firms, leading healthcare organizations have committed to the safe and trustworthy development of AI under the Biden-Harris Administration. Announced on December 14, these voluntary commitments aim to harness the substantial benefits of large-scale AI models in healthcare while safeguarding sensitive patient information and minimizing associated risks.
A total of 28 healthcare providers and payers, such as CVS Health, Stanford Health, Boston Children’s Hospital, UC San Diego Health, UC Davis Health, and WellSpan Health, have signed these commitments. Their goal is to address skepticism around AI’s reliability and safety in medical settings, especially as generative AI technologies like ChatGPT gain prominence.
Prior discussions about AI in healthcare mainly focused on its potential to diagnose diseases early and discover new treatments. However, a recent survey by GE Healthcare revealed that 55% of clinicians believe AI is not yet ready for medical use, and 58% distrust AI-generated data. Among clinicians with more than 16 years of experience, that distrust rises to 67%.
The participating organizations aim to shift this perception through their commitments, emphasizing the need for coordinated care, enhanced patient experiences, and reduced clinician burnout. As stated in their commitment document, “AI represents a once-in-a-generation opportunity to enhance the healthcare system, notably in early cancer detection and prevention.”
To build user confidence, these organizations have pledged to align their projects with the Fair, Appropriate, Valid, Effective, and Safe (FAVES) AI principles outlined by the U.S. Department of Health and Human Services (HHS). This alignment will help mitigate biases and risks, ensuring that solutions are effective in real-world applications.
The companies plan to establish trust through transparency and a comprehensive risk management framework. A key element is disclosing to users when they are engaging with AI-generated content that has not undergone human review. The risk management framework will involve thorough tracking of AI applications and proactive measures to address potential harms across healthcare settings.
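To make the disclosure commitment concrete, one minimal sketch (in Python, with entirely hypothetical field names; the commitment document does not prescribe any schema) is to attach provenance metadata to every AI-generated message so downstream systems can tell users whether a human reviewed it:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIContentLabel:
    """Provenance metadata for AI-generated content.

    Hypothetical schema for illustration only; the signatories'
    commitment does not specify field names or formats.
    """
    model_name: str          # model that produced the content
    generated_at: datetime   # when the content was produced
    human_reviewed: bool     # whether a clinician reviewed it first
    reviewer_id: str | None  # identifier of the reviewer, if any

def label_unreviewed_output(model_name: str) -> AIContentLabel:
    """Label content that is delivered without human review."""
    return AIContentLabel(
        model_name=model_name,
        generated_at=datetime.now(timezone.utc),
        human_reviewed=False,
        reviewer_id=None,
    )
```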
“We will implement governance practices, including maintaining a list of applications using frontier models and ensuring a robust framework for risk management,” the organizations shared.
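The “list of applications using frontier models” mentioned above could take the form of a structured inventory. The sketch below shows one hypothetical way to record each application alongside its risk review status; the field names and the example entry are invented for illustration and do not come from the signatories’ framework:

```python
from dataclasses import dataclass, field

@dataclass
class FrontierAppRecord:
    """One entry in a hypothetical inventory of frontier-model applications."""
    app_name: str
    model: str                # underlying frontier model
    clinical_setting: str     # where the application is used
    risk_assessed: bool       # whether a documented risk review exists
    mitigations: list[str] = field(default_factory=list)

# Illustrative inventory; the application and mitigations are invented.
registry = [
    FrontierAppRecord(
        app_name="discharge-summary-draft",
        model="frontier-llm",
        clinical_setting="inpatient documentation",
        risk_assessed=True,
        mitigations=["clinician sign-off required", "PHI redacted from logs"],
    ),
]
```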
Moreover, while focusing on current implementations, the organizations are committed to ongoing research and development (R&D) in health-centric AI innovation, maintaining the necessary safeguards. They intend to use non-production environments and test data to prototype new applications, ensure privacy compliance, and monitor deployed solutions for fair and accurate performance.
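In practice, keeping prototypes out of production can be enforced with a simple guard. The sketch below assumes a hypothetical DEPLOY_ENV environment variable and a test-dataset naming convention; neither comes from the commitment document:

```python
import os

def assert_safe_to_prototype(dataset_name: str) -> None:
    """Refuse to run prototypes in production or against non-test data.

    The DEPLOY_ENV variable and the "test_" dataset prefix are
    assumptions made for this illustration.
    """
    env = os.environ.get("DEPLOY_ENV", "dev")
    if env == "production":
        raise RuntimeError("Prototyping is not permitted in production environments.")
    if not dataset_name.startswith("test_"):
        raise ValueError(f"Expected a designated test dataset, got {dataset_name!r}.")

# Example: assert_safe_to_prototype("test_synthetic_notes") passes in a dev environment.
```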
Additionally, they will address issues related to open-source technology and work toward training their teams on the safe and effective development of frontier model applications.