INNOVATION
The Joint Commission and the Coalition for Health AI advance unified AI safety standards to guide fast, confident adoption across healthcare
19 Nov 2025

A significant shift is under way in US healthcare: the Joint Commission has teamed up with the Coalition for Health AI to develop national guidelines for medical artificial intelligence, responding to the rapid adoption of digital tools across hospitals and growing concern over their reliability.
The organisations said they aim to produce a shared framework that health systems can use to judge whether clinical algorithms are safe and effective. Hospital executives have warned that uneven oversight leaves them unsure which technologies can be trusted as AI becomes more embedded in patient care and administrative work.
AI systems now assist with diagnosis, automate documentation, and support operational tasks. While providers welcome the efficiency gains, advisors to the project said many institutions feel they are operating without a compass as new tools enter the market. One advisor said health systems want to innovate while avoiding preventable harm, and that clear guidance has been lacking for years.
The partnership plans to draft national standards and set up a voluntary certification programme to assess how models are built, tested, and deployed. A representative from the Coalition for Health AI said the aim was to give hospitals a clearer path to responsible use. Supporters argue that common evaluation methods could speed adoption and strengthen confidence among clinicians and patients.
Some experts have questioned whether strict rules could place smaller developers at a disadvantage if they lack the resources to meet evolving requirements. Others noted that the rapid pace of AI development means any certification process must keep up to remain relevant. Even so, there is broad agreement that health systems need a stronger structure to assess new tools as automation becomes more common in clinical settings.
The initiative has drawn interest from clinicians, hospital leaders, and technology companies. Its recommendations are expected to influence purchasing decisions and shape how developers design future products. The groups hope the effort will give hospitals a more predictable path for adopting AI while reducing risks linked to untested systems.
If the programme succeeds, it could form the basis of a national approach to governing medical AI, offering a balance between progress and protection as the technology spreads across the healthcare sector.