Dive Brief:
- The Coalition for Health AI on Friday released draft frameworks covering how it will certify artificial intelligence quality assurance labs and how those labs can report information about AI models.
- CHAI, which aims to set guidelines for responsible AI use in healthcare, plans to verify that labs testing the models aren’t financially connected to developers, can create adequate testing data, and have the necessary technical infrastructure and staffing, among other requirements, CEO Brian Anderson told Healthcare Dive.
- The nonprofit also released details on one way labs can report their results. The draft CHAI Model Card is a standard template meant to give health AI buyers more information before they make a purchase, such as a model’s intended uses, targeted patient populations, maintenance requirements, and known risks and biases.
Dive Insight:
Founded in 2021, CHAI aims to hash out the technical details behind the safe and effective adoption of AI in healthcare — a serious concern for experts, lawmakers and regulators who worry the emerging technology could introduce errors or biases that worsen existing health disparities.
The coalition is made up of nearly 3,000 health systems, professional organizations, technology providers, startups and other healthcare companies.
Part of CHAI’s plan includes a network of quality assurance labs that test models against AI standards and validate their performance.
“This is something that happens across literally every other sector of consequence,” Anderson said. “You don’t get into a car without that car being tested by independent entities. You don’t get into a new airplane without it being tested. […] These are all things that we take for granted. We don’t have it in AI. We certainly don’t have it in health AI.”
Under the draft framework, labs will have to demonstrate they don’t have conflicts of interest and that they can pull together high-quality, diverse testing datasets. To be certified, they’ll also need to show they can test for characteristics like clinical robustness and transparency, as well as metrics like bias and usability, Anderson said.
Those labs could then put out report cards, detailed documents that lay out how a model was tested, as well as the Model Cards, which CHAI calls a “nutrition label” for people researching AI during the procurement process.
The Model Cards also dovetail with the HTI-1 rule, a regulation finalized late last year that establishes transparency requirements for AI products certified by the newly renamed Assistant Secretary for Technology Policy/Office of the National Coordinator for Health Information Technology. The rule requires developers of clinical decision support and predictive tools to share information like intended uses, inappropriate uses or settings, and known risks.
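To make the concept concrete, below is a minimal sketch of how the disclosures described in this story could be represented as structured data. This is a hypothetical illustration, not CHAI’s actual draft template: the `ModelCard` class, its field names and the sample values are all assumptions assembled from the disclosure categories named in this article and the HTI-1 rule.

```python
# Hypothetical sketch: CHAI's draft Model Card is a document template, not a
# software schema. Every name and value here is illustrative, drawn only from
# the disclosure categories mentioned in this article and the HTI-1 rule.
from dataclasses import dataclass


@dataclass
class ModelCard:
    """Illustrative 'nutrition label' for a health AI model."""
    model_name: str
    developer: str
    intended_uses: list[str]              # what the model is designed to do
    inappropriate_uses: list[str]         # uses or settings to avoid (per HTI-1)
    target_populations: list[str]         # patient groups the model was built for
    maintenance_requirements: list[str]   # e.g., monitoring or retraining cadence
    known_risks_and_biases: list[str]     # documented limitations
    qa_lab_report_url: str | None = None  # link to an independent lab's report card


# A buyer might scan a card like this during procurement (values are invented).
card = ModelCard(
    model_name="ExampleSepsisModel",
    developer="ExampleVendor",
    intended_uses=["early sepsis risk flagging for hospitalized adults"],
    inappropriate_uses=["pediatric settings", "outpatient triage"],
    target_populations=["adult inpatients"],
    maintenance_requirements=["quarterly performance monitoring"],
    known_risks_and_biases=["reduced sensitivity in underrepresented groups"],
)
print(card.known_risks_and_biases)
```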
CHAI is now seeking feedback on the draft frameworks. The coalition plans to release the final certification process and Model Card design in April 2025.