The Coalition for Health AI (CHAI) is staking its claim as the industry’s best option for creating meaningful and acceptable AI standards.
As the drumbeat grows for AI governance, one of the biggest questions is whether the healthcare industry or the federal government should take the lead.
An almost-year-old coalition of health systems and tech companies is addressing that question with plans to work with federal regulators.
The Coalition for Health AI (CHAI), which formed in April 2023 as a collective featuring Stanford, the Mayo Clinic, Vanderbilt, Johns Hopkins, Google, and Microsoft and now comprises more than 1,300 members, recently unveiled its first CEO and board of directors. The group also announced plans to collaborate with both the National Health Council and health standards organization HL7 to “craft a comprehensive framework for the deployment and management of artificial intelligence (AI) within healthcare settings.”
"AI is transforming the practice of medicine in ways that seemed unimaginable just two years ago,” CHAI co-founder John Halamka, MD, MS, president of the Mayo Clinic Platform and chair of the group’s new board of directors, said in a press release. “CHAI will bring together policymakers, technologists, healthcare providers, health plans, and a range of stakeholder advocates to develop guidelines and frameworks for evaluating AI. We have a shared mission to empower clinicians and patients with AI tools that maximize benefit and minimize harm."
The partnership is the latest and largest effort to date to create guardrails around the fast-growing phenomenon of AI in healthcare. Health systems and hospitals are launching AI programs by the hundreds to address administrative and clinical pain points, but many worry that the industry is moving too far, too fast.
The Biden Administration has been pushing for a combined approach to AI governance, with an October 2023 Executive Order that puts the Department of Health and Human Services (HHS) and the Office of the National Coordinator for Health IT (ONC) at the forefront of regulatory efforts, and a December 2023 final rule that calls for more requirements around transparency by the end of this year.
CHAI’s efforts have drawn support from US Food and Drug Administration (FDA) Commissioner Robert Califf, who spoke about AI at CES 2024 in January and addressed CHAI at a meeting earlier this week.
“As part of our AI strategy, the Agency is collaborating with public/private partners to develop a framework for assessing the potential risks and benefits of healthcare AI—this issue is too large to be contained within the FDA,” he told the group. “We’re also developing guidelines for the responsible deployment and ongoing monitoring of AI-driven health care solutions, including those using both adaptive and generative AI methods. The aim is to adapt general AI regulation and standards where needed to the unique characteristics of the health care sector. For instance, general AI regulations often stress the importance of accountability and transparency, which are also crucial in the health care domain due to the sensitive nature of health-related data.”
But Califf also said he worried that health systems “do not have the infrastructure and tools to make the most important determinations about whether an AI application is ‘effective’ for health outcomes.” He listed two conditions for proper review of AI tools:
- First, the industry and federal regulators will need to monitor AI tools continuously, since the technology is constantly evolving and models can drift over time. “With the proliferation of AI applications and the fact that they evolve over time, it is unclear how the performance of the models will be monitored at the scale that will be needed,” he noted.
- Second, AI governance must include complete follow-up of the population affected by the AI tool. “The lack of an interoperable national approach to enabling follow-up of patients leads to a situation in which none of our health systems have a systematic ability to do the requisite monitoring of the model except for the duration of an acute care hospital admission,” Califf pointed out.
Califf also sounded a note of caution on “effectiveness” metrics for AI evaluation, saying he worried those analyses might focus on a health system’s financial interests rather than whether the technology can improve clinical outcomes.
Along with appointing CHAI co-founder Brian Anderson, MD, senior advisor for clinical trial innovation at ARPA-H and an associate professor of biomedical informatics at Harvard, as CHAI’s CEO, the group named a board of directors, led by Halamka, as well as advisory boards focused on health systems and providers, patient and community advocacy, the healthcare industry, start-ups, and the government.
Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
KEY TAKEAWAYS
AI development and adoption in healthcare are moving faster than attempts to govern the technology.
The Biden Administration has developed an AI strategy and put federal regulators in charge of developing guidelines for healthcare, and is asking the industry to work together on those guardrails.
The Coalition for Health AI (CHAI), a year-old organization comprising more than 1,300 partners, is pushing to lead the industry’s efforts to establish governance.