Federal officials say 28 provider and payer organizations have signed on to voluntarily adhere to federal guidelines around the responsible and ethical use of AI in healthcare.
As questions arise over who should be in charge of AI governance, the Biden Administration is focusing on collaborating with some of the biggest health systems and payers.
The administration this week unveiled voluntary pledges from 28 organizations “to help move toward safe, secure, and trustworthy purchasing and use of AI technology.” The announcement, coming on the heels of President Biden’s October 30 Executive Order on AI, sets the stage for what’s expected to be a lively debate over whether the federal government or the healthcare industry should set the ground rules. Many within healthcare, still hurting from the thorny rollout of electronic medical records and “meaningful use” criteria, argue that the industry should be able to police itself.
The administration’s response, authored by National Economic Advisor Lael Brainard, Domestic Policy Advisor Neera Tanden, and Arati Prabhakar, director of the Office of Science and Technology Policy, is focused on working together.
“The commitments received today will serve to align industry action on AI around the ‘FAVES’ principles—that AI should lead to healthcare outcomes that are Fair, Appropriate, Valid, Effective, and Safe,” they wrote. “Under these principles, the companies commit to inform users whenever they receive content that is largely AI-generated and not reviewed or edited by people.”
“They will adhere to a risk management framework for using applications powered by foundation models—one by which they will monitor and address harms that applications might cause,” the three advisors continued. “At the same time, they pledge to investigating and developing valuable uses of AI responsibly, including developing solutions that advance health equity, expand access to care, make care affordable, coordinate care to improve outcomes, reduce clinician burnout, and otherwise improve the experience of patients.”
Those organizations voluntarily committing to that framework are Allina Health, Bassett Healthcare Network, Boston Children’s Hospital, Curai Health, CVS Health, Devoted Health, Duke Health, Emory Healthcare, Endeavor Health, Fairview Health Systems, Geisinger, Hackensack Meridian, HealthFirst (Florida), Houston Methodist, John Muir Health, Keck Medicine, Main Line Health, Mass General Brigham, Medical University of South Carolina Health, Oscar, OSF HealthCare, Premera Blue Cross, Rush University System for Health, Sanford Health, Tufts Medicine, UC San Diego Health, UC Davis Health, and WellSpan Health.
“We have collaborated with these innovative providers and payers to define a set of voluntary commitments to guide our use of frontier models in healthcare delivery and payment,” Paul Uhrig, Bassett Health’s chief legal and digital health officer, said in a LinkedIn posting shortly after the announcement was made.
“We applaud the efforts to convene a diverse group of healthcare organizations to coalesce around landmark voluntary commitments that will be fundamental to the future of AI and allow us to responsibly advance the use of these technologies for the benefit of those we serve," added Sanford Health President and CEO Bill Gassen. "As the largest rural healthcare provider in the country, we were honored to help lead this effort on behalf of our patients, two-thirds of whom live in rural communities in America's Heartland. It has been energizing to collaborate over the last several weeks with colleagues across the healthcare ecosystem on a framework that reflects our shared commitment to harnessing large-scale AI and machine learning models safely, securely and transparently. Such swift progress following the signing of President Biden’s executive order on AI underscores our collective acknowledgement of the myriad ways in which these technologies could help to improve healthcare quality, access, affordability, equitable outcomes, patient experience, clinician well-being and industry sustainability – likely in ways that we cannot fully anticipate today. Protecting our patients who place their trust in us is paramount as we move forward. We look forward to continuing to work with industry leaders, elected officials and the Administration on these critically important efforts.”
Those organizations have pledged to:
- Develop AI solutions to optimize healthcare delivery and payment by advancing health equity, expanding access, making healthcare more affordable, improving outcomes through more coordinated care, improving patient experience, and reducing clinician burnout.
- Work with their peers and partners to ensure outcomes are aligned with fair, appropriate, valid, effective, and safe (FAVES) AI principles.
- Deploy trust mechanisms that inform users if content is largely AI-generated and not reviewed or edited by a human.
- Adhere to a risk management framework that includes comprehensive tracking of applications powered by frontier models and an accounting for potential harms and steps to mitigate them.
- Research, investigate, and develop AI swiftly but responsibly.
The administration is also highlighting pledges secured earlier this year from more than a dozen technology companies, including Microsoft and Google, to develop and use AI responsibly.
“We must remain vigilant to realize the promise of AI for improving health outcomes,” Brainard, Tanden, and Prabhakar wrote. “Healthcare is an essential service for all Americans, and quality care sometimes makes the difference between life and death. Without appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best—and dangerous at worst. Absent proper oversight, diagnoses by AI can be biased by gender or race, especially when AI is not trained on data representing the population it is being used to treat. Additionally, AI’s ability to collect large volumes of data—and infer new information from disparate datapoints—could create privacy risks for patients. All these risks are vital to address.”
As outlined in Biden’s Executive Order, the federal government’s efforts to govern AI in healthcare are being led by the Health and Human Services Department. Several HHS agencies and offices have also taken action on AI concerns, including the National Institutes of Health (NIH), the US Food and Drug Administration (FDA), the Office for Civil Rights (OCR), and the Centers for Medicare & Medicaid Services (CMS).
“The private-sector commitments announced today are a critical step in our whole-of-society effort to advance AI for the health and wellbeing of Americans,” the three advisors wrote. “These 28 providers and payers have stepped up, and we hope more will join these commitments in the weeks ahead.”
Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
KEY TAKEAWAYS
The Biden Administration is taking an aggressive approach to AI regulation, securing commitments from healthcare providers, payers, and tech giants to develop and use the technology responsibly.
Some experts within the healthcare industry are calling for the industry to develop its own rules on AI, following the botched rollout of EMRs.
Biden Administration officials say the government needs to collaborate with the healthcare industry and others to ensure the guidelines are universally recognized and followed.