Federal regulators are scrambling to create guidelines for the ethical use of AI in a number of industries. Will healthcare collaborate or stake its own claim to governance?
As we head into the new year, the hot topic on every healthcare executive’s mind is AI. And one of the biggest questions surrounding the technology centers on who will regulate it.
The Biden Administration set the tone this past October with an Executive Order that places much of the federal regulatory burden on the Health and Human Services Department and the Office of the National Coordinator for Health IT (ONC), led by National Coordinator Micky Tripathi. HHS then set the schedule with a final rule in December that calls for more transparency in AI tools used in clinical settings by the end of the coming year.
While much of the action so far focuses on the technology vendors who are designing AI tools, health system leaders are keeping a close watch on how the federal government will affect their use of the technology. Many health systems are developing and using their own tools and platforms and pledging to maintain ethical standards in any clinical applications.
“We have a culture of responsibility that goes alongside agile innovation,” Ashley Beecy, MD, FACC, medical director of AI operations at NewYork-Presbyterian and an assistant professor of medicine at Weill Cornell Medical College, said in a HealthLeaders interview earlier this year, prior to Biden’s Executive Order. “Health systems have a unique opportunity” to establish their own standards for the proper use of AI.
Tarun Kapoor, MD, MBA, senior vice president and chief digital transformation officer at New Jersey-based Virtua Health, says healthcare organizations have the clinical background needed to develop effective and sustainable AI governance. They know how it’s going to be used in healthcare, and can focus on the nuances that federal regulators might miss.
“We have to get a lot better at [regulating AI] because we’re the ones using it,” he says.
Like many (if not all) health systems using AI these days, Virtua Health has a policy that any AI services have a human being in the loop, meaning no actions are taken on AI-generated content until they’ve been reviewed by at least one flesh-and-blood supervisor. At this stage, when most projects are focused on back-office tasks, that’s a safe bet; but as the technology works its way into clinical decision-making, that additional step may become critical.
“Always put physicians in front of those decisions,” says Siva Namasivayam, CEO of Cohere Health, a Boston-based company that focuses on using AI to improve the prior authorization process. He says the technology should be used to enhance the physician’s role—what he calls “getting to the yes” factor—rather than replace it.
“We never use AI to say no,” he adds.
But who gets to make those decisions? The Biden Administration wants to be part of that chain of command, and is setting its sights on a collaborative environment, having secured voluntary pledges from more than three dozen health systems, payer organizations, and technology vendors to use AI responsibly. The agreement centers on a new catchphrase for ethical use: FAVES, which stands for Fair, Appropriate, Valid, Effective, and Safe.
The healthcare industry, still smarting from having electronic medical records forced on it before it was really ready for adoption, is playing nice for now. But in many hospitals, the C-suite is facing pressure to take command of AI governance and make it an industry priority.
“You govern yourself at a level higher than the law,” says Kapoor.
He notes that health systems like Virtua Health are being very careful in how they use the technology, and not just green-lighting any potential use.
“Just because you can say anything and create your own [projects] doesn’t mean I’m going to let you say anything and do them,” he points out.
Kapoor says healthcare providers will understand the flaws in AI technology and the risks they present better than anyone outside the industry. And health systems like Virtua Health are addressing these challenges with steering committees that comprise not only clinical leaders but those in finance, IT, legal, and operational areas of the organization.
[Read also: Are Health Systems Mature Enough to Use AI Properly?]
Arlen Meyers, president and CEO of the Society of Physician Entrepreneurs and a professor emeritus at the University of Colorado School of Medicine and Colorado School of Public Health, says the industry has to step up and show leadership at a time when AI governance is still in flux. He notes that hundreds of healthcare organizations have created dedicated centers of excellence for AI, and some have vowed to develop ethics and standards of use. Consumers, too, could get into the act, helping to form an “AI Bill of Rights” for patients.
“Right now, nobody trusts the government or the industry to regulate this,” he says. “When you look at who should be regulating what … the industry should be setting the guardrails.”
This next year will be pivotal in establishing governance for AI, as more and more health systems use the technology and push the boundaries beyond administrative use and into clinical applications. While the Biden administration is looking to fast-track regulation through HHS and the ONC, many wonder whether the healthcare industry will wait that long or set its own standards before a federal agency proposes the first rules.
Others are wondering what it will take to create regulations that will work. One look at the current debate over interoperability and data blocking standards makes it clear that just because rules are created doesn’t mean they’ll be readily accepted.
“In the end, you follow the money,” says Meyers, who anticipates that healthcare and government will have to come to some sort of agreement to create something long-lasting. “That’s how the [rules] will be made.”
“We have to get a lot better at [regulating AI] because we’re the ones using it.”
— Tarun Kapoor, MD, MBA, senior vice president and chief digital transformation officer, Virtua Health
Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
KEY TAKEAWAYS
The Biden Administration has tasked HHS and the ONC with developing standards for AI use in healthcare by the end of 2024, and is highlighting collaboration with key health systems, payers, and vendors.
Many healthcare organizations have been creating centers of excellence and/or steering committees to guide their use of the technology.
Some healthcare experts say the industry should be setting its own standards, since providers are the ones using the technology and have a better understanding of its flaws and potential problems.