With healthcare organizations embracing AI at a frantic pace, health system leaders need to get in front of adoption and make sure new programs are carefully reviewed and vetted.
Healthcare organizations need to plan carefully when setting up a review committee for AI strategy, even incorporating a few skeptics to make sure they’re getting the full picture of how the technology should and shouldn’t be used.
That’s the takeaway from a recent panel at the HealthLeaders AI NOW virtual summit. The panel, “Plotting an AI Strategy: Who Sits at the Table?”, featured executives from Northwell Holdings, Ochsner Health, and UPMC and offered advice on how to manage AI within the healthcare enterprise.
Jason Hill, MD, MMM, Ochsner Health’s chief innovation officer, said a review committee should ideally consist of between seven and 12 members. It should include the CFO or someone within that department “who understands what ROI is,” someone representing the legal and compliance teams, a medical ethicist or bioethicist, a behavioral science expert, and clinicians and technology experts.
“We don’t really want to get just ‘new shiny things syndrome’ … and so be very sure that you’ve got someone who’s a little bit of a contrarian,” he said.
Marc Paradis, vice president of data strategy at Northwell Holdings, expanded on that idea, saying a committee should have a rotating “10th person,” who would look at an AI program or project from the opposite angle.
“In any given meeting,” he said, “it’s someone’s turn to be the contrarian. It’s someone’s turn to be the alternative thinker. It’s someone’s turn to be asking ‘What if’ or ‘Why not’ or to be kind of trying to poke those holes in the group think that can very easily occur. … It helps everyone begin to develop some of those critical thinking skills ... to get a more robust conversation going.”
“The people who sit at the table need to establish a good sense of transparency and communication,” added Chris Carmody, chief technology officer and senior VP of the IT division at UPMC. “We have to make sure we’re communicating about what’s happening and how people can effectively use the tools that are available to them.”
AI governance within the healthcare organization is a crucial topic, especially given the fast pace of AI development in the industry. Many hospitals are struggling to determine whether they’re ready to test and deploy the technology, what steps they must take to ensure clinicians and staff know how to use AI, and how to monitor programs to prevent misuse or errors.
That extends to vendor partnerships as well. All three panelists warned that many companies claim to offer AI tools, or AI embedded in their technology, simply because AI is the big thing right now, even when what they’re offering doesn’t actually address a care gap or concern. Executives need to make sure a new product isn’t creating problems where none existed before (especially in security) and isn’t duplicating something the health system already does on its own.
“How does this new LEGO piece fit into our technology ecosystem?” Carmody asked.
He also noted that UPMC has what he calls “our Avengers or our Justice League,” a group of skilled architects who review technology before the health system decides whether to buy it.
Paradis pointed out that health systems have to rethink how they govern the technology, balancing the benefits against the possibility of mistakes being made.
“My personal take on this is I think we have to recognize that this is a brand new technology, [and] we don’t know what we don’t know,” he said. “It’s going to make mistakes. It will do strange things. It will surprise us in ways that we did not expect, both in a very good way and in a very bad way.”
“The appropriate thing to do from a leadership standpoint is to step up and say to the community at large: These are the guiding principles, this is what we believe, this is how we are rolling it out, [and] these are the guardrails,” he said. “Something will inevitably go wrong somewhere along the way and what we commit to you is when something goes wrong, we will bring everyone who is affected by that to the table at that time to … figure out how that never happens again and to improve the system overall.”
Paradis noted there is always a certain amount of danger in launching a new tool or technology, but there can also be harm in holding back a technology that has the potential to improve healthcare and save lives.
“We have to remember that we are on the very shallow part of this growth curve in terms of … what these tools can do, and what we don’t want to do is—and I’m very worried this is going to happen from a regulatory standpoint—what we don’t want to do is be so concerned that we completely shut down and stop AI,” he said. “We just have to be open and honest about it.”
Eric Wicklund is the associate content manager and senior editor for Innovation at HealthLeaders.
KEY TAKEAWAYS
A panel of healthcare executives at the recent AI NOW summit discussed how a health system or hospital should govern AI strategy, including creating a governing committee.
Executives should think carefully about who sits on this committee, including people from various departments but limiting the total number to about a dozen.
This committee should include at least one skeptic or contrarian, and should balance letting the technology work against holding back out of fear of the unknown.