A recent HealthLeaders AI NOW panel discussed how the technology is being applied to clinical care
Health systems and hospitals are seeing specific benefits from deploying AI technology in clinical care, according to executives taking part in a panel at the recent HealthLeaders AI NOW virtual summit.
While much of the so-called “low-hanging fruit” has so far been tied to back-end and administrative tasks, AI tools have been used with considerable success in radiology, where the technology can pick up details in images that can improve diagnoses. And AI is also being used in places like the Emergency Department, inpatient care, and population health programs.
“Financial ROI is important, but it’s not the only factor that health systems should be focusing on,” said Jared Antczak, chief digital officer at Sanford Health.
The rapid pace of development for AI tools in healthcare is tied to the technology’s potential to solve a wide variety of the industry’s biggest problems. But without a good enterprise-wide strategy in place, some organizations are launching projects with an uncertain ROI and putting pressure on executives to find value after the fact. Advocates suggest starting with small AI programs that have a defined ROI, especially in areas where the value is clear.
In other words, think before you act.
“AI is not and should not be a strategy in and of itself,” Antczak said. “It’s a potential tool that can be used to solve a problem. But really tools are enablers of strategies, not strategies by themselves. We need to avoid the trap of doing technology for the sake of technology and really leverage technology to create value in people’s lives.”
“Knowledge is expanding faster than our ability to assimilate it and apply it effectively,” he added. AI is “a powerful tool that can sift through the noise and the information overload and really help clinicians by lifting up the things that matter.”
Albert Karam, vice president of data strategy analytics at the Parkland Center for Clinical Innovation, a research institute allied with Dallas-based Parkland Health, said the health system is using a predictive AI tool in Emergency Departments at Parkland Hospital and University of Texas Southwestern Medical Center to assess patients’ mortality risk over the next 12 to 72 hours and help determine when patients should be scheduled for surgery.
“The idea here is that the orthopedic surgeons … use that information to decide whether to take [those patients] into surgery,” he said. “If things are looking a little bit grim … they might … try and get some of those metrics better before taking them in.”
Karam said the score generated by the AI tool draws on many data sources and is updated hourly.
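Parkland has not published the model’s inputs or weights, but a risk score built from multiple data sources and refreshed on a schedule generally follows the shape of the sketch below; the field names, coefficients, and output here are hypothetical and purely illustrative.

```python
# Illustrative sketch only: Parkland's actual model, features, and weights are not public.
# It shows the general shape of an hourly-refreshed mortality risk score that combines
# several data sources, as described above. Every field name and coefficient is hypothetical.
import math
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    age: int
    heart_rate: float        # most recent vitals
    systolic_bp: float
    lactate: float           # most recent labs
    comorbidity_count: int   # pulled from the problem list

def mortality_risk_score(p: PatientSnapshot) -> float:
    """Toy logistic-style score in [0, 1]; a real model would be trained, not hand-weighted."""
    z = (
        -6.0
        + 0.03 * p.age
        + 0.02 * max(p.heart_rate - 90, 0)
        + 0.04 * max(100 - p.systolic_bp, 0)
        + 0.80 * max(p.lactate - 2.0, 0)
        + 0.30 * p.comorbidity_count
    )
    return 1 / (1 + math.exp(-z))

# In production this would run on a schedule (e.g., hourly) over the current census
# and write the refreshed score back where the surgical teams can see it.
print(mortality_risk_score(PatientSnapshot(74, 112, 92, 3.1, 4)))
```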
“It’s one of the life-and-death models [that is] a little bit morbid but incredibly useful,” he said. “They were literally having yelling matches in the hallway between the orthopedic surgeons and some of the other surgeons to decide whether or not to take them into surgery, and that has completely gone away.”
Another AI tool, focused on evaluating a patient for sepsis risk, was introduced in the inpatient setting, Karam said. It worked so well that executives in oncology and OB-GYN asked to have it reconfigured for their departments as well, and just recently the tool was reconfigured again to address whether sepsis is present in a patient on admission in the ED.
Antczak said Sanford Health has several predictive AI tools in use with clinical applications, addressing such issues as risk of colon cancer and chronic kidney disease.
“We’ve developed a number of different algorithms around disease state progression and anticipation to really enable our clinicians and our patients to potentially intervene sooner,” he said.
“Sometimes that word ‘healthcare’ is a bit of a misnomer,” Antczak added. “Really, we’re in the business of sick care. We wait until people are sick, and then we react, and we treat them and try to keep them well. We try to keep disease from progressing. But really if we want to become healthcare providers, we need to get further upstream. We need to look at ways to prevent disease from progressing to begin with, and that’s really where I think … AI can help us.”
Karam noted that programs focused on clinical outcomes often take longer to show ROI, which can be a challenge for a health system looking to contain costs.
“Some of the ROI analysis that we do is in lives impacted and lives saved even though we know that this is going to cost more dollars and cents up front,” he pointed out.
In addition, both he and Antczak noted, it takes a while to properly plan and develop an AI program.
“I don’t think people realize to successfully launch and do appropriate quality assurance on these models, it does take a significant amount of time,” Karam said.
It takes “about a year from ideation to starting a pilot,” he said. “And then we’ll pilot that model in one or two departments for another 4-6 months or so before rolling it out to the whole hospital. So about the minimum amount of time from ideation to implementation, even at a pilot level, is anywhere from a year to a year and a half. Which is not a fast turn-around, but there are so many checks and balances, so much with that data governance.”
And finally, the value of using AI in clinical care has to be measured against the risk. Many healthcare organizations are still trying to figure out how to use AI correctly, with the understanding that bad data or prompts can create bad outcomes—including, potentially, patient harm.
“Everyone is trying to identify where the guardrails are,” said Antczak, who noted that Sanford Health uses a tiered structure to identify risk in AI programs. Both he and Karam said it’s essential to balance any risky AI programs with human review. In any case where AI affects a patient, they said, someone other than the technology has to make the final decision.
“It’s absolutely the final decision of the clinician or the nurse,” Karam said.
The Los Angeles health system has launched XAIA, an AI-enhanced VR app designed for use with the new Apple Vision Pro headset
A health system pioneer in the use of AR and VR technology is launching a new VR app for mental health—to be used with the new Apple Vision Pro headset.
Cedars-Sinai, which has been using AR and VR for several years for a variety of treatments, last week unveiled the XAIA (eXtended-reality Artificially Intelligent Ally) app, giving users what the Los Angeles-based health system calls an “immersive therapy session led by a trained digital avatar, programmed to simulate a human therapist.”
Healthcare organizations have long experimented with AR and VR in areas like labor and delivery, pain management, pediatric care, neurological care (including concussion diagnosis and treatment), and behavioral health. The form factor holds promise for both inpatient and home use, and as an educational tool as well as a clinical tool.
“Apple Vision Pro offers a gateway into Xaia’s world of immersive, interactive behavioral health support—making strides that I can only describe as a quantum leap beyond previous technologies,” XAIA co-founder Brennan Spiegel, MD, MSHS, a professor of medicine, director of health services research at Cedars-Sinai and a pioneer in researching and using the technology, said in a press release. “With XAIA and the stunning display in Apple Vision Pro, we are able to leverage every pixel of that remarkable resolution and the full spectrum of vivid colors to craft a form of immersive therapy that’s engaging and deeply personal.”
Cedars-Sinai’s strategy here is to connect its new app with Apple’s latest consumer-facing technology, marrying consumer marketing with clinical use cases. XAIA was created by Spiegel and Omer Liran, MD, a psychiatrist at Cedars-Sinai, and is licensed by the health system for commercial sale through a spinoff company created by Spiegel and Liran called VRx Health.
The app is designed to take the user into a “spatial environment,” such as a beach or meadow, where an AI-enhanced avatar programmed to simulate a human therapist guides the user through a variety of treatments, including meditation and deep breathing exercises.
Last year, Spiegel and his team tested XAIA on 14 patients living with moderate anxiety or depression. The results of the study, published in the online journal Nature, indicated that patients “described the digital avatar as empathic, understanding, and conducive to a therapeutic alliance,” though some still preferred a human therapist.
“Virtual reality (VR) employs spatial computing to create meaningful psychological experiences, promoting a sense of presence,” Spiegel and his team explained in the study’s abstract. “VR’s versatility enables users to experience serene natural settings or meditative landscapes, supporting treatments for conditions like anxiety and depression when integrated with cognitive behavioral therapy (CBT). However, personalizing CBT in VR remains a challenge, historically relying on real-time therapist interaction or pre-scripted content.”
“Advancements in artificial intelligence (AI), particularly Large Language Models (LLMs), provide an opportunity to enhance VR’s therapeutic potential,” they added. “These models can simulate naturalistic conversations, paving the way for AI-driven digital therapists.”
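The study does not disclose XAIA’s prompts, model, or safety layers, but an LLM-driven digital therapist of the kind the authors describe typically runs a simple conversation loop like the hypothetical sketch below, where generate_reply stands in for the actual model call and the system prompt is invented for illustration.

```python
# Hypothetical conversation-loop sketch; not XAIA's actual implementation.
# generate_reply() is a placeholder for a large language model call.

SYSTEM_PROMPT = (
    "You are a supportive digital guide. Use reflective listening and "
    "CBT-style questions. You are not a replacement for a human therapist; "
    "encourage the user to seek professional help for anything serious."
)

def generate_reply(messages: list[dict]) -> str:
    # Placeholder: a real system would send the running message history to an LLM.
    return "That sounds difficult. What was going through your mind at that moment?"

def session() -> None:
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    while True:
        user_text = input("You: ")
        if user_text.lower() in {"quit", "exit"}:
            break
        messages.append({"role": "user", "content": user_text})
        reply = generate_reply(messages)
        messages.append({"role": "assistant", "content": reply})
        print("Guide:", reply)

if __name__ == "__main__":
    session()
```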
The research is still a work in progress, and the researchers said the app should be used to augment human counselors rather than replace them. The study noted that XAIA sometimes questioned a patient too much, as a less experienced therapist might, or reverted to explaining coping mechanisms rather than probing further into why a patient was struggling. The app also occasionally recommended a treatment without explaining why it would work.
“These results provide initial evidence that VR and AI therapy has the potential to provide automated mental health support within immersive environments,” Spiegel said in a separate press release supporting the study. “By harnessing the potential of technology in an evidence-based and safe manner, we can build a more accessible mental healthcare system.”
“The prevalence of mental health disorders is rising, yet there is a shortage of psychotherapists and a shortage of access for lower income, rural communities,” he said. “While this technology is not intended to replace psychologists—but rather augment them—we created XAIA with access in mind, ensuring the technology can provide meaningful mental health support across communities.”
With healthcare organizations embracing AI at a frantic pace, health system leaders need to get in front of adoption and make sure new programs are carefully reviewed and vetted
Healthcare organizations need to plan carefully when setting up a review committee for AI strategy, even incorporating a few skeptics to make sure they’re getting the full picture of how the technology should and shouldn’t be used.
That’s the takeaway from the recent HealthLeaders AI NOW virtual summit panel. The panel, Plotting an AI Strategy: Who Sits at the Table?, featured executives from Northwell Holdings, Ochsner Health, and UPMC and offered advice on how to manage AI within the healthcare enterprise.
Jason Hill, MD, MMM, Ochsner Health’s chief innovation officer, said a review committee should ideally consist of between seven and 12 members. It should include the CFO or someone within that department “who understands what ROI is,” someone representing the legal and compliance teams, a medical ethicist or bioethicist, a behavioral science expert, and clinicians and technology experts.
“We don’t really want to get just ‘new shiny things syndrome’ … and so be very sure that you’ve got someone who’s a little bit of a contrarian,” he said.
Marc Paradis, vice president of data strategy at Northwell Holdings, expanded on that idea, saying a committee should have a rotating “10th person,” who would look at an AI program or project from the opposite angle.
“In any given meeting,” he said, “it’s someone’s turn to be the contrarian. It’s someone’s turn to be the alternative thinker. It’s someone’s turn to be asking ‘What if’ or ‘Why not’ or to be kind of trying to poke those holes in the group think that can very easily occur. … It helps everyone begin to develop some of those critical thinking skills ... to get a more robust conversation going.”
“The people who sit at the table need to establish a good sense of transparency and communication,” added Chris Carmody, chief technology officer and senior VP of the IT division at UPMC. “We have to make sure we’re communicating about what’s happening and how people can effectively use the tools that are available to them.”
AI governance within the healthcare organization is a crucial topic, especially with the fast pace of AI development in the industry. Many hospitals are struggling to understand whether they’re ready to test and use the technology, and what steps they need to take to ensure that clinicians and staff know how to use AI and that programs are monitored to prevent misuse or errors.
That extends to vendor partnerships as well. All three panelists warned that many companies claim to have AI tools or AI embedded in their technology because it’s the big thing right now, even when what they’re offering doesn’t really address a care gap or concern. Executives need to make sure a new product isn’t creating new problems where none existed before (especially in security) and isn’t duplicating something the health system is already doing on its own.
“How does this new LEGO piece fit into our technology ecosystem?” Carmody asked.
He also noted that UPMC has what he calls “our Avengers or our Justice League,” a group of skilled architects who review technology before the health system decides whether to buy it.
Paradis pointed out that health systems have to rethink how they govern the technology, balancing the benefits against the possibility of mistakes being made.
“My personal take on this is I think we have to recognize that this is a brand new technology, [and] we don’t know what we don’t know,” he said. “It’s going to make mistakes. It will do strange things. It will surprise us in ways that we did not expect, both in a very good way and in a very bad way.”
“The appropriate thing to do from a leadership standpoint is to step up and say to the community at large: These are the guiding principles, this is what we believe, this is how we are rolling it out, [and] these are the guardrails,” he said. “Something will inevitably go wrong somewhere along the way and what we commit to you is when something goes wrong, we will bring everyone who is affected by that to the table at that time to … figure out how that never happens again and to improve the system overall.”
Paradis noted there is always a certain amount of danger in launching a new tool or technology, but there can also be harm in holding back a technology that has the potential to improve healthcare and save lives.
“We have to remember that we are on the very shallow part of this growth curve in terms of … what these tools can do, and what we don’t want to do is—and I’m very worried this is going to happen from a regulatory standpoint—what we don’t want to do is be so concerned that we completely shut down and stop AI,” he said. “We just have to be open and honest about it.”
Healthcare organizations that embrace AI need to first decide who is in charge.
On this week's episode of HL Shorts, we hear from Jason Hill, Innovation Officer at Ochsner Health, one of our expert panelists during the recent HealthLeaders AI NOW Virtual Summit. In the session "Plotting an AI Strategy: Who Sits at the Table?," Hill explains the three types of AI now being used in healthcare—and why each type of technology requires a different type of governance.
The new rule, announced today, enables healthcare providers to use audio-visual telemedicine platforms to evaluate new patients for methadone treatment programs
Healthcare organizations looking to get a handle on the opioid abuse epidemic can now use telemedicine to extend opioid treatment programs (OTPs) to the home.
The announcement marks the first time in 20 years that HHS has revised its rules to expand treatment options. Healthcare organizations have long been restricted in how they can use telemedicine and digital health tools for substance abuse treatment, which often requires in-person services that hinder patients facing barriers to access.
“This final rule represents a historic modernization of OTP regulations to help connect more Americans with effective treatment for opioid use disorders,” Miriam E. Delphin-Rittmon, PhD, the HHS Assistant Secretary for Mental Health and Substance Use and the leader of SAMHSA, said in an accompanying press release. “While this rule change will help anyone needing treatment, it will be particularly impactful for those in rural areas or with low income for whom reliable transportation can be a challenge, if not impossible. In short, this update will help those most in need.”
Other aspects of the final rule that aid in treatment expansion include making permanent a pandemic-era waiver that allows providers to prescribe take-home doses of methadone; allowing nurse practitioners and physician assistants to order medications for treatment programs (where states allow); removing the requirement that a patient have a history of addiction for at least a year before entering a program; expanding access to interim treatment; and “promoting patient-centered models of care that are aligned with management approaches for other chronic conditions.”
The federal rule continues a nationwide effort to address substance abuse—and, in a larger context, behavioral health issues—through new programs that take into account both the nationwide shortage of qualified providers and barriers to access, including social determinants of health.
“At HHS, we believe there should be no wrong door for people who are seeking support and care to manage their behavioral health challenges, including when it comes to getting treatment for substance use disorder,” HHS Deputy Secretary Andrea Palm said in the press release. “The easier we make it for people to access the treatments they need, the more lives we can save. With these announcements, we are dramatically expanding access to life-saving medications and continuing our efforts to meet people where they are in their recovery journeys.”
The rule doesn’t make all the restrictions disappear. It specifies that providers can use telemedicine to evaluate a new patient for entering methadone treatment but not for prescribing methadone.
Prescribing rules are still very tricky in substance abuse treatment. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008 prohibited the online prescription of scheduled drugs, though it did call for a process by which providers could register with the U.S. Drug Enforcement Administration (DEA) to prescribe some controlled drugs via telemedicine without first needing an in-person evaluation. The DEA never set up that process, despite intense lobbying from the American Telemedicine Association and others to do so.
With the pandemic, HHS established a number of waivers aimed at expanding access to telehealth and digital health, including allowing for virtual prescriptions. Those waivers ended last year with the federal Public Health Emergency, but Congress voted to extend many of them through the end of 2024. The DEA has extended its waiver through the end of the year as well, as it works to develop new, permanent rules for prescribing via telemedicine.
A Kaiser Permanente study of ambient AI scribes used to capture doctors’ notes and enter data into the EHR finds that they are improving the doctor-patient experience, but doctors still need to edit their notes
Ambient AI scribes designed to transcribe patient-physician encounters into the EHR may hold promise in reducing clinician workloads, but they aren’t there yet.
That’s the conclusion drawn from a recent study of more than 3,000 clinicians at the northern California-based Permanente Medical Group (TPMG) who used the technology in late 2023. The study, appearing online today in NEJM Catalyst Innovations in Care Delivery, finds that the AI tool accurately represented the conversation between doctor and patient, but clinicians still had to do a significant amount of editing.
“Ongoing enhancements of the technology are needed and are focused on direct EHR integration, improved capabilities for incorporating medical interpretation, and enhanced workflow personalization options for individual users,” the study team, comprised of eight Kaiser Permanente researchers and executives, concluded. “Despite this technology’s early promise, careful and ongoing attention must be paid to ensure that the technology supports clinicians while also optimizing ambient AI scribe output for accuracy, relevance, and alignment in the physician–patient relationship.”
While automation and AI technology have been around for several years, the rapid advances of new forms of the technology have created a stir in several industries, including healthcare. AI and large language model (LLM) tools have the potential to not only handle administrative and back-office processes, but reduce workloads and stress for clinicians and staff by handling time-consuming and computer-driven tasks. Ambient AI scribes, for example, are designed to capture conversations and input data into the EHR, giving clinicians and staff the opportunity to interact with patients more freely instead of typing words into a laptop or trying to recall the gist of the conversation later.
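The study doesn’t describe TPMG’s vendor integration in technical detail, but the general ambient-scribe workflow it outlines (capture the encounter, transcribe it, summarize it into a draft note, and hold the draft for clinician review before it reaches the chart) might be structured roughly like this sketch; all of the service functions are hypothetical placeholders.

```python
# Rough sketch of the ambient-scribe flow described above; not Kaiser Permanente's
# actual system. transcribe(), summarize_to_note(), and push_to_ehr() are hypothetical
# placeholders for vendor- and EHR-specific calls.

def transcribe(audio_path: str) -> str:
    """Speech-to-text step (placeholder)."""
    raise NotImplementedError

def summarize_to_note(transcript: str) -> str:
    """LLM summarization into a structured draft note (placeholder)."""
    raise NotImplementedError

def push_to_ehr(note: str, encounter_id: str) -> None:
    """File the signed note in the EHR (placeholder)."""
    raise NotImplementedError

def clinician_review(draft: str) -> str:
    """The key step from the study: the clinician edits and confirms the draft."""
    print("DRAFT NOTE:\n", draft)
    edited = input("Edit the note (or press Enter to accept as written): ")
    return edited or draft

def ambient_scribe_workflow(audio_path: str, encounter_id: str) -> None:
    transcript = transcribe(audio_path)
    draft_note = summarize_to_note(transcript)
    # Nothing is filed automatically; the clinician reviews before it reaches the chart.
    final_note = clinician_review(draft_note)
    push_to_ehr(final_note, encounter_id)
```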
While not the first of its kind, the Kaiser Permanente study is one of the largest to test the technology in a clinical setting. It gives healthcare executives valuable insight into where the technology stands now and what needs to be done to make it more effective.
According to the study, some 6,000 Kaiser Permanente clinicians have been using software-based medical dictation technology for at least two years. In August 2023, TPMG launched a two-week pilot with 47 physicians using an AI scribe; based on positive reactions from the physicians, the organization then secured licenses for 10,000 physicians and staff across several settings.
According to researchers, 3,442 physicians used that tool in the first 10 weeks of implementation for 303,266 encounters, with almost 100 physicians using the tool more than 100 times and one doctor using the tool for 1,210 encounters. Overall, the tool was used more than 19,000 times a week in seven of the 10 weeks studied.
In studying how clinicians and their staff used the technology, the research team identified four aspects of ambient AI scribes that would facilitate effective use:
Facilitate engagement by demonstrating growing and sustained adoption of ambient AI by number of clinicians and percentage of patient encounters across diverse specialties and settings.
Aim for effectiveness by reducing the burden of documentation within and outside of direct patient encounters.
Enhance the physician–patient relationship by increasing the amount of time physicians spend interacting with patients by improving engagement and reducing time spent interacting with a computer.
Maintain documentation quality by developing approaches to assess and safely use ambient AI technology capabilities in transcription and summarization.
And at the end of the study, the team listed four takeaways:
Ambient AI scribes “show early promise” in reducing the burden on clinicians to take notes and spend extra time entering that data into the EHR.
Both clinicians and patients said the technology improved the care experience, and some clinicians called the technology “transformational.”
While a review of AI-generated transcripts resulted in an average score of 48 out of 50 in 10 key factors, that doesn’t mean they can replace clinicians. There were inconsistencies, and clinicians still had to review the notes and make corrections “to ensure that they remain aligned with the physician-patient relationship.”
“Given the incredible pace of change, building a dynamic evaluation framework is essential to assess the performance of AI scribes across domains including engagement, effectiveness, quality, and safety.”
The research team also noted that AI technology is evolving quickly.
“The approaches to robustly evaluate the quality and safety of AI technologies, including tools such as large language models, remain incompletely defined,” they said. “The underlying algorithms and relevant regulations are also continuing to evolve rapidly, which will necessitate ongoing benchmarking, evaluation, and monitoring as the technology improves and vendors bring new software to market. Adoption rates and usage patterns are also expected to change as new user groups and application domains are identified and tested.”
With that in mind, the study offered advice for other healthcare organizations aiming to evaluate ambient AI scribes.
Find clinical champions to overcome barriers to adoption and create a culture that embraces innovative ideas.
Start with a limited pilot involving a small number of clinicians, then scale up to a regional or larger-scale pilot with “opportunities for clinician and patient feedback that result in ongoing improvement that is tangible to stakeholders.”
Develop monitoring and benchmarking processes “that offer proactive assessment of the tools and their impact on meaningful goals.”
The Tennessee-based health system has migrated its data to a FHIR-based platform and now plans to use AI to address administrative and clinical efficiencies.
Community Health Systems has announced a collaboration to develop generative AI programs on Google Cloud.
The Tennessee-based health system, comprising 71 hospitals and more than 1,000 healthcare sites across 15 states, announced today that it has completed migration to a FHIR-based clinical data platform on Google Cloud.
“The goal of this migration extends well beyond modernizing our data infrastructure,” Miguel Benet, MD, MPH, FACHE, CHS’ senior vice president of clinical operations, said in a press release. “By building a secure foundation to take advantage of new innovations in AI, we’re able to streamline our clinical providers’ workflow and advance the way we deliver patient care.”
Tech giants like Google, Microsoft, and Amazon are partnering with health systems and hospitals to develop enterprise-level AI programs, combining the data storage and analysis capabilities of the former with the clinical and administrative expertise of the latter. In December, Google unveiled a new suite of healthcare AI models called MedLM, built off the Med-PaLM 2 large language model introduced earlier in the year, as well as an early iteration of its next-gen generative AI model called Gemini.
One of Google’s biggest partners is HCA Healthcare, also based in Tennessee, which has been piloting AI technology in Emergency Departments (through smartglasses) and to help nurses with documenting patient encounters.
“We’re on a mission to redesign the way care is delivered, letting clinicians focus on patient care and using technology where it can best support doctors and nurses,” Michael J. Schlosser, MD, MBA, FAANS, HCA’s senior vice president of care transformation and innovation, said in a press release. “Generative AI and other new technologies are helping us transform the ways teams interact, create better workflows, and have the right team, at the right time, empowered with the information they need for our patients.”
CHS is looking to build on its centralized data repository on Google Cloud’s health data platform to improve interoperability and drive real-time data analysis. The health system also plans to use Vertex AI and other large language models to target both administrative and clinical efficiencies, even pairing AI with Google Maps to give patients personalized resources in their communities.
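CHS hasn’t shared implementation details, but a basic call to a Vertex AI generative model, for example to draft a plain-language patient summary, might look roughly like the sketch below. The project ID, model name, and prompt are assumptions, and the SDK module path varies by version.

```python
# Hedged sketch of a Vertex AI generative-model call; not CHS's actual integration.
# Module paths vary by SDK version (older releases use vertexai.preview.generative_models),
# and the project ID, model name, and prompt below are assumptions.
import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(project="my-gcp-project", location="us-central1")  # hypothetical project

model = GenerativeModel("gemini-1.0-pro")  # model name is an assumption
response = model.generate_content(
    "Rewrite these discharge instructions at a 6th-grade reading level:\n"
    "Take lisinopril 10 mg daily. Follow up with cardiology in two weeks."
)
print(response.text)
```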
With version 2.0 now supporting FHIR-based exchange, Mariann Yeager of the Sequoia Project says the final draft of standards for nationwide interoperability should be unveiled by the end of March.
Healthcare organizations with a vested interest in interoperability should be taking a close look at version 2.0 of the Trusted Exchange Framework and Common Agreement (TEFCA), which now supports FHIR-based exchange.
The government-supported effort to create nationwide interoperability standards has been more than two years in the making, coming out of the 21st Century Cures Act. This past December, five healthcare organizations were the first to be certified as Qualified Health Information Networks (QHINs), giving them the standing to support data exchange.
Mariann Yeager of the Sequoia Project says the biggest takeaway from version 2.0 is federal recognition of FHIR (Fast Healthcare Interoperability Resources), the HL7 standard that defines how healthcare information can be moved between disparate platforms.
“The most important thing for people to understand is that version 2.0 was revised to support FHIR-based exchange,” she told HealthLeaders. “There are new use cases to support healthcare operations and public health. The other thing is it does permit health systems that participate in TEFCA-based exchange to connect to multiple QHINs, to the extent that they support multiple data sources.”
Yeager also said she expects more conversation around health systems that appoint another entity to exchange healthcare data.
Writing in the HealthITbuzz blog earlier this month, Chris Muir and Alan Swenson of the Health and Human Services Department’s Office of the National Coordinator for Health IT (ONC) said the unveiling of five QHINs and the release of TEFCA version 2.0 “continue the momentum” toward a nationwide interoperability platform this year.
“In the short-term, ONC and the TEFCA RCE anticipate ‘facilitated FHIR’ exchange beginning to be implemented as part of TEFCA exchange as early as the first quarter of calendar year 2024 connected to the release of Common Agreement Version 2,” they said. “As in Version 1, Version 2 of the Roadmap describes facilitated FHIR exchange in which Qualified Health Information Networks (QHINs) provide the network infrastructure to support FHIR API-based exchange between TEFCA Participants and Subparticipants from different QHINs.”
“Specifically, if a TEFCA Participant or Subparticipant wants to obtain a patient’s data using FHIR, they will go to their QHINs to determine who has the patient information,” Muir and Swenson continued. “Patient discovery will take place through the QHIN-to-QHIN interaction, including discovery of the FHIR endpoints for those that have the patient data. The initiating Participant or Subparticipant will then directly (i.e., without going through the QHIN) and securely query each of those endpoints.”
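Muir and Swenson don’t include sample code, but once an endpoint has been discovered, the kind of direct FHIR query they describe might look roughly like the sketch below. The endpoint URL, token, and search parameters are hypothetical, and TEFCA’s security and trust requirements are omitted.

```python
# Illustrative sketch of a direct FHIR query after QHIN-to-QHIN patient discovery.
# The endpoint URL and token are hypothetical; real TEFCA exchange layers its own
# security and trust framework on top of this, which is omitted here.
import requests

FHIR_BASE = "https://example-participant.org/fhir"  # hypothetical discovered endpoint
HEADERS = {
    "Authorization": "Bearer <access-token>",  # obtained per the network's security rules
    "Accept": "application/fhir+json",
}

# Search for the patient by demographics, then pull that patient's documents.
params = {"family": "Doe", "given": "Jane", "birthdate": "1980-01-01"}
bundle = requests.get(f"{FHIR_BASE}/Patient", params=params, headers=HEADERS).json()

for entry in bundle.get("entry", []):
    patient_id = entry["resource"]["id"]
    docs = requests.get(
        f"{FHIR_BASE}/DocumentReference",
        params={"patient": patient_id},
        headers=HEADERS,
    ).json()
    print(patient_id, len(docs.get("entry", [])), "documents found")
```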
Yeager says she’s excited to see data exchange scaled up to a national level.
“There are different ways in which FHIR is being used,” she said, noting that TEFCA previously supported content exchange and is now embracing native FHIR. “We’re talking about … facilitating FHIR-based exchange with each other. What that enables is nationwide scale. This is an unprecedented opportunity in the US to support FHIR-based exchange at such scale.”
The five QHINs (MedAllies, the eHealth Exchange, Epic Nexus, Health Gorilla, and the KONZA National Network) have been exchanging data since TEFCA officially went live in December. Yeager says “several others” are going through the process to become designated QHINs, and other healthcare organizations are preparing to take that route as well.
“They really see FHIR as an important functionality,” she said of the first QHINs.
Aside from gathering information through the public comment period, Yeager says the Sequoia Project will be scheduling public information webinars as well as targeted feedback sessions over the next several weeks to prepare the final version.
Muir and Swenson of the ONC said there are more goals ahead.
“Looking forward, the updated Roadmap describes two more phases of FHIR implementation beyond facilitated FHIR exchange,” they wrote in the blog. “The next phase, QHIN-to-QHIN FHIR Exchange, [will] enable QHINs to leverage FHIR-based exchange for exchange between QHINs while continuing to support non-FHIR approaches within the QHINs’ internal networks.”
“The last phase, End-to-End exchange, would permit a Participant/Subparticipant to seamlessly exchange FHIR data between themselves and other network members through the QHINs and multiple other intermediaries both within a QHIN’s network and through the TEFCA-governed network,” they added.
Yeager expects interoperability to be an ever-evolving process.
“TEFCA is really going to be evolutionary,” she said. “We will definitely be learning as we go, learning and adjusting. … You learn by putting things into practice.”
A new law allows Garden State health systems to expand their Hospital at Home programs to include Medicaid patients and those on private insurance
Health systems in New Jersey are now able to expand their Hospital at Home programs to patients in Medicaid and private insurance, thanks to a new state law.
The Hospital at Home Act, which was passed by the state Legislature and signed by Governor Phil Murphy in September 2023 and enacted into law on January 23, establishes a state Hospital at Home permitting process through the New Jersey Department of Health that is consistent with the Centers for Medicare & Medicaid Services’ Acute Hospital Care at Home Program.
Executives at Virtua Health, which launched its Hospital at Home program two years ago and now offers services through five of its hospitals in the southern part of the state, hailed the new law. Aside from introducing patients in the state’s NJ Family Care and Medicaid programs to the service, the law enables the health system to work with private payers to cover the program.
“We are excited to see Hospital at Home expand in New Jersey through this legislation, and we believe our state can serve as a template for the rest of the country,” Michael Capriotti, MBA, senior vice president of integration and strategic operations for Virtua Health, told the Gloucester City News earlier this week. “It is important that we continually innovate to create the best possible experiences and outcomes for our patients.”
More than 300 health systems and hospitals across the country are following the guidelines set by the CMS program, which includes a waiver, put in place during the pandemic in 2020, that allows the healthcare organization to qualify for Medicare reimbursement. That waiver is due to expire at the end of this year, and supporters are lobbying both Congress and CMS to make that waiver permanent.
The program targets patients who would otherwise be admitted to the hospital, creating a home-based care management plan that often includes multiple daily visits by care teams, virtual care services, and remote patient monitoring. Some programs have added ancillary services to address social determinants of health, imaging and tests, and pharmacy and rehab needs.
New Jersey is one of the first states to establish specific state guidelines for the program.
According to Virtua Health, the health system has enrolled more than 900 patients, representing more than 60 different medical conditions, in the program.
According to a recent national study of the program by researchers at Mass General Brigham—one of the first health systems to launch the program—the Hospital at Home concept has reduced the mortality rate for patients who would otherwise be hospitalized; it has also reduced the escalation rate (returning to the hospital for at least 24 hours) and the rehospitalization rate within 30 days of discharge.
“Home hospital care appears quite safe and of high quality from decades of research — you live longer, get readmitted less often, and have fewer adverse events,” David Levine, MD, MPH, MA, clinical director for research and development for Mass General Brigham’s Healthcare at Home, said in a press release. “If people had the opportunity to give this to their mom, their dad, their brother, their sister, they should.”
Federally qualified health centers (FQHCs) are using telehealth and digital health tools to improve access and erase care siloes for millions of underserved Americans
Federally Qualified Health Centers (FQHCs) are often the first point of contact for underserved populations seeking access to care. And often that first impression can make all the difference in accessing care that improves outcomes.
At Kenosha Community Health Center, that first contact is now handled by a nurse who can quickly and efficiently funnel the patient to the right care provider.
“We’re seeing a higher volume of patients with more complex needs, so it’s important that we make this as efficient as possible,” says Mary Ouimet, the Wisconsin-based health center’s CEO. “When you have more than 450 calls a day, that can be a bottleneck.”
Kenosha, part of the Pillar Health network, is one of several FQHCs to collaborate with Conduit Health Partners on nurse triage services. And that’s part of an even larger trend of FQHCs, rural health centers (RHCs), and assorted community health clinics outsourcing some services and using telehealth and digital health technology to alleviate those bottlenecks that keep patients from accessing the care they need.
There are an estimated 1,400 FQHCs and more than 4,400 RHCs in the US, according to the Health and Human Services Department’s Health Resources and Services Administration (HRSA), which supervises funding for those providers. They, along with look-alike (LAL) organizations, provide care and resources for more than 30 million Americans, many of whom can’t afford or access care at a hospital, health system, or primary care provider.
With the Centers for Medicare & Medicaid Services (CMS) loosening the purse strings on Medicare and Medicaid coverage, these providers are embracing new technologies to improve access to care and resources. At Kenosha, that means instituting a digital nurse triage service that channels the right patients to the right care.
“This is an essential function of the health center,” says Ouimet, who estimates that 100-150 incoming calls a day are now connected to Conduit Health nurses. “These are nurses at the other end who can work with [patients] to coordinate care. The average call time is reduced, and we’re improving time to treatment and bed scheduling. It’s just better care.”
In Massachusetts, meanwhile, an organization serving the commonwealth’s 52 community health centers covering more than 300 sites and 1 million patients is using HRSA grant funding to maintain a technology platform that keeps track of when and where patients receive care. The platform, developed by Bamboo Health, sends real-time notifications to care teams when a patient visits another care provider outside the system, enabling the care team to access admission, discharge and transfer data.
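Bamboo Health’s actual payload format isn’t described here, but notification feeds like this are typically built on admission, discharge, and transfer (ADT) events. The sketch below shows how a care team’s system might consume such an event; the JSON shape is invented for illustration.

```python
# Hypothetical sketch of consuming a real-time ADT-style event (admission, discharge,
# transfer). The JSON shape below is invented and is not Bamboo Health's payload format.
import json

def handle_adt_event(raw_event: str) -> None:
    event = json.loads(raw_event)
    patient = event["patient_mrn"]
    event_type = event["event_type"]   # e.g., "A01" = admit, "A03" = discharge
    facility = event["facility"]

    if event_type == "A03":
        # Discharge at an outside facility: queue outreach so the care team
        # can follow up soon after discharge.
        print(f"Schedule follow-up for {patient}, discharged from {facility}")
    elif event_type == "A01":
        print(f"Flag chart: {patient} admitted at {facility}")

# Example event in the hypothetical format
handle_adt_event(json.dumps({
    "patient_mrn": "123456",
    "event_type": "A03",
    "facility": "Outside Hospital ED",
}))
```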
Susan Adams, vice president of health informatics for the Massachusetts League of Community Health Centers, says the technology gives care teams instant digital access to information that would otherwise be siloed away, creating gaps in care that could affect outcomes. Previously, she said, those care teams had to ask for paper printouts of those visits, then manually enter the data into the patient’s medical record.
“We could be at the printer all day long,” she says.
Thirteen of the Mass League’s CHCs were originally put on Bamboo Health’s platform to monitor some 400,000 patients. According to the organizations, those CHCs saw a 47% reduction in 30-day readmissions among ED patients, a 20% reduction in 30-day readmission among hospitalized patients, and a 33% increase in follow-ups within 30 days of discharge.
The Mass League is now expanding that platform to more CHCs.
“We aren’t getting all the data we need to manage these patients,” Adams says, noting care teams sometimes never learn that a patient has been hospitalized or visited an ED somewhere else unless it comes up in conversation with the patient. “The more data we can put into [the patient record], the better chance we have of providing care.”
Having a complete patient record, she says, also helps with chronic care management and strategies to address social determinants of health (SDOH), key care programs that CHCs, FQHCs and other health clinics are being asked to take on.
“I think the challenge will come with managing all of these alerts,” Adams says. “But that’s a good challenge. This gives us a chance to address more care [management and] coordination goals. It’s something that we’ve been waiting a long time to do.”