Researchers find no evidence that smart homes and home health monitoring technologies "help address disability prediction and health-related quality of life, or fall prevention."
Researchers have found no evidence that smart homes and home health monitoring help prevent falls or improve health-related quality of life for the elderly.
While internet-linked devices and home health monitoring have the potential to improve care for the elderly, a review of existing evidence in the current issue of the International Journal of Medical Informatics finds only limited evidence of some benefits and concludes that more research into their efficacy is needed.
The research comes after two disappointing studies that found little change in outcomes with some forms of home monitoring.
Still, hospitals and home health agencies increasingly favor the approach to keep track of chronically ill and recently discharged patients at risk of readmission.
The researchers identified 1,863 relevant studies and analyzed 48 with the strongest quantitative evidence.
They found no evidence that smart homes and home health monitoring technologies "help address disability prediction and health-related quality of life, or fall prevention."
Lead author Lili Liu is with the Department of Occupational Therapy at the University of Alberta in Edmonton, Canada. She says providers should not discount systems that patients favor, but should be aware of the lack of research into how well they work.
"If a family member would come to me and say 'I use the same system on my house for security as I do to monitor a family member who is living on the other side of the country,' I would say use it, if it works for you," Liu said.
Still, the review found little in terms of the type of rigorous, controlled studies needed to link health outcomes to technology, she said.
Liu said the devices and systems are flooding the market and in many cases, have not been properly studied or vetted. In Canada, provincial health agencies conduct assessments of new technologies. She suggested that hospitals do the same.
The simple push-button pendants that emerged in the 1970s to summon help have given way to monitoring devices equipped with global positioning systems and connected to cell phones.
Wi-Fi and Bluetooth technology now allow patients to send health data to providers from wearable heart monitors. Home devices allow providers to monitor glucose levels and blood pressure from afar.
Pill boxes remind patients to take medication and let caregivers know if they don't. Digital home health hubs can collect data in one place.
Still, a March UCLA study concluded that the combination of health coaching telephone calls and telemonitoring did not reduce 180-day readmissions for heart failure patients.
In another recent study, researchers found no cost or utilization benefit for patients who used mobile devices and cell phone systems to track hypertension, diabetes, and cardiac arrhythmias.
Joseph Kvedar, head of the Partners Center for Connected Health in Boston, finds fault with both studies, pointing out that only 60% of the patients in the heart failure study fully participated in the program.
'Does Not Fit Preexisting Notions of Clinical Evidence'
The traditional research model used for most clinical trials fails to include factors like "ongoing provider-patient interaction," he said in an email response to questions from HealthLeaders.
"Remote monitoring requires a different way of evaluating effectiveness, which does not fit cleanly into our preexisting notions of clinical evidence," he wrote.
"For example, if a patient is not engaged with or using the remote monitoring intervention, should that technology be deemed unsuccessful in managing a chronic illness?"
Instead, he wrote, researchers should focus on the design elements of programs and how well they engage and motivate patients to comply with treatment plans.
A report from a 2015 National Academies workshop on the future of home healthcare noted that research into home monitoring technology is increasing, but evidence of efficacy is lacking or contradictory.
And, the research is challenging because "the pace of technological advance is faster than traditional research grant cycles, so that by the time a study has been planned and funding has been acquired, the technology to be studied is outdated."
A recent survey by the Healthcare Information and Management Systems Society (HIMSS) of 227 hospitals found that 52% already use three or more forms of connected health technology and 47% plan to expand their programs over the next few years.
Organ delivery teams won't be showing up in brown trucks, but they will be using tablets and small printers to identify and track donated organs. The approach is modeled on the United Parcel Service labeling system.
The nation's organ procurement agencies, working with the federal Health Resources and Services Administration, are in the process of rolling out a new system to streamline the transfer of organs from donors to transplant recipients.
And while the organ delivery teams won't be showing up in brown trucks or wearing brown uniforms, they will be working on an approach modeled on the United Parcel Service's labeling system.
Instead of hand-written labels, some programs are now using tablets and small printers to generate electronic labels that will allow them to identify and track donated organs.
The approach was developed by the Health and Human Services Idea Lab, an organization that seems to defy the image of a slow-moving, uninventive government agency.
That was what David Cartier thought when he saw a notice looking for an entrepreneur to work with HHS on the plan. "It sure seemed like an oxymoron. It's not typically what they do."
An industrial engineer with expertise in process design, Cartier worked for UPS for many years before turning his attention to electronic medical records systems.
In 2013, the agency set up the Idea Lab "to improve how the Department delivers on its mission. This effort was started as a response to input from the workforce and public to promote advances in organizational management."
One of its stated core beliefs: "There is a solution to every problem."
After the agency issued a department-wide call for project ideas, the need to improve the organ transfer system emerged. The lab had an internal team working on the challenge, but it was missing a skill set, says Julie Schneider, a program manager with the Idea Lab.
Cartier, with his supply-chain logistics background, was just the person they needed.
"He really understood from the get-go that if he didn't dive down into the process of organ procurement, the technology would never be accepted," she said.
Safe But Inefficient
To prepare, Cartier watched roughly 70 transplant recoveries and many transfers so he could understand how the process worked and how it could be done better.
"So many times people make big decisions about a policy or process without seeing what the issues are," he says.
Organs were being labeled by hand with the donor name and blood type. Because so many checks were built into the system, it was safe. But it was also inefficient, he said.
Several efforts to develop a bar-coding system for organs had failed in the past.
Cartier found that "everyone did it differently." In some cases, staff had been working for 15 hours, so there were readability issues and transcription errors.
Still, a plan for the implementation of the system that went out for comment noted: "Between 2012 and June 2015, labeling errors accounted for 11% and packaging/shipping errors made up an additional 11% of all voluntary safety reports. During the same period, there were 136 unique labeling and 82 unique packaging/shipping safety situations reported. At least 22 organs associated with these errors were either not recovered or not transplanted."
Speed to Prototype: 4 Months
Cartier and his team considered five different versions of the new system and within four months they had a prototype.
Key information from a national system is shared on a tablet, while local staff add the information for each case. The new approach still requires some visual verification of labels, but the number of visual checks is down from 58 to 4.
HHS hopes to have staff at 80% of hospitals trained to use the program by 2017. It does require some culture change, Cartier said.
"They are driven and they know what works for them," he said. "We are asking them to use a system that is different, uncomfortable, and not what they do, and that is a challenge."
By seeking feedback, however, they were able to get transplant centers to buy into the program.
As of 2015, 35 out of 58 organ procurement agencies had completed required training with the United Network for Organ Sharing, the organization that coordinates organ transfers.
At LifeSource, a Minneapolis organ procurement agency, as organs arrive, the barcode is used to "check" the organ into the hospital, according to an email from Linette Meyer, the agency's organ preservation manager.
In addition to a wrist band patient ID, the intended recipient receives a second, organ ID wrist band, with barcode technology generated during organ recovery. The bands are scanned in the operating room to ensure that the proper organ is being transplanted.
"This new process ensures greater efficiency and safety for the patients receiving these remarkable gifts of life," she wrote.
Findings are mixed on whether the sepsis itself or a pre-existing health problem is driving the elevated mortality rate, but they suggest that "long-term mortality after sepsis could be more amenable to intervention than previously thought."
Sepsis, a dangerous outcome triggered by an infection, can be fast-moving, debilitating, and fatal.
Now researchers are finding that even those who survive tissue damage and organ failure caused by sepsis can have a higher risk of "late death," defined as mortality within two years of being treated for the condition.
Researchers at the University of Michigan Health System are generating new data on the causes of sepsis-related late death, but it is unclear whether the sepsis itself or a pre-existing health problem is driving the elevated mortality rate.
A study in the current issue of the BMJ suggests that pre-existing conditions alone do not account for late death. The findings suggest that "long-term mortality after sepsis could be more amenable to intervention than previously thought," according to the study.
Researchers found that compared to the patients admitted to the hospital with a non-sepsis infection, patients with sepsis had a 10% absolute increase in late death.
The study also found a 16% absolute increase in late death among sepsis patients compared to those admitted with sterile inflammatory conditions. Sepsis was also associated with a 22% absolute increase in late mortality relative to similar, hospitalized adults.
"Taken together, our findings do not refute the importance of baseline burden of comorbidity to patients' long term outcomes after sepsis. They do, however, indicate that sepsis confers an additional risk of late mortality above and beyond that predicted by status before sepsis alone," the researchers wrote.
The incidence of sepsis among hospital patients rose from 621,000 in 2000 to 1,141,000 in 2008, according to the latest figures available from the Centers for Disease Control and Prevention.
High Cost in Lives and Dollars
In 2009, the cost of hospital care for sepsis was an estimated $15.4 billion. From 1997 through 2008, costs for treating patients hospitalized for sepsis increased by an average of 11.9% each year. Efforts are underway to promote early diagnosis and treatment of the condition.
In order to explore the question of whether the elevated mortality is linked to the sepsis or underlying conditions, the researchers analyzed medical records from the University of Michigan's Health and Retirement Study (HRS), a long-term national study of more than 20,000 older Americans.
They found that one in five older patients who survives sepsis has a late death not explained by pre-sepsis health status.
"This suggests that more than one in five patients who survives sepsis dies acutely within the next two years as a consequence of sepsis. Compared with patients admitted to hospital with non-sepsis infection or sterile inflammatory conditions, patients with sepsis experienced a 10% increase in late mortality—or roughly one in 10 had a late death related to sepsis," according to the study.
Sepsis is also a major cause of hospital readmissions, and ongoing research is looking at risk factors. A 2015 study found that one in five sepsis patients in California was readmitted within 30 days.
Hospitals will need to invest in efforts to improve the community's social determinants of health if they want to reduce preventable illness.
When we talk about healthcare quality at hospitals, it is usually in terms of accurate diagnoses, appropriate testing, and evidence-based treatments. But, hospitals are now being asked to pull the lens back and look at quality more broadly.
The change comes with shifting Internal Revenue Service rules, the Affordable Care Act, and evidence linking life struggles such as poor housing with the risk of illness. Now, many healthcare organizations are being held responsible for the health status of their communities, not just their patients.
And we're not talking about community blood drives.
The Centers for Medicare & Medicaid Services' 2016 Quality Strategy calls for the creation of programs to address the so-called "social determinants of health," which the agency describes as "the conditions in which people are born, grow, work, live, and age, and the wider set of forces and systems shaping the conditions of daily life and improving health outcomes."
Isn't that the job of social workers or community activists? It is, but hospitals are now being asked to play a role as well—and it's not one that comes easily.
"Hospitals have always been focused on treatment and not prevention and promotion," Young says. "That's what they've been paid to do. From a cultural standpoint, that's the orientation of hospitals. "
They do not have the infrastructure—intellectual or material—to deal with community health, he says. However, they are facing a paradigm shift.
One place where they may find allies in the effort is at state and city health departments. New findings out of Yale University suggest a positive association between social services spending and better health outcomes.
With new payment models linked to population health, the study makes these partnerships more inviting and offers a metric the C-suite needs to consider.
The researchers found that residents of states with a higher ratio of social to health spending (calculated as social service and public health spending divided by Medicare and Medicaid spending) had better health outcomes on a number of measures, including adult obesity, asthma, mortality rates for lung cancer, heart attacks, and type 2 diabetes.
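As a rough sketch of the metric the Yale team describes (the dollar amounts below are hypothetical, chosen only to illustrate the calculation), the ratio is simply:

\[
\text{ratio} = \frac{\text{social service spending} + \text{public health spending}}{\text{Medicare spending} + \text{Medicaid spending}}
\]

A state that spent, say, $30 billion on social services and public health against $20 billion on Medicare and Medicaid would score 1.5; the study associates higher ratios with better outcomes on the measures above.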
Lead author Elizabeth Bradley, a professor at the Yale School of Public Health, says the study, which focuses on spending rather than services, is one more piece of evidence to support the idea that social services lead to better health.
Rather than count programs or list anecdotes, the study offers "quantitative, very empirically based, heavily statistical evidence," she said.
As hospitals move toward accountable care organization models, they face the same problem that state policy makers have: How much should I be spending on social services?
"Our research would suggest a balanced portfolio of investments if you want to improve people's health," she said. "You have to take care of their nutrition and their housing or you will end up paying more for their medical care."
The changes are being driven not just by CMS, but also by the IRS, which grants tax-exempt status to the nation's nonprofit hospitals.
Hospitals are under pressure to justify their tax breaks, and a debate is underway over what qualifies as community services under the IRS rules. The discussion involves a dizzying effort to sort out IRS rules for "community benefits" versus "community building."
So, it's no wonder there's a lot of confusion. On the American Hospital Association's website, a page devoted to "tax-exempt status" lists the widely varying community benefits contributions—and definitions—reported by state hospital associations.
Calculations include financial assistance to needy patients, Medicaid losses, and subsidized health services, defined in Virginia as "billed clinical services hospitals provide to patients where reimbursements fail to cover hospitals' cost." They also include money for traditional community services like mobile clinics, support groups, and asthma education.
What About Social Services?
"Many hospital administrators, if they are being candid, would say: That is not our job," Young of Northeastern says. "We're are not a public health department. We treat people who we are sick. Now you're asking us to get involved in social services… We can't be all things to all people."
However, he says, there is a push to allow hospitals to include social services in their accounting of community benefits reported to the IRS.
In December, the IRS complied in part and decided that some "housing improvements and other spending on social determinants of health that meet a documented community need may qualify as community benefit."
For some communities—like West Baltimore—the link between social needs and health is so glaring that the Bon Secours Health System did its community needs assessment more than 20 years ago, says Curtis Clark, the health system's vice president of mission.
He says he thinks that approach—known as Community Works—offers lessons for more affluent communities where the issues may be underemployment and prescription drug abuse.
His advice to hospitals that are now setting out on the same journey: Don't assume you know what the community wants. In a neighborhood where street corner drug dealers were the inspiration for the gritty television series "The Wire," his team thought crime and drugs would be major issues.
Instead, residents wanted help dealing with trash and rats. Clark says "the most salient point to understand is the importance of genuine community engagement—genuine partnering, genuine co-creation of the healthy communities predicated on the dignity of the person."
There's hope. And there's reality. On close inspection, the link between cost and quality is actually pretty fuzzy: We just don't know.
One of the incentives for improving the quality of healthcare is the notion that it will also lower costs.
Ideally, patients will have a medical home to go to instead of an emergency room.
Ideally, physicians will choose treatments wisely instead of ordering expensive, low-value scans and lab tests.
Hospitals have already reduced avoidable readmissions. That suggests they got care right the first time, and it translates into real money. Result: lower costs, better quality.
But on close inspection, the link between cost and quality is actually pretty fuzzy.
Some say there is little evidence to support the idea that better care will cost less. It is possible that better quality and high costs can be tackled at the same time.
We just don't know.
And, in cases where one doesn't lead to the other, it may not be realistic to think a single strategy will get the job done.
Two recent studies touch on the issue. A review of five routine but high-volume clinical encounters, such as asthma evaluation, found no correlation between price and quality.
In another study, University of Michigan researchers looked at payments by the Centers for Medicare & Medicaid Services after the agency added a price metric to its Hospital Value-Based Purchasing (HVBP) program last year.
By adding a spending measure, the HVBP program reduced the weight of its quality measures, according to the study published in Health Affairs. As a result, the agency ended up awarding bonuses to scores of hospitals with low costs but low quality.
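To see how adding a spending measure can dilute quality in a composite score, consider a deliberately simplified weighting (these weights and scores are hypothetical, not CMS's actual HVBP formula):

\[
\text{Total Performance Score} = 0.75 \times \text{Quality} + 0.25 \times \text{Spending}
\]

A hospital scoring 40 on quality but 100 on spending efficiency would earn \(0.75(40) + 0.25(100) = 55\), edging out a hospital that scores 60 on quality and 30 on spending, \(0.75(60) + 0.25(30) = 52.5\). Under such a formula, bonuses can flow to low-cost, low-quality hospitals.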
The HVBP issue might be something that can be fixed with a change to the formula or, as the paper suggests, a minimum quality threshold.
But the findings suggest that striking the right balance between cost and quality can be tricky.
Evidence Lacking
There has been widespread hope among health policy makers that improving quality would reduce spending, says Andrew Ryan, PhD, of the University of Michigan School of Public Health.
"Although it sounds reasonable, there is almost no evidence that that is actually true," says Ryan, one of the HA study's authors. "Particularly in the US, improving quality involves increasing spending and it often involves increasing costs to providers."
Often, those providers are not able to get reimbursed for their efforts, he says.
The relationship between cost and quality remains "poorly understood," according to a 2013 Rand Corporation review commissioned by the Robert Wood Johnson Foundation.
The researchers examined 61 studies:
One-third of the studies found an association between higher cost and higher quality.
One-third found an association between higher cost and lower quality.
One-third found no association between the two.
A smaller, more recent analysis looked at the quality and price of five measures:
Asthma evaluation
Diabetes evaluation
Hemoglobin tests
Hypertension evaluation
Creatinine tests
The study found "no consistent relationship between quality measures and price." In some cases, higher prices were associated with lower quality.
The analysis was conducted by the Healthcare Cost Institute, a coalition of insurers who pool and analyze claims data: "The takeaway is that price as a signal of quality is potentially misleading," says Eric Barrette, PhD, MA, the group's research director.
The study has its limitations. The group used state-wide quality measures, which may differ from other measures. In addition, it used claims data, not clinical data, he said.
"While I think that this is informative, it really is just one more piece of information or one more data point in the question of price-versus-quality," Barrette said.
An Abundance of Data
For consumers, sources of information about quality and cost abound, but usually not in the same place or package.
Numerous web sites, both public and private, now offer information on costs of various procedures. Several organizations, including Medicare, offer quality data on hospitals.
And physicians, in this case surgeons, have seen measures of the quality of their care teased out of the Medicare database by a group of journalists. (Some said the data reporters at ProPublica were in over their heads; others praised the effort for filling a vacuum.)
This winter, the state of New Hampshire began offering consumers information on both price and quality. But not for the same services.
The New Hampshire Health Costs site allows users to plug in information about their insurers and copays and get an estimate of the cost of care for both medical and dental procedures. It also compares prices of prescription drugs at different pharmacies.
In terms of quality, it offers a carefully selected list of hospital quality measures, including heart attack care, patient satisfaction, and readmissions. The site also identifies the sources of data, including CMS and The Joint Commission.
Optimism
"This is the first step in trying to get quality information out there," says Maureen Mustard, the director of healthcare analytics at the New Hampshire Insurance Department.
"We're optimistic. There is a lot of work being done around quality. There will be more and more measures available, and hopefully they'll have physician information and more procedural information that will relate much better to the cost estimates that we provide."
Ryan, of the University of Michigan, is optimistic too. Medicare is trying numerous approaches and researchers are trying to figure out what works and how to scale it up.
"Hopefully, over the next five or ten years, we are going to learn from all these experiments and come to a consensus about the best way to design these programs that is consistent with the interest of Medicare, [and] is consistent with patient preferences and consistent with professional norms of healthcare providers."
Still, for hospitals, it is a time of great uncertainty, he says.
None of them really know how the shift away from fee-for-service, and the quality measures that come with it, is going to change their core business.
"There is a whole range of responses: people clinging to the old business model, people going gung ho to alternative payment models. Most hospitals are somewhere in the middle," Ryan says.
"They don't want to be left behind as the system transforms overnight. Hospitals are used to operating a certain way and generating revenue in a certain way. Many are trying to figure out how much they really need to change to excel in the new world and how much the new world is going to change what they do."
For the here and now, many have managed to reduce hospital-acquired infections and readmissions.
Ryan says it does look like the shift in readmissions has saved money: "How much readmissions actually reflect healthcare quality versus utilization or spending is a controversial issue. But in general, that has been an effective intervention, and you could argue it is addressing quality and costs at the same time."
We're trying to fix a profoundly dysfunctional healthcare system. We need a way to incentivize quality. To do that, we have to find a way to measure quality and safety. Why is it taking so long to get this right?
I've been at this for a long time.
I remember sitting at a Washington D.C. press conference about Medicare's release of hospital mortality rates—30 years ago. Back then, what we now know as the Centers for Medicare & Medicaid Services was called the Health Care Financing Administration (HCFA). Ronald Reagan was president. I was young. And data was small.
The effort was a first pass at measuring hospital quality. Consumer groups welcomed the move; hospitals opposed it, saying the measures didn't adjust for severity of illness. HCFA admitted it was a crude first step and needed to be refined.
They tried, but within five years, they had given up.
Last week, that scenario replayed in miniature. Hospitals complained, members of Congress lobbied, and CMS decided to delay its planned update of hospital star ratings.
So, the long journey to measuring the quality of care continues, and promises to stall once again, for better or worse. My Twitter stream, inbox, reading list, and, thus, this column have been filled with complaints about the relevance and volume of quality and safety measures such as mortality rates.
There are too many; they produce conflicting results; they are redundant; they vary from payer to payer; they're not based on science; and they make measure compliance, such as zero readmissions, the goal, not better care. And they add to costs. Two people I talked to last week made the case that value-based payment programs should be put on hold until we figure out how to accurately measure value.
Driven by Politics, Money
Many of these complaints are valid. Still, the cliché that keeps crossing my mind is this: No good deed goes unpunished. We're trying to fix a profoundly dysfunctional healthcare system. We need a way to incentivize quality. To do that, we have to find a way to measure quality and safety.
Why has this been so hard to get right?
Let's go back to D.C. Despite their halos, many of the players in the healthcare reform debate are driven by politics and money, not good policy. Lots of people are making money off the way the healthcare system is now, including most hospitals.
Since Medicare and federal programs drive so much in healthcare, providers and their suppliers have armies of lobbyists on Capitol Hill. Their goal is to protect their share of the more than $1 trillion [not a typo] that the feds will spend on healthcare this year.
Politics also played into last week's CMS decision to hold back on releasing its hospital data, data that critics say punishes hospitals that serve low-income patients. Leah Binder, the head of the Leapfrog Group, noted that 60 members of the Senate, who can't agree on anything else, agreed that they "need to be nice to hospitals. So they sent a letter and they put a lot of pressure on CMS, and that works in Washington."
'No Measures are Perfect'
The entire federal health quality enterprise has felt pressure from Congress over the years. Last summer, a version of the federal budget with no funding for the Agency for Healthcare Research and Quality (AHRQ) made it to the House floor, but never passed. (Health policy types know how to lobby too.) And, for years, some Republican lawmakers have fought anything related to evidence-based guidelines, arguing that medical decisions are best left to doctors.
Now, it seems that objections to the proliferation of quality measures are more widespread, industry-driven, and bipartisan. Binder will have none of it. Her group chastised CMS for holding back its quality data.
"The measures are not perfect," she told me. "No measures are perfect, but they are very good. Consumers are entitled to know what we know about the relative performance of hospitals."
That is what Leapfrog, an organization started 15 years ago by large employers tired of paying high premiums, tries to deliver. It uses its own survey of hospital performance and combines that with data from AHRQ, the Centers for Disease Control and Prevention (CDC), CMS, and The American Hospital Association's Annual Survey.
Different Ways to Keep Score
This year, the Leapfrog Group took us back to mortality. The headline on its press release reads: "Selecting the Right Hospital Can Reduce Your Risk of Avoidable Death by 50%." Before you never set foot in your local B-rated hospital again, note the details. Ranking hospitals with letter grades, the report calculates that the rate of avoidable deaths per 1,000 admissions was
5.13 in "A" hospitals
5.56 in "B" hospitals
6.93 in "C" hospitals and
7.68 in "D" and "F" hospitals
That last number is what produces the 50% relative increase cited in the press release. If you slice off the D and F hospitals, which represent only 6.5% of all admissions, the relative odds of dying in the hospital narrow. But the accompanying analysis from a Johns Hopkins researcher puts another number on it: 33,495 patient deaths each year could be avoided if every hospital had earned an "A" rating from Leapfrog.
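For readers checking the arithmetic behind that 50% figure, it comes straight from the two ends of the rate list above:

\[
\frac{7.68 - 5.13}{5.13} \approx 0.50
\]

In other words, the avoidable-death rate reported for "D" and "F" hospitals is roughly 50% higher than the rate for "A" hospitals; the press release frames the same gap as a 50% reduction in risk from choosing the right hospital.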
Much has been made of the fact that hospitals that do well on Leapfrog may not do well on the other widely referenced rankings from U.S. News & World Report and Medicare's Hospital Compare. So, I asked Binder why she thinks Leapfrog's rankings are better than the others. They're not, she told me, they're different. Leapfrog focuses on safety while the others look at a broader range of measures.
Her group's research found that consumers are not confused by conflicting ratings; they are used to them. On that point, hotel searches and restaurant ratings come to mind. Lots of critics disagree on the same movie. Check out the Rotten Tomatoes website sometime. Now Yelp and Facebook allow users to rate hospitals. Reportedly, they do a pretty good job.
Binder disagrees with the notion that conflicting ratings suggest that the measures are imprecise. She's also the only person I've talked to in weeks who is troubled by the growing complaints about quality and safety measures. Binder agrees that providers are burdened by measures. She says there needs to be some "strategic alignment" to make sure hospitals are not asked to account for six different measures from six different payers for the same procedure.
But is the outcry about precision or about money? She thinks that all this complaining began when payers started linking quality measures to reimbursement. She calls it a panicked reaction from providers who are worried that suddenly their pocketbooks are going to be affected by their own performance.
"I don't think it's an accident that we come to that pinnacle point, and suddenly we are hearing the backlash," she says.
Pinnacle point? Well, one pinnacle point of many, Binder says. She agrees that we are still not where we should be in using measurement to reward good performance and change the way people seek healthcare. She suggests that providers step back and take a more measured response.
"We've got to [get to] a pinnacle because we have a movement for value and we have CMS behind that movement," she says. "We have a real effort to, in a meaningful way, tie payments to health systems to their performance on certain key measures."
Like mortality rates.
Binder says Leapfrog's findings are alarming. Want another opinion? If you're looking to cross-reference the group's ratings with up-to-date data from the CMS Hospital Compare website, you'll have to wait.
The agency suggests the wait will be 3 months, not another 30 years.
There's been a "striking" rise in the number of quality measures that are publicly reported, "but no standards on how accurate or inaccurate a measure needs to be," says Peter Pronovost, MD.
Does healthcare quality measurement ensure patient safety? Sounds logical, unless you take a step back and ask whether healthcare quality measures truly measure quality.
That’s what hospitals have asked the Centers for Medicare & Medicaid Services to do. And this week, the agency stopped short of posting new hospital ratings on Hospital Compare as scheduled. The launch of the ratings has been rescheduled for July.
The delay has been welcomed by both providers and policy makers who say we just don’t know how well the measures work. Peter Pronovost, MD, is a Johns Hopkins Medicine researcher and the man behind the much-touted checklist approach to patient safety. He is the director of Hopkins’ Armstrong Institute for Patient Safety and Quality.
In an opinion piece in the current issue of JAMA, Pronovost notes that CMS and others are using publicly reported data “to make pronouncements about which clinicians and hospitals are safe and unsafe.”
Some efforts to measure quality are better than others, he writes, but none is as good as it should be. Without standards for accuracy and timeliness of data, the metrics are only as good as the data that goes into them, he wrote with co-author Ashish Jha, MD, of the Harvard School of Public Health.
As a result, Pronovost told me, healthcare lacks valid patient safety measures even though much rides on them.
“What is striking is that there has been an increase in the number of measures that are publicly reported and [in] the amount of money at risk for performance on those, but no standards on how accurate or inaccurate a measure needs to be before you are paid,” Pronovost said in a telephone interview.
It seems that efforts to identify and prevent medical errors and to ensure patient safety face some of the same challenges that have generated the outcry over the burden and benefit of quality measurement in general. Despite widespread deployment, critics argue that measures of both quality and patient safety are based on incomplete science.
And while much of the data needed to weigh quality and safety is available or within reach, the technology and related infrastructure needed to collect, validate, and analyze it isn’t, despite major investment in HIT.
Pronovost says we are currently in a period of frustrating debate where you have policymakers saying the measures are good enough and provider organizations saying they’re not, yet providers are burdened by having to comply with them.
“We’re talking past each other,” he says. “The real question is how accurate is the measure; how accurate does it need to be; and what does it cost to get more accurate data? Maybe this is the best we can do for the resources we want to spend.”
Another question: What do we need to measure? In the JAMA piece, the two authors offer suggestions for sorting all this out. For one thing, CMS needs to root out and eliminate unreliable metrics and develop good ones.
Pronovost makes a radical suggestion: CMS needs to define standards of what makes a good measure and set accuracy requirements before implementing measures in pay for performance and public reporting. It's a little late for that, but the recent CMS action represents a pause.
What's Measured Matters
So, while the search is on for measures that matter, what is measured also matters. Research has identified the most common causes of patient safety problems for hospitalized patients: adverse drug events, hospital-acquired infections, blood clots, bedsores, falls, and surgical complications.
Pronovost notes, however, that nationally there is a validated approach to measuring quality for only one of them—hospital-acquired infections.
Dean Sittig, PhD, a biomedical informatics professor at The University of Texas Health Science Center at Houston, agrees that more research is needed to validate measures. The problem is that payers and regulators can’t really admit that the measures need to be refined if they are already using them.
"If you call for measures, you’ve got to act like the measures are perfect and we know exactly what to do with them,” he says. “If they say they are going to fund research in this area, they can’t really use the measure for a while.”
That would be fine with him.
Sittig echoes the very complaints that have been lobbed at CMS over the Hospital Compare data. “Most of the measures we have are not really for comparison across organizations, across facilities, [or] across physicians,” he says.
Look at readmissions. While not all readmissions are preventable, hospitals get penalized for them anyway. “So we have a measure that is very imprecise, and when we start paying hospitals for that, they start doing all kinds of crazy things to avoid readmissions,” he says.
The goal is not to get hospitals to optimize scores and ranking, but to get them to optimize quality and safety, he adds.
Health information technology could help and someday will. For example, rather than relying solely on billing data, researchers could more easily tap into richer clinical data. But challenges abound there too, including interoperability, inconsistent coding, and data validation. Currently, Sittig says, many quality measures are still reported manually, not electronically.
Pronovost agrees that HIT systems are not yet up to the job of using data to improve safety. He would like to see better-integrated IT systems. “Healthcare is unique among industries in that it has spent heavily on technology and has very little to show for it,” he says.
CMS’s decision to hold off on hospital rankings might be a sign that it is willing to slow down and heed all this advice. But it seems unlikely that the practice of issuing rankings will go away. The newly empowered healthcare consumer wants to comparison shop.
Plenty of third parties–the Leapfrog Group, US News & World Report, Healthgrades, Consumer Reports–rank hospitals. The investigative reporters at ProPublica even turned CMS data into surgeon scorecards. The project drew mostly jeers and some cheers from health policy types for taking data journalism to a new level. (Pronovost was critical. Jha called it a "step in the right direction.")
And then, there’s always Facebook and Yelp. Studies have shown that their rankings don’t fall far from the others. How’s that for validation?
Few doctors are being trained to do open gallbladder surgery, compared to the numbers of surgeons learning laparoscopic techniques. What does this mean for patient safety and quality of care?
How often does a doctor need to perform a procedure to be competent? That question usually comes up in discussions about new procedures and devices.
But, what about old procedures that are used less and less, but still need to be performed every once in a while? For some types of surgery, minimally invasive procedures are now the norm.
Take the cholecystectomy, which I needed after I suddenly felt like I had swallowed a bunch of razor blades. If I had gotten ill a decade earlier, I would have been on track for a hospital stay and a major operation. Instead, I spent more time in the hospital being diagnosed, two nights, than I did in surgery—zero nights.
And while the idea of having an organ yanked out of my belly button, so to speak, was kind of surreal, I was happy for the quick recovery and three tiny, now-fading scars.
Still, if something had gone wrong or if my surgeons had found an unexpected mess in there, they might have needed to convert to open surgery. My doc was old enough to have learned how to do that in medical school and has done a few since. If I'd had a younger surgeon who came up in the laparoscopic age, however, that might not have been the case.
The Decline of Open Gallbladder Surgery
A recent study in the Journal of the American College of Surgeons quantifies the decline of the open cholecystectomy. It raises the question in its title: "Who Will Be Able to Perform Open Biliary Surgery in 2025?"
Using a huge database of procedures performed at the University of Texas Health Science Center in San Antonio, researchers confirmed what they were seeing in the clinic: the disappearance of open gallbladder surgery.
They compared the use of open surgery in the 1980s, which they call the pre-laparoscopic decade, with rates of gallbladder patients undergoing an open cholecystectomy in the 1990s (down by an average of 67%) and in 2013 (down by 92%). Correspondingly, the average number of open cholecystectomies performed per graduating chief general surgery resident dropped from 70.4 to 22.4 and is now down to 3.6 procedures.
Still, sometimes the open procedure is called for. UT surgeon and lead author Kenneth Sirinek, MD, says that a surgeon may need to switch from a laparoscopic to open surgery after discovering that the patient has "abnormal anatomy" or has so much inflammation from acute cholecystitis that laparoscopy is not an option.
"When we are doing the open operation, we can put our hands on structures and can feel pulses and arteries," he said. "We can do some of the dissection with our fingers. When we are doing laparoscopically, we have no feedback."
At UT, surgeons are videotaping open procedures and creating a library for those who want to become more familiar with the open cholecystectomy. They also suggest the use of simulation, but note that there is a lack of simulation tools.
Not So Dire
Still, it's not like there's a clamor for this training. Sirinek and other surgeons I talked to make a few points that should calm hospital administrators who will read the above and think, "lawsuit":
If you've learned the basics of surgery and know how to remove a gallbladder, you can probably pull off an open cholecystectomy with a little practice.
If a surgeon has to convert from laparoscopic to open surgery, he or she has time to call for help from an older or more experienced surgeon.
Younger surgeons know how to handle complications laparoscopically, so demand for the open cholecystectomy will continue to drop.
H. David Reines, MD, was going over the case logs for his chief residents this week and says the shift is clear. A surgeon and director of CME for Inova Fairfax Hospital in Virginia, Reines says his residents remove about 100 gallbladders laparoscopically each year, compared to performing between three and eight open procedures.
But he says he worries a lot less about the shift than he once did. Now 69, Reines learned open surgery during his training because that was the only option.
Even after doctors started using the laparoscope, the thinking was to switch to open surgery at any sign of trouble. That is no longer true, in part, because younger doctors have gotten better at handling complications with the scope. Reines says, "Now a lot of people feel more secure continuing on laparoscopically."
What they are uncomfortable doing, he says, is open surgery. In those cases, he and others say surgeons have plenty of time to call for help.
"If you feel you are in danger of taking a piece of the liver or getting into the common duct, you can close the patient up and go ahead and send them to a hepatobiliary surgeon," he said.
Teaching Procedures vs. Principles
Douglas Smink, MD, is a surgeon at Brigham and Women's Hospital in Boston who works with both surgical residents and the hospital's STRATUS Center for Medical Simulation. He says the question of competency comes up regularly around multiple surgical procedures that are performed less often and have become almost obsolete.
Open gallbladder surgery is one of them. Still, Smink doesn't think that simulation and more training are the answer. Laparoscopy is a technique. The concept surgeons need to master is removal of the gallbladder, which is sometimes done during other surgical procedures, he says.
"Some of what we teach are procedures and some of what we teach are principles," he said. "So even if somebody doesn't do many open cholecystectomies in their training, they still do a fair number of (laparoscopic) cholecystectomies and they still do a fair amount of surgery."
He feels comfortable that his trainees are still getting the skills they need to be able to put those skills together if they need to do open surgery.
The fate of open biliary surgery in 2025 is better framed this way: The push for evidence-based medicine will flag lots of overused or outdated treatments; some devices and procedures will fall into disuse and eventually become extinct.
The key to good outcomes and patient safety will be to ensure that doctors have the skills and tools they need to do their jobs, even as their tools and skills are changing.
Anesthesia adds risk and cost to the screening procedure, research shows, raising fresh questions about how providers weigh patient satisfaction against outcomes and profit.
New findings on risks associated with the use of anesthesia during colonoscopies and the demise of the first automated sedation device for use in such procedures add sparks to the debate over how sedation should be delivered in the endoscopy suite.
The decision to scrap Sedasys came last month. It had received FDA approval in 2013. Sedasys is a device designed to allow a gastroenterologist, rather than an anesthesiologist, to administer propofol, a powerful drug that offers heavier sedation but faster recovery than the combination of midazolam and fentanyl commonly used by gastroenterologists.
In a statement to HealthLeaders Media last week, Ethicon, the division of Johnson & Johnson that introduced and later withdrew the device, said:
"The Johnson & Johnson Medical Devices Companies are deeply committed to continuing to bring new, meaningful innovation to market that will enhance patient care and improve outcomes. There were no safety concerns that led to Ethicon's decision to exit the Sedasys business. This was a decision in line with our strategy to prioritize investments in high growth and strategic portfolio opportunities."
Although some guidelines aim to limit the use of anesthesia services to high-risk colonoscopy patients, the practice has risen significantly in recent years. And while it adds to the cost of the procedure, the practice is not limited to high-risk patients.
The use of anesthesia services for colonoscopy patients rose from approximately 14% in 2003 to more than 30% in 2009 to close to 50% in 2013, according to a series of reports from the Rand Corporation.
A study from the Rand Corporation and the Group Health Research Institute, published in April in Gastroenterology, found that the risks of complications were 13% higher for colonoscopy patients who receive anesthesia services than for those who do not.
"The widespread adoption of anesthesia services with colonoscopy should be considered within the context of all potential risks," researchers at the University of Washington in Seattle concluded.
Some anesthesiologists warned that the Sedasys device could be dangerous if used off label. But after being turned down once, it won FDA approval, and the American Society of Anesthesiologists issued guidelines for its use.
Jeffrey Apfelbaum, MD, the director of anesthesia services at the University of Chicago Medicine, was all set to try the device. He said the company demonstrated the device at the hospital and a group of gastroenterologists and nurses attended company sessions where they were trained on how to use Sedasys.
But Sedasys was pulled from the market before Apfelbaum was able to implement it at his facility.
"Anesthesiologists have a long history of embracing new technology and advances that improve patient care," he said. "This was the next, natural step in something we needed to explore."
Apfelbaum did not challenge media reports that business reasons were behind Ethicon's decision to drop Sedasys, and noted that Sedasys didn't seem to take off among providers. He observed that "the uptake of the device in the community was extraordinarily slow."
And while Apfelbaum expressed hope that the device would improve care, he said he was unsure whether it would be less expensive than anesthesia services. Users were required to buy supplies from Ethicon, the J&J subsidiary that launched the device. It was not clear to Apfelbaum if the savings in anesthesia services would offset the costs of disposable EKG pads and pre-filled propofol cassettes.
Without support from Johnson & Johnson, providers that have been using Sedasys may not be able to continue for long. "Although we have a strong interest in continuing to use Sedasys, we are currently working with representatives of the device manufacturer, Johnson & Johnson, on a plan to phase out its use here at Virginia Mason over the next several months," said Andrew Ross, MD, section chief for Gastroenterology at Virginia Mason Medical Center.
Anesthesia and Risk
Karen Wernli, PhD, of the Group Health Research Institute in Seattle, WA, is the lead author of the study published in Gastroenterology this month linking the use of anesthesia with an increase in complications during colonoscopies.
She and her team used a large database [more than 3 million colonoscopies nationwide in adults aged 40 to 64] to compare outcomes for patients sedated by anesthesia specialists during colonoscopy to those who were not.
The researchers found that the use of anesthesia services was associated with a 13% increase in complications within 30 days, including a higher risk of perforation, bleeding, and abdominal pain. Further, the risk of puncturing the wall of the colon was found to be 26% higher in patients who had anesthesia services and at least one polyp removed.
"The fact that there can be somewhat significant downstream consequences, even if they are rare, it is really something to consider," she said. The paper did acknowledge that:
Although the use of anesthesia agents can directly impact colonoscopy outcomes, it is not solely the anesthesia agent that could lead to additional complications. In the absence of patient feedback, increased colonic-wall tension from colonoscopy pressure may not be identified by the endoscopist, and, consistent with our results, could lead to increased risks of colonic complications, such as perforation and abdominal pain.
Apfelbaum said he was not surprised by the finding, because doctors performing procedures on deeply sedated patients can work more quickly and a bit more aggressively, which would explain the non-life-threatening complications.
An editorial accompanying the study said the use of anesthesia "increases patient satisfaction and is profitable to the anesthesia community and some endoscopists, but other outcomes are no better or worse with anesthesia services."
The editorial argues that the use of "endoscopist-directed propofol is safe but continues to be impeded by legal and regulatory obstacles, local politics and policies and a virtual absence of financial incentives."
Donald Arnold, MD, chair of the American Society of Anesthesiologists' committee on quality management, said in a written statement that endoscopists "do prefer to have their patients as comfortable as possible during procedures and also be able to focus exclusively on performance of endoscopic procedures which entail unique risks. These goals have led endoscopists to increasingly request use of sedation/anesthesia techniques which requires involvement of physician anesthesiologists and other anesthesia professionals."
Complaints about quality measures are as abundant as the measures themselves. But some doctors are doing something about it. They're working to identify metrics that are "realistic and actually will have an impact on patient care."
Call it pushback, validation, or measurement science. The revolt against the volume and usefulness of outcomes measures continues.
Efforts are underway to both challenge and refine existing guidelines and requirements. And, wonks take note, providers and patients are on the job, too.
One example: The emergency department at Beth Israel Deaconess Medical Center in Boston's Longwood cluster of hospitals sees more than 50,000 patients a year. Every time a patient undergoes procedural sedation in the ED, doctors there follow up with a formal quality assurance review.
Their analyses are designed to meet a Joint Commission standard that requires monitoring and evaluation of such cases, which carry the risk that comes with sedation.
Now a team, including BIDMC emergency physician Jonathan Edlow, MD, has decided to examine the utility of the review. "We are trying to find out what metrics make sense and what don't," he told me.
In a March paper in The Journal of Emergency Medicine, Edlow and his team reported that the review "offers little advantage over existing quality assurance markers." They concluded that review of high-risk cases "may be useful."
Like other specialists, doctors in the field of emergency medicine are trying to become more active in the creation of quality metrics, Edlow says. They are looking for "measures that are realistic and actually will have an impact on patient care, as opposed to a lot of those [that] regulatory agencies come up with in the absence of physician input."
David W. Baker, MD, vice president for healthcare quality evaluation at The Joint Commission, said that the study is based on an erroneous assumption. The Joint Commission does not require review of all cases that used procedural sedation. The current standard says that hospitals should collect data to identify adverse events related to moderate or deep sedation or anesthesia. Baker said, “These reviews should be all about identifying opportunities to improve safety, and that’s exactly what occurred in the study.”
Baker co-authored an article in the current issue of the Journal of General Internal Medicine with Cheryl Damberg of the Rand Corporation, entitled "Improving the Quality of Quality Measurement."
The piece noted that: "All clinical specialties should define the outcomes they are working to improve for acute, chronic, and palliative care, and should develop systems to measure those outcomes."
The Joint Commission welcomes input from those testing their standards, Baker says. The two sedation-related errors identified in the study involved airway management, so BIDMC may want to look more closely in that direction, he adds.
They already do. Physician errors in airway management are already automatically reviewed by the hospital's QA committee. This brings up another complaint about measures: They can be duplicative.
"There is a tremendous amount of concern about too many measure and too many different measures, "Baker told me. "Everyone in the measurement arena has heard that loud and clear and shares those concerns."
He says the CMS/AHIP core measure are a step in the right direction. The Commission's measures are also aligned with CMS, he says, but hospitals are often required to report different measures to different payers and regulators.
That is changing. "The wheels are turning in the right direction," he says, both in terms of alignment and the effort to establish valid electronic clinical measure that will help reduce the administrative burden of data collection.
That momentum is apparent in a review of recent research, including studies published in April that include findings on the impact of case mix on readmission rates, quality measures for multiple sclerosis, and a composite quality measure for lobectomy designed by The Society of Thoracic Surgeons.
So, we've gone from "measures that matter" to, as a recent post on the Health Affairs blog asks, "measures that matter to whom?" The piece notes that "Different stakeholders will not only have different perspectives about what measures matter to them, but even have different views on what the same terms mean."
Enter the Patient
Those stakeholders include hospitals, clinicians, payers, and patients. The current issue of Health Affairs is devoted to one of them: patients. For example, one piece talks about how patient-reported outcomes (PROs) have been used for research, but need to be incorporated into efforts to improve care.
That resonates at Cambridge, MA-based PatientsLikeMe, where researchers are working with the National Quality Forum to streamline methods for using PROs to create PRO-based performance measures.
The organization is way ahead on distilling patient-level data. Launched in 2006, PatientsLikeMe was founded by two brothers after a third brother was diagnosed with ALS. They started a platform with an online forum for ALS patients and extracted data from patient conversations.
Now they count 400,000 members talking to each other about 2,500 different conditions. The PatientsLikeMe site also allows members to track and submit personal health data. So far, the company has collected more than 30 million data points, which they provide to academic and industry partners.
The idea that PROs do not offer quality data has changed, says PatientsLikeMe cofounder Ben Heywood: "We historically worked with life science companies because they realized early on the power of patient-generated data."
In the past, efforts at inclusion involved putting an individual patient representative on a panel of experts. Heywood knows they can do much better. The company's assignment for the NQF Measure Incubator is to demonstrate a system that will scientifically and at scale add patients' voices to measurement efforts, he says.
EHRs are key to streamlining measurement efforts. But, the problems of interoperability at the clinical level spill over into research. Baker at The Joint Commission said that is a major focus of his organization.
"We're still on the road to that," he said. "This is going to be a long journey over the next few years to… improve on our ability to measure processes and outcome of care using EHRs so we can minimize the burden of data collection and reporting. "
All indicators suggest that the road to high-quality measurement also promises to be a long one.