JAMA Forum Archive, 2012-2019: Health policy commentary from leaders in the field
JAMA Forum

We Need to Use Better Data to Measure Quality

For some time, there has been a push for health insurance to pay for quality rather than quantity. This has been a mantra of the current administration as it has begun to reshape Medicare payment policy to reflect that philosophy. Not long ago, the Centers for Medicare & Medicaid Services (CMS) committed to tying 50% of Medicare payments to quality metrics by 2018.

Aaron Carroll, MD, MS

Of course, such decisions are predicated on the idea that we are good at measuring quality. If we are going to pay hospitals differently based on their performance, then it’s absolutely mandatory that we be able to differentiate between those who deserve higher payments and those who do not. Many are concerned that our ability to do so isn’t adequate.

Some of this concern is theoretical. About a year and a half ago, the National Quality Forum, a nonprofit, nonpartisan organization that endorses health care standards, argued that Medicare’s quality metrics were problematic because they included many things that clinicians and hospitals couldn’t control. For instance, poor patients may be more likely to be readmitted to the hospital than wealthier patients, as might patients with substandard housing or education.

This has led many to worry that safety net hospitals that care for low-income patients might, through no fault of their own, appear to be delivering lower-quality care than those that don’t. Because such hospitals tend to be located in more resource-constrained urban areas, those that need the most assistance might be hurt by quality measurements that reflect factors outside their control. Last year, the Obama Administration asked an expert panel to review its quality metrics, and the panel found that factors like patient income and education are not accounted for by many of the measures used to judge quality and set payments.

Some have argued that by adjusting for some of these factors, we might still be able to use these metrics to measure performance accurately. Still, much of this has been theoretical.

These concerns have now become more empirical. In a recent issue of JAMA Internal Medicine, researchers assessed how patient characteristics are associated with hospital readmission rates. We would need to adjust for such factors if we were planning to judge hospitals accurately.

The researchers used Medicare claims linked to data from the Health and Retirement Study (HRS) to look at hospitalized patients from 2009 to 2012. Because the HRS contains much more information about patients than administrative data from Medicare claims, the authors were able to examine how 29 different patient characteristics might be related to the risk of being readmitted within 30 days.

They found that 22 of the patient characteristics were significantly associated with readmission. What is concerning is that these 22 factors fall outside Medicare’s standard adjustments of hospital readmission rates. In other words, Medicare does not consider them when deciding whether a hospital is achieving adequate quality.

The researchers did not stop there. They next examined how hospitals that were considered to have high and low readmission rates differed with respect to these 22 patient characteristics. They found that 17 of the patient characteristics were distributed unequally across hospitals. Even more concerning, they found that 16 of those 17 characteristics that predicted readmission were more likely to be found in the hospitals that have higher rates of readmission (and thus might be the cause of lower-measured quality).

Finally, the researchers calculated how controlling for these patient characteristics might have affected Medicare scoring of their performance. They found that if they controlled for these additional factors, above what Medicare is doing now, then the difference in readmission rates between the highest and lowest quintile of hospitals was nearly halved, from 4.4% to 2.3%.

It appears that patient characteristics outside of a hospital’s control account for much of the variation seen in readmissions, above and beyond the quality of the hospital’s performance. This won’t surprise those who follow the medical literature. Many other studies have found that hospital racial makeup, individual patient race and socioeconomic status, insurance, education, and home environment all play significant roles in readmission risk.

The problem here isn’t just that Medicare isn’t adjusting for these characteristics. It’s that Medicare can’t adjust for them: the necessary data aren’t available.

As I have discussed elsewhere, too often we use the data that we have instead of the data we need to measure quality. Medicare adjusts its metrics using the administrative information it can find without too much effort. Trying to get more comprehensive information might make measurements more accurate, but would be expensive. Medicare is trying to save money, not spend it.

But we can’t close our eyes and hope for the best here. Paying for performance, incentivizing hospitals and health care providers to hit metrics, hinges entirely on those metrics being accurate judges of quality. Otherwise, we are pushing the health care system to change its practice in ways that might backfire. This is especially true if the metrics are inaccurate in a way that penalizes those hospitals already caring for the patients who need the most help.

Some have called for a national, standardized system of public reporting of outcomes. To date, such calls have gone unanswered. Others have pointed to the need for much more complex, adjusted models to account more accurately for individual patients’ mental, physical, and social burdens.

It would be tragic if we discovered years later that in our attempts to improve the efficiency of the health care system we ended up making it worse. Unfortunately, it’s looking like that might happen, based both on theory and evidence.

About the author: Aaron E. Carroll, MD, MS, is a health services researcher and the Vice Chair for Health Policy and Outcomes Research in the Department of Pediatrics at Indiana University School of Medicine. He blogs about health policy at The Incidental Economist and tweets at @aaronecarroll.