November 19, 1997

The Risks of Risk Adjustment

Author Affiliations

From the Department of Medicine, Division of General Medicine and Primary Care, the Charles A. Dana Research Institute, and the Harvard-Thorndike Laboratory, Harvard Medical School, and Beth Israel Deaconess Medical Center, Boston, Mass.

JAMA. 1997;278(19):1600-1607. doi:10.1001/jama.1997.03550190064046

Context.  —Risk adjustment is essential before comparing patient outcomes across hospitals. Hospital report cards around the country use different risk adjustment methods.

Objectives.  —To examine the history and current practices of risk adjusting hospital death rates and consider the implications for using risk-adjusted mortality comparisons to assess quality.

Data Sources and Study Selection.  —This article examines severity measures used in states and regions to produce comparisons of risk-adjusted hospital death rates. Detailed results are presented from a study comparing current commercial severity measures using a single database. The study included adults admitted for acute myocardial infarction (n=11 880), coronary artery bypass graft surgery (n=7765), pneumonia (n=18 016), and stroke (n=9407). Logistic regressions within each condition predicted in-hospital death using severity scores. Odds ratios for in-hospital death were compared across pairs of severity measures. For each hospital, z scores compared actual and expected death rates.
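The hospital-level z score described above can be illustrated with a short sketch. This is not the study's actual code; it assumes the conventional construction in which each patient's model-predicted probability of death is treated as an independent Bernoulli trial, so expected deaths are the sum of the probabilities and the variance of the death count is the sum of p(1 − p). The hypothetical hospital data are invented for illustration.

```python
import math

def hospital_z_score(observed_deaths, predicted_probs):
    """z score comparing a hospital's actual death count with the count
    expected from patient-level predicted probabilities of in-hospital
    death. Expected deaths = sum of probabilities; the variance of the
    death count (treating patients as independent Bernoulli trials) is
    the sum of p * (1 - p)."""
    expected = sum(predicted_probs)
    variance = sum(p * (1.0 - p) for p in predicted_probs)
    return (observed_deaths - expected) / math.sqrt(variance)

# Hypothetical hospital: 120 patients, each with a predicted death
# probability of 0.10 (12 expected deaths), but 20 observed deaths.
probs = [0.10] * 120
z = hospital_z_score(20, probs)
print(round(z, 2))  # → 2.43 (more deaths than severity predicts)
```

A large positive z flags a hospital with more deaths than its patients' severity would predict; a large negative z flags fewer. Which hospitals are flagged, of course, depends on which severity measure supplied the probabilities.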

Results.  —The severity measure called Disease Staging had the highest c statistic (which measures how well a severity measure discriminates between patients who lived and those who died) for acute myocardial infarction (0.86); the measure called All Patient Refined Diagnosis Related Groups had the highest for coronary artery bypass graft surgery (0.83); and the measure called MedisGroups had the highest for pneumonia (0.85) and stroke (0.87). Different severity measures predicted different probabilities of death for many patients. Severity measures frequently disagreed about which hospitals had particularly low or high z scores. Agreement in identifying low- and high-mortality hospitals between severity-adjusted and unadjusted death rates was often better than agreement between severity measures.
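The c statistic reported above is equivalent to the area under the ROC curve: the probability that a randomly chosen patient who died received a higher severity score than a randomly chosen patient who survived. A minimal sketch of the pairwise-concordance calculation, on invented toy data rather than the study's database:

```python
def c_statistic(scores, died):
    """c statistic (area under the ROC curve) computed by pairwise
    concordance: a (died, survived) pair is concordant when the patient
    who died has the higher severity score; ties count as 0.5."""
    dead = [s for s, d in zip(scores, died) if d]
    alive = [s for s, d in zip(scores, died) if not d]
    pairs = len(dead) * len(alive)
    concordant = sum(
        1.0 if sd > sa else 0.5 if sd == sa else 0.0
        for sd in dead for sa in alive
    )
    return concordant / pairs

# Toy data: higher severity scores mostly, but not always, go with death.
scores = [3, 7, 9, 2, 8, 5]
died   = [0, 1, 1, 0, 0, 1]
print(round(c_statistic(scores, died), 3))  # → 0.778 (7 of 9 pairs concordant)
```

A c statistic of 0.5 means the measure discriminates no better than chance; 1.0 means perfect discrimination, so the 0.83–0.87 values above indicate good but imperfect discrimination.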

Conclusions.  —Severity did not explain differences in death rates across hospitals. Different severity measures frequently produce different impressions about relative hospital performance. Severity-adjusted mortality rates alone are unlikely to isolate quality differences across hospitals.