Context
Despite widespread concern regarding the quality and safety of health care in the United States, and despite a Medicare Quality Improvement Organization (QIO) program intended to improve that care, there is only limited information on whether quality is improving.
Objective
To track national and state-level changes in performance on 22 quality indicators for care of Medicare beneficiaries.
Design, Patients, and Setting
National observational cross-sectional studies of national and state-level fee-for-service data for Medicare beneficiaries during 1998-1999 (baseline) and 2000-2001 (follow-up).
Main Outcome Measures
Twenty-two QIO quality indicators abstracted from statewide random samples of medical records for inpatient fee-for-service care and from Medicare beneficiary surveys or Medicare claims for outpatient care. Absolute improvement is defined as the change in performance from baseline to follow-up (measured in percentage points for all indicators except those measured in minutes); relative improvement is defined as the absolute improvement divided by the difference between the baseline performance and perfect performance (100%).
Results
The median state's performance improved from baseline to follow-up on 20 of the 22 indicators. In the median state, the percentage of patients receiving appropriate care on the median indicator increased from 69.5% to 73.4%, a 12.8% relative improvement. The average relative improvement was 19.9% for outpatient indicators combined and 11.9% for inpatient indicators combined (P<.001). For all but one indicator, absolute improvement was greater in states in which performance was low at baseline than in those in which it was high at baseline (median r = −0.43; range, −0.93 to 0.12). When states were ranked on each indicator, a state's average rank was highly stable over time (r = 0.93 for 1998-1999 vs 2000-2001).
Conclusions
Care for Medicare fee-for-service plan beneficiaries improved substantially between 1998-1999 and 2000-2001, but a much larger opportunity remains for further improvement. Relative rankings among states changed little. The improved care is consistent with QIO activities over this period, but these cross-sectional data do not provide conclusive information about the degree to which the improvement can be attributed to the QIOs' quality improvement efforts.
Health care in the United States can be improved substantially, and even people with apparently good access to care receive care that falls far short of what it could be. In the area of public health and prevention, the Healthy People 2010 report1 showed wide gaps between public health goals and actual achievements on many quality indicators, including some concerning services delivered by the fee-for-service health care system. Two years ago, a report from the Institute of Medicine documented serious harm to patients from medical errors2; last year another Institute of Medicine report, Crossing the Quality Chasm,3 identified major system problems as the principal source of many errors. In 2000, Congress instructed the Agency for Healthcare Research and Quality to prepare an annual report on the quality of health care in the United States, and the first of these reports is scheduled to be made public next year.
In 2000, the Health Care Financing Administration (now the Centers for Medicare & Medicaid Services) reported on 24 indicators of the quality of care delivered to Medicare beneficiaries (primarily in fee-for-service) in 1998-1999.4 These indicators measure delivery of services that evidence shows to be effective in preventing or treating breast cancer, diabetes, myocardial infarction, heart failure, pneumonia, and stroke.4 This report provides follow-up data on care given in 2000-2001 and makes comparisons with the 1998-1999 baseline data.
The tracking system first used for the 1998-1999 data reported in 2000 is used again for the 2000-2001 data in this report. This system is used in evaluating the Medicare Quality Improvement Organizations (QIOs) but is independent of them.
Table 1 summarizes the clinical topics, quality indicators, sampling frame, and data sources that were used for the baseline article and are used again herein. The quality indicators and their rationale were described in the 2000 report.4 The Medicare QIO program tracks 24 quality indicators through contracted data abstraction centers, surveys, and analysis of claims data. Two of these (time to thrombolysis and time to angioplasty) are shown in Table 2 but are not analyzed herein, nor were they analyzed in the 2000 report, because the number of cases observed in most states was quite small.
We followed the same fee-for-service sampling strategy and data collection procedures as were first reported for the baseline data, with 2 exceptions. First, information on influenza and pneumococcal vaccination rates came from a specially contracted survey that used the influenza and pneumococcal vaccination items from the Behavioral Risk Factor Surveillance System (BRFSS) and was designed to emulate the BRFSS sampling strategy as closely as possible; this was done because appropriately timed data from the regularly scheduled BRFSS were not available.5 Second, we substituted the 1999 BRFSS data for the earlier 1997 BRFSS data in our baseline rates because the later data represent state rates during the 1998-1999 baseline period better than the 1997 data do. In addition, we made minor corrections in the claims-processing algorithms used to construct the diabetes indicators for the 1998-1999 period. These corrections produced small, nonmaterial changes in the baseline rates first reported in the 2000 report. The corrected baseline rates for the immunization and diabetes indicators are used for comparisons with follow-up performance from the 2000-2001 period.
Reliability was calculated as the percentage agreement on all abstraction data elements between 2 blinded, independent abstractors at different abstraction centers. Each abstraction center also performed internal reliability assessments on a monthly random sample of 30 cases taken from abstracts completed during the previous month.6
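As an illustration only, percentage agreement between paired abstractions can be computed as in the following minimal Python sketch. The element values are hypothetical placeholders, not abstraction data from this study.

```python
def percent_agreement(abstraction_a: list, abstraction_b: list) -> float:
    """Percentage of data elements on which two independent abstractors agree."""
    if len(abstraction_a) != len(abstraction_b):
        raise ValueError("abstractions must cover the same data elements")
    matches = sum(a == b for a, b in zip(abstraction_a, abstraction_b))
    return 100.0 * matches / len(abstraction_a)

# Hypothetical blinded re-abstraction of 10 data elements from one record:
first_pass = ["yes", "no", "yes", "yes", "no", "yes", "yes", "yes", "no", "yes"]
second_pass = ["yes", "no", "yes", "no", "no", "yes", "yes", "yes", "no", "yes"]
print(percent_agreement(first_pass, second_pass))  # 90.0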
Absolute improvement is defined as the change in performance from baseline to follow-up (measured in percentage points for all indicators except those measured in minutes); relative improvement is defined as the absolute improvement divided by the difference between the baseline performance and perfect performance (100%). Relative improvement can also be interpreted as the proportional decrease in the error or failure rate. This definition differs from the usual method of using the baseline rate as the denominator; we chose it because dividing by the baseline rate exaggerates small changes in poorly performing states while minimizing changes in states that already perform well.
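For concreteness, a minimal Python sketch of the two definitions follows; it is illustrative only, and the worked values are the median-state, median-indicator rates reported in the Results.

```python
def absolute_improvement(baseline: float, follow_up: float) -> float:
    """Change in performance from baseline to follow-up, in percentage points."""
    return follow_up - baseline

def relative_improvement(baseline: float, follow_up: float) -> float:
    """Absolute improvement divided by the room for improvement at baseline,
    i.e., the proportional reduction in the failure rate."""
    return (follow_up - baseline) / (100.0 - baseline)

# Median state, median indicator: 69.5% at baseline, 73.4% at follow-up.
print(round(absolute_improvement(69.5, 73.4), 1))        # 3.9 percentage points
print(round(100 * relative_improvement(69.5, 73.4), 1))  # 12.8% relative improvement
```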
Performance was calculated at the state level for each of the quality indicators. For the 22 quality indicators discussed herein, results were calculated as the percentage of patients who had no contraindications and who received the indicated treatment. We direct our attention both to variation among states (including the District of Columbia and Puerto Rico) and to national trends. Therefore, we calculated for each indicator both performance of the median state and the national average (weighted by the number of aged Medicare beneficiaries in each state). We calculated the SD of each indicator rate across the set of states. To summarize the overall changes we observed on each indicator, we calculated the absolute and relative improvement on the indicator in the median state. To summarize the overall changes that we observed within each state, we calculated a median amount of absolute and relative improvement across the set of indicators in the state. Finally, we characterized the median absolute and relative national improvement as the median of these state medians.
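The summary statistics described above can be expressed compactly, as in the sketch below. The 52 × 22 state-by-indicator matrix and the beneficiary counts are randomly generated placeholders, not the study data (the actual state-level rates appear in Table 2).

```python
import numpy as np

# Placeholder data: 52 states/territories x 22 indicators, rates in percent,
# plus the number of aged Medicare beneficiaries in each state as weights.
rng = np.random.default_rng(0)
rates = rng.uniform(50, 95, size=(52, 22))
beneficiaries = rng.integers(50_000, 4_000_000, size=52)

median_state = np.median(rates, axis=0)                # per indicator
national_average = np.average(rates, axis=0, weights=beneficiaries)
state_sd = rates.std(axis=0, ddof=1)                   # SD of each rate across states
median_indicator = np.median(rates, axis=1)            # per state
median_of_state_medians = np.median(median_indicator)  # national summary
```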
We also calculated the rank of each state on each quality indicator based on performance rates during the 2000-2001 follow-up period and, separately, based on the amount of relative improvement observed. We then calculated the average rank for each state across the 22 quality indicators and arrayed the states according to their average rank, again based on their performance rates during the 2000-2001 follow-up period; we ranked states in a similar way on the amount of relative improvement. The data changes described above and changes in our algorithm for breaking ties resulted in slight changes in the 1998-1999 rankings from those reported in the earlier article.
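A sketch of the ranking step follows, again on a placeholder rate matrix. Breaking ties with SciPy's "average" method is an assumption for illustration; the article's own tie-breaking algorithm is not described here.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
rates = rng.uniform(50, 95, size=(52, 22))  # placeholder 2000-2001 rates

# Rank states within each indicator (rank 1 = best performance), then
# average each state's 22 ranks and array the states by that average.
ranks = np.column_stack(
    [rankdata(-rates[:, i], method="average") for i in range(rates.shape[1])]
)
average_rank = ranks.mean(axis=1)
state_order = np.argsort(average_rank)  # state indices, best average rank first
```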
We tested the equality of the relative improvement for the inpatient indicators (the first 16 indicators in Table 1) and outpatient indicators (the last 6 indicators in Table 1) using a t test without assumption of equal variances and treating each indicator rate in each state as an observation.
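In outline, this comparison looks like the following, on placeholder values; SciPy's ttest_ind with equal_var=False implements the unequal-variance (Welch) t test, with each state-indicator relative improvement treated as one observation.

```python
import numpy as np
from scipy.stats import ttest_ind

# Placeholder relative improvements: 52 states x 22 indicators,
# columns 0-15 inpatient and 16-21 outpatient (illustrative values only).
rng = np.random.default_rng(1)
rel_improvement = rng.normal(loc=0.15, scale=0.10, size=(52, 22))

inpatient = rel_improvement[:, :16].ravel()   # one observation per state-indicator
outpatient = rel_improvement[:, 16:].ravel()
t_stat, p_value = ttest_ind(outpatient, inpatient, equal_var=False)
print(t_stat, p_value)
```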
The reliability of data elements used to construct quality indicators based on medical record abstraction ranged from 80% to 95% with a median interrater reliability of 90%.
Table 2 shows the 2000-2001 performance and change from baseline for each indicator in each state. Across the 1144 pairs of baseline vs remeasurement comparisons (ie, 52 states and territories × 22 indicators), absolute increases in performance occurred in 81% (925/1144) of the observations (χ²₁ = 240.8; P<.001). For all 22 indicators, state performance at baseline predicted performance at follow-up, generally quite powerfully (median r = 0.74; range, 0.29-0.98). A state's average rank on the 22 indicators was highly stable over time (r = 0.93 for 1998-1999 vs 2000-2001). For all but one indicator, absolute improvement was greater when performance was low at baseline than when it was high at baseline (median r = −0.43; range, −0.93 to 0.12); a similar pattern occurred for state performance as measured by performance on the median indicator in the state (r = −0.30) and for indicator performance as measured by the median state's performance (r = −0.43).
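The two per-indicator correlations reported above (baseline predicting follow-up, and baseline predicting absolute improvement) can be computed as in this sketch. The rates are placeholders; with the real data, the second correlation is negative for all but one indicator.

```python
import numpy as np

# Placeholder baseline and follow-up rates: 52 states x 22 indicators.
rng = np.random.default_rng(2)
baseline = rng.uniform(40, 95, size=(52, 22))
follow_up = np.clip(baseline + rng.normal(3.0, 4.0, size=(52, 22)), 0.0, 100.0)

r_baseline_vs_follow_up = [
    np.corrcoef(baseline[:, i], follow_up[:, i])[0, 1] for i in range(22)
]
r_baseline_vs_improvement = [
    np.corrcoef(baseline[:, i], (follow_up - baseline)[:, i])[0, 1] for i in range(22)
]
print(np.median(r_baseline_vs_follow_up), np.median(r_baseline_vs_improvement))
```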
Table 3 shows summary statistics for each indicator for the country as a whole. The performance of the median state, as well as the weighted national average, improved on 20 of the 22 indicators (all but use of angiotensin-converting enzyme inhibitors in heart failure and performance of blood culture before starting antibiotics in pneumonia). Performance in the median state on the median indicator was 69.5% appropriate care in 1998-1999 and 73.4% in 2000-2001; the median absolute improvement was 3.9 percentage points, and the median relative improvement was 12.8%. The average relative improvement was 19.9% for outpatient indicators combined and 11.9% for inpatient indicators combined (P<.001).
Figure 1 shows the national pattern of performance in 2000-2001 (follow-up). As in the previous report on 1998-1999, better performance is concentrated in northern states and less populous states. Figure 2 shows the pattern of relative improvement. Geographic trends are similar but less marked than for follow-up performance.
We believe this is the first national study to show improvement in quality of care over time for multiple conditions in inpatient and outpatient settings. However, these quality indicators give a somewhat unbalanced picture of Medicare services: they overrepresent inpatient and preventive services, underrepresent ambulatory care, and represent very few interventional procedures. The study is also generally limited to care delivered in fee-for-service Medicare. Nationally, about 85% of Medicare beneficiaries are cared for under fee-for-service arrangements and about 15% under managed care, but in Arizona, California, Florida, and Pennsylvania more than 25% of beneficiaries are enrolled in managed care. Comparing Health Plan Employer Data and Information Set (HEDIS) data from managed care with these fee-for-service Medicare data presents technical problems that we have not yet solved for these measures, but HEDIS data for managed care demonstrate similar trends.7 Furthermore, because of technical challenges such as risk adjustment, we focused on measuring processes of care critical to outcomes rather than on measuring outcomes themselves.
Growing national alarm over unrealized opportunities to improve care has been accompanied by a significant improvement in care, although far more remains to be done than has been accomplished. The improvement reported herein is consistent with the goals of the Medicare QIO program, which has performance-based contracts with QIOs to achieve precisely these kinds of improvement.8 The QIO program has created the performance measurement system that tracks progress on these topics and has dramatically heightened national awareness of the opportunity for improvement. However, these cross-sectional data do not provide conclusive information about the degree to which the improvement can be attributed to the QIOs' quality improvement efforts. There is evidence that QIO interventions can cause improvement,9 but the effort during the period of this study was national, with no control group, and the strong emphasis on partnerships for improvement makes isolating the contribution of the QIO program almost impossible. Indeed, using a clinical research model to prove linkages between interventions (such as fail-safe systems) and improved quality faces many of the same difficulties as using a clinical research model to study many aspects of patient safety.10 Nor does current evidence allow us to estimate how much of the improvement reported herein may be attributed to heightened awareness of specific clinical treatments and how much may be attributed to changes in health care systems.
Ten years ago, Rogers et al11 and Kahn et al12 reported an improvement in quality of inpatient care for Medicare beneficiaries with 5 conditions during the mid 1980s. Our study suggests that this trend continues and has broadened. However, despite this evidence, a wide gap remains between the care that could be delivered and the care that is delivered to Medicare beneficiaries. This discrepancy is explained in part by the relatively slow diffusion of standards of care, by the continual development of new standards, and by a performance gap that remains very wide compared with the progress made. The greatest improvements in inpatient care were (1) prescription of β-blockers at discharge for patients with acute myocardial infarction, (2) delivery of antibiotics within 8 hours of hospital arrival for patients with pneumonia, and (3) avoidance of sublingual nifedipine for patients with acute stroke. Yet, in 2000-2001, 21% of patients with myocardial infarction and without contraindication to β-blockers were still discharged without a prescription, and 13% of patients with pneumonia still waited more than 8 hours for antibiotics. By contrast, the proportion of patients receiving sublingual nifedipine dropped by 77% to about 1%, and the measure has been dropped from QIO contracts because so little opportunity for improvement remains. Growing evidence suggests that improvement and adoption of best practices are limited or promoted by the systems within which care is delivered and that we cannot close these gaps unless we change the systems.3 Although it is risky to generalize from these few examples, it seems intuitive that changing a system to prevent doing something risky is easier than changing it to do something of potential benefit both reliably and promptly.
The Centers for Medicare & Medicaid Services is dropping stroke from the QIO contracts because there appears to be little further systemic improvement to be achieved in use of sublingual nifedipine and because clinically valid abstraction of eligibility for warfarin use in patients with atrial fibrillation is very difficult.
The Centers for Medicare & Medicaid Services will be adding 3 indicators related to patient safety in the inpatient setting: use of appropriate antibiotics for prophylaxis against surgical infection, appropriate timing of the administration of those antibiotics, and appropriate discontinuation after surgery.13,14 The Centers for Medicare & Medicaid Services and the Joint Commission on Accreditation of Healthcare Organizations have modified their performance indicators to make them virtually identical for areas that both organizations cover. Quality Improvement Organizations will also extend their work to improving performance on quality indicators for both nursing homes and home health agencies. The National Quality Forum endorsed a group of indicators for hospitals in 200215 and is scheduled to endorse additional hospital measures, as well as nursing home measures, in 2003. Quality Improvement Organizations will also be working to help hospitals collect their own data, with the hope that those hospitals will soon decide to publish their performance data.16 The health care system still urgently needs systems that will help it keep up with change, and it needs partnerships among those who support quality improvement to move forward more rapidly.17
The findings of this study are encouraging in showing that improvement is possible and is taking place. They should not lead to complacency: there is still a very long way to go, and medicine is changing at least as fast as our progress in implementing what was the standard of care just a few years ago.
Corresponding Author: Stephen F. Jencks, MD, MPH, Centers for Medicare & Medicaid Services, 7500 Security Blvd, Mail Stop S3-02-01, Baltimore, MD 21244 (e-mail: sjencks@cms.hhs.gov).
Author Contributions: Study concept and design: Jencks, Huff, Cuerdon.
Acquisition of data: Jencks, Huff, Cuerdon.
Analysis and interpretation of data: Jencks, Huff, Cuerdon.
Drafting of the manuscript: Jencks, Cuerdon.
Critical revision of the manuscript for important intellectual content: Jencks, Huff, Cuerdon.
Statistical expertise: Huff, Cuerdon.
Obtained funding: Jencks.
Administrative, technical, or material support: Huff, Cuerdon.
Study supervision: Jencks, Cuerdon.
Funding/Support: All funding for this work was provided by the Centers for Medicare & Medicaid Services.
Disclaimer: The opinions herein are the authors' and not necessarily those of the Centers for Medicare & Medicaid Services.
Acknowledgment: We especially thank Joyce V. Kelly, PhD, who coordinated the national PRO quality improvement efforts, and Jeffrey Kang, MD, MPH, without whom this work would not have been possible. We also thank Dale Burwen, MD, Barbara Fleming, MD, Peter Houck, MD, Annette Kussmaul, MD, David Nilasena, MD, and Diane Ordin, MD, for their leadership on the individual clinical topics, and Susan Arday, PhD, for support of the immunization survey.
References
1. US Department of Health and Human Services. Healthy People 2010: Understanding and Improving Health. Washington, DC: US Government Printing Office; 2000.
2. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.
3. Committee on Quality of Health Care in America. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.
4. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284:1670-1676.
5. Behavioral Risk Factor Surveillance System resources page. Centers for Disease Control and Prevention Web site. Available at: http://www.cdc.gov/brfss. Accessed December 12, 2002.
6. Huff ED. Comprehensive reliability assessment and comparison of quality indicators and their components. J Clin Epidemiol. 1997;50:1395-1404.
9. Marciniak TA, Ellerbeck EF, Krumholz HM, Radford MJ, Vogel RA, Jencks SF. Improving the quality of care for Medicare patients with acute myocardial infarction: results from the Cooperative Cardiovascular Project. JAMA. 1998;279:1351-1357.
11. Rogers WH, Draper D, Kahn KL, et al. Quality of care before and after implementation of the DRG-based prospective payment system: a summary of effects. JAMA. 1990;264:1989-1994.
12. Kahn KL, Keeler EB, Sherwood MJ, et al. Comparing outcomes of care before and after implementation of the DRG-based prospective payment system. JAMA. 1990;264:1984-1988.
13. Mangram AJ, Horan TC, Pearson ML, Silver LC, Jarvis WR, for the Hospital Infection Control Practices Advisory Committee. Guideline for prevention of surgical site infection, 1999. Infect Control Hosp Epidemiol. 1999;20:250-278.
14. Burke JP. Maximizing appropriate antibiotic prophylaxis for surgical patients: an update from LDS Hospital, Salt Lake City. Clin Infect Dis. 2001;33(suppl 2):S78-S83.
15. Hospital Care National Performance Measures (Group 1): Interim Report. Washington, DC: National Forum for Healthcare Quality Measurement and Reporting (National Quality Forum); 2002. Available at: http://www.qualityforum.org/txhospGrp1publicweb.pdf. Accessed December 18, 2002.
16. Brown D. Hospitals will be rated on their performance. Washington Post. December 13, 2002:A1.