Figure. Differences Between Vulnerable Beneficiaries and Others for 3 Medicare Subgroups (n = 46 Indicators)
HPSA indicates Health Professional Shortage Area. Data compare African American with white beneficiaries, those in HPSAs with those not in HPSAs, and those living in poverty areas with those not living in poverty areas. The number of indicators for which there was a statistically significant difference in each comparison is shown as the darker portion of each column.
Table 1. Final Set of Indicators and Unadjusted Proportion of Patients Receiving Care and Number of Patients With Indicated Disease or Condition for 1% Sample of All Beneficiaries, 1994-1996*
Table 2. Comparison of Statistically Significant Results for 1% Sample of Vulnerable and Nonvulnerable Populations, 1994-1996*
Original Contribution
November 8, 2000

Measuring Underuse of Necessary Care Among Elderly Medicare Beneficiaries Using Inpatient and Outpatient Claims

Author Affiliations

Author Affiliations: RAND, Santa Monica, Calif (Drs Asch, Sloss, Brook, and Kravitz); University of California, Los Angeles (Drs Asch and Brook), and Veterans Affairs Greater Los Angeles Health Care System (Dr Asch), Los Angeles, Calif; Direct Research, Vienna, Va (Dr Hogan); and University of California, Davis, Center for Health Services Research in Primary Care (Dr Kravitz).

JAMA. 2000;284(18):2325-2333. doi:10.1001/jama.284.18.2325
Abstract

Context Continuing changes in the health care delivery system make it essential to monitor underuse of needed care, even for relatively well-insured populations. Traditional approaches to measuring underuse have relied on patient surveys and chart reviews, which are expensive, or simple single-condition claims-based indicators, which are not clinically convincing.

Objective To develop a comprehensive, low-cost system for measuring underuse of necessary care among elderly patients using inpatient and outpatient Medicare claims.

Design A 7-member, multispecialty expert physician panel was assembled and used a modified Delphi method to develop clinically detailed underuse indicators likely to be associated with avoidable poor outcomes for 15 common acute and chronic medical and surgical conditions. An automated system was developed to calculate the indicators using administrative data.

Setting and Subjects A total of 345,253 randomly selected elderly US Medicare beneficiaries in 1994-1996.

Main Outcome Measures Proportion of beneficiaries receiving care, stratified by indicators of necessary care (n = 40, including 3 for preventive care), and avoidable outcomes (n = 6).

Results For 16 of 40 necessary care indicators (including preventive care indicators), beneficiaries received the indicated care less than two thirds of the time. Of all indicators, African Americans scored significantly worse than whites on 16 and better on 2; residents of poverty areas scored significantly lower than nonresidents on 17 and higher on 1; residents of federally defined Health Professional Shortage Areas scored significantly lower than nonresidents on 16 and higher on none (P<.05 for all).

Conclusions This claims-based method detected substantial underuse problems likely to result in negative outcomes in elderly populations. Significantly more underuse problems were detected in populations known to receive less-than-average medical care. The method can serve as a reliable, valid tool for monitoring trends in underuse of needed care for older patients and for comparing care across health care plans and geographic areas based on claims data.

Monitoring access to needed medical care has become increasingly important in today's rapidly changing medical marketplace. Traditionally, measurement efforts have focused on vulnerable (eg, poor, uninsured) populations, who have a higher risk of being sick and whose access to care is below average. Numerous studies have shown that these vulnerable populations underuse needed services. They have a lower likelihood of seeing a physician, higher emergency department use, a greater likelihood of delaying care, and lower use of preventive care services.1-4 Not surprisingly, these populations also have poorer health outcomes and higher mortality rates.4-6

The recent push for cost containment in health care has generated additional interest in the underuse issue beyond the traditional concern for vulnerable populations.7 Access to and quality of care have become important issues in assessing all health insurance programs, including Medicare, which has been the focus of a federally mandated tracking effort since 1989.4,8 In addition, managed care, fee restraints, utilization review, and other cost containment mechanisms have introduced the possibility that even fairly mainstream insured populations may encounter barriers to use of needed services.

Measuring underuse of needed care has proven to be a difficult methodological problem. Earlier studies often focused on use of services, such as the emergency department, but could not determine whether patients who underused these services actually received less evidence-based clinical care, as measured by specific criteria for indicated clinical processes. Criticism of the relatively few assessments of underuse that were based on specific criteria centered on 4 concerns. First, sample identification often depended on the very use that was being evaluated. Second, the survey and chart review data these assessments relied on were expensive to collect and, thus, difficult to apply on a routine basis, although targeted techniques were successful in reducing costs in some studies.9 Third, most methods have focused on only a few medical conditions, even though a system that encompasses multiple medical conditions and both inpatient and outpatient care is essential for broad-based quality monitoring.10-13 Fourth, these methods were rarely validated in terms of their ability to detect differences in care across patient populations.

Several recent efforts to develop and evaluate clinical performance indicators of quality have attempted to address these concerns. The Health Care Cost and Utilization Project evaluated a system of 33 mostly surgical indicators of avoidable complications and outcomes based on hospital discharge data.13 As part of its Comparing Hospital Performance Indicators demonstration project,14 the Joint Commission on Accreditation of Healthcare Organizations evaluated medical record–based quality measurement systems for use by its accredited organizations. The Health Care Financing Administration (HCFA) has developed quality indicators to monitor Medicare quality of care for several conditions, but only 2 of them (breast cancer and diabetes) rely solely on administrative data.15,16 Likewise, for a project that compares Medicare risk plans and the Medicare fee-for-service sector using performance measures from the Health Plan Employer Data and Information Set (HEDIS) 3.0, HCFA developed only 3 measures based solely on administrative data (diabetic retinal eye examinations, mental health hospitalization follow-up care, and mammography).17

A number of studies have evaluated rates of procedure use across populations but, with few exceptions, have done so without applying precise indicators of need.4,18-20 Professional societies, such as the American Medical Association, and private evaluation and accreditation bodies, such as the National Committee for Quality Assurance, have also developed multicondition performance measurement systems that are based on chart and administrative data and include underuse evaluations, but these have not been fully evaluated for validity based on expert judgment of criteria or ability to distinguish levels of underuse among patient populations.21

In an effort to monitor underuse among Medicare beneficiaries, we sought to build on prior work to develop and validate an inexpensive measurement system that relies solely on administrative data. We wanted the system to scan a large enough set of important conditions that it would be useful for identifying target areas for quality improvement within a health care delivery system. In addition, we intended to develop a method that would eventually apply to the entire Medicare population for both outpatient and inpatient care. While outpatient data for those who are enrolled in Medicare managed care plans will begin to be collected in 2000-2001, outpatient data on other Medicare beneficiaries are already available. We used rigorous expert panel methods to ensure that the measure was clinically valid and evidence based. We further validated the measure's ability to detect clinically and statistically significant differences by applying it to populations expected to have problems with underuse based on prior studies, such as those in underserved areas, and comparing results for these areas to those with greater medical resources. If successful, development of a clinically valid, comprehensive, inpatient and outpatient claims–based measurement tool capable of detecting differences in underuse would provide health care organizations with the high-yield, low-cost screening tool they need.22

Methods

Using information from the published literature and expert opinion, we developed clinical indicators of underuse for the elderly Medicare population and applied them to administrative data. We tested and validated the system on Medicare claims from 1992-1993 and applied it to claims from 1994-1996. In defining the indicators, we incorporated clinical performance standards related to necessary care. Necessary care denotes care for which (1) the benefits of the care outweigh the risks (ie, the care is appropriate), (2) the benefits to the patient are likely and substantial, and (3) physicians have judged that not recommending the care would be improper. We asked the expert panel to identify services that met this minimum quality standard for the average patient visiting the average physician.23

This research entailed selecting medical conditions, developing clinical indicators for each condition, using an expert panel to rate the indicators, selecting a final set of indicators, developing computer algorithms to calculate the indicators from administrative data, applying the indicators to Medicare claims data, and evaluating the indicators for vulnerable populations.

Selection of Medical Conditions

We started with several lists of conditions thought to be amenable to quality improvement efforts for elderly and other populations.24-26 We narrowed this list by selecting conditions (1) with a high prevalence or incidence among the elderly population, (2) for which effective medical treatment is available, and (3) that are identifiable from diagnoses coded on claims data.

Development of Clinical Indicators

We developed 2 types of indicators: those reflecting minimum standards of acceptable care (necessary care indicators) and those representing potentially avoidable outcomes (avoidable outcome indicators). Necessary care indicators assess whether patients with a specified condition receive certain procedures. For example, patients with diabetes should have an annual eye examination. Avoidable outcome indicators are diagnoses that should appear less frequently in the claims records of patients who have adequate access to needed care, and, thus, reflect antecedent underuse. An example is a diagnosis of ruptured appendix.
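To make the distinction concrete, the sketch below shows one way the two indicator types could be represented for automated scoring. It is illustrative only: the class and field names are ours, and the diagnosis and procedure codes are placeholders rather than the study's actual code sets.

```python
from dataclasses import dataclass

@dataclass
class NecessaryCareIndicator:
    """Patients with a qualifying diagnosis should receive the
    indicated care within the stated observation window."""
    name: str
    qualifying_dx: set[str]   # ICD-9 diagnosis codes defining the condition
    indicated_care: set[str]  # procedure codes counting as the needed care
    window_months: int        # time allowed to observe the care

@dataclass
class AvoidableOutcomeIndicator:
    """A diagnosis that should appear rarely when access to care is adequate."""
    name: str
    outcome_dx: set[str]      # ICD-9 codes flagging the avoidable outcome

# Illustrative instances only; the study's actual code sets were far more detailed.
diabetic_eye_exam = NecessaryCareIndicator(
    name="Eye examination for patients with diabetes",
    qualifying_dx={"250.00"},    # placeholder diabetes code
    indicated_care={"92014"},    # placeholder eye examination code
    window_months=12,
)
ruptured_appendix = AvoidableOutcomeIndicator(
    name="Ruptured appendix",
    outcome_dx={"540.0"},        # placeholder ICD-9 code
)
```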

We based the necessary care indicators on both inpatient and outpatient care. For each condition, we attempted to identify indicators for each stage of care: initial evaluation, diagnostic tests, therapeutic interventions, hospitalization follow-up, monitoring of routine care, and avoidable outcomes. We developed the initial set of proposed underuse indicators from available sources, including randomized controlled trials and meta-analyses of such trials (whenever possible), review articles, practice guidelines, observational studies, medical textbooks, consensus reports, opinions of individual experts, and our own judgment. For each indicator, project staff summarized the supporting evidence with references.27 Randomized controlled trial data on the relationship between indicated care and outcomes were not available from the literature for all indicators. For a subset of the indicators (post–myocardial infarction visit, gastrointestinal tract workup for iron deficiency anemia, carotid imaging for carotid territory stroke, eye examination for diabetic patients, and mammogram for women aged 65-74 years), we were able to link the necessary care specified in the indicator to an improvement in outcomes using simple decision analysis trees anchoring panelists' ratings. For avoidable outcome indicators, we relied on previous work related to sentinel events,28 ambulatory-sensitive conditions,29 and related work.30

Expert Evaluation of Indicators

We assembled a 7-member, multispecialty panel of physicians that convened twice to discuss and rate the indicators using a modified Delphi method. Nominated by relevant specialty societies, the panelists practiced family medicine, general internal medicine, geriatrics, cardiology, nephrology, endocrinology, and general surgery in academic, fee-for-service, and managed care settings. In addition to preventive care, the expert panel evaluated indicators for the following 15 conditions: acute myocardial infarction, anemia, angina, breast cancer, cerebrovascular accident, cholelithiasis, chronic obstructive pulmonary disease, congestive heart failure, depression, diabetes, gastrointestinal bleeding, hip fracture, hypertension, pneumonia, and transient ischemic attack.

Panel members rated the indicators 4 times before and during the 2 meetings. They rated necessary care indicators on 4 dimensions (outcome improvement, necessity, feasibility, and suitability) and avoidable outcome indicators on 3 dimensions (preventability, feasibility, and suitability). Panelists recorded each rating on a 9-point scale (for outcome improvement, 1 = unlikely and 9 = very likely; for necessity, 1 = clearly not necessary and 9 = clearly necessary; for feasibility, 1 = not feasible and 9 = definitely feasible; for suitability, 1 = not suitable and 9 = suitable; and for preventability, 1 = never preventable and 9 = always preventable). After each round of rating, median ratings summarizing the scores of the 7 panelists were calculated. After round 1, indicators with a median rating of 7 or more on the necessity (or preventability) scale and 4 or more on all other scales were retained. We chose 4 as the threshold because we planned to do feasibility testing and wanted to be inclusive. After rounds 2 and 4, indicators were retained based on a median suitability rating of 7 or higher with no statistical disagreement.23
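The round 1 retention rule lends itself to a simple illustration. The following sketch applies the thresholds described above to one hypothetical indicator's ratings; the scale names and data layout are our assumptions, and the "no statistical disagreement" test used in rounds 2 and 4 (defined in the RAND appropriateness method23) is not implemented here.

```python
import statistics

def retain_round1(ratings: dict[str, list[int]]) -> bool:
    """Round 1 rule: median of 7 or more on the necessity (or
    preventability) scale and 4 or more on every other scale."""
    key_scale = "necessity" if "necessity" in ratings else "preventability"
    if statistics.median(ratings[key_scale]) < 7:
        return False
    return all(
        statistics.median(scores) >= 4
        for scale, scores in ratings.items()
        if scale != key_scale
    )

# One hypothetical necessary care indicator, rated by the 7 panelists.
example = {
    "outcome_improvement": [7, 8, 6, 7, 9, 7, 8],
    "necessity":           [8, 7, 9, 7, 8, 7, 9],
    "feasibility":         [5, 6, 7, 4, 6, 5, 7],
    "suitability":         [7, 6, 8, 7, 7, 6, 8],
}
print(retain_round1(example))  # True: median necessity is 8, all others >= 4
```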

We proposed 136 indicators to the panelists: 105 necessary care indicators and 31 avoidable outcome indicators. Prior to the first meeting, panelists received summaries of the evidence supporting each indicator. Before the second meeting, we gave panel members numeric tables showing how the indicators performed on claims data for a 1% sample of Medicare beneficiaries. Indicators were added or deleted after each round based on the panelists' ratings: in round 1, 11 indicators were added; in round 2, 17 were added and 18 were deleted; in round 3, 1 was added and 83 were deleted; and in round 4, 3 were added and 31 were deleted.

Ultimately, almost two thirds of the initial set of indicators were deleted, resulting in a total of 47 indicators (41 necessary care indicators and 6 avoidable outcome indicators; Table 1). All of these indicators received a median suitability rating of 7 or higher without statistical disagreement among the panelists,23 and none had a necessity, preventability, or feasibility rating of less than 6. Although the panelists' ratings varied (mean ratings among the 7 panelists ranged from 6.8 to 7.4), there was substantial agreement. As in previous panel projects, specialist panelists rated indicators in their area of specialty somewhat more highly31 but ultimately tended to agree with the overall final panel disposition (κ = 0.80). No indicator receiving a median suitability rating of 7 or higher was disqualified because of panel disagreement. One necessary care indicator, having a lipid profile in the first year after initial diagnosis of angina, was dropped during subsequent analysis because of implementation and programming difficulties, leaving 46 reported here. Of these, 5 were based on randomized controlled trial evidence, 7 on observational trials, and the remainder on expert opinion.

Application of Indicators to Medicare Claims Data

We developed an automated system for calculating the indicators from administrative data. The system for scoring the indicators uses inpatient and outpatient utilization data from Medicare part A and B claims. All Medicare beneficiaries younger than 65 years were excluded from the sample to ensure generalizability since all people aged 65 years or older are eligible for Medicare.

We analyzed data on a randomly selected 1% sample of Medicare beneficiaries aged 65 years or older from 2 periods: January 1, 1992, through December 31, 1993, and July 1, 1994, through June 30, 1996. We used the results of the 1992-1993 analysis to develop the computer algorithms, which we then applied to the 1994-1996 data reported here.

The 2 samples were restricted to beneficiaries aged 65 years or older who were enrolled in traditional fee-for-service Medicare for at least 1 month during 1 of the study periods. We excluded months during which beneficiaries were enrolled in a managed care program because these utilization data were not available for the study periods. Hospital inpatient, hospital outpatient, and physician claims for services incurred from July 1, 1994, through June 30, 1996, were obtained from Medicare Standard Analytic files. Beneficiary characteristics were merged from records in the denominator file. A total of 345,253 individuals met our inclusion criteria for at least 1 indicator.

We identified beneficiaries with the target medical conditions and treatments using diagnoses and procedures recorded on Medicare claims. We constructed algorithms for each indicator specifying the qualifying diagnosis (International Classification of Diseases, Ninth Revision [ICD-9] codes), the time frame required to observe whether care was provided, and the necessary care (ICD-9 and Current Procedural Terminology, Fourth Edition procedure codes).
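A minimal sketch of how such an algorithm might score one beneficiary against one necessary care indicator follows. The claim layout, field names, and simplified month arithmetic are our assumptions, not the study's implementation; a real algorithm must also handle enrollment gaps and the full code sets.

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months elapsed from start to end (simplified)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def received_indicated_care(
    claims: list[dict],
    qualifying_dx: set[str],
    indicated_px: set[str],
    window_months: int,
) -> bool | None:
    """Score one necessary care indicator for one beneficiary.

    Each claim is assumed to be a dict with a service "date", a set of
    ICD-9 diagnosis codes "dx", and a set of procedure codes "px".
    Returns None when the beneficiary has no qualifying diagnosis.
    """
    qualifying = [c for c in claims if c["dx"] & qualifying_dx]
    if not qualifying:
        return None  # beneficiary is not eligible for this indicator
    index_date = min(c["date"] for c in qualifying)
    return any(
        c["px"] & indicated_px
        for c in claims
        if 0 <= months_between(index_date, c["date"]) <= window_months
    )

# Hypothetical example: eye examination within 12 months of a diabetes claim.
claims = [
    {"date": date(1995, 3, 1), "dx": {"250.00"}, "px": set()},
    {"date": date(1995, 9, 10), "dx": set(), "px": {"92014"}},
]
print(received_indicated_care(claims, {"250.00"}, {"92014"}, 12))  # True
```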

For each indicator, we included only beneficiaries with the relevant diagnoses who were enrolled in Medicare fee-for-service for an adequate number of months to have received the necessary care. We then calculated the proportion of these beneficiaries who had received the necessary care, as well as the proportion of several vulnerable populations within this sample who had received the necessary care. For each indicator, the age-sex distribution of the entire eligible population was used for direct standardization of each vulnerable population's rate. We conducted 3 paired t tests for each indicator, comparing underuse for (1) African American and white beneficiaries; (2) beneficiaries residing in a federally defined Health Professional Shortage Area (HPSA)32 and those residing outside an HPSA; and (3) beneficiaries residing in a poverty ZIP code (in which more than 30% of the population lives below the federally defined poverty line) and those residing in a nonpoverty ZIP code. For indicators that focus on care following hospitalization, we excluded beneficiaries discharged to other hospital facilities or to home health agencies.
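For readers unfamiliar with direct standardization: each vulnerable group's stratum-specific rates are weighted by the age-sex distribution of the entire eligible population, so that differences in age and sex mix do not drive the comparison. A toy example with invented strata and counts:

```python
def directly_standardized_rate(
    stratum_rates: dict[str, float],  # group's rate within each age-sex stratum
    standard_counts: dict[str, int],  # eligible population count per stratum
) -> float:
    """Weight the group's stratum-specific rates by the standard
    population's age-sex distribution (direct standardization)."""
    total = sum(standard_counts.values())
    return sum(
        stratum_rates[s] * standard_counts[s] / total
        for s in standard_counts
    )

# Hypothetical two-stratum example.
rates = {"F65-74": 0.60, "F75+": 0.45}
standard = {"F65-74": 7000, "F75+": 3000}
print(directly_standardized_rate(rates, standard))  # 0.555
```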

Results
All Beneficiaries

Table 1 shows results for the final 46 indicators for all beneficiaries based on the Medicare claims data analysis for 1994-1996. For each indicator, we present 2 types of data: the proportion of beneficiaries who received the indicated care and the number of beneficiaries who were eligible to receive the care (ie, a diagnosis or treatment code indicated that they had the condition). All beneficiaries were eligible for at least 1 preventive care indicator and 45% of beneficiaries were eligible for at least 1 nonpreventive care indicator.

The results for all beneficiaries show that the proportion who received necessary care varied greatly by condition and treatment. For 14 of the 37 necessary care indicators, the administrative data show that less than two thirds of Medicare beneficiaries with these conditions received care that a physician panel considered to be a minimum quality standard. Five avoidable outcomes occurred infrequently among all beneficiaries; results for the sixth avoidable outcome show that more than half of all patients with chronic obstructive pulmonary disease were hospitalized for a respiratory diagnosis. Of all beneficiaries, 87% had visited a physician 1 or more times in the past year, and 50% had had an eye examination in the past 2 years. Of the female beneficiaries, almost half of those younger than 75 years had had a mammogram in the past 2 years.

Vulnerable Populations

Table 2 displays the statistically significant results of the paired comparisons we calculated for the 3 vulnerable populations. We show the age- and sex-adjusted proportion of vulnerable and nonvulnerable beneficiaries who received the indicated care and the number of vulnerable and nonvulnerable beneficiaries who were eligible to receive the care. The table flags both results that were statistically significant at the 5% level and those that, in addition, differed in size by 10% or more.

The results indicate that vulnerable populations (African Americans, those living in HPSAs, and those living in poverty areas) were less likely than their counterparts to receive necessary care and preventive care and were more likely to have higher rates of avoidable outcomes. Figure 1 shows the comparative results for these vulnerable populations: the total number of indicators (necessary care, avoidable outcome, and preventive care) on which each vulnerable group fared better than, worse than, or the same as the nonvulnerable population (for avoidable outcome indicators, faring worse means a higher rate of the outcome).

African Americans scored significantly worse on 16 of 46 indicators: 10 of the nonpreventive necessary care indicators, all 3 of the preventive care indicators, and 3 of the 6 avoidable outcome indicators. African Americans had lower rates of follow-up after hospitalization and fewer necessary blood tests and eye examinations. African Americans scored significantly better on only 2 indicators. Health Professional Shortage Area and poverty area residents showed similar patterns. They scored significantly worse on 12 and 11 of the necessary care indicators, 3 and 3 of the preventive care indicators, and 1 and 3 of the avoidable outcome indicators, respectively. Poverty area residents scored significantly better on only 1 indicator. Restricting analyses of necessary care indicators to patients enrolled for at least 13 months did not change the direction or significance of any of the necessary care indicator results.

We also compared underuse in vulnerable populations according to the strength of the evidence on which the indicators were based: randomized controlled trials and observational trials vs expert opinion. For the 12 indicators based on randomized controlled trials and observational trials, African Americans scored worse on 10 of the indicators, of which 5 were statistically significant, and scored better on 2 of the indicators; HPSA residents scored worse on 8 of these indicators, 4 of which were statistically significant, and scored better on 4 of these indicators; and residents of poverty areas scored worse on 10 of these indicators, 5 of which were statistically significant, and scored better on 2 of the indicators.

Of the 34 indicators based on expert opinion, African Americans scored worse on 30 of the indicators, 11 of which were statistically significant, scored better on 3 of the indicators, of which 2 were statistically significant, and were equal to non–African Americans on 1 indicator; residents of HPSA areas scored worse on 28 of the indicators, of which 12 were statistically significant, scored better on 4 of the indicators, and were equal to non–HPSA residents on 2 indicators; and residents of poverty areas scored worse on 27 of these indicators, of which 12 were statistically significant, scored better on 6 of the indicators, of which 1 was statistically significant, and were equal to residents of nonpoverty areas on 1 indicator.

Comment

The underuse monitoring system we describe is significant in terms of its breadth, the rigor with which the indicators were selected, and its relative ease of use. The indicators span several phases of care, including prevention, initial evaluation, diagnostic tests, therapeutic interventions, follow-up, and monitoring for acute, chronic, medical, and surgical conditions. Because we included preventive care indicators, every Medicare beneficiary is eligible for at least 1 indicator (ie, yearly physician visit); moreover, 45% of the beneficiaries were eligible for at least 1 non–preventive necessary care indicator. We selected the 47 indicators using a rigorous evidence-based process of literature review, feasibility testing, and expert opinion. Moreover, because the indicators were designed to be applied to routinely collected data, we avoided the costs and resources associated with medical record review.

When we applied the system to Medicare claims data, our results suggested that underuse of necessary care is widespread for the 15 target conditions, even in the relatively well-insured Medicare population. For almost half of the indicators, less than two thirds of beneficiaries received needed care. Underuse was more likely to occur among African Americans and residents of poverty areas or those areas with a shortage of health care professionals, indicating construct validity for our measures. Although some overlap may exist among these 3 vulnerable populations, this is not a serious issue since the results are similar for all of the groups. These findings persisted regardless of the strength of the evidence on which the indicators were based (randomized controlled trials and observational trials vs expert opinion).

The clinically based indicator system presented here has several advantages.33 First, claims data are routinely collected and relatively inexpensive to analyze. Moreover, measures based on claims data can be calculated in a timely fashion, thus facilitating the repeated evaluations crucial to identifying trends and analyzing programmatic success. Claims data also allow for easy identification of geographical and ethnic subgroups with particular access problems. This system combines inpatient and outpatient claims data, thereby providing a more complete assessment of underuse, although the lack of pharmacy data limited indicator selection. Finally, the indicators were subject to rigorous expert review, though the experts often lacked randomized controlled trials to support their decisions.

By 2001, Medicare health maintenance organizations (Medicare+Choice organizations) will be required to submit encounter data to HCFA for care other than inpatient hospital stays.34 In 2004, these data, including physician office visits and hospital outpatient department visits, will be incorporated into a new risk adjustment method that HCFA will use to make monthly capitation payments to Medicare+Choice organizations.34 Thus, with access to inpatient and outpatient data for Medicare beneficiaries enrolled in both traditional Medicare and Medicare+Choice, the system could be applied routinely, uniformly, and inexpensively to the entire Medicare population. Medicare's current survey- and HEDIS-based systems, which compare Medicare+Choice and traditional Medicare, are costly, may be too expensive to apply to all beneficiaries, and may not translate well between the 2 settings. Beyond Medicare, the Institute of Medicine and the President's Advisory Commission have called for a national quality report card, and a system such as this might serve as its basis in the short term.7

However, claims data–based systems have several limitations as well. First, administrative data lack the clinical detail found in the medical record. Hannan et al35 found that complications and comorbidities were more difficult to distinguish using administrative data than using medical records. Iezzoni et al36 found that their administrative data–based screening tool for hospital quality had a sensitivity of 0.92 and a specificity of 0.62 compared with charts. While prospective electronic clinical data collection would combine the detail of medical records with the ease of collection of administrative data,8 such systems are not yet available.

Validation studies of claims-based systems have shown them to be reasonably accurate but far from perfect. In 1 study, Medicare claims identified 95.6% of cataract surgeries.37 However, other studies have found underreporting of chronic conditions in inpatient claims, although some of the conditions included in our system (diabetes, congestive heart failure, unstable angina, and malignancies) were better represented.38-40 Similarly, Roos et al41 found that administrative data were quite specific (0.88-0.98) in identifying respiratory and cardiovascular surgical comorbidities when compared with anesthesiologists' assessments, but the data were not very sensitive. Adding outpatient claims to the system, as we did, may improve chart-claims agreement.42

Another potential limitation involves tracking medical services obtained outside of the Medicare program. While Medicare is the primary payer and, thus, will receive the vast majority of claims even for services that other programs cover secondarily, there are some service categories that may be underrepresented. The design of the indicator system takes this into account; for example, there are no medication indicators because Medicare lacks an outpatient pharmaceutical benefit. However, some indicated services (eg, mammograms) may be obtained outside the Medicare program. Therefore, our results can be used to evaluate underuse only within Medicare.

It is also difficult to adjust our measures for risk. Since established risk adjustment systems predict outcomes rather than receipt of indicated care, their applicability to the necessary care indicators presented here is questionable.43 Adjusting for comorbidities using claims data is also problematic since it is sometimes difficult to distinguish complications (results of poor care) from comorbidities (predisposition to poor outcomes).35 By limiting eligibility to a reasonably well-defined and homogeneous set of clinical circumstances, we implicitly risk-adjusted our results (eg, diabetic patients with and without comorbidities both need eye examinations). However, residual unmeasured risk may explain some of the observed variations. In assessing performance of individual hospitals or physicians, including comorbidities would be reasonable, especially for the avoidable outcome indicators. For comparison of population-based measures, the age- and sex-adjusted measures reported here are an important first step.

Additionally, it should be noted that the mammogram indicator was calculated only for women younger than 75 years and does not meet the current clinical standard for yearly mammograms. When the study was conducted, Medicare would pay for a mammogram only once every 2 years for women who were not at high risk for breast cancer, which was the clinical standard at that time. Following the Balanced Budget Act of 1997, Medicare changed its policy and now covers yearly mammograms for all women aged 65 years or older.

In our calculations, we did not expect to obtain a value of 1.0 for any of the indicators, even among people with excellent access, because claims data cannot capture all clinical characteristics that comprise the indications and contraindications for specific services. In addition, billing and coding problems could affect the claims data, and we had no mechanism to track services that patients obtained outside of Medicare. This raises the issue of what the calculated values for each indicator mean. A benchmark approach could be used to solve this problem: by estimating the rates for white populations living in nonpoverty and non-HPSA areas, we could establish a benchmark with which the other values could be compared.
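Under such an approach, each group's adjusted rate would be read against the best observed rate rather than against an unattainable 1.0. A toy illustration with invented numbers:

```python
# Hypothetical benchmark comparison for one indicator: express each group's
# age- and sex-adjusted rate as a fraction of the benchmark rate observed
# among white beneficiaries in nonpoverty, non-HPSA areas. All numbers invented.
benchmark_rate = 0.72

group_rates = {
    "African American": 0.58,
    "HPSA resident": 0.61,
    "Poverty area resident": 0.56,
}

for group, rate in group_rates.items():
    print(f"{group}: {rate / benchmark_rate:.2f} of benchmark")
```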

Despite these limitations, claims-based systems may be used in a variety of ways to inexpensively measure underuse. Screening administrative data to determine areas of a health care system in need of further investigation is the first step in a continuous quality improvement framework,22,44,45 allowing identification of individual facilities or medical groups at risk. Moreover, many of the system's inherent biases will be reduced if the measures are used to track claims data for an extended period. Therefore, this system may be used to guide internal quality improvement efforts for large medical groups or plans, as well as purchasers' or regulators' evaluations. Future research, using chart reviews and patient surveys, is needed to directly validate the indicator system. However, the results of our initial application indicate substantial underuse, particularly among traditionally vulnerable populations.

References
1. Hafner-Eaton C. Physician utilization disparities between the uninsured and insured. JAMA. 1993;269:787-792.
2. Kahn KL, Pearson ML, Harrison ER, et al. Health care for black and poor hospitalized Medicare patients. JAMA. 1994;271:1169-1174.
3. Woolhandler S, Himmelstein D. Reverse targeting of preventive care due to lack of health insurance. JAMA. 1988;259:2872-2874.
4. Physician Payment Review Commission. Access for Medicare beneficiaries. In: Annual Report to Congress, 1994. Washington, DC: Physician Payment Review Commission; 1994:ch 17.
5. Pappas G, Queen S, Hadden W, Fisher G. The increasing disparity in mortality between socioeconomic groups in the United States. N Engl J Med. 1993;329:103-109.
6. Adler NE, Boyce T, Chesney MA, et al. Socioeconomic inequalities in health. JAMA. 1993;269:3140-3145.
7. President's Advisory Commission on Consumer Protection and Quality in the Health Care Industry. The state of quality: how good is care? In: Quality First: Better Health Care for All Americans. Washington, DC: Government Printing Office; 1998:21-40.
8. Schneider EC, Riehl V, Courte-Wienecke S, Eddy DM, Sennett C, for the National Committee for Quality Assurance. Enhancing performance measurement. JAMA. 1999;282:1184-1190.
9. Garnick DW, Lawthers AG, Palmer RH, et al. A computerized system for reviewing medical records from physicians' offices. Jt Comm J Qual Improv. 1994;20:679-694.
10. Institute of Medicine Committee on Monitoring Access to Personal Health Care Services. Access to Health Care in America. Washington, DC: National Academy Press; 1993.
11. Palmer H. Measuring clinical performance to provide information for quality improvement. Qual Manag Health Care. 1996;4:1-6.
12. Samsa GP, Bian J, Lipscomb J, Matchar DB. Epidemiology of recurrent cerebral infarction. Stroke. 1999;30:338-349.
13. Johantgen M, Elixhauser A, Ball JK, Goldfarb M, Harris DR. Quality indicators using hospital discharge data. Jt Comm J Qual Improv. 1998;24:88-105.
14. Kritchevsky SB, Simmons BP, Braun BI. The project to monitor indicators. Infect Control Hosp Epidemiol. 1995;16:33-35.
15. Health Care Financing Administration. Breast Cancer National Project. Available at: http://www.hcfa.gov/quality/3v.htm. Accessed January 13, 2000.
16. Health Care Financing Administration. Diabetes National Project. Available at: http://www.hcfa.gov/quality/3t.htm. Accessed January 13, 2000.
17. Health Care Financing Administration. Research and Analytic Support for Implementing Performance Measurement in Medicare Fee-for-Service. Available at: http://www.hcfa.gov/quality/docs/ffs-1.htm. Accessed January 13, 2000.
18. Physician Payment Review Commission. Beneficiaries and the Medicare fee schedule. In: Annual Report to Congress, 1993. Washington, DC: Physician Payment Review Commission; 1994:ch 5.
19. Merrell K, Colby DC, Hogan C. Medicare beneficiaries covered by Medicaid buy-in agreements. Health Aff (Millwood). 1997;16:175-184.
20. Physician Payment Review Commission. Monitoring Access of Medicare Beneficiaries. Washington, DC: Physician Payment Review Commission; 1992. Report No. 92-5:7-8.
21. Bishop W, Park K, Rector J, Faulkner M. HEDIS audits. J Healthc Qual. 1998;20:10-15.
22. Goldfield N, Villani J. The use of administrative data as the first step in the continuous quality improvement process. Am J Med Qual. 1996;11:S35-S38.
23. Leape LL, Hilborne LH, Kahan JP, et al. Coronary Artery Bypass Graft: A Literature Review and Ratings of Appropriateness and Necessity. Santa Monica, Calif: RAND; 1991.
24. Fink A, Siu AL, Brook RH, et al. Assuring the quality of health care for older persons: an expert panel's priorities. JAMA. 1987;258:1905-1908.
25. Kahn KL, Rogers WH, Rubenstein LV, et al. Measuring quality of care with explicit process criteria before and after implementation of the DRG-based prospective payment system. JAMA. 1990;264:1969-1970.
26. Siu AL, McGlynn EA, Morgenstern H, et al. A fair approach to comparing quality of care. Health Aff (Millwood). 1991;10:62-75.
27. Asch S, Sloss E, Kravitz R, Kamberg C, Young R. Access to Care for the Elderly Project (ACE-PRO) Project Memorandum. Santa Monica, Calif: RAND; 1995. Publication PM-435-PPRC.
28. Rutstein DD, Berenberg W, Chalmers TC, et al. Measuring the quality of medical care: a clinical method. N Engl J Med. 1976;294:582-588.
29. Billings J, Zeitel L, Lukomnik J, et al. Impact of socioeconomic status on hospital use in New York City. Health Aff (Millwood). 1993;12:162-173.
30. Weissman JS, Gatsonis C, Epstein AM. Rates of avoidable hospitalization by insurance status in Massachusetts and Maryland. JAMA. 1992;268:2388-2394.
31. Kahan JP, Park E, Leape LL, et al. Variations by specialty in physician ratings of the appropriateness and necessity of indications for procedures. Med Care. 1996;34:512-523.
32. Criteria for designation of areas having shortages of primary medical care professional(s). Available at: http://www.bpch.hrsa.gov/dsd/hpsa_fr2.htm. Accessed December 16, 1999.
33. Mitchell JB, Bubolz T, Paul JE, et al. Using Medicare claims for outcomes research. Med Care. 1994;32:JS38-JS51.
34. MedPAC Comment on HCFA's Risk Adjustment Proposal. Washington, DC: Medicare Payment Advisory Commission; 1999.
35. Hannan EL, Racz MJ, Jollis JG, Peterson ED. Using Medicare claims data to assess provider quality for CABG surgery. Health Serv Res. 1997;31:659-678.
36. Iezzoni LI, Foley SM, Heeren T, et al. A method for screening the quality of hospital care using administrative data. QRB Qual Rev Bull. 1992;18:361-371.
37. Gray DT, Hodge DO, Ilstrup DM, Butterfield LC, Baratz KH. Concordance of Medicare data and population-based clinical data on cataract surgery utilization in Olmsted County, Minnesota. Am J Epidemiol. 1997;145:1123-1126.
38. Malenka DJ, McLerran D, Roos N, Fisher ES, Wennberg JE. Using administrative data to describe casemix. J Clin Epidemiol. 1994;47:1027-1032.
39. Romano PS, Roos LL, Luft HS, Jollis JG, Doliszny K. A comparison of administrative versus clinical data: coronary artery bypass surgery as an example. J Clin Epidemiol. 1994;47:249-260.
40. McClish DK, Penberthy L, Whittemore M, et al. Ability of Medicare claims data and cancer registries to identify cancer cases and treatment. Am J Epidemiol. 1997;145:227-233.
41. Roos LL, Sharp SM, Cohen MM. Comparing clinical information with claims data. J Clin Epidemiol. 1991;44:881-888.
42. Miller ME, Welch WP, Wong HS. Exploring the relationship between inpatient facility and physician services. Med Care. 1997;35:114-127.
43. Charlson M, Szatrowski TP, Peterson J, Gold J. Validation of a combined comorbidity index. J Clin Epidemiol. 1994;47:1245-1251.
44. Iezzoni LI, Daley J, Heeren T, et al. Using administrative data to screen hospitals for high complication rates. Inquiry. 1994;31:40-55.
45. Leatherman S, Peterson E, Heinen L, Quam L. Quality screening and management using claims data in a managed care setting. QRB Qual Rev Bull. 1991;17:349-359.