Table 1. Diabetes Cohort Baseline Characteristics, 2010-2014
Table 2. Clinical Practice Group Performance on Current Process and Disease Control Measures and Rates of Utilization-Based Outcomes
Table 3. Adjusted Correlations Among Performance on Diabetes Process, Disease Control Quality Measures, and Utilization-Based Outcomes
Table 4. Estimate of Clinical Practice Group Quality Performance on Hospitalization Rates for Major Adverse Cardiovascular Events or Diabetes
    Original Investigation
    Health Policy
    August 14, 2019

    Association Between Clinical Practice Group Adherence to Quality Measures and Adverse Outcomes Among Adult Patients With Diabetes

    Author Affiliations
    • 1The Dartmouth Institute, Dartmouth Medical School, Lebanon, New Hampshire
    • 2Division of Cardiology, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
    • 3Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts
    • 4Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
    • 5Division of General Medicine, Beth Israel Deaconess Hospital, Boston, Massachusetts
    JAMA Netw Open. 2019;2(8):e199139. doi:10.1001/jamanetworkopen.2019.9139
    Key Points

    Question  What are the correlations between types of quality measures in treatment of diabetes, and how might these correlations be associated with quality-based reimbursement for chronic diseases?

    Findings  In this cross-sectional study of 652 258 adults with diabetes from 2010 to 2014 at the clinical practice group level, correlations among quality measures were weak, and process and disease control performance were not strongly associated with hospitalization rates. Process and disease control performance at the clinical practice group level explained 3.9% of the variation in hospitalization rates at the individual level.

    Meaning  These findings raise concern about the use of utilization-based outcomes (hospitalizations) as a measure of quality in chronic diseases.

    Abstract

    Importance  Clinical practice group performance on quality measures associated with chronic disease management has become central to reimbursement. Therefore, it is important to determine whether commonly used process and disease control measures for chronic conditions correlate with utilization-based outcomes, as they do in acute disease.

    Objective  To examine the associations among clinical practice group performance on diabetes quality measures, including process measures, disease control measures, and utilization-based outcomes.

    Design, Setting, and Participants  This retrospective, cross-sectional analysis examined commercial claims data from a national health insurance plan. A cohort of eligible beneficiaries with diabetes aged 18 to 65 years who were enrolled for at least 12 months from January 1, 2010, through December 31, 2014, was defined. Eligible beneficiaries were attributed to a clinical practice group based on the plurality of their primary care or endocrinology office visits. Data were analyzed from October 1, 2018, through April 30, 2019.

    Main Outcomes and Measures  For each clinical practice group, performance on current diabetes quality measures included 3 process measures (2 testing measures [hemoglobin A1c {HbA1c} and low-density lipoprotein {LDL} testing] and 1 drug use measure [statin use]) and 2 disease control measures (HbA1c <8% and LDL level <100 mg/dL). The rates of utilization-based outcomes, including hospitalization for diabetes and major adverse cardiovascular events (MACEs), were also measured.

    Results  In this cohort of 652 258 beneficiaries with diabetes from 886 clinical practice groups, 42.9% were aged 51 to 60 years, and 52.6% were men. Beneficiaries lived in areas that were predominantly white (68.1%). At the clinical practice group level, except for high correlation between the 2 testing measures, correlations among different quality measures were weak (r range, 0.010-0.244). Rate of HbA1c of less than 8% had the strongest correlation with hospitalization for MACE (r = −0.046; P = .03) and diabetes (r = −0.109; P < .001). Rates of HbA1c control at the clinical practice group level were not significantly associated with likelihood of hospitalization at the individual level. Performance on the process and disease control measures together explained 3.9% of the variation in the likelihood of hospitalization for a MACE or diabetes at the individual level.

    Conclusions and Relevance  In this study, performance on utilization-based measures—intended to reflect the quality of chronic disease management—was only weakly associated with direct measures of chronic disease management, namely, disease control measures. This correlation should be considered when determining the degree of financial emphasis to place on hospitalization rates as a measure of quality in treatment of chronic diseases.

    Introduction

    Since the early 2000s, performance on quality measures has become central to the reimbursement of medical providers (defined as any health care professionals who directly care for patients and bill Medicare under their own license), most commonly within clinical practice groups and hospitals. Many commercial insurance companies now use pay-for-performance programs to define some percentage of provider reimbursement,1 and with the passage of the Medicare and CHIP Reauthorization Act of 2015, most providers now participate in a value-based payment program that relies on quality measurement to determine reimbursement.2 Therefore, how quality performance is measured and how those measures correlate with each other is of paramount importance to patients, providers, and payers.

    Recently, payment models have gravitated away from traditional process and disease control measures and toward utilization-based outcomes, such as hospitalizations, to define and measure quality. This change is demonstrated most clearly by the evolution of the accountable care organization quality benchmark measures.3,4 Moreover, the Medicare Payment Advisory Commission recently recommended a focus on hospitalization rates as a population-based quality measure in alternative payment models5 and in its proposed replacement of the Merit-Based Incentive Payment System, largely to ease the burden of quality reporting.6

    In acute disease, the associations between process measures, utilization-based outcomes, and patient outcomes (eg, mortality) are well established and have been previously studied.7-9 In chronic diseases, however, in addition to the challenges of time lags and lower mortality rates, the associations between different types of quality measures have not been well studied.9 Given these challenges and the increasing prevalence, cost, and complexity of chronic disease care, further work is timely to determine the association among traditional process measures of quality (eg, testing and drug use measures), which are easy to measure but may not be strongly correlated with outcomes10,11; disease control measures, which are costly to measure but have robust clinical validity12; and utilization-based outcomes, which are also easy to measure but of unclear validity in chronic diseases. This study examined the associations among quality metrics, including process, disease control, and utilization-based outcomes, for diabetes, one of the most common and clinically important chronic conditions,13 and explored the implications for quality-based reimbursement.

    Methods
    Study Population

    We used medical and pharmacy claims from January 1, 2010, through December 31, 2014, and laboratory results from a large national health insurance plan. We obtained institutional review board approval from Harvard University’s Committee on the Use of Human Subjects, which waived the need for informed consent because the data sets were deidentified. This article is compliant with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies. All analyses were performed from October 1, 2018, through April 30, 2019.

    Our sample included individuals aged 18 to 65 years with diabetes who were enrolled continuously for at least 1 calendar (measurement) year from 2010 through 2014. Diabetes was defined by the 2013 Healthcare Effectiveness Data and Information Set criteria of more than 1 inpatient visit or more than 2 outpatient visits with an International Classification of Diseases, Ninth Revision (ICD-9) code for diabetes. Because pharmacy data were limited to beneficiaries with pharmacy benefits with the same insurer (48% of the diabetes cohort) and laboratory data were available for only some beneficiaries (52% of the diabetes cohort), depending on where they underwent laboratory testing, some quality measures were limited to these patient subsets.
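
    As a concrete illustration of the cohort definition above, the following is a minimal sketch, not the authors' code; the claims table and the column names member_id, setting, and dx_code are assumptions made for the example.

```python
# Minimal sketch of the claims-based diabetes flag described in the text.
# Assumptions: `claims` has one row per visit with columns member_id,
# setting ("inpatient"/"outpatient"), and dx_code (ICD-9 string).
import pandas as pd

def flag_diabetes_cohort(claims: pd.DataFrame) -> pd.Index:
    """Return member IDs meeting the definition used in the text: more than
    1 inpatient visit or more than 2 outpatient visits with an ICD-9
    diabetes (250.xx) diagnosis code."""
    dm = claims[claims["dx_code"].str.startswith("250")]
    counts = (
        dm.groupby(["member_id", "setting"])
        .size()
        .unstack(fill_value=0)
        .reindex(columns=["inpatient", "outpatient"], fill_value=0)
    )
    meets = (counts["inpatient"] > 1) | (counts["outpatient"] > 2)
    return counts.index[meets]
```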

    Patient Characteristics

    Age and sex were obtained from the enrollment file. Other demographic and socioeconomic variables, including race/ethnicity (percentage white, black, or Hispanic), educational level (percentage with a college education), and median income, were determined using the beneficiaries’ zip codes and US Census data.14 Rural or urban residence was determined using the rural-urban commuting area code of the beneficiaries’ zip codes.15 Geographic region was determined using the beneficiaries’ zip codes and classified into 1 of 4 regions (Northeast, Midwest, South, and West) as defined by the Centers for Disease Control and Prevention.16 Type 1 diabetes was determined using ICD-9 codes (250.x1 or 250.x3).17 Other comorbidities were measured during the calendar year using the DxCG intelligence tool’s proprietary algorithm.18
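
    For the type 1 diabetes flag, a tiny illustrative check of the ICD-9 pattern 250.x1 or 250.x3 described above might look like the following; the helper name is ours, not the study's.

```python
# Illustrative check for ICD-9 type 1 diabetes codes of the form 250.x1 or 250.x3.
import re

TYPE1_PATTERN = re.compile(r"^250\.\d[13]$")

def is_type1_code(icd9: str) -> bool:
    return bool(TYPE1_PATTERN.match(icd9))

assert is_type1_code("250.01") and is_type1_code("250.13")
assert not is_type1_code("250.00")  # fifth digits 0 and 2 denote type 2/unspecified
```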

    Patient to Clinical Practice Group Attribution

    Eligible patients were assigned to clinical practice groups, defined using tax identification numbers (TINs). A single TIN can represent a solo health care professional, a small physician group, or a larger group; large organizations occasionally bill using multiple TINs. Patients were assigned to clinical practice groups (TINs) using a modified version of the 2-step attribution rule of the Centers for Medicare & Medicaid Services.19 In each year, beneficiaries were assigned to a TIN using a plurality of primary care or, because we focus on patients with diabetes, endocrinology face-to-face office visits (weighted equally and identified using Current Procedural Terminology [CPT] codes 99201-99215, 99241-99245, G0402, G0438, and G0439), with specialty codes for family medicine (08), internal medicine (11), geriatric medicine (38), general provider organization (01), or endocrinology (46). If patients had the same number of visits to more than 1 TIN, they were assigned to the TIN with the greater sum of allowed costs. To ensure sufficient precision within a clinical practice group, we restricted our sample to clinical practice groups (TINs) with at least 40 attributed beneficiaries with diabetes, of whom 20 had to have laboratory and pharmacy data for every relevant measure.
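
    A simplified sketch of this plurality attribution with a cost tie-break is shown below. The column names (member_id, year, tin, allowed_cost) are assumptions, the specialty and CPT filters are presumed to have been applied upstream, and the additional requirement of 20 members with laboratory and pharmacy data is omitted for brevity.

```python
# Simplified sketch of plurality-based attribution of beneficiaries to TINs.
# Assumptions: `visits` has one row per qualifying face-to-face office visit
# with columns member_id, year, tin, allowed_cost.
import pandas as pd

def attribute_to_tin(visits: pd.DataFrame, min_group_size: int = 40) -> pd.DataFrame:
    """Assign each member-year to one TIN by plurality of visits,
    breaking ties by the larger sum of allowed costs, then keep only
    TINs with at least `min_group_size` attributed members per year."""
    agg = (
        visits.groupby(["member_id", "year", "tin"])
        .agg(n_visits=("allowed_cost", "size"), cost=("allowed_cost", "sum"))
        .reset_index()
    )
    # Plurality of visits; ties broken by the greater sum of allowed costs.
    agg = agg.sort_values(["n_visits", "cost"], ascending=False)
    assigned = agg.drop_duplicates(subset=["member_id", "year"], keep="first")
    # Restrict to sufficiently large clinical practice groups.
    sizes = assigned.groupby(["year", "tin"])["member_id"].transform("size")
    return assigned[sizes >= min_group_size]
```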

    Process and Disease Control Measures

    For each clinical practice group, we analyzed its rates of adherence to the diabetes quality measures most commonly used by current alternative payment and quality measurement/reimbursement models (eTable 1 in the Supplement). These measures included 3 process measures (2 testing-based measures [hemoglobin A1c {HbA1c} and low-density lipoprotein {LDL} level] and 1 drug use measure [statin use]) and 2 disease control measures (HbA1c <8% and LDL level <100 mg/dL; to convert HbA1c to proportion of total hemoglobin, multiply by 0.01; LDL to millimoles per liter, multiply by 0.0259).20-22 Although the blood cholesterol guidelines switched from being LDL based to risk factor based in 2013,23 we included LDL level control because it is relevant for most of the time frame of our data (2010-2012 and most of 2013). The details of how each measure was coded are included in eTable 2 in the Supplement.
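
    As an illustration, group-level achievement rates for these 5 measures could be computed from beneficiary-level 0/1 indicators along the lines sketched below; the indicator column names are ours and assume the measure flags have already been derived per beneficiary-year.

```python
# Sketch of group-level measure rates; the 0/1 indicator columns below are
# assumed to be precomputed per beneficiary-year, not actual fields in the data.
import pandas as pd

MEASURES = [
    "hba1c_tested",   # process: HbA1c testing
    "ldl_tested",     # process: LDL testing
    "statin_used",    # process: statin use
    "hba1c_lt_8",     # disease control: HbA1c < 8%
    "ldl_lt_100",     # disease control: LDL < 100 mg/dL
]

def group_measure_rates(beneficiary_year: pd.DataFrame) -> pd.DataFrame:
    """Mean achievement rate for each measure within each TIN and year."""
    return beneficiary_year.groupby(["tin", "year"])[MEASURES].mean()
```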

    Utilization-Based Outcome Rates

    Next, we examined 2 commonly considered outcome measures: admissions for major adverse cardiovascular events (MACEs) and diabetes. Admissions for MACEs were determined using ICD-9, CPT, and Healthcare Common Procedure Coding System codes for acute coronary syndrome, stroke, malignant dysrhythmia, sudden cardiac death, and coronary revascularization (eTable 3 in the Supplement).24-26 Admissions for diabetes were determined using the Agency for Healthcare Research and Quality’s prevention quality indicator (PQI 93) for diabetes.27 The frequency distribution of indications for admission within PQI 93 is displayed in eTable 4 in the Supplement.
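
    A hedged sketch of flagging MACE admissions per beneficiary-year follows. The diagnosis prefixes are illustrative placeholders only; the study's actual definition (eTable 3 in the Supplement) also uses CPT and Healthcare Common Procedure Coding System procedure codes, and diabetes admissions follow the PQI 93 specification.

```python
# Minimal sketch (illustrative code set, not the study's eTable 3 definition)
# of flagging MACE admissions per beneficiary-year from inpatient claims.
import pandas as pd

MACE_DX_PREFIXES = ("410", "433", "434", "436")  # placeholder ICD-9 prefixes

def flag_mace_admission(inpatient: pd.DataFrame) -> pd.Series:
    """Return a 1/0 flag per (member_id, year) for any admission whose
    principal diagnosis matches the illustrative MACE code set.

    Assumes `inpatient` has columns member_id, year, principal_dx (ICD-9)."""
    is_mace = inpatient["principal_dx"].str.startswith(MACE_DX_PREFIXES)
    return (
        is_mace.groupby([inpatient["member_id"], inpatient["year"]])
        .max()
        .astype(int)
    )
```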

    Statistical Analysis

    We first examined variation in the group-level rates of adherence to diabetes quality measures and admission rates and used hierarchical logistic regression models to compute risk-adjusted clinical practice group–level performance. Next, we examined the correlations among process measures, disease control measures, and utilization-based outcomes. Specifically, we computed mean rates for each measure in each group and pairwise correlations between measures across clinical practice groups. We then adjusted these correlations for all patient factors listed in Table 1. Because measurement error in group rates can bias estimated correlations toward zero, we adjusted the pairwise correlations using a Spearman-Brown reliability adjustment to account for sampling variation in the group-level estimates.28,29 This adjustment accounts for the estimated reliability of each measure, which we computed as a function of the mean number of patients with diabetes within all physician practices and the intraclass correlation coefficient for each quality measure. We used a generalized estimating equation model with a logit link to determine the magnitude of association between clinical practice group performance on process and disease control measures and the likelihood of hospitalization for a MACE or diabetes at the individual level. This model was adjusted for patient-level variables. Two-sided P < .05 was considered significant in all analyses. All statistical analyses were performed using SAS version 9.4 (SAS Institute Inc) and R version 3.5.1 (R Project for Statistical Computing).
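
    For the reliability adjustment, one standard formulation applies the Spearman-Brown formula to the group mean and then corrects the observed correlation for measurement error in both group rates. The sketch below uses placeholder values, not study estimates, and is not the authors' code.

```python
# Sketch of a standard Spearman-Brown reliability (disattenuation) correction.
import math

def group_reliability(icc: float, mean_n: float) -> float:
    """Spearman-Brown reliability of a group mean based on `mean_n` patients
    and an intraclass correlation coefficient `icc`."""
    return (mean_n * icc) / (1.0 + (mean_n - 1.0) * icc)

def disattenuated_correlation(r_obs: float, rel_x: float, rel_y: float) -> float:
    """Observed correlation corrected for measurement error in both group rates."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Example with placeholder values: ICC = 0.02, mean of 100 patients per group.
rel = group_reliability(icc=0.02, mean_n=100)   # ≈ 0.67
print(disattenuated_correlation(r_obs=-0.05, rel_x=rel, rel_y=rel))
```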

    Results

    Baseline characteristics for the cohort with diabetes from 2010 through 2014 are displayed in Table 1. The cohort included 652 258 patients from 886 clinical practice groups. The largest age group (42.9%) was 51 to 60 years, and slightly more patients were men than women (52.6% vs 47.4%). Most beneficiaries lived in areas that were predominantly white (68.1%), with a median area-level income of approximately $65 691 per year. Most (94.1%) resided in urban areas, and more than half (53.4%) lived in the South. Only 8.4% had type 1 diabetes. Approximately one-half had hypertension (53.3%) and hyperlipidemia (49.4%), but only 10.4% had ischemic heart disease.

    Clinical practice group performance on current diabetes quality measures and their rates of hospitalization are displayed in Table 2. In terms of process measure performance, the mean (SD) rates of achievement were high for current testing measures of LDL level (83% [8%]; median, 84% [interquartile range {IQR}, 80%-88%]) and HbA1c (87% [7%]; median, 88% [IQR, 84%-91%]), with little variation (coefficient of variation [CV], 9.3% and 7.5%, respectively) among clinical practice groups. Drug use rates were lower (mean [SD], 58% [9%]; median, 58% [IQR, 53%-63%]) but had more variation (CV, 15.0%). Disease control measure rates, including HbA1c of less than 8% (mean [SD], 70% [8%]; median, 70% [IQR, 65%-75%]; CV, 12.1%) and LDL levels of less than 100 mg/dL (mean [SD], 59% [8%]; median, 59% [IQR, 54%-64%]; CV, 13.6%), were slightly higher than drug use rates and had similar variation among clinical practice groups. The rates of hospitalization for MACEs (mean [SD], 2% [1%]; median, 1% [IQR, 1%-2%]; CV, 78.6%) and diabetes (mean [SD], 3% [3%]; median, 2% [IQR, 1%-3%]; CV, 66.8%) were low in this population but demonstrated high variation among clinical practice groups.

    Correlations between different quality measures at the group level were highly variable; when adjusted for patient covariates, they became closer to zero. Unadjusted correlations are displayed in eTable 5 in the Supplement. Adjusted correlations are displayed in Table 3. Hemoglobin A1c of less than 8% had the strongest correlation with hospitalization for MACEs (r = −0.046; P = .03), whereas LDL testing had the weakest correlation (r = −0.013; P > .05). Hemoglobin A1c of less than 8% also had the strongest correlation with hospitalization for diabetes (r = −0.109; P < .001), whereas LDL levels of less than 100 mg/dL had the weakest (r = −0.047; P = .02).

    Next, we examined the association among clinical practice group performance on process and disease control measures and utilization-based outcomes at the enrollee level. The results are displayed in Table 4. Three quality measures were significantly associated with hospitalizations, including LDL testing (estimate, −1.223 [95% CI, −1.950 to −0.497]), statin use (estimate, −0.603 [95% CI, −0.985 to −0.222]), and LDL level of less than 100 mg/dL (estimate, −0.364 [95% CI, −0.703 to −0.026]). A 10–percentage point improvement in statin use at the clinical group level was associated with an 11% decrease in the odds of hospitalization (odds ratio [OR], 0.89; 95% CI, 0.82-0.95), and a 10–percentage point improvement in the rates of LDL levels of less than 100 mg/dL was associated with a 6% decrease in odds of hospitalization (OR, 0.94; 95% CI, 0.91-0.98). Rates of HbA1c of less than 8% at the clinical practice group level were not significantly associated with an individual’s likelihood of hospitalization. Together, performance on these 5 measures of quality explained 3.9% of the variation in likelihood of hospitalization for a MACE or diabetes at the individual level.

    Discussion

    We analyzed a large, national cohort of commercially insured patients aged 18 to 65 years to determine the associations among process (testing and drug use) measures, disease control measures, and utilization-based outcomes in diabetes, a common chronic condition. We found testing measure rates to be high with little variation among clinical practice groups, whereas drug use and disease control measure rates were lower with more variation between clinical practice groups. The rates of hospitalization were low but demonstrated greater variability among clinical practice groups. In addition, we found that associations between process, disease control, and utilization-based outcomes were low and became even smaller when adjusted for patient covariates. Finally, we found that changes in a clinical practice group's performance on current process and disease control measures had a minimal association with its rates of hospitalization for MACEs or diabetes: overall clinical practice group performance on those 5 measures explained only 3.9% of the variation in the likelihood of hospitalization, and only 3 measures (LDL testing, statin use, and LDL level <100 mg/dL) were significantly associated with hospitalization at the individual level.

    The correlations described in this study were weaker than those previously described in acute diseases. In a population of patients hospitalized with acute myocardial infarction, the correlation between process measure performance and patient outcomes was higher (r = 0.40; P < .001).7,8 This stronger correlation may be due to the sicker nature of hospitalized patients or to the lack of a time lag, higher event rates, and less confounding in acute diseases. By comparison, in our study, the strongest correlation we observed between process or disease control measures and utilization-based outcomes was only −0.109. Given that the observed correlations decreased with adjustment for patient covariates, we can also conclude that the negative correlations between process or disease control measures and utilization-based measures were weak and likely biased away from the null by residual confounding. Thus, the true correlation, without selection effects, would be even weaker or perhaps even positive. We believe this represents a notable absence of a correlation that has been widely presumed to exist in prior iterations of quality measurement in chronic diseases. To further substantiate this finding, we estimated the association of changes in a clinical practice group's quality measure performance with hospitalization rates. We found all of these estimates to be quite small. We determined that a clinical practice group's performance on these 5 measures taken together explained only 3.9% of the variation between groups' rates of hospitalizations.

    These findings have several potential explanations. First, as demonstrated in this study, process measures are topped out, with little variation among clinical practice groups. Although not surprising in light of the incentives that have been placed on process measures in recent decades,30 this ceiling effect limits the measures' ability to discriminate among clinical practice groups. In contrast, drug use and disease control measures demonstrated lower levels of achievement and higher levels of variation, suggesting that these measures may be better able to distinguish quality among clinical practice groups. Whether it is practically feasible, however, to calculate drug use and disease control measures for large populations remains unknown.

    A potential explanation for the weak association among drug use, disease control performance, and utilization-based outcome rates is that higher hospitalization rates may not always be indicative of poor outpatient care in chronic diseases, such as diabetes. To test this hypothesis, we estimated the association of changes in a clinical practice group's quality measure achievement rate with its patients' individual rates of hospitalization for a MACE or diabetes. Perhaps most interestingly, changes in rates of HbA1c of less than 8% at the clinical practice group level were not significantly associated with hospitalization rates. Because we know from robust clinical trial data that improved disease control results in better outcomes at the individual patient level, these findings at the clinical group level suggest that, on the margin, higher hospitalization rates may not always be indicative of poorer outpatient care and that the use of hospitalization rates as a measure of clinical quality in chronic diseases may merit additional study.

    Moreover, in contrast to prior work in acute diseases in which quality performance on process and disease control measures explained approximately 6% of the variation in patient outcomes,7 in this study, clinical practice group performance on all 5 process and disease control measures explained only 3.9% of the variation in utilization-based outcome rates between clinical practice groups. This finding suggests, at the very least, that clinical practice groups’ performance on current quality measures should not be used to predict utilization-based outcome rates in chronic diseases and begins to question the validity of using utilization-based outcome rates as a quality measure in chronic diseases.

    Limitations

    This study does not attempt to prove (or disprove) a causal relationship between quality measure performance and utilization-based outcome rates; rather, it describes the associations between performance on commonly used quality metrics at the clinical practice group level. First, this study uses administrative data that lack the granularity of clinical data, so we could adjust only for patient-level characteristics that are identifiable in claims data. Although all of the correlations and models presented are adjusted for patient-level characteristics, adjustment for unmeasured confounding would likely result in lower correlations at the clinical practice group level. Although these lower correlations may limit the predictive capacity of our models, they accurately mirror the real world, where payers often have access only to administrative claims data. Second, this work, like current reimbursement methods, relies on accurately attributing patients to a provider and/or a clinical practice group responsible for most of their care.31 The current Centers for Medicare & Medicaid Services algorithm attributes patients to a primary care clinical practice group; this study did the same and added endocrinologists to reflect the study's focus on diabetes care. Notably, allowing endocrinology visits into the attribution algorithm could bias the associations. For example, if patients with worse diabetes see their endocrinologist more often or if they see an endocrinologist more often after hospitalizations or when HbA1c increases, that factor could strengthen the observed association between HbA1c of less than 8% and hospitalizations for diabetes. However, although the association between HbA1c level and admission for diabetes was the strongest association we observed between utilization-based outcomes and quality measure performance, it remains weak by traditional standards. Third, we consider LDL levels of less than 100 mg/dL as a disease control measure. Although this definition does not reflect the 2013 cholesterol guideline change (which eliminated specific LDL targets),32 it is appropriate for most of this clinical period and still reflects much of clinical practice.33 Fourth, these results were derived from the commercially insured population aged 18 to 65 years; associations may be higher (or lower) among patients who are more likely to experience frequent hospitalizations (ie, older patients and those with less socioeconomic support). Finally, this study is cross-sectional and does not attempt to establish a causal relationship between disease control performance and utilization-based outcomes at an individual level. Rather, it compares quality performance and clinical outcomes by clinical practice group in the same year in an attempt to better understand the implications of using these metrics as measures of quality in chronic diseases.

    Conclusions

    In this study, the associations among different types of diabetes quality measures were weak, and much variation in the rates of utilization-based outcomes was unexplained by clinical practice group performance on traditional process and disease control measures. This outcome may be due in part to the topped-out nature of process measures, but the weak association between clinically robust disease control measures and hospitalization rates, the modest difference in hospitalization rates based on process and disease control performance, and the small amount of variation between clinical practice group hospitalization rates explained by process and disease control performance all raise concern about the validity of utilization-based outcomes as a measure of quality in chronic diseases. In chronic diseases such as diabetes, more hospitalizations may not necessarily be evidence of poor outpatient care, which has significant implications for quality-based reimbursement in chronic disease management.

    Back to top
    Article Information

    Accepted for Publication: June 23, 2019.

    Published: August 14, 2019. doi:10.1001/jamanetworkopen.2019.9139

    Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Gilstrap LG et al. JAMA Network Open.

    Corresponding Author: Mary Beth Landrum, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (landrum@hcp.med.harvard.edu).

    Author Contributions: Dr Landrum had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

    Concept and design: Gilstrap, Chernew, Nguyen, McWilliams, Landon, Landrum.

    Acquisition, analysis, or interpretation of data: Gilstrap, Chernew, Nguyen, Alam, Bai, McWilliams, Landrum.

    Drafting of the manuscript: Gilstrap, Alam, Bai.

    Critical revision of the manuscript for important intellectual content: All authors.

    Statistical analysis: Gilstrap, Nguyen, Alam, Bai, McWilliams, Landrum.

    Obtained funding: Gilstrap, Chernew.

    Administrative, technical, or material support: Chernew, Nguyen.

    Supervision: Landon.

    Conflict of Interest Disclosures: Drs Chernew and McWilliams and Ms Nguyen reported receiving grants from the Laura and John Arnold Foundation during the conduct of the study. Dr Landrum reported receiving grants from the Laura and John Arnold Foundation during the conduct of the study and grants from Pfizer, Inc, outside the submitted work. No other disclosures were reported.

    Funding/Support: This study was supported by a grant from the Laura and John Arnold Foundation and in part by grant K23HL142835-01 from the National Institutes of Health, National Heart, Lung, and Blood Institute (Dr Gilstrap).

    Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

    References
    1.
    New England Journal of Medicine Catalyst. What is pay for performance in healthcare? https://catalyst.nejm.org/pay-for-performance-in-healthcare/. Published March 1, 2018. Accessed June 6, 2019.
    2.
    American Medical Association. Understanding Medicare’s merit-based incentive payment system (MIPS). https://www.ama-assn.org/practice-management/understanding-medicare-merit-based-incentive-program-mips. Accessed July 9, 2019.
    3.
    Centers for Medicare & Medicaid Services. Medicare shared savings program: quality measure benchmarks for the 2018 and 2019 reporting years. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Downloads/2018-and-2019-quality-benchmarks-guidance.pdf. Published February 2019. Accessed April 1, 2019.
    4.
    Centers for Medicare & Medicaid Services. Guide to quality performance standards for accountable care organizations starting in 2012: pay for reporting and pay for performance. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Downloads/ACO-Guide-Quality-Performance-2012.PDF. Published 2012. Accessed April 1, 2019.
    5.
    Medicare Payment Advisory Commission (MedPAC). June 2018 report to the Congress: Medicare and the health care delivery system. Applying the Commission’s principles for measuring quality: population-based measures and hospital quality incentives. http://www.medpac.gov/docs/default-source/reports/jun18_ch7_medpacreport_sec.pdf?sfvrsn=0. Published June 2018. Accessed April 1, 2019.
    6.
    Medicare Payment Advisory Commission. Public meeting. http://medpac.gov/docs/default-source/default-document-library/jan-2018-meeting-transcript.pdf?sfvrsn=0. Published January 11, 2018. Accessed July 9, 2019.
    7.
    Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78. doi:10.1001/jama.296.1.72
    8.
    Peterson ED, Roe MT, Mulgund J, et al. Association between hospital process performance and outcomes among patients with acute coronary syndromes. JAMA. 2006;295(16):1912-1920. doi:10.1001/jama.295.16.1912
    9.
    Granger CB, Steg PG, Peterson E, et al; GRACE Investigators. Medication performance measures and mortality following acute coronary syndromes. Am J Med. 2005;118(8):858-865. doi:10.1016/j.amjmed.2005.01.070
    10.
    Parast L, Doyle B, Damberg CL, et al. Challenges in assessing the process-outcome link in practice. J Gen Intern Med. 2015;30(3):359-364. doi:10.1007/s11606-014-3150-0
    11.
    Lilford RJ, Brown CA, Nicholl J. Use of process measures to monitor the quality of clinical practice. BMJ. 2007;335(7621):648-650. doi:10.1136/bmj.39317.641296.AD
    12.
    McGlynn EA. Six challenges in measuring the quality of health care. Health Aff (Millwood). 1997;16(3):7-21. doi:10.1377/hlthaff.16.3.7
    13.
    Gilstrap LG, Snipelisky D, AbouEzzeddine O, et al. Unanswered questions in contemporary heart failure. J Card Fail. 2017;23(10):770-774. doi:10.1016/j.cardfail.2017.06.009
    14.
    US Census Bureau. American Community Survey (ACS). https://www.census.gov/programs-surveys/acs/. Revised June 17, 2018. Accessed March 28, 2018.
    15.
    Rural Health Research Center. Rural-urban commuting area codes (RUCAs). http://depts.washington.edu/uwruca/ruca-about.php. Accessed July 9, 2019.
    16.
    Centers for Disease Control and Prevention. Census regions and divisions of the United States. https://www2.census.gov/geo/pdfs/maps-data/maps/reference/us_regdiv.pdf. Published March 28, 2018. Accessed July 9, 2019.
    17.
    Lo-Ciganic W, Zgibor JC, Ruppert K, Arena VC, Stone RA. Identifying type 1 and type 2 diabetic cases using administrative data: a tree-structured model. J Diabetes Sci Technol. 2011;5(3):486-493. doi:10.1177/193229681100500303
    18.
    Hamad R, Modrek S, Kubo J, Goldstein BA, Cullen MR. Using “big data” to capture overall health status: properties and predictive value of a claims-based health risk score. PLoS One. 2015;10(5):e0126054. doi:10.1371/journal.pone.0126054
    19.
    Centers for Medicare & Medicaid Services. Two-step attribution for measures included in the value modifier. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/Attribution-Fact-Sheet.pdf. Published August 2015. Accessed December 11, 2017.
    20.
    National Committee for Quality Assurance. Comprehensive diabetes care (CDC). https://www.ncqa.org/hedis/measures/comprehensive-diabetes-care/. Published 2019. Accessed July 9, 2019.
    21.
    Centers for Medicare & Medicaid Services. Quality payment program: MIPS overview. https://qpp.cms.gov/mips/overview. Accessed July 9, 2019.
    22.
    Centers for Medicare & Medicaid Services. Consensus core set: ACO and PCMH/primary care measures, version 1.0. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/QualityMeasures/Downloads/ACO-and-PCMH-Primary-Care-Measures.pdf. Updated February 3, 2016. Accessed July 9, 2019.
    23.
    Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25)(suppl 2):S1-S45. doi:10.1161/01.cir.0000437738.63853.7a
    24.
    Hawn MT, Graham LA, Richman JS, Itani KM, Henderson WG, Maddox TM. Risk of major adverse cardiac events following noncardiac surgery in patients with coronary stents. JAMA. 2013;310(14):1462-1472. doi:10.1001/jama.2013.278787
    25.
    Shih CJ, Chu H, Chao PW, et al. Long-term clinical outcome of major adverse cardiac events in survivors of infective endocarditis: a nationwide population-based study. Circulation. 2014;130(19):1684-1691. doi:10.1161/CIRCULATIONAHA.114.012717
    26.
    Hasvold P, Thuresson M, Sundström J, et al. Association between paradoxical HDL cholesterol decrease and risk of major adverse cardiovascular events in patients initiated on statin treatment in a primary care setting. Clin Drug Investig. 2016;36(3):225-233. doi:10.1007/s40261-015-0372-9
    27.
    Agency for Healthcare Research and Quality. Prevention quality indicators technical specifications updates: version 6.0 (ICD-9). https://www.qualityindicators.ahrq.gov/Modules/PQI_TechSpec_ICD09_v60.aspx. Published October 2016. Accessed July 9, 2019.
    28.
    Eisinga R, Grotenhuis Mt, Pelzer B. The reliability of a two-item scale: Pearson, Cronbach, or Spearman-Brown? Int J Public Health. 2013;58(4):637-642. doi:10.1007/s00038-012-0416-3
    29.
    Webb NM, Shavelson RJ, Haertel EH. Reliability coefficients and generalizability theory. In: Rao CR, Sinharay S, eds. Handbook of Statistics 26: Psychometrics. Amsterdam, the Netherlands: Elsevier BV; 2006.
    30.
    National Quality Forum. MAP 2018 considerations for implementing measures in federal programs: MIPS and MSSP. https://www.qualityforum.org/Projects/im/MAP/Clinician.../2018_Final_Report.aspx. Published March 15, 2018. Accessed June 20, 2018.
    31.
    Centers for Medicare & Medicaid Services. Two-step attribution for measures included in the value modifier. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/Attribution-Fact-Sheet.pdf. Updated March 2017. Accessed July 9, 2019.
    32.
    Stone NJ, Robinson JG, Lichtenstein AH, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2013 ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2014;63(25, pt B):2889-2934. doi:10.1016/j.jacc.2013.11.002
    33.
    Gilstrap LG, Chernew ME, Nguyen CSA, Bai B, Landrum MB. Trends in statin use and adherence and the impact of the 2013 cholesterol guidelines. Paper presented at: American Heart Association National Meeting; November 2018; Chicago, Illinois.