Figure. Performance on Diabetes and Cardiovascular Disease (CVD) Measures by 1400 Physician Groups After Adjustment. HbA1c indicates hemoglobin A1c; LDL-C, low-density lipoprotein cholesterol. To convert HbA1c to proportion of total hemoglobin, multiply by 0.01; to convert LDL-C level to millimoles per liter, multiply by 0.0259.

Table 1. Characteristics of Enrollees With Diabetes or CVD

Table 2. Distribution of Unadjusted Physician Group Performance on Diabetes and Cardiovascular Disease Measures

Table 3. Movement by 10 Percentiles or More After Adjustment Among 1400 Physician Groups
Original Investigation
Health Policy
March 29, 2019

Social Risk Adjustment of Quality Measures for Diabetes and Cardiovascular Disease in a Commercially Insured US Population

Author Affiliations
  • 1Department of Health Care Policy, Harvard Medical School, Boston, Massachusetts
  • 2Division of Cardiovascular Medicine, Department of Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
  • 3Department of Health Care Policy, The Dartmouth Institute, Dartmouth Medical School, Hanover, New Hampshire
  • 4Division of General Internal Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
  • 5Division of General Medicine and Primary Care, Beth Israel Deaconess Medical Center, Boston, Massachusetts
JAMA Netw Open. 2019;2(3):e190838. doi:10.1001/jamanetworkopen.2019.0838
Key Points

Question  Is social risk associated with physician group performance on diabetes and cardiovascular disease quality measures?

Findings  In this cross-sectional study of more than 1.6 million enrollees from a large US health insurance plan, adjusting for social risk factors reduced physician group–level variance in performance scores and reordered rankings, particularly for disease control measures for diabetes and use-based outcome measures for both diabetes and cardiovascular disease. Process measure performance did not change significantly following adjustment for social risk factors.

Meaning  Social risk adjustment can affect performance scores for disease control and use-based outcome measures and thus should be considered as a way to mitigate potential unintended consequences of pay-for-performance programs.

Abstract

Importance  Patients’ social risk factors may be associated with physician group performance on quality measures.

Objective  To examine the association of social risk with change in physician group performance on diabetes and cardiovascular disease (CVD) quality measures in a commercially insured population.

Design, Setting, and Participants  In this cross-sectional study using claims data from 2010 to 2014 from a US national health insurance plan, the performance of 1400 physician groups (physicians billing under the same tax identification number) was estimated. After base adjustments for age and sex, changes in variation across groups and reordering of rankings resulting from additional adjustments for clinical, social, or both clinical and social risk factors were analyzed. In all models, only within-group associations were adjusted to distinguish the association of patients’ social risk factors with outcomes while excluding physician groups’ distinct characteristics that could also change observed performance. Data analysis was conducted between April and July 2018.

Main Outcomes and Measures  Process measures (hemoglobin A1c [HbA1c] testing, low-density lipoprotein cholesterol [LDL-C] testing, and statin use), disease control measures (HbA1c and LDL-C level control), and use-based outcome measures (hospitalizations for ambulatory-sensitive conditions) were calculated with base adjustment (age and sex), clinical adjustment, social risk factor adjustment, and both clinical and social adjustments. Quality variance in physician group performance and changes in rankings following these adjustments were measured.

Results  This study identified 1 684 167 enrollees (859 618 [51%] men) aged 18 to 65 years (mean [SD] age, 50 [10.7] years) with diabetes or CVD. Performance rates were high for HbA1c and LDL-C level testing (mean ranged from 79.5% to 87.2%) but lower for statin use (54.7% for diabetes cohort and 44.2% for CVD cohort) and disease control measures (57.9% on LDL-C control for diabetes cohort and 40.0% for CVD cohort). On average, only 8.8% of enrollees with diabetes and 1.0% of enrollees with CVD in a group were hospitalized. The addition of clinical and social risk factors to base adjustment reduced variance across physician groups for most measures (percentage change in SD ranged from −13.9% to 1.6%). Although overall agreement between performance scores with base vs full adjustment was high, there was still substantial reordering for some measures. For example, social risk adjustment resulted in reordering for disease control in the diabetes cohort. Of the 1400 physician groups, 330 (23.6%) had performance rankings for HbA1c control that increased or decreased by at least 10 percentile points after adding social risk factors to age and sex. Both clinical and social risk adjustment affected rankings on hospital admissions.

Conclusions and Relevance  Accounting for social risk may be important to mitigate adverse consequences of performance-based payments for physician groups serving socially vulnerable populations.

Introduction

The US health care system’s shift toward alternative payment models has reinvigorated interest in adjusting performance measures to account for social risk.1-7 Compared with the traditional fee-for-service model, which rewards volume, alternative payment models commonly reward physicians and physician groups that deliver high-quality care while controlling spending. However, those that serve a disproportionate number of socially disadvantaged patients could be penalized by such a system. Adjusting performance for social risk factors is a way to address this issue.6,8-11 For example, US News Best Hospitals rankings account for social risk using hospital-level percentage of Medicaid patients.12

The use of social risk adjustment in payment models is controversial, as raised in reports by the National Academy of Medicine,13 US Department of Health and Human Services,2 and National Quality Forum.1 One argument is that high-risk patients are sicker and have higher health care costs, and the increased difficulty of caring for them is reflected in lower performance scores. Others have argued that because socially disadvantaged patients often receive care from lower-quality institutions, social risk adjustment may obscure true performance or excuse physicians and physician groups that deliver a lower standard of care to disadvantaged patients.1,2,13,14 However, risk adjustment methods that separate patient-level risk factors from group-level factors10,11,13,15 can be used to adjust for patient risk within physician groups while still identifying groups that provide low-quality care.

Much of the existing work on risk-adjusting performance measures focuses on health plans, hospitals, and Medicare populations. Plans such as those participating in Medicare Advantage already assume population-based risk, and previous research has found a significant association of unadjusted performance with social risk–adjusted performance at the health plan level.16-21 However, more recent work has found meaningful score changes and substantial reordering for the most penalized hospitals, physician groups, and physicians despite high correlations.21 Safety-net hospitals and physicians serving a greater number of dual-eligible Medicare enrollees have been shown to receive more penalties under US Centers for Medicare & Medicaid Services programs.22-26 Because most prior studies have focused on associations between unadjusted and adjusted scores, they may miss meaningful changes in observed performance for some physician groups. In addition, the effect of social risk adjustment at the group level and in commercial populations generally has not been well described.

Many population-based payment models have moved away from traditional performance measures, which were dominated by process measures, toward disease control measures, which have been linked to improved life expectancy and physical functioning—outcomes that matter most to patients. These measures can also be advantageous because most clinical outcomes are too rare to be measured reliably at the physician group level (defined as physicians billing under the same tax identification number [TIN]). However, disease control measures may be influenced by nonmedical factors and thus may be particularly sensitive to social risk adjustment.

This study investigated the association of adjusting performance measures for clinical and social risk factors with change in quality measures at the physician group level for a nonelderly, commercially insured population. Adjustment can change absolute scores, rankings, or both. Depending on incentive structures, pay-for-performance programs may take into account any one of these changes. We examined the association of adjustment with changes in scores for disease control measures in addition to other commonly used process measures and use-based outcome measures. We focused on diabetes and cardiovascular disease (CVD), both of which are prevalent and costly chronic diseases in the United States that continue to have suboptimal quality of care.27-29

Methods
Study Overview

This study used data from a large national health insurance plan from January 2010 to December 2014. The member data sets were deidentified, and we obtained institutional review board approval from Harvard University’s Committee on the Use of Human Subjects. The institutional review board did not require informed consent. Data analysis was conducted between April and July 2018. This article is compliant with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies.

We constructed 6 diabetes and 4 CVD measures of quality of care: 5 process, 3 disease control, and 2 use-based outcome measures. Adjustment can reduce performance variations across groups (resulting in changes to estimated performance on an absolute scale) and alter groups’ relative rankings. For each measure, we compared the SD, interdecile ranges (the difference between the 90th and 10th percentiles), and changes in rankings across groups resulting from 4 sets of adjustments.

In the base model, we adjusted for age (in 4 age categories: 18-35, 36-45, 46-55, 56-65 years) and sex. In the clinical adjustment model, we added a set of comorbidities (atrial fibrillation, hypertension, chronic obstructive pulmonary disease, heart failure, and chronic kidney disease), coded using the Centers for Medicare & Medicaid Services Chronic Condition Warehouse definitions,30 and a comorbidity score based on a DxCG Intelligence Version 5.0.0 (Cotiviti) prediction model. In the social risk–adjustment model, we added zip code–level variables to age and sex (without clinical variables). We used these area-level variables as proxies because individual-level sociodemographic characteristics are generally limited.1 Enrollees were linked to zip code–level sociodemographic characteristics from the 2010 US census (percentage of population who are black, Hispanic, and college educated) and the 2010-2014 American Community Survey 5-year estimates (percentage of population below poverty). We assigned enrollees to urban, suburban, and rural classifications based on zip code (eAppendix 1 in the Supplement). Finally, in the fully adjusted model, we included both clinical and social risk covariates in addition to age and sex.
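As an illustration only, the four covariate sets could be written as the following Stata sketch; the variable names (age_cat, male, the comorbidity indicators, dxcg_score, the zip code–level percentages, and urbanicity) are hypothetical stand-ins for the analytic variables described above, not the study's actual code.

```stata
* Hypothetical covariate sets for the four adjustment models
local base     "i.age_cat i.male"
local clinical "i.afib i.htn i.copd i.hf i.ckd dxcg_score"
local social   "pct_black pct_hispanic pct_college pct_poverty i.urbanicity"

* Base model:           `base'
* Clinical model:       `base' `clinical'
* Social risk model:    `base' `social'
* Fully adjusted model: `base' `clinical' `social'
```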

We used medical claims, pharmacy claims, and laboratory results data from 2010 to 2014 from a large national health insurance plan. Our sample included adults aged 18 to 65 years with either diabetes or CVD (based on 2013 Healthcare Effectiveness Data and Information Set eligibility criteria: ≥1 inpatient visit or ≥2 outpatient visits with an International Classification of Diseases, Ninth Revision [ICD-9] code for diabetes or CVD) who were continuously enrolled in at least 1 calendar (measurement) year between 2010 and 2014. Because pharmacy data were limited to enrollees with pharmacy benefits from the same insurer (47.7% for diabetes cohort and 45.7% for CVD cohort) and laboratory results were available for only some enrollees (52.4% for diabetes cohort and 43.6% for CVD cohort), depending on where they underwent testing, some measures were limited to these subsets.

For each year, we attributed each enrollee to the physician group (defined by TIN) accounting for the plurality of the enrollee’s office visits during that year (eAppendix 2 in the Supplement). Enrollees with the same number of visits to multiple TINs were assigned to the group with the greatest sum of allowed costs.31 To ensure sufficient sample size, we restricted our sample to physician groups with at least 40 attributed enrollees with diabetes and at least 40 with CVD, of whom 20 from each cohort had to have laboratory and pharmacy data for every relevant measure.
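As an illustration of this attribution rule, the following Stata sketch assumes a claims extract with one record per office-visit claim and hypothetical variable names (enrollee, year, tin, claim_id, allowed_amt); it is a sketch under those assumptions, not the study's actual code.

```stata
* Count office visits and sum allowed costs for each enrollee-year-TIN pair
collapse (count) visits=claim_id (sum) allowed=allowed_amt, by(enrollee year tin)

* Within each enrollee-year, sort ascending by visit count and then allowed
* costs and keep the last record: the TIN with the plurality of visits, with
* ties broken by the greatest sum of allowed costs
bysort enrollee year (visits allowed): keep if _n == _N
rename tin attributed_tin
```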

Study Variables

We constructed 10 quality measures defined by standard specifications (eAppendix 3 in the Supplement). For diabetes, these included 3 process measures (hemoglobin A1c [HbA1c] testing, low-density lipoprotein cholesterol [LDL-C] testing, and statin use [≥1 filled prescription over the measurement year]), 2 disease control measures (HbA1c level control [<8%; to convert to proportion of total hemoglobin, multiply by 0.01] and LDL-C level control [<100 mg/dL; to convert to millimoles per liter, multiply by 0.0259]), and 1 use-based outcome measure (no hospital admissions for major adverse cardiovascular events [MACE] or diabetes32). For CVD, we constructed measures for LDL-C testing, statin use, LDL-C level control, and hospital admissions for MACE. Although the blood cholesterol guidelines switched from being LDL-C–based to risk factor–based in 2013,33 we included LDL-C control because it is relevant for most of our data’s time frame (2010-2012 and most of 2013). Measures were dichotomized (1 for adherence, 0 otherwise) and coded so that higher scores indicated better performance.
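For example, the dichotomization might be coded as in the sketch below, with hypothetical enrollee-level variables (last_hba1c, last_ldl, statin_fills, admissions) summarizing each measurement year; this is illustrative, not the measure specifications themselves.

```stata
* Disease control: last HbA1c of the year below 8% and last LDL-C below 100 mg/dL
gen byte hba1c_control = (last_hba1c < 8)  if !missing(last_hba1c)
gen byte ldl_control   = (last_ldl < 100)  if !missing(last_ldl)

* Process: at least 1 statin fill during the measurement year
gen byte statin_use = (statin_fills >= 1) if !missing(statin_fills)

* Use-based outcome, coded so that higher is better: no MACE/diabetes admission
gen byte no_admission = (admissions == 0) if !missing(admissions)
```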

Statistical Analysis

We examined the association of enrollees’ clinical and social risk factors with group performance. Specifically, we computed an aggregate performance score for each group as an average of its unadjusted scores on the individual measures and then placed groups into quartiles based on this score.
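A minimal Stata sketch of this step, assuming a group-level file with one row per TIN and unadjusted measure rates stored in variables with the hypothetical prefix rate_, is shown below.

```stata
* Composite score: mean of the group's unadjusted rates across the 10 measures
egen agg_score = rowmean(rate_*)

* Quartiles of aggregate performance (quartile 4 = highest aggregate score)
xtile perf_quartile = agg_score, nq(4)
```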

If disadvantaged enrollees are disproportionately served by low-performing physician groups, then social risk adjustment could mask poor quality. We addressed this problem using a 2-step adjustment process31 (eAppendix 4 in the Supplement). First, we fit linear regression models with physician group fixed effects and computed a predicted risk score for each enrollee on each measure based on the relevant covariates. This risk score estimated the likelihood of adherence to the quality metric as a function of patient characteristics, holding group quality constant, as if there were no systematic sorting of enrollees to groups. By basing our adjustments on within-group associations, we captured the associations of patients’ social risk factors while excluding groups’ distinct characteristics associated with observed performance.
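A sketch of step 1 for a single measure appears below, using the hypothetical covariate names from the earlier sketches; areg absorbs the TIN fixed effects, and predict with the xb option returns the covariate-based prediction excluding the group effect, which corresponds to the within-group risk score described above.

```stata
* Step 1: linear model with physician group (TIN) fixed effects
areg hba1c_control i.age_cat i.male i.afib i.htn i.copd i.hf i.ckd dxcg_score ///
    pct_black pct_hispanic pct_college pct_poverty i.urbanicity, absorb(tin)

* Predicted risk score = x*b only (the absorbed TIN effect is excluded)
predict double risk_score, xb
```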

Second, to compute group-level performance scores, we estimated mixed-effects logistic regression models that related the actual performance on a given measure to both the risk score computed in step 1 and group random effects. This calculation represents the deviation between observed and expected performance, given a group’s sociodemographic, clinical, and social risk characteristics (depending on the model). Because the mixed-effects logistic model is nonlinear, we standardized across groups using estimated random effects evaluated at the sample mean for all covariates.
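Continuing the same hypothetical example, step 2 might look like the following sketch: a mixed-effects logistic model of observed adherence on the step-1 risk score with a TIN random intercept, with each group's score then standardized at the sample-mean risk score.

```stata
* Step 2: mixed-effects logistic model with a TIN random intercept
melogit hba1c_control risk_score || tin:

* Empirical Bayes prediction of each group's random effect
predict double re_tin, reffects

* Standardized group score: predicted probability at the sample-mean risk score
summarize risk_score, meanonly
local rbar = r(mean)
gen double std_score = invlogit(_b[_cons] + _b[risk_score] * `rbar' + re_tin)
* std_score is constant within TIN and serves as the group-level performance score
```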

Because we identified some systematic differences in observed characteristics between the original cohort and those without pharmacy and/or laboratory data, we standardized the samples across measures. Specifically, we gave each enrollee a weight that was the inverse of the probability that they had laboratory (for disease control measures) or pharmacy (for statin use measures) data. This process gave additional weight to enrollees with data most like those in the full sample and produced samples balanced according to observed characteristics. We estimated probability weights using logistic regression models with availability of data as the dependent variable and the full set of covariates (including all clinical and social risk variables).
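A sketch of the weighting step for the laboratory-based measures follows, again with hypothetical variable names (has_lab is an indicator for having laboratory results); the same approach would apply to pharmacy data for the statin measures.

```stata
* Model the probability of having laboratory data as a function of all covariates
logit has_lab i.age_cat i.male i.afib i.htn i.copd i.hf i.ckd dxcg_score ///
    pct_black pct_hispanic pct_college pct_poverty i.urbanicity
predict double p_lab, pr

* Inverse-probability weight for enrollees with laboratory data
gen double ipw_lab = 1 / p_lab if has_lab == 1

* The weight then enters the measure models, eg:
* areg hba1c_control ... [pweight = ipw_lab], absorb(tin)
```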

We first estimated the SD and interdecile range of the groups’ performances for each of the adjustment types. We then calculated agreement in performance across approaches by computing intraclass correlation coefficients between performance estimates using each type of adjustment and the base adjustment model. Finally, to understand whether some groups would experience meaningful ranking changes, we computed the percentages of physician groups with rankings that increased or decreased at least 10 percentile points after adjustment (for example, a group that moved from the 25th to the 35th percentile).
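The spread and reranking calculations can be sketched as follows, assuming a group-level file with one row per TIN and hypothetical variables score_base and score_full holding the base-adjusted and fully adjusted scores (the intraclass correlation calculation is omitted here for brevity).

```stata
* Interdecile range (90th minus 10th percentile) of fully adjusted scores
_pctile score_full, p(10 90)
scalar idr_full = r(r2) - r(r1)
display "interdecile range (full adjustment) = " idr_full

* Percentile ranks under each adjustment and moves of at least 10 points
egen rank_base = rank(score_base)
egen rank_full = rank(score_full)
gen pct_base = 100 * (rank_base - 1) / (_N - 1)
gen pct_full = 100 * (rank_full - 1) / (_N - 1)
gen byte big_move = abs(pct_full - pct_base) >= 10
tabulate big_move
```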

We tested for differences in patient characteristics across quartiles using trend tests by fitting regression models with quartile as a linear variable. P values are 2-sided, and we considered P < .05 to be statistically significant. All analyses were performed using Stata statistical software, version 15.1 (StataCorp).
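For the trend tests, a minimal sketch using perf_quartile from the earlier sketch and a hypothetical enrollee characteristic (pct_poverty) is:

```stata
* Trend test across performance quartiles: the characteristic is regressed on
* quartile entered as a linear term; the 2-sided P value on perf_quartile is
* the test for trend
regress pct_poverty perf_quartile
```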

Results

Our final sample included 1 684 167 unique enrollees (3 069 277 enrollee-years, including 135 485 enrollee-years of diabetes only, 2 339 949 enrollee-years of CVD only, and 593 843 enrollee-years of both) treated by 1400 physician groups. More than half (859 618 [51%]) of enrollees were male, and the mean (SD) age was 50 (10.7) years. There was variation in individuals’ sociodemographic and social risk factors across physician groups (Table 1). For example, physician groups in the top quartile of performance (based on unadjusted mean performance across measures) had a higher percentage of enrollees who were male, aged 56 to 65 years, and had hypertension. In addition, compared with lower-performing groups, those in the highest performance quartile treated more enrollees from zip codes with a greater percentage of white and college-educated individuals and fewer enrollees from zip codes with high rates of poverty.

Mean performance rates were high for measures of testing in both cohorts (mean ranged from 79.5% to 87.2%) (Table 2). Mean performance was lower, and variation across groups higher, for statin use (54.7% for diabetes cohort and 44.2% for CVD cohort) and disease control measures (57.9% on LDL-C control for diabetes cohort and 40.0% for CVD cohort). For example, the mean (interdecile range) for HbA1c control was 69.4% (62.5%-75.7%). Only 1.0% of CVD enrollees had an admission for MACE, with little variation across groups. Hospitalization in the diabetes cohort was higher (8.8%) and more variable across groups (interdecile range for the no-admission measure, 88.3%-93.6%).

Most clinical and social risk factors were significantly associated with all quality measures at the individual level (eTables 1 and 5-7 in the Supplement). The explanatory powers of the individual covariates were similar across adjustment models, and their directions were consistent with prior research. For most measures, younger age, male sex, and a higher percentage of college-educated individuals in the zip code were associated with higher performance. Most comorbidities, more rural geography, and higher zip code percentages of black residents and of residents below the poverty level were associated with lower performance. The DxCG prediction model composite score was negatively associated with performance on process measures and hospitalizations for ambulatory-sensitive conditions but positively associated with disease control. The percentage of Hispanic individuals in an enrollee’s zip code had inconsistent associations.

Variation Across Physician Groups After Adjustment

Variance across physician groups decreased after full adjustment for most measures (percentage change in SD ranged from −13.9% for HbA1c level control in the diabetes cohort to 1.6% for hospital admission in the CVD cohort) (eTable 2 in the Supplement). Adjustment for social risk factors reduced variation across groups in disease control and hospital admissions in the diabetes cohort (Figure). For example, the interdecile range for performance on HbA1c level control in the diabetes cohort decreased from 12.8 percentage points with base adjustment to 10.6 percentage points after full adjustment. In contrast, the interdecile range in HbA1c testing in the diabetes cohort changed minimally. Variation across groups in admissions for the diabetes cohort was sensitive to both clinical and social risk adjustment.

Performance Scores and Rankings

Overall agreement between group-level performance scores with base adjustment vs adjustment for clinical, social risk, or both factors was high (eTable 3 in the Supplement). However, agreement between base-adjusted and fully adjusted performance was weaker for the admission measures (intraclass correlation coefficient, 0.84 for diabetes and 0.76 for CVD). Performance on these use-based outcomes was most heavily affected by adjustment for clinical variables.

Overall agreement between unadjusted and adjusted scores can be high despite important changes in scores and rankings for some physician groups, particularly those that treat a disproportionate number of enrollees with greater clinical and social risk. We found performance ranking increases or decreases of at least 10 percentile points after social risk adjustment in 330 physician groups (23.6%) for HbA1c level control and in 129 (9.2%) for LDL-C level control for diabetes (Table 3 and eTable 4 in the Supplement). Both clinical and social risk factors reordered rankings for admissions measures, although the effect of clinical variables was larger.

Discussion

In this study of commercially insured enrollees with diabetes or CVD, we found that adherence to standard quality measures was associated with patients’ social risk factors. Although risk variables were significant at the individual patient level, our results show similar rankings in group-level performance after case-mix adjustment, consistent with previous studies focusing on health plans16,18 and hospitals.34,35 However, high agreement between scores with and without social risk adjustment should not be interpreted as evidence that such adjustments altered performance scores minimally, as we also found that adjustment resulted in changes in performance rankings for a subset of physician groups as well as a substantial reduction in performance variation on some measures that did not result in reordering. Thus, payment programs relying on these measures with limited adjustments could penalize groups serving socially disadvantaged patients.

Moreover, the method of risk adjustment is important. Traditional regression approaches to social risk adjustment could mistakenly attribute low quality to social risk factors, masking truly poor performance.1,2,13 This would result in a lower standard of care for patients with greater clinical and social risk. However, two 2018 articles by Roberts et al11,15 demonstrated how risk adjustment can be based on within-group associations between patient characteristics and quality outcomes, thereby excluding physician groups’ distinct contributions. Thus, it is important to estimate the association of social risk factors with quality measures within groups, as we did in this study, because doing so accounts for the sorting of patients to groups and avoids this bias. Adjusting for within-group associations separated the association of patients’ social risk factors with quality measures from between-group differences in quality.

Overall, there was greater variance reduction and rank reordering from social risk adjustment for disease control and use-based outcome measures than for process measures. This may be a result of process measures being topped out, in that most physician groups achieve high scores, or it could be because patient outcomes may be more vulnerable than process measures to other, unmeasured confounders. In both cohorts, we observed little change after adjustment for statin use, a measure with lower and more variable performance across groups that has not been frequently used in alternative payment models. Disease control measures have become increasingly important in quality measurement systems as proxies for less common outcomes, but they are less controllable than process measures, which are easier to achieve. Thus, the association of social risk adjustment with disease control measures is important to consider when developing programs that evaluate quality performance. In this study, we considered disease control measures that can be determined using standard laboratory claims data. Other important disease control measures, such as blood pressure control, would be more challenging and costly to evaluate, as they would require medical record review.

For some measures, including hospitalizations for ambulatory-sensitive conditions, clinical variables had a greater association with variation and rankings across physician groups. However, the addition of social risk factors changed rankings in disease control for diabetes even after adjustment for clinical factors, suggesting that the associations of social risk with disease control are not entirely manifested through poor clinical health among socially disadvantaged enrollees. For example, blood glucose levels are largely determined by diet, exercise, and lifestyle (including the ability to adhere to medications and treatment), all of which are associated with socioeconomic factors. In contrast, rankings were reordered less following social risk adjustment for LDL-C performance. This may be because statins are quite effective, and even patients with poor lifestyle habits can achieve good LDL-C levels. Importantly, our findings refute the assertion that risk adjustment with minimal changes to variance implies a corresponding lack of change in quality ranking.13

The association of social risk adjustment with performance has different implications for physician groups and their patients under various pay-for-performance schemes. If the size of the bonus or penalty depends on the score’s deviation from a mean (eg, Accountable Care Organization programs, Value-Based Payment Modifier), variance is important even if there is no reordering. On the other hand, if rewards and penalties are based on rankings, variance reduction without reordering would not matter. We found evidence of sizable ranking changes among a meaningful minority of physician groups as well as variance reduction in disease control measures, which would affect bonuses and penalties in pay-for-performance programs that base scores on either deviations from a mean or rankings. Both variance reduction and rank reordering are important for public reporting.

Risk adjustment is not the only factor contributing to the validity and reliability of performance measures. Measurement error and statistical noise in small samples are well-known factors, and rankings are particularly sensitive to these issues.36 Differing methods for aggregating individual measures into composite scores can also reorder rankings.37,38

Limitations

Our study has limitations. First, our data were not sufficient to examine the association of individual-level risk factor adjustments with changes in quality measures. Because collection of these data continues to be limited,1 area-level social risk factors are being more widely considered in policy and serve as proxies for both individual-level sociodemographic characteristics and characterizations of patients’ communities. It may be more practical to use area-level factors and avoid adding to the burden of data collection. Although recent work has shown that aggregate proxies at the 9-digit zip code level can provide more precision,39 we only had access to 5-digit zip codes in our data. Nevertheless, adjustment for individual-level data would likely produce even larger variance reduction and reordering, and our analysis of area-level factors highlights the importance of improving the collection and availability of data on individual-level risk factors in the future.13 Second, with our data, we could only examine use-based outcomes and not other clinical outcomes. The normative interpretation of use-based outcomes can be ambiguous because some social risk factors may be associated with factors that predict lower demand for and use of care (eg, insurance coverage); for example, increased admissions may not always signal poorer quality of care, particularly for chronic diseases. Third, we found moderate but important changes after adjustment in this younger, commercially insured population. Adjustment for social risk factors in more diverse populations would likely be associated with larger changes to variance and rankings.

Conclusions

As alternative payment models increasingly rely on standard quality measures of physician and physician group performance to define and reward high-quality care, our findings suggest that inadequate risk adjustment could counterproductively reduce payments to groups whose patient populations would benefit most from additional resources, including social interventions. Physician group rankings on disease control measures are among those most altered by social risk adjustment, particularly for diabetes. Use of these measures to determine group payment without adjustment for social risk factors could lead to fewer resources for physicians caring for populations with greater clinical and social risk and exacerbate disparities in care.

Article Information

Accepted for Publication: January 22, 2019.

Published: March 29, 2019. doi:10.1001/jamanetworkopen.2019.0838

Correction: This article was corrected on May 10, 2019, to fix an omission in eAppendix 3 in the Supplement.

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Nguyen CA et al. JAMA Network Open.

Corresponding Author: Mary Beth Landrum, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (landrum@hcp.med.harvard.edu).

Author Contributions: Dr Landrum had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: All authors.

Acquisition, analysis, or interpretation of data: Nguyen, Gilstrap, Chernew, McWilliams, Landrum.

Drafting of the manuscript: Nguyen.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Nguyen, Gilstrap, Chernew, McWilliams, Landrum.

Obtained funding: Chernew.

Administrative, technical, or material support: Nguyen, Gilstrap.

Supervision: Nguyen, Gilstrap, Chernew.

Conflict of Interest Disclosures: Ms Nguyen and Drs Chernew, McWilliams, and Landrum reported grants from the Laura and John Arnold Foundation during the conduct of the study. Dr Landrum reported grants from Pfizer outside the submitted work. No other disclosures were reported.

Funding/Support: This research was supported by a grant from the Laura and John Arnold Foundation.

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Meeting Presentation: This article was presented at the 2018 AcademyHealth Annual Research Meeting; June 25, 2018; Seattle, Washington.

References
1. National Quality Forum. Risk Adjustment for Socioeconomic Status or Other Sociodemographic Factors: Technical Report. Washington, DC: National Quality Forum; 2014.
2. US Department of Health and Human Services. Report to Congress: Social Risk Factors and Performance Under Medicare’s Value-Based Payment Programs: A Report Required by the Improving Medicare Post-Acute Care Transformation (IMPACT) Act of 2014. Washington, DC: Office of the Assistant Secretary for Planning and Evaluation; 2016.
3. Joynt KE, De Lew N, Sheingold SH, Conway PH, Goodrich K, Epstein AM. Should Medicare value-based purchasing take social risk into account? N Engl J Med. 2017;376(6):510-513. doi:10.1056/NEJMp1616278
4. Rose S, Zaslavsky AM, McWilliams JM. Variation in accountable care organization spending and sensitivity to risk adjustment: implications for benchmarking. Health Aff (Millwood). 2016;35(3):440-448. doi:10.1377/hlthaff.2015.1026
5. Fiscella K, Burstin HR, Nerenz DR. Quality measures and sociodemographic risk factors: to adjust or not to adjust. JAMA. 2014;312(24):2615-2616. doi:10.1001/jama.2014.15372
6. Anderson RE, Ayanian JZ, Zaslavsky AM, McWilliams JM. Quality of care and racial disparities in Medicare among potential ACOs. J Gen Intern Med. 2014;29(9):1296-1304. doi:10.1007/s11606-014-2900-3
7. National Quality Forum. A Roadmap for Promoting Health Equity and Eliminating Disparities: The Four I’s for Health Equity. Washington, DC: National Quality Forum; 2017.
8. Lewis VA, Larson BK, McClurg AB, Boswell RG, Fisher ES. The promise and peril of accountable care for vulnerable populations: a framework for overcoming obstacles. Health Aff (Millwood). 2012;31(8):1777-1785. doi:10.1377/hlthaff.2012.0490
9. Yasaitis LC, Pajerowski W, Polsky D, Werner RM. Physicians’ participation in ACOs is lower in places with vulnerable populations than in more affluent communities. Health Aff (Millwood). 2016;35(8):1382-1390. doi:10.1377/hlthaff.2015.1635
10. Jha AK, Zaslavsky AM. Quality reporting that addresses disparities in health care. JAMA. 2014;312(3):225-226. doi:10.1001/jama.2014.7204
11. Roberts ET, Zaslavsky AM, McWilliams JM. The value-based payment modifier: program outcomes and implications for disparities. Ann Intern Med. 2018;168(4):255-265. doi:10.7326/M17-1740
12. Austin JM, Jha AK, Romano PS, et al. National hospital ratings systems share few common scores and may generate confusion instead of clarity. Health Aff (Millwood). 2015;34(3):423-430. doi:10.1377/hlthaff.2014.0201
13. Committee on Accounting for Socioeconomic Status in Medicare Payment Programs; Board on Population Health and Public Health Practice; Board on Health Care Services; Institute of Medicine; National Academies of Sciences, Engineering, and Medicine. Accounting for Social Risk Factors in Medicare Payment: Identifying Social Risk Factors. Washington, DC: National Academies Press; 2016.
14. Krumholz HM, Bernheim SM. Considering the role of socioeconomic status in hospital outcomes measures. Ann Intern Med. 2014;161(11):833-834. doi:10.7326/M14-2308
15. Roberts ET, Zaslavsky AM, Barnett ML, Landon BE, Ding L, McWilliams JM. Assessment of the effect of adjustment for patient characteristics on hospital readmission rates: implications for pay for performance. JAMA Intern Med. 2018;178(11):1498-1507. doi:10.1001/jamainternmed.2018.4481
16. Zaslavsky AM, Hochheimer JN, Schneider EC, et al. Impact of sociodemographic case mix on the HEDIS measures of health plan quality. Med Care. 2000;38(10):981-992. doi:10.1097/00005650-200010000-00002
17. Kim M, Zaslavsky AM, Cleary PD. Adjusting pediatric Consumer Assessment of Health Plans Study (CAHPS) scores to ensure fair comparison of health plan performances. Med Care. 2005;43(1):44-52.
18. Zaslavsky AM, Zaborski LB, Ding L, Shaul JA, Cioffi MJ, Cleary PD. Adjusting performance measures to ensure equitable plan comparisons. Health Care Financ Rev. 2001;22(3):109-126.
19. Zaslavsky AM, Zaborski L, Cleary PD. Does the effect of respondent characteristics on consumer assessments vary across health plans? Med Care Res Rev. 2000;57(3):379-394. doi:10.1177/107755870005700307
20. Elliott MN, Swartz R, Adams J, Spritzer KL, Hays RD. Case-mix adjustment of the national CAHPS benchmarking data 1.0: a violation of model assumptions? Health Serv Res. 2001;36(3):555-573.
21. Durfey SNM, Kind AJH, Gutman R, et al. Impact of risk adjustment for socioeconomic status on Medicare Advantage plan quality rankings. Health Aff (Millwood). 2018;37(7):1065-1072. doi:10.1377/hlthaff.2017.1509
22. Chen LM, Epstein AM, Orav EJ, Filice CE, Samson LW, Joynt Maddox KE. Association of practice-level social and medical risk with performance in the Medicare Physician Value-Based Payment Modifier Program. JAMA. 2017;318(5):453-461. doi:10.1001/jama.2017.9643
23. Markovitz AA, Ellimoottil C, Sukul D, et al. Risk adjustment may lessen penalties on hospitals treating complex cardiac patients under Medicare’s bundled payments. Health Aff (Millwood). 2017;36(12):2165-2174. doi:10.1377/hlthaff.2017.0940
24. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343. doi:10.1001/jama.2012.94856
25. Joynt Maddox KE. Financial incentives and vulnerable populations: will alternative payment models help or hurt? N Engl J Med. 2018;378(11):977-979. doi:10.1056/NEJMp1715455
26. Gilman M, Adams EK, Hockenberry JM, Milstein AS, Wilson IB, Becker ER. Safety-net hospitals more likely than other hospitals to fare poorly under Medicare’s value-based purchasing. Health Aff (Millwood). 2015;34(3):398-405. doi:10.1377/hlthaff.2014.1059
27. Landon BE, Hicks LS, O’Malley AJ, et al. Improving the management of chronic disease at community health centers. N Engl J Med. 2007;356(9):921-934. doi:10.1056/NEJMsa062860
28. Grant RW, Buse JB, Meigs JB; University HealthSystem Consortium (UHC) Diabetes Benchmarking Project Team. Quality of diabetes care in US academic medical centers: low rates of medical regimen change. Diabetes Care. 2005;28(2):337-442. doi:10.2337/diacare.28.2.337
29. Chin MH, Auerbach SB, Cook S, et al. Quality of diabetes care in community health centers. Am J Public Health. 2000;90(3):431-434. doi:10.2105/AJPH.90.3.431
30. Centers for Medicare & Medicaid Services. CCW chronic conditions: combined Medicare and Medicaid data. https://www.ccwdata.org/web/guest/home. Accessed November 5, 2018.
31. Centers for Medicare & Medicaid Services. Two-step attribution for measures included in the value modifier. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/PhysicianFeedbackProgram/Downloads/Attribution-Fact-Sheet.pdf. Accessed November 5, 2018.
32. Agency for Healthcare Research and Quality. Prevention quality indicators (PQI) log of ICD-9-CM, ICD-10-CM/PC, and DRG coding updates and revisions to PQI documentation and software through version 6.0. https://www.qualityindicators.ahrq.gov/Downloads/Modules/PQI/V60/ChangeLog_PQI_v60.pdf. Accessed November 5, 2018.
33. Stone NJ, Robinson JG, Lichtenstein AH, et al. ACC/AHA guideline on the treatment of blood cholesterol to reduce atherosclerotic cardiovascular risk in adults: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25)(suppl 2):S1-S45.
34. Bernheim SM, Parzynski CS, Horwitz L, et al. Accounting for patients’ socioeconomic status does not change hospital readmission rates. Health Aff (Millwood). 2016;35(8):1461-1470. doi:10.1377/hlthaff.2015.0394
35. O’Malley AJ, Zaslavsky AM, Elliott MN, Zaborski L, Cleary PD. Case-mix adjustment of the CAHPS Hospital survey. Health Serv Res. 2005;40(6, pt 2):2162-2181.
36. Goldstein H, Spiegelhalter DJ. League tables and their limitations: statistical issues in comparisons of institutional performance. J R Stat Soc Ser A Stat Soc. 1996;159(3):385-443.
37. Jacobs R, Goddard M, Smith PC. How robust are hospital ranks based on composite performance measures? Med Care. 2005;43(12):1177-1184. doi:10.1097/01.mlr.0000185692.72905.4a
38. McDowell A, Nguyen CA, Chernew ME, et al. Comparison of approaches for aggregating quality measures in population-based payment models. Health Serv Res. 2018;53(6):4477-4490. doi:10.1111/1475-6773.13031
39. Kilgore K, Teigland C, Pulungan Z. Using aggregate data to proxy individual-level socioeconomic characteristics in research on medication adherence: 9-digit ZIP code vs. census block group. Poster presented at: Academy of Managed Care Pharmacy Nexus 2018; October 22-25, 2018; Orlando, FL. http://avalere-health-production.s3.amazonaws.com/uploads/pdfs/1540232294_Aggregate_Data_Poster_Final.pdf. Accessed November 5, 2018.