eMethods. Present on Admission–Exempt International Classification of Diseases, Ninth Revision, Clinical Modification Code Methodology
eFigure 1. Flowchart to Determine Which Individual ICD-9-CM Codes Were Selected for the Individual-Codes Logistic Regression Model
eFigure 2. Comparison of Receiver Operating Characteristic Curves
eFigure 3. Kernel Density Plots Comparing the Log Odds of the CMS and Individual-Codes Patient-Level 30-Day Mortality Models
eFigure 4. Comparison of Distribution of Hospital Risk-Standardized Mortality Rates for CMS vs Individual-Codes Hospital-Level 30-Day Mortality Models
eTable 1. Shift Tables Comparing the Predicted Risk of the CMS and Individual-Codes Models
eTable 2. Top 50 ICD-9-CM Codes Selected by the Individual-Codes Model for Acute Myocardial Infarction Compared With Their Corresponding Version 22 HCC Codes
eTable 3. Top 50 ICD-9-CM Codes Selected by the Individual-Codes Model for Heart Failure Compared With Their Corresponding Version 22 HCC Codes
eTable 4. Top 50 ICD-9-CM Codes Selected by the Individual-Codes Model for Pneumonia Compared With Their Corresponding Version 22 HCC Codes
eTable 5. Centers for Medicare & Medicaid Services Publicly Reported Performance Categories for the CMS Model Compared With the Individual-Codes Model for 30-Day Mortality Measures Among Hospitals With at Least 25 Cases
Krumholz HM, Coppi AC, Warner F, et al. Comparative Effectiveness of New Approaches to Improve Mortality Risk Models From Medicare Claims Data. JAMA Netw Open. 2019;2(7):e197314. Published online July 17, 2019. doi:10.1001/jamanetworkopen.2019.7314
Could present on admission indicators and ungrouped diagnostic codes enhance risk models for acute myocardial infarction, heart failure, and pneumonia mortality measures and improve discrimination of hospital-level performance?
In this comparative effectiveness study including all Medicare fee-for-service beneficiaries hospitalized for acute myocardial infarction, heart failure, or pneumonia at acute care hospitals, incorporating present on admission coding and ungrouped historical and index admission International Classification of Diseases, Ninth Revision, Clinical Modification codes was associated with greater discrimination in patient-level and hospital-level 30-day mortality risk models.
Changes incurring no additional cost could enhance the risk adjustment for mortality and increase discrimination of hospital-level performance.
Risk adjustment models using claims-based data are central in evaluating health care performance. Although US Centers for Medicare & Medicaid Services (CMS) models apply well-vetted statistical approaches, recent changes in the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) coding system and advances in computational capabilities may provide an opportunity for enhancement.
To examine whether changes using already available data would enhance risk models and yield greater discrimination in hospital-level performance measures.
Design, Setting, and Participants
This comparative effectiveness study used ICD-9-CM codes from all Medicare fee-for-service beneficiary claims for hospitalizations for acute myocardial infarction (AMI), heart failure (HF), or pneumonia among patients 65 years and older from July 1, 2013, through September 30, 2015. Changes to current CMS mortality risk models were applied incrementally to patient-level models, and the best model was tested on hospital performance measures to model 30-day mortality. Analyses were conducted from April 19, 2018, to September 19, 2018.
Main Outcomes and Measures
The main outcome was all-cause death within 30 days of hospitalization for AMI, HF, or pneumonia, examined using 3 changes to current CMS mortality risk models: (1) incorporating present on admission coding to better exclude potential complications of care, (2) separating index admission diagnoses from those of the 12-month history, and (3) using ungrouped ICD-9-CM codes.
There were 361 175 hospital admissions (mean [SD] age, 78.6 [8.4] years; 189 225 [52.4%] men) for AMI, 716 790 hospital admissions (mean [SD] age, 81.1 [8.4] years; 326 825 [45.6%] men) for HF, and 988 225 hospital admissions (mean [SD] age, 80.7 [8.6] years; 460 761 [46.6%] men) for pneumonia during the study; mean 30-day mortality rates were 13.8% for AMI, 12.1% for HF, and 16.1% for pneumonia. Each change to the models was associated with incremental gains in C statistics. The best model, incorporating all changes, was associated with significantly improved patient-level C statistics, from 0.720 to 0.826 for AMI, 0.685 to 0.776 for HF, and 0.715 to 0.804 for pneumonia. Compared with current CMS models, the best model produced wider predicted probabilities with better calibration and Brier scores. Hospital risk-standardized mortality rates had wider distributions, with more hospitals identified as good or bad performance outliers.
Conclusions and Relevance
Incorporating present on admission coding and using ungrouped index and historical ICD-9-CM codes were associated with improved patient-level and hospital-level risk models for mortality compared with the current CMS models for all 3 conditions.
Risk models using administrative claims–based data play a central role in evaluating health care performance, setting payments, and conducting research.1-7 We hypothesized that 2 approaches could improve the performance of these models. First, the models could potentially improve by using present on admission (POA) codes. In 2014, the US Centers for Medicare & Medicaid Services (CMS) began mandating hospitals to add POA designations, which denoted conditions that predated the hospitalization.8 Many diagnosis codes used exclusively on the index hospitalization were excluded from models predicting outcomes. The CMS performance models, for example, excluded these diagnoses because they might have represented complications associated with clinical quality. The use of these codes with the knowledge that they were present on admission would increase the number of codes available for risk adjustment. Second, claims-based models often combine diagnoses into clinically coherent groups to reduce the number of variables. For example, CMS bases many of its models on modifications of the hierarchical condition categories (HCCs) to group codes.9,10 In the CMS mortality model, for instance, the codes for historical diagnoses and procedures from the previous 12 months are combined with codes from the index admission into 1 set of risk variables. A potential limitation is that the codes that compose a group may have different associations with the outcome, whereas the group's overall effect is a weighted mean. Also, the effect of lower-frequency codes could be overwhelmed by that of higher-frequency codes. Advancements in computational capabilities and analytical methods enable us to handle much larger amounts of information efficiently and provide an opportunity to consider risk variables using ungrouped codes.11
Accordingly, we tested whether leveraging POA coding and using individual codes rather than grouped codes could be associated with improved model performance. These changes use data already available in claims and incur no additional marginal cost. Specifically, we explored 3 changes to current patient-level risk models: (1) incorporating POA coding to distinguish conditions present at the time of admission from those emerging during hospitalization, (2) separating POA diagnosis codes from those present in encounters during the prior 12 months, and (3) disaggregating codes currently used in the risk variable groupers. The best-performing models were compared with publicly reported CMS hospital performance measures. This assessment focused on the publicly reported 30-day mortality measures for acute myocardial infarction (AMI), heart failure (HF), and pneumonia.
We applied the cohort definitions for the CMS 30-day all-cause mortality measures after hospitalization for AMI,5 HF,6 or pneumonia.12 Medicare Standard Analytic and denominator files identified all hospitalizations at acute care hospitals with a principal discharge diagnosis of AMI, HF, or pneumonia from July 1, 2013, through September 30, 2015. We defined cohorts with the same International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)13 codes used in the CMS publicly reported mortality measures. We chose to focus on a period exclusively using ICD-9-CM codes because it was not practical to combine both ICD-9-CM and International Statistical Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) codes in the same models. We did not have enough data coded in ICD-10-CM to conduct these analyses exclusively in the ICD-10-CM data. We included hospitalizations for patients 65 years or older. We excluded hospitalizations from which patients were discharged against medical advice and for patients with less than 1 year of prior enrollment in Medicare fee-for-service. We further excluded records for which POA coding was missing for the principal diagnosis of the index admission. We linked transfers into a single episode of care and assigned patients to the index admitting hospital. We used Medicare claims for 12 months before the index admission. The Human Investigation Committee at Yale University approved an exemption for this study to use CMS claims and enrollment data and waived the requirement for informed consent because the research involved no more than minimal risk and could not be practicably carried out without the waiver. Data analyses were conducted from April 19, 2018, to September 19, 2018.
The outcome was death from any cause within 30 days of the hospital admission date for AMI, HF, and pneumonia. Death date was identified by data in CMS enrollment files or from inpatient claim discharge status for death during the index hospitalization.
We tested 3 changes to current risk models: (1) incorporating POA coding to better distinguish conditions that were POA from those that were complications of care, (2) separating diagnosis codes in the index admission from those coded within the prior 12 months, and (3) disaggregating codes within risk variable groupers and using individual ICD-9-CM codes instead. The changes were applied incrementally to patient-level models, with the best model then tested at the hospital level. To develop and test the potential changes, we used 5-fold cross-validation to control for model overfitting, with each iteration training generalized linear models (GLMs) on 80% of the cohort and then testing on the remaining 20%.
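The 5-fold scheme described above can be sketched as follows. This is a minimal illustration of the 80%/20% train-test partitioning only, not the authors' implementation; the function name and seeding are assumptions.

```python
import random

def five_fold_indices(n_records, seed=0):
    """Partition record indices into 5 disjoint folds and return the
    (train, test) index pairs for each cross-validation iteration.
    Illustrative sketch; fold-construction details are assumptions."""
    idx = list(range(n_records))
    random.Random(seed).shuffle(idx)
    folds = [idx[k::5] for k in range(5)]
    splits = []
    for k in range(5):
        test = folds[k]
        # Train on the remaining 4 folds (~80% of the cohort).
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        splits.append((train, test))
    return splits
```

In each of the 5 iterations, a generalized linear model would be fit on the train indices and evaluated on the held-out test indices.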
Present on admission coding identifies conditions present at admission. The current CMS mortality measures’ risk models, developed before the use of POA indicators, exclude specific diagnoses coded in the index admission that could represent complications of care.14 We tested whether incorporating POA coding into the CMS model could be associated with improved patient-level mortality model discrimination. The official CMS POA exempt list was reviewed by a panel of clinical experts, and conditions that were judged to have any significant potential to be a complication of care were removed (eMethods in the Supplement). For example, code V66.7 (encounter for palliative care) was in the 2016 CMS POA exempt list but we removed it from the clinically vetted modified list.
In the CMS model, each grouper-based variable indicates whether at least 1 diagnosis code of a category was found in claims during 12 months before admission or within the index admission. We tested whether discrimination improved when separating the historical conditions from the POA conditions of the index admission, treating them as distinct variables.
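As a rough sketch of this separation, each diagnosis code can yield 2 distinct binary risk variables, one for the index admission and one for the 12-month history, instead of the single combined indicator used by the grouper approach. The variable naming below is purely illustrative.

```python
def build_indicator_variables(index_poa_codes, history_codes):
    """Return binary risk variables that keep index-admission diagnoses
    distinct from those coded during the prior 12 months.
    Illustrative sketch; the 'index_'/'history_' prefixes are assumptions."""
    variables = {}
    for code in index_poa_codes:
        variables["index_" + code] = 1
    for code in history_codes:
        variables["history_" + code] = 1
    return variables
```

A code that appears in both periods then contributes 2 variables, allowing its index-admission and historical effects to be estimated separately.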
We incrementally compared patient-level mortality discrimination among risk variables of different types. First, we used the CMS risk variables. The risk variables of the publicly reported measure models comprise modified condition categories (MCCs), which capture patient-level severity and reduce dimensionality, and 4 non–diagnosis-based variables (age, sex, history of percutaneous coronary intervention, and history of coronary artery bypass). The MCCs were constructed from the 201 condition categories (CCs) that form the bottom level of the version 22 HCCs, using previously established methods.10 Modified condition categories may include individual CCs, groups of CCs, or subsets of codes from within CCs. For example, in the AMI model, the diabetes variable consists of any ICD-9-CM code within CC 17, CC 18, CC 19, or CC 123.
Next, we considered using the full set of CCs. To incrementally explore using more disaggregated groupers, we replaced the current MCCs with all 201 individual CCs. Finally, we investigated using individual ICD-9-CM codes, eliminating the use of a grouper. We replaced the MCCs with a subset of individual ICD-9-CM codes as risk variables. We restricted index admission ICD-9-CM codes to those that were identified as POA or those that were not explicitly identified as not POA while also on the modification of CMS’s POA-exempt list. Also, we considered all individual ICD-9-CM codes in claims of the 12-month history, pooling together inpatient, outpatient, and Part B settings. For CMS risk models, any groupers or risk factors with less than 1% frequency among all admissions are considered low frequency. For our study, we restricted to codes with a frequency of more than 0.5%, keeping index and history ICD-9-CM codes separate.
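The frequency restriction amounts to a simple filter over code counts. A minimal sketch, assuming per-code admission counts are available (function and argument names are assumptions):

```python
def filter_low_frequency_codes(code_counts, n_admissions, min_frac=0.005):
    """Drop candidate ICD-9-CM codes appearing in no more than min_frac
    (here 0.5%) of all admissions, per the restriction described in the
    text. Illustrative sketch; not the authors' implementation."""
    return {code: count for code, count in code_counts.items()
            if count / float(n_admissions) > min_frac}
```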
We incrementally incorporated the 3 proposed model changes in risk-adjusted GLMs at the patient level with the goal of choosing the best one to explore hospital-level performance within the practical constraints of fitting hospital-level hierarchical GLMs (HGLMs), given the large number of hospitals and admission records. All the GLM and HGLM models use a binomial distribution and a logit link function. In our experience, these hospital-level HGLMs need to be limited to a maximum of approximately 200 variables with low associated P values to reliably converge and be practical for the time-consuming bootstrapping required in CMS methodology. Hence, with the end goal of demonstrating a better separation of performance of the hospitals, we limited the modified patient-level models tested at the hospital level to 200 or fewer variables. The likelihood-ratio test was used to find P values. To further mitigate HGLM convergence issues, we required variables to have a 2-tailed P value less than .005 in the patient-level GLM.
To meet these requirements with a model based on individual ICD-9-CM codes, we used a basic variable-selection strategy. After excluding codes with less than 0.5% frequency, we used a least absolute shrinkage and selection operator parameter grid search to select the best set of variables, with total count not exceeding 200, from the combined set of index admission ICD-9-CM codes, history ICD-9-CM codes, and non–diagnosis-based variables in the CMS models. Next, a GLM was fit using these selected codes, and only those with a P value less than .005 were retained for the final model, hereafter the individual-codes model (eFigure 1 in the Supplement). Whenever 5-fold cross-validation was used, we only used the training set for code selection. The discrimination performance of each of the previously described model modifications was compared using the C statistic, testing the statistical significance of the differences using the method of DeLong et al.15
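The 200-variable constraint can be sketched as a scan over a LASSO penalty grid, taking the weakest penalty whose nonzero-coefficient count stays within the cap. In this sketch, `count_selected` is a stand-in for an actual LASSO fit at a given penalty; all names here are assumptions, not the authors' code.

```python
def choose_lasso_penalty(penalty_grid, count_selected, max_vars=200):
    """Scan penalties from weakest to strongest and return the first one
    whose selected-variable count does not exceed max_vars. A GLM would
    then be refit on the selected codes, retaining those with P < .005.
    count_selected(penalty) is assumed to return the number of nonzero
    LASSO coefficients at that penalty. Illustrative sketch only."""
    for penalty in sorted(penalty_grid):
        if count_selected(penalty) <= max_vars:
            return penalty
    return None  # no penalty on the grid met the cap
```

Because stronger LASSO penalties shrink more coefficients exactly to zero, scanning in ascending order returns the least restrictive penalty that satisfies the cap.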
We compared patient-level performance of the individual-codes model to CMS risk adjustment using the C statistic and the Brier score. Following CMS methodology, models were trained and tested on the full cohorts. The Brier score can be considered a measure of the calibration of a prediction model (lower is better) and is a combination of several components, including reliability, a measure of the error in a calibration curve (lower is better), and resolution, reflecting how well a model separates predictions from the mean event rate (higher is better). We also compared the calibration slope (closer to 1 is better) and predictive range (wider is better) across both models. We included additional model comparisons, such as receiver operating characteristic curves, calibration plots (eFigure 2 in the Supplement), log odds plots (eFigure 3 in the Supplement), and shift tables (eTable 1 in the Supplement), and listed the top codes that each individual-codes model selected (eTables 2, 3, and 4 in the Supplement).
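As a reading aid for these metrics, the Brier score and its reliability and resolution components can be computed from predicted probabilities and observed outcomes via the standard binned (Murphy) decomposition. This is a generic sketch, not the authors' code, and the bin count is an assumption.

```python
def brier_decomposition(probs, outcomes, n_bins=10):
    """Return (brier, reliability, resolution, uncertainty) using the
    binned Murphy decomposition, where approximately
    brier = reliability - resolution + uncertainty
    (exact when all forecasts within a bin are identical).
    Lower Brier and reliability are better; higher resolution is better."""
    n = float(len(probs))
    base_rate = sum(outcomes) / n
    brier = sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / n
    # Group predictions into equal-width probability bins.
    bins = {}
    for p, y in zip(probs, outcomes):
        k = min(int(p * n_bins), n_bins - 1)
        bins.setdefault(k, []).append((p, y))
    reliability = resolution = 0.0
    for members in bins.values():
        nk = len(members)
        p_bar = sum(p for p, _ in members) / float(nk)   # mean forecast
        o_bar = sum(y for _, y in members) / float(nk)   # observed rate
        reliability += nk * (p_bar - o_bar) ** 2
        resolution += nk * (o_bar - base_rate) ** 2
    uncertainty = base_rate * (1 - base_rate)
    return brier, reliability / n, resolution / n, uncertainty
```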
To assess the association of the proposed changes to patient-level risk adjustment with hospital performance profiling, we used the CMS HGLM approach to calculate the hospital risk-standardized mortality rates (RSMRs).16,17 We compared hospital-level performance using the CMS models with those incorporating the risk factors of the individual-codes models. The individual-codes models included 150, 182, and 186 variables for the AMI, HF, and pneumonia condition cohorts, respectively. We compared the distributions and between-hospital variances of RSMRs. A larger SD, interquartile range (IQR), and range would indicate that the RSMRs of different hospitals are spread out wider and thus more distinguishable. We used the F test (number of hospitals as degrees of freedom for both numerator and denominator) to examine whether between-hospital variances from HGLM models were equal. We used folded F tests to compare variances of the RSMR distributions. We also calculated the weighted Pearson correlation coefficient between the RSMRs using the CMS model and the individual-codes model, using the inverse of the variance of ICD-9-CM RSMRs as weights.18,19 Hospital RSMR variances were calculated using bootstrap methods defined in CMS public reporting.14 We used 5000 bootstraps for each condition for the CMS and individual-codes models.
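The inverse-variance-weighted correlation between the 2 sets of RSMRs follows directly from the weighted-moment definition; a generic sketch, not the authors' implementation:

```python
def weighted_pearson(x, y, weights):
    """Weighted Pearson correlation between paired hospital RSMRs,
    e.g. with the inverse of the RSMR variances as weights.
    Illustrative sketch of the standard weighted-moment formula."""
    sw = float(sum(weights))
    mx = sum(w * xi for w, xi in zip(weights, x)) / sw
    my = sum(w * yi for w, yi in zip(weights, y)) / sw
    cov = sum(w * (xi - mx) * (yi - my)
              for w, xi, yi in zip(weights, x, y)) / sw
    vx = sum(w * (xi - mx) ** 2 for w, xi in zip(weights, x)) / sw
    vy = sum(w * (yi - my) ** 2 for w, yi in zip(weights, y)) / sw
    return cov / (vx * vy) ** 0.5
```

Weighting by inverse variance downweights hospitals whose RSMR estimates are least precise, typically those with few cases.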
On its public Hospital Compare website,20 CMS reports annually on hospitals with at least 25 cases of AMI, HF, or pneumonia, using 3 performance categories: no different than the national rate, better than the national rate, or worse than the national rate. In addition to reporting point estimates and CIs, we classified hospitals into these categories according to whether the RSMR CIs contained the national rate.14 We compared the frequency of hospitals in each category and studied the hospital category shifts between the CMS and individual-codes models.
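The classification rule amounts to checking the RSMR interval estimate against the national rate. A minimal sketch, in which the interval stands in for the bootstrap CI described above:

```python
def performance_category(rsmr_ci, national_rate):
    """Assign one of the 3 Hospital Compare performance categories based
    on whether the hospital's RSMR CI contains the national rate.
    Illustrative sketch of the rule described in the text."""
    lower, upper = rsmr_ci
    if upper < national_rate:
        return "better than the national rate"
    if lower > national_rate:
        return "worse than the national rate"
    return "no different than the national rate"
```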
All GLM-related analyses were conducted in R statistical software version 3.3 (R Project for Statistical Computing) using the stats package or in Python version 2.7 (Python Software Foundation) using the scikit-learn or StatsModels packages. Our least absolute shrinkage and selection operator analysis was implemented using the GLMNET library in both R and Python. All hospital-level analyses were performed using SAS statistical software version 9.4 (SAS Institute).
The AMI, HF, and pneumonia cohorts are described in Table 1. There were 361 175 hospital admissions (mean [SD] age, 78.6 [8.4] years; 189 225 [52.4%] men) for AMI, 716 790 hospital admissions (mean [SD] age, 81.1 [8.4] years; 326 825 [45.6%] men) for HF, and 988 225 hospital admissions (mean [SD] age, 80.7 [8.6] years; 460 761 [46.6%] men) for pneumonia. The mean 30-day mortality rates were 13.8% for AMI, 12.1% for HF, and 16.1% for pneumonia.
Each additional change to the models resulted in incremental gains in C statistics (Table 2). The highest overall gains were achieved by the combination of changes incorporated in the individual-codes models, improving the CMS C statistics from 0.720 to 0.826 for AMI (P < .001), 0.685 to 0.776 for HF (P < .001), and 0.715 to 0.804 for pneumonia (P < .001).
Table 3 compares risk predictions and metrics of the patient-level CMS and individual-codes models. For these results, we did not use the 5-fold cross-validation used in Table 2 because we were comparing with the CMS model as implemented, which, following CMS methodology,14 is trained and tested on the full cohort. Using the individual index POA and history codes selected by our method (150 for AMI, 182 for HF, and 186 for pneumonia) was associated with substantially improved model performance compared with the CMS risk model for all 3 mortality measures. We found significantly higher C statistics for the individual-codes models for all 3 conditions, increasing from 0.720 to 0.828 for AMI, 0.685 to 0.778 for HF, and 0.715 to 0.805 for pneumonia. Brier scores for the individual-codes models were 0.092 for AMI, 0.102 for HF, and 0.111 for pneumonia; for the CMS models, the Brier scores were 0.110 for AMI, 0.102 for HF, and 0.126 for pneumonia. Resolution scores were 0.027 for AMI, 0.013 for HF, and 0.025 for pneumonia for individual-codes models and 0.0086 for AMI, 0.0044 for HF, and 0.001 for pneumonia for the CMS models. The 2 models had similar reliability for all 3 conditions.
The range of predicted probabilities of 30-day mortality was wider for the individual-codes models than for the CMS models (AMI, 0.003-0.999 vs 0.019-0.850; HF, 0.001-0.985 vs 0.015-0.724; pneumonia, 0.002-0.996 vs 0.016-0.852). Standard deviations of the predicted probabilities were also higher for the individual-codes models than for CMS models (AMI, 0.169 vs 0.099; HF, 0.122 vs 0.073; pneumonia, 0.163 vs 0.106).
A total of 4036 hospitals contributed claims for the AMI cohort, 4380 hospitals contributed claims for HF, and 4462 hospitals contributed claims for pneumonia. Of these, there were at least 25 cases at 2218 hospitals for AMI, 3323 hospitals for HF, and 3877 hospitals for pneumonia.
The RSMRs calculated using individual-codes models had a wider distribution compared with those using CMS models for HF and pneumonia measures (t test comparing the RSMRs from the 2 models had P < .001), while the AMI RSMR distribution appeared wider, although the difference was not statistically significant (Table 4; eFigure 4 in the Supplement). The between-hospital variances were higher in HGLMs using individual-codes models compared with CMS models, increasing from 0.035 to 0.047 for AMI (P < .001), 0.051 to 0.071 for HF (P < .001), and 0.048 to 0.180 for pneumonia (P < .001). The SDs, IQRs, and ranges for RSMRs using individual-codes models were also higher for all 3 conditions (Table 4). The model for AMI had the smallest change, while the model for pneumonia had the largest change, and its IQR almost doubled from 2.4 percentage points (IQR, 15.0%-17.4%) to 4.0 percentage points (IQR, 14.2%-18.3%). The weighted Pearson correlations between RSMRs for CMS models and individual-codes models were 0.83 for AMI, 0.83 for HF, and 0.73 for pneumonia.
When comparing performance categories assigned by the 2 models (Table 5), the individual-codes model identified more hospitals as better than or worse than the national mean for all 3 conditions. The extent of variation between models was different for all 3 conditions, with pneumonia having the largest increase. For example, using the individual-codes model compared with the CMS model, we identified 34 (1.5%) vs 30 hospitals (1.4%) as better than the national rate for AMI mortality, 185 (5.6%) vs 124 hospitals (3.7%) as better than the national rate for HF mortality, and 492 (12.7%) vs 192 hospitals (5.0%) as better than the national rate for pneumonia mortality. Similar differences were noted for hospitals identified as worse than the national rate. The shift in the performance categories using each model is shown in eTable 5 in the Supplement. Most of the better and worse outliers reported by CMS remained in the same categories when assigned by individual-codes models.
We introduce a new approach to the risk adjustment used in conducting research and profiling hospital mortality performance. The proposed methods use data already available in claims and therefore incur no additional marginal cost to include in risk models. By leveraging the introduction of POA coding, separating codes first noted on the index admission from those of the prior year, and disaggregating previously grouped codes, we markedly improved the discrimination of the risk-adjusted models for 30-day AMI, HF, and pneumonia mortality. In hospital profiling, this approach allowed us to detect more variation, identifying many more hospitals as better or worse than the national mean, which is important because lack of variation in performance has been identified as a problem with the current measures.21
The improvement associated with the use of POA coding would be easy to implement. For example, CMS 30-day mortality outcome measures, developed before POA information was reliably available, exclude a list of diagnoses that have the potential to be complications of care but could also convey valuable information about preadmission risk. Using the POA coding that is now available yields substantially improved patient-level model performance across all 3 conditions. Also, separating historical and index diagnoses was associated with model improvement. Incorporating these changes into the risk models is not difficult. Experiments using custom, data-driven groupers showed some improvement over traditional groupers, with better results as the number of categories increased and the sizes of the individual categories decreased. However, the most substantial improvement was associated with replacing risk factors based on traditional groupers with a larger number of variables consisting of individual diagnosis codes. Groupers may obscure low-frequency codes and the heterogeneity of effect among the constituent codes.
There are several implications of the findings of our approach to use ungrouped diagnostic codes with their associated POA designations and index or prior timing. They provide a rationale for revisiting the prior assumptions about our approach to risk adjustment for outcomes performance models. Groupers have been used to efficiently handle extremely large sets of variables and simplify the computations to model them, but we found that the grouping of variables may not be necessary owing to the availability of computationally advanced algorithms. Many stakeholders potentially benefit from more accurate measures.
Although examination of improvements in measure score validity resulting from these methods was beyond the scope of this study, the gains demonstrated in discrimination of the patient-level models could translate into more precise characterization of hospitals’ performance. More investigation is required to reach such a conclusion. However, because these measures are currently used to profile hospital quality and to assess payments for better or worse performance, there would be a clear benefit to patients and hospitals if gains in the performance of the patient-level model also enhanced the measures’ accuracy.
There are several potential limitations to consider. We restricted our analysis to claims using ICD-9-CM codes because of our interest in using a large sample and not mixing ICD-9-CM and ICD-10-CM codes. Our approach would need to be thoroughly tested with ICD-10-CM codes, although, from preliminary experiments, we do not expect the results to change substantially, even with a potentially greater impact from the 0.5% frequency restriction. With the greater specificity of ICD-10-CM codes, performance may improve further. Another potential issue is that our approach could ignore low-frequency codes; however, they do not exert much influence in groupers even when included. Additionally, using least absolute shrinkage and selection operator analysis for code selection could result in some relevant codes being dropped owing to correlation, possibly diminishing predictive power. We chose this method owing to its ease in selecting a specified number of codes as required by our hospital-level algorithm and to avoid possible issues that multicollinearity can cause in HGLM.22 Nevertheless, we were consistent in our approach in all of the new models, so it should not have biased our comparative assessment. Also, we chose not to modify the currently used hospital-level HGLM method. However, a 2-stage HGLM or other novel methods would have allowed us to consider more sophisticated machine learning methods for patient-level risk adjustment.23,24 In addition, our approach may not be applicable for data with extreme outcome rates in combination with extreme risk factor prevalence, in which case using logistic regression would be problematic owing to complete or quasi-complete separation. Moreover, the large number of variables included in the final models reduces computational efficiency.
Also, to our knowledge, there are few contemporary studies on the accuracy of POA codes, and the studies that have been published provide conflicting results on their accuracy.25-27 As such, there is a need for more research on this topic. Additionally, the use of individual diagnostic codes could inadvertently augment the effect of clinically insignificant differences in code use among hospitals. To address this possibility, it might be necessary to monitor the coding and potentially reselect codes and refit the model at more frequent intervals.
We identify several key strategies, made possible in part by an evolution of claims coding, to improve prediction models with implications for research and performance measurement. We found that incorporating POA coding into risk variable definitions, distinguishing between diagnoses coded in the claims within 12 months before the index admission from those coded at admission, and disaggregating from groupers into individual ICD-9-CM codes was associated with substantially improved patient-level and hospital-level models for AMI, HF, and pneumonia mortality. Disaggregating the groupers appeared to provide the largest improvement in many of the risk models. It also enabled hospital performance measures to identify more hospitals as better or worse than the national mean for all 3 conditions. These findings suggest that there may be opportunities to improve the risk models for research and outcome performance measures. In particular, further investigation of the effect of such changes on measure score validity might provide additional support for incorporating these methods in the future.
Accepted for Publication: May 28, 2019.
Published: July 17, 2019. doi:10.1001/jamanetworkopen.2019.7314
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2019 Krumholz HM et al. JAMA Network Open.
Corresponding Author: Harlan M. Krumholz, MD, SM, Section of Cardiovascular Medicine, Department of Internal Medicine, Yale School of Medicine, One Church Street, Ste 200, New Haven, CT 06510 (email@example.com).
Author Contributions: Drs Coppi and S.-X. Li had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Krumholz, Coppi, Warner, Triche, S.-X. Li, Bernheim, Dorsey.
Acquisition, analysis, or interpretation of data: Krumholz, Coppi, Warner, Triche, S.-X. Li, Mahajan, Y. Li, Grady, Lin, Normand.
Drafting of the manuscript: Krumholz, Coppi, Triche, S.-X. Li, Mahajan, Y. Li, Grady.
Critical revision of the manuscript for important intellectual content: Krumholz, Coppi, Warner, Triche, S.-X. Li, Mahajan, Bernheim, Dorsey, Lin, Normand.
Statistical analysis: Coppi, S.-X. Li, Y. Li, Normand.
Obtained funding: Krumholz, Dorsey.
Administrative, technical, or material support: Krumholz, Coppi, Triche, Mahajan, Grady, Dorsey, Lin.
Supervision: Krumholz, Coppi, S.-X. Li, Bernheim, Dorsey, Lin.
Conflict of Interest Disclosures: Dr Krumholz reported grants from Medtronic and the US Food and Drug Administration to develop methods for postmarket surveillance of medical devices (paid to Yale University); research agreements with Medtronic and Johnson and Johnson (Janssen) to develop methods of clinical trial data sharing (paid to Yale University); grants from Shenzhen Center for Health Information (paid to Yale University) outside the conduct of this study; personal fees from the Arnold and Porter Kaye Scholer for work related to the Sanofi clopidogrel patent litigation and the Law Offices of Ben C. Martin for work related to the Cook IVC filter litigation and from the Chinese National Center for Cardiovascular Diseases; serving as chair on the Cardiac Scientific Advisory Board for UnitedHealth Group, a participant or participant representative of the IBM Watson Health Life Sciences Board, a member of the advisory boards of Element Science and Facebook, and a member of the Physician Advisory Board for Aetna; and being the founder and co-owner of Hugo Health. Drs Krumholz, Coppi, Warner, Triche, S.-X. Li, Bernheim, Dorsey, Lin, and Normand and Mss Y. Li and Grady reported research funding from the US Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. Dr Normand reported a patent 201810345624.5 pending. No other disclosures were reported.
Funding/Support: The analyses on which this publication is based were performed under Measure and Instrument Development and Support contract HHSM-500-2013-13018I, Task Order HHSM-500-T0001–Development, Reevaluation, and Implementation of Outcome/Efficiency Measures for Hospital and Eligible Clinicians, Option Year 5, funded by CMS, an agency of the US Department of Health and Human Services.
Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; management, analysis, and interpretation of the data; preparation of the manuscript; and decision to submit the manuscript for publication. The funding organization reviewed and approved the manuscript for publication, and the claims data used for this study were collected through CMS administrative billing.
Disclaimer: The content of this article does not necessarily reflect the views or policies of the US Department of Health and Human Services nor does the mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.