Castro VM, McCoy TH, Perlis RH. Assessment of the Performance Consistency of an Adverse Outcome Prediction Tool for Patients Hospitalized With COVID-19. JAMA Netw Open. 2021;4(7):e2118413. doi:10.1001/jamanetworkopen.2021.18413
The challenge of managing limited resources during the COVID-19 pandemic has sparked efforts to stratify risk among hospitalized patients.1 Few risk models have been validated or investigated for potential bias2 even though inpatient populations, treatments, and outcomes for COVID-19 have changed over time. We previously3 reported and validated a risk prediction tool based on COVID-19 hospitalizations during the initial wave of the pandemic. In this study, we report the performance of that same model on subsequent data from 6 hospitals collected during the second wave of COVID-19 hospitalizations.
In this prognostic study, we included individuals aged 18 years or older who were hospitalized at 1 of 2 academic medical centers and 4 community hospitals from June 7, 2020, through January 22, 2021, with a positive polymerase chain reaction test for SARS-CoV-2 within 5 days of admission, excluding those with an outcome on the day of hospitalization. The study protocol was approved by the Mass General Brigham Human Research Committee, which waived informed consent because this was a minimal-risk study using deidentified data. This study followed the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) reporting guideline for validation studies.
Features of hospital course were extracted from the Mass General Brigham Data Registry4 and the Enterprise Data Warehouse, including laboratory values and high and low flags. The Charlson Comorbidity Index was calculated using coded International Statistical Classification of Diseases and Related Health Problems, Tenth Revision (ICD-10) diagnostic codes.5 Race and ethnicity were defined by patient self-report using US Census categories and were included to allow assessment of bias in model performance.
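The comorbidity calculation described above can be sketched as a prefix lookup from coded diagnoses to condition weights. The fragment below is a minimal illustration, not the study's actual implementation: only a few of the 17 Charlson conditions are shown, with weights following the commonly used Quan et al. ICD-10 mapping.

```python
# Illustrative Charlson-style scoring from ICD-10 code prefixes.
# Only a few conditions are shown; a full implementation would use a
# complete ICD-10-to-Charlson mapping (e.g., Quan et al.).
CHARLSON_WEIGHTS = {
    ("I21", "I22"): 1,         # myocardial infarction
    ("I50",): 1,               # congestive heart failure
    ("E10", "E11"): 1,         # diabetes
    ("C77", "C78", "C79"): 6,  # metastatic solid tumor
}

def charlson_score(icd10_codes):
    """Sum weights for each Charlson condition present among a patient's
    coded diagnoses; each condition counts at most once."""
    score = 0
    for prefixes, weight in CHARLSON_WEIGHTS.items():
        if any(code.startswith(p) for code in icd10_codes for p in prefixes):
            score += weight
    return score
```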
Patients were followed up from admission to hospital discharge or death, with follow-up censored at discharge. Primary outcomes were (1) a composite severe illness outcome, including admission to the intensive care unit (ICU), mechanical ventilation, or mortality and (2) mortality. Coefficients from our previously reported least absolute shrinkage and selection operator risk models were applied to estimate the probability of each outcome without recalibration; these coefficients were drawn from sociodemographic features, the comorbidity index, and laboratory values.3 We applied median imputation of missing data. We characterized model performance with standard metrics of discrimination and calibration. All analyses were conducted with R version 4 (R Project for Statistical Computing).
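The scoring pipeline described above (frozen coefficients, no recalibration, median imputation, a cutoff at the training set's top 20% of predicted risk) can be sketched as follows. This is a schematic illustration: the coefficient values, feature layout, and cutoff passed in are hypothetical placeholders, not the published model's values.

```python
import numpy as np

def apply_risk_model(X, coefs, intercept, train_medians, cutoff):
    """Score new patients with frozen logistic (LASSO-derived) coefficients,
    without refitting. X: (n_patients, n_features), NaN where missing."""
    X = np.where(np.isnan(X), train_medians, X)       # median imputation
    prob = 1.0 / (1.0 + np.exp(-(intercept + X @ coefs)))
    flagged = prob >= cutoff   # cutoff: 80th percentile of training-set risk
    return prob, flagged

def ppv_npv(flagged, outcome):
    """Positive and negative predictive value of the high-risk flag."""
    tp = np.sum(flagged & outcome)
    fp = np.sum(flagged & ~outcome)
    fn = np.sum(~flagged & outcome)
    tn = np.sum(~flagged & ~outcome)
    return tp / (tp + fp), tn / (tn + fn)
```

Because the coefficients and cutoff are fixed from the training period, any shift in outcome prevalence in the new cohort shows up directly in the predictive values rather than in the model itself.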
Features of the new cohort are summarized in Table 1 and compared with those of the previously reported cohort in which the predictive model was trained. For the 2892 individuals in the new cohort, the mean (SD) age was 63.0 (19.1) years; they included 1460 (50.5%) women, 673 (23.3%) Hispanic individuals, and 344 (11.9%) Black individuals. The mean (SD) length of hospital stay was 6.2 (5.3) days; 126 patients (4.4%) required an ICU stay and 68 (2.4%) mechanical ventilation, while 167 (5.8%) died prior to discharge. Overall model performance for mortality included an area under the receiver operating characteristic curve (AUC) of 0.83 (95% CI, 0.80-0.87), with a positive predictive value (PPV) of 0.22 and a negative predictive value (NPV) of 0.98 when using a cutoff corresponding to the highest 20% of predicted risk derived in the training set. By comparison, in the original model period,3 AUC was 0.85; PPV, 0.46; and NPV, 0.97. For the composite severe outcome, AUC was 0.78 (95% CI, 0.75-0.81); PPV, 0.25; and NPV, 0.95 in the top 20% risk group vs an AUC of 0.81, PPV of 0.55, and NPV of 0.91 in the original period.3 Among subgroups (Table 2), model discrimination for both outcomes was generally similar across sex and race/ethnicity groups but poorer for younger age groups.
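The AUC values reported above can be read as a rank statistic: the probability that a randomly chosen patient who experienced the outcome received a higher predicted risk than a randomly chosen patient who did not. A minimal pairwise (O(n²), purely illustrative) computation:

```python
def auc_pairwise(scores, labels):
    """AUC as the fraction of positive-negative pairs in which the
    positive case received the higher score (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Unlike PPV and NPV, this quantity depends only on the ordering of predicted risks, which is why discrimination can hold up even when outcome prevalence falls between study periods.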
Applying a previously validated model to 2892 new COVID-19 admissions at the same 6 hospitals, we found that model performance decreased only modestly from the initial validation study.3 A key exception was PPV, likely reflecting the substantial decline in mortality and mechanical ventilation rates between the original and subsequent study periods. Discrimination was generally consistent across subgroups, with the notable exception of younger age groups, in whom performance was poorer.
Our results indicate that the population of individuals hospitalized for COVID-19 has shifted and that the prevalence of the studied outcomes has changed. Nevertheless, they suggest that prediction models derived earlier in the pandemic may maintain discrimination, although recalibration may be needed to preserve predictive values as outcome prevalence shifts. A limitation is the reliance on 2 health systems in the same region. Our results also illustrate the importance of investigating risk stratification models across patient subgroups as a step toward ensuring that particular groups are not adversely affected by the application of such tools, particularly in settings of potential resource constraints.
Accepted for Publication: May 23, 2021.
Published: July 27, 2021. doi:10.1001/jamanetworkopen.2021.18413
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Castro VM et al. JAMA Network Open.
Corresponding Author: Roy H. Perlis, MD, MSc, Center for Quantitative Health, Division of Clinical Research, Massachusetts General Hospital, 185 Cambridge St, 6th Floor, Boston, MA 02114 (email@example.com).
Author Contributions: Dr Perlis had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: All authors.
Acquisition, analysis, or interpretation of data: Castro, Perlis.
Drafting of the manuscript: All authors.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: All authors.
Administrative, technical, or material support: Castro, McCoy.
Conflict of Interest Disclosures: Dr McCoy reported receiving grants from the Brain and Behavior Research Foundation, the National Institute of Mental Health, the National Institute of Nursing Research, the National Human Genome Research Institute, and Telefonica Alfa outside the submitted work. Dr Perlis reported holding equity in Psy Therapeutics and Outermost Therapeutics and receiving consulting fees from Belle Artificial Intelligence, Burrage Capital, Genomind, and RID Ventures outside the submitted work. No other disclosures were reported.
Funding/Support: This study was supported by grant R01MH116270 from the National Institute of Mental Health to Dr Perlis.
Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: Dr Perlis is associate editor of JAMA Network Open, but he was not involved in any of the decisions regarding review of the manuscript or its acceptance.