Comment & Response
November 9, 2020

Prediction Models for COVID-19 Need Further Improvements

Author Affiliations
  • 1China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
  • 2Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
JAMA Intern Med. 2021;181(1):143-144. doi:10.1001/jamainternmed.2020.5740

To the Editor Liang and colleagues1 developed a prediction model and web-based calculator to estimate the probability that hospitalized patients with coronavirus disease 2019 (COVID-19) will develop critical illness. The model could inform treatment decisions and resource optimization. However, we have several comments on the model.

First, some of the 24 patients with severe illness in the development cohort might already have had critical illness at admission, according to one of the major criteria for defining severity in the American Thoracic Society guideline, “respiratory failure requiring mechanical ventilation.”2(e48) Any such patients should have been excluded from the development cohort; otherwise, the model’s apparent performance is biased upward.

Second, 3 continuous laboratory predictors (neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and direct bilirubin) were entered into the model without first checking distributional and linearity assumptions. Laboratory values are usually skewed and may have a J-shaped or U-shaped relationship with the outcome; log transformation and exploration of nonlinear relationships are therefore recommended steps that were missing from the model development.
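To make these checks concrete, the following is a minimal sketch in Python using synthetic data and hypothetical variable names (ldh, critical); it is our illustration, not the original authors’ code. It compares a raw linear term, a log-transformed term, and a flexible spline for a skewed laboratory predictor in a logistic model.

```python
# Illustrative sketch (synthetic data, hypothetical variable names):
# log-transform a skewed laboratory predictor and probe for a nonlinear
# relationship with a spline before settling on a simple linear term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Simulate a right-skewed lab value (e.g., lactate dehydrogenase) whose
# log drives the log-odds of a binary outcome.
ldh = rng.lognormal(mean=5.5, sigma=0.5, size=n)
logit = -8 + 1.3 * np.log(ldh)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"critical": y, "ldh": ldh, "log_ldh": np.log(ldh)})

# Fit three candidate functional forms and compare by AIC: raw linear
# term, log-transformed term, and a flexible B-spline (patsy's bs()).
for formula in ("critical ~ ldh",
                "critical ~ log_ldh",
                "critical ~ bs(log_ldh, df=4)"):
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(f"{formula:35s} AIC = {fit.aic:.1f}")
```

In practice, one would compare such fits (or use restricted cubic splines and fractional polynomials) and retain the simplest form the data support.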

Third, the process of variable selection was not clearly justified. A total of 19 of 72 candidate variables were first selected by lasso regression, and 10 of those 19 were then retained by logistic regression. It was not clearly reported how this second selection step was performed, nor why lasso regression was not used to select the final predictors directly in a single step.
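The one-step alternative we refer to can be sketched as follows (Python, synthetic data; this is our illustration under assumed settings, not the authors’ procedure): a cross-validated lasso-penalized logistic regression chooses the final predictors directly, without a second screening model.

```python
# Illustrative sketch (synthetic data): one-step variable selection via
# cross-validated lasso-penalized logistic regression.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# 72 candidate predictors, of which 10 are informative (mirroring the
# dimensions discussed above).
X, y = make_classification(n_samples=1500, n_features=72,
                           n_informative=10, random_state=0)
X = StandardScaler().fit_transform(X)  # lasso penalties need scaled inputs

lasso_logit = LogisticRegressionCV(
    Cs=20, cv=10, penalty="l1", solver="liblinear",
    scoring="neg_log_loss",
).fit(X, y)

# Predictors with nonzero coefficients are the final model, in one step.
selected = np.flatnonzero(lasso_logit.coef_.ravel())
print(f"{selected.size} of {X.shape[1]} predictors retained:", selected)
```

Whatever approach is taken, the full selection pathway should be reported so that the risk of overfitting and optimism can be judged.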

Fourth, the model was not evaluated for its calibration performance,3 such as the calibration intercept and calibration slope. Although the area under the receiver operating characteristic curve was reported, it is a measure of concordance that reflects a model’s ability to rank patients from high to low probability; it does not assess the model’s ability to assign an accurate probability to an event.4,5 The area under the receiver operating characteristic curve alone is therefore insufficient to evaluate a prediction model, and calibration measures are needed to assess the accuracy of absolute risk estimates.
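The calibration measures we refer to are straightforward to compute. The sketch below (Python, simulated predictions; variable names are ours) estimates the calibration slope by regressing the observed outcome on the logit of the predicted probability, and the calibration intercept by fixing that logit as an offset.

```python
# Illustrative sketch (simulated predictions): calibration slope and
# calibration intercept for a set of predicted risks.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
p_hat = rng.uniform(0.02, 0.6, size=n)            # model's predicted risks
y = rng.binomial(1, np.clip(1.4 * p_hat, 0, 1))   # deliberately miscalibrated truth

lp = np.log(p_hat / (1 - p_hat))  # linear predictor (logit of predictions)

# Calibration slope: ideal value 1; values <1 suggest overfitting or
# overly extreme predictions.
slope_fit = sm.GLM(y, sm.add_constant(lp),
                   family=sm.families.Binomial()).fit()
print("calibration slope:", round(slope_fit.params[1], 3))

# Calibration intercept (calibration-in-the-large): ideal value 0;
# estimated with the linear predictor held fixed as an offset.
int_fit = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(),
                 offset=lp).fit()
print("calibration intercept:", round(int_fit.params[0], 3))
```

A calibration plot of observed vs predicted risk across deciles would complement these summary measures.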

Article Information

Corresponding Author: Hong-Qiu Gu, PhD, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Beijing 100070, China (guhongqiu@yeah.net).

Published Online: November 9, 2020. doi:10.1001/jamainternmed.2020.5740

Conflict of Interest Disclosures: None reported.

Editorial Note: This letter was shown to the corresponding author of the original article, who declined to reply on behalf of the authors.

References
1. Liang W, Liang H, Ou L, et al; China Medical Treatment Expert Group for COVID-19. Development and validation of a clinical risk score to predict the occurrence of critical illness in hospitalized patients with COVID-19. JAMA Intern Med. 2020;180(8):1081-1089. doi:10.1001/jamainternmed.2020.2033
2. Metlay JP, Waterer GW, Long AC, et al. Diagnosis and treatment of adults with community-acquired pneumonia: an official clinical practice guideline of the American Thoracic Society and Infectious Diseases Society of America. Am J Respir Crit Care Med. 2019;200(7):e45-e67. doi:10.1164/rccm.201908-1581ST
3. Steyerberg EW, Vergouwe Y. Towards better clinical prediction models: seven steps for development and an ABCD for validation. Eur Heart J. 2014;35(29):1925-1931. doi:10.1093/eurheartj/ehu207
4. Alba AC, Agoritsas T, Walsh M, et al. Discrimination and calibration of clinical prediction models: users’ guides to the medical literature. JAMA. 2017;318(14):1377-1384. doi:10.1001/jama.2017.12126
5. Pencina MJ, D’Agostino RB Sr. Evaluating discrimination of risk prediction models: the C statistic. JAMA. 2015;314(10):1063-1064. doi:10.1001/jama.2015.11082