To the Editor Liang and colleagues1 developed a prediction model and web-based calculator to estimate the probability of developing critical illness in hospitalized patients with coronavirus disease 2019 (COVID-19). Such a model could inform treatment decisions and resource allocation. However, we have several concerns about the model.
First, some of the 24 patients with severe illness in the development cohort may have already developed critical illness at admission, according to one of the major criteria for defining severity in the American Thoracic Society guideline, “respiratory failure requiring mechanical ventilation.”2(e48) Any such patients should have been excluded from the development cohort; otherwise, the reported model performance is biased upward.
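The exclusion step described above can be sketched as a simple cohort filter. The field names here are illustrative assumptions, not variables from the original study:

```python
# Hypothetical sketch: patients who already meet a critical-illness criterion
# (mechanical ventilation) at admission are removed before model development,
# since the model is meant to predict progression, not prevalent critical illness.
patients = [
    {"id": 1, "ventilated_at_admission": False},
    {"id": 2, "ventilated_at_admission": True},   # already critical at baseline
    {"id": 3, "ventilated_at_admission": False},
]

# Keep only patients who are not yet critically ill at admission.
development_cohort = [p for p in patients if not p["ventilated_at_admission"]]
print([p["id"] for p in development_cohort])
```

Applying this filter before fitting avoids crediting the model with "predicting" events that had already occurred.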
Second, 3 continuous laboratory predictors (neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and direct bilirubin) were entered into the model without first checking the distributional and linearity assumptions. Laboratory values are usually skewed and may have a J-shaped or U-shaped relationship with the outcome; thus log transformation and exploration of nonlinear relationships are recommended steps that were missing from the model development.
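The effect of a log transformation on a skewed laboratory value can be illustrated with simulated data. The lognormal parameters below are arbitrary assumptions chosen only to mimic a right-skewed marker such as lactate dehydrogenase:

```python
import numpy as np
from scipy.stats import skew

# Hypothetical sketch: a right-skewed laboratory value becomes close to
# symmetric after log transformation, which is one reason transformation
# (and checks for nonlinearity) should precede entering it into a model.
rng = np.random.default_rng(0)
ldh = rng.lognormal(mean=5.5, sigma=0.6, size=1000)  # simulated skewed values

print("raw skewness:", round(skew(ldh), 2))            # strongly right-skewed
print("log skewness:", round(skew(np.log(ldh)), 2))    # near symmetric
```

In practice, restricted cubic splines or fractional polynomials would also be fit to check whether the transformed predictor still relates nonlinearly to the outcome.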
Third, the process of variable selection is unclear. A total of 19 of 72 variables were selected by lasso regression first, and then 10 of the 19 variables were identified by logistic regression. It was not clearly reported how this further selection was performed, nor why lasso regression was not used to select the final predictors directly in a single step.
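A single-step selection of the kind suggested above can be sketched with L1-penalized logistic regression on simulated data; the dataset and penalty strength are illustrative assumptions, not the study's actual analysis:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: lasso (L1-penalized) logistic regression selects
# predictors in one step by shrinking uninformative coefficients to zero,
# avoiding an additional, separately documented selection stage.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso_logit.fit(X, y)

# Predictors with nonzero coefficients survive the penalty.
selected = np.flatnonzero(lasso_logit.coef_[0])
print(f"{selected.size} of {X.shape[1]} predictors retained")
```

In real use, the penalty strength `C` would be chosen by cross-validation rather than fixed.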
Fourth, the model was not evaluated for its calibration performance,3 such as calibration intercept and calibration slope. Although the area under the receiver operating characteristic curve was reported, it is a measure of concordance reflecting the ability of a model to rank patients from high to low probability; it does not assess the ability of a model to assign an accurate probability of an event.4,5 Therefore, the area under the receiver operating characteristic curve alone is insufficient to evaluate a prediction model, and calibration measurements are needed to assess the accuracy of absolute risk estimates.
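The calibration intercept and slope mentioned above are typically estimated by regressing the observed outcome on the logit of the predicted probability. The simulated, perfectly calibrated predictions below are an illustrative assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch: fit y ~ logit(p_hat); a well-calibrated model yields
# an intercept near 0 and a slope near 1. A slope < 1 suggests predictions
# that are too extreme; an intercept != 0 suggests systematic over- or
# underestimation of risk.
rng = np.random.default_rng(1)
p_true = rng.uniform(0.05, 0.95, size=2000)   # underlying event probabilities
y = rng.binomial(1, p_true)                   # observed binary outcomes
p_hat = p_true                                # predictions (perfect, for illustration)

logit = np.log(p_hat / (1 - p_hat)).reshape(-1, 1)
cal = LogisticRegression(C=1e6).fit(logit, y)  # large C ~ unpenalized fit

print("calibration slope:", round(cal.coef_[0][0], 2))
print("calibration intercept:", round(cal.intercept_[0], 2))
```

With miscalibrated predictions substituted for `p_hat`, the same fit would reveal the direction and magnitude of the miscalibration.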
Corresponding Author: Hong-Qiu Gu, PhD, China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, No. 119 South 4th Ring West Road, Beijing 10070, China (email@example.com).
Published Online: November 9, 2020. doi:10.1001/jamainternmed.2020.5740
Conflict of Interest Disclosures: None reported.
Editorial Note: This letter was shown to the corresponding author of the original article, who declined to reply on behalf of the authors.
Gu H, Wang J. Prediction Models for COVID-19 Need Further Improvements. JAMA Intern Med. 2021;181(1):143–144. doi:10.1001/jamainternmed.2020.5740