Comment & Response
September 2015

Some Important Deficiencies in the Development, Validation, and Reporting of a Prediction Model

Author Affiliations
  • 1Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Botnar Research Centre, Nuffield Orthopaedic Centre, Oxford, Oxfordshire, England
JAMA Surg. 2015;150(9):915. doi:10.1001/jamasurg.2015.1652

To the Editor We read with great interest the study by Tevis et al1 describing the development of a nomogram to predict the 30-day risk of readmission for patients following hospital discharge after general surgery. However, deficiencies in the methods and in the reporting limit the usefulness and usability of this study.

It is vital that prediction models, at a minimum, should be assessed and reported in terms of discrimination and calibration.2 While the authors evaluated the discrimination of the new model (using the C statistic), no assessment of calibration was reported. Calibration is the agreement between the model's predicted risks and the observed outcomes. Plotting the predictions against observed outcomes, overlaid with a smoothed regression line,3 allows for an assessment of miscalibration (ie, overprediction or underprediction) across the spectrum of predictions. Poor calibration substantially limits the usefulness of a model and may require recalibration to try to salvage the model.
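A minimal sketch of the binned comparison that underlies such a calibration assessment, assuming predicted probabilities and binary observed outcomes are available (the function name, binning scheme, and number of bins are illustrative choices, not from the study under discussion):

```python
import numpy as np

def calibration_bins(y_true, y_prob, n_bins=10):
    """Group predictions into risk bins and compare the mean predicted
    risk with the observed event rate in each bin (illustrative helper).

    A well-calibrated model yields pairs lying close to the diagonal;
    systematic departures indicate over- or underprediction."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    # Equal-width bins on [0, 1]; values are assigned to bins 0..n_bins-1.
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(y_prob, edges[1:-1]), 0, n_bins - 1)
    mean_pred, obs_rate = [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            mean_pred.append(y_prob[mask].mean())  # mean predicted risk
            obs_rate.append(y_true[mask].mean())   # observed event rate
    return np.array(mean_pred), np.array(obs_rate)
```

In practice these pairs would be plotted with a smoothed (eg, loess) curve overlaid, as described above; large gaps between predicted and observed values in any bin signal miscalibration in that risk range.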
