The publication of clinical prediction models (CPMs) has grown exponentially, driven by the ever-increasing availability of clinical data, inexpensive computational power, and an expanding toolkit for constructing predictive algorithms. This abundance of CPMs has produced an overcrowded, confusing landscape in which it is difficult to identify and select the best, most useful models.1 Few models are externally validated by the same researchers who developed them, and even fewer by independent investigators. Only 592 (43.3%) of 1366 cardiovascular CPMs in the Tufts PACE Clinical Predictive Model Registry reported at least 1 validation.2 The proportions of models in the Tufts registry that reported at least 2, 3, and 10 validations were 20.1%, 12.8%, and 2.9%, respectively.2 A select few CPMs, such as the Framingham Risk Score and EuroSCORE, have had numerous validations.
Adibi A, Sadatsafavi M, Ioannidis JPA. Validation and Utility Testing of Clinical Prediction Models: Time to Change the Approach. JAMA. 2020;324(3):235–236. doi:10.1001/jama.2020.1230