Expeditious hemorrhage control is a fundamental tenet of trauma surgery. Every minute of delay before an emergency laparotomy for hemorrhage control has been associated with an increased probability of mortality.1 Gutierrez et al2 developed and validated a much-needed tool to identify patients at risk of requiring early laparotomy within 2 hours of hospital arrival after injury. The Prehospital Preparation for Surgery (PREPS) Score was derived using encounters of adults aged 18 years or older in the 2017 Trauma Quality Improvement Program database. The PREPS Score (0-20) comprises 7 binary input variables that emergency medical service (EMS) technicians can assess in the field, with factors more strongly associated with early laparotomy contributing more points.
Building this type of clinical tool requires thoughtful study design. First, the tool should have the potential to alter clinical decision-making. The PREPS Score certainly meets this criterion; early identification of patients requiring emergency laparotomy may help hospitals mobilize resources, ensure patients are triaged to appropriate centers, and alert clinicians to patients with a higher likelihood of deterioration. Second, variable selection must follow an a priori determined and statistically sound approach. Including too many variables in a model risks overfitting and lacks practicality. In addition to the variable selection strategy used by Gutierrez et al,2 other strategies can be used, ranging from removing variables with near-zero variance to regularization and other machine learning algorithms.3-5 Third, model validity must be assessed using appropriate performance metrics. Various performance metrics exist, including the C-statistic, which the authors used. The C-statistic is equivalent to the area under the receiver operating characteristic curve (AUROC) for binary outcomes, a frequently reported yet commonly misunderstood value. Derived from the area under the curve plotting sensitivity (true positive rate) vs 1 − specificity (false positive rate), the AUROC denotes the probability that a randomly selected point in the positive class will be scored higher than a randomly selected point in the negative class. While a higher C-statistic or an AUROC close to 1 indicates good discrimination, applying the value itself to clinical decision-making is challenging.
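The probabilistic interpretation of the AUROC described above can be made concrete with a short sketch. The scores and labels below are hypothetical illustrations, not data from the PREPS study; the function simply counts, over all positive-negative pairs, how often the positive case receives the higher score (ties count as half).

```python
def auroc(scores, labels):
    """AUROC as the probability that a randomly chosen positive case
    is scored higher than a randomly chosen negative case (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos
        for n in neg
    )
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from an illustrative prediction tool
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]
print(auroc(scores, labels))  # 0.75
```

Note that this pairwise-ranking view explains why the AUROC is insensitive to the choice of decision threshold: it summarizes ranking quality across all thresholds at once, which is exactly why the single number is hard to act on at the bedside.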
Performance metrics other than the AUROC or C-statistic may be more interpretable. For example, if the cost of a false negative is high (eg, most screening tools), sensitivity or recall would be an appropriate performance metric to evaluate, while precision would be the more informative metric if the cost of a false positive is high (eg, if a positive result necessitates invasive or high-risk intervention). Class imbalance (the ratio of positives to negatives in the study population) is an important consideration when choosing the performance metric. For example, in scenarios with few positives, accuracy would be a misleading metric. If a tool always predicted negative in the authors’ study population (1.1% underwent emergency laparotomy), the tool would boast nearly 99% accuracy. In imbalanced data sets, the area under the precision-recall curve is an informative metric to consider.
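The accuracy paradox described above can be demonstrated directly. The cohort below is a hypothetical one constructed to mirror the roughly 1% event rate reported in the study, not the actual study data; an "always negative" classifier achieves high accuracy while detecting no patient who needs a laparotomy.

```python
def confusion_metrics(y_true, y_pred):
    """Return (accuracy, sensitivity, precision) from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, sensitivity, precision

# Hypothetical cohort of 1000 patients with an ~1% event rate
y_true = [1] * 11 + [0] * 989
always_negative = [0] * 1000

acc, sens, prec = confusion_metrics(y_true, always_negative)
print(acc, sens, prec)  # 0.989 0.0 0.0
```

The 98.9% accuracy conceals a sensitivity of zero: every patient who required emergency laparotomy is missed. Sensitivity, precision, and the precision-recall curve expose this failure mode that accuracy hides.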
After confirming validity (ie, does the tool work?), utility (ie, is the tool useful?) must be assessed. A useful clinical prediction tool should translate to improved patient outcomes or another tangible benefit. Evaluating clinical utility usually requires further prospective validation across multiple settings (eg, efficacy and incremental value assessment), further tool calibration, and societal utility assessment (eg, cost-benefit or cost-effectiveness analysis), which we hope the authors will pursue. Bedside usability (eg, mobile applications3,6,7) should be considered at the outset of prediction tool design to facilitate these real-world evaluations. Ensuring that tools affecting clinical decision-making are trustworthy will require developers, reviewers, and users to be familiar with the rigor required for thoughtful prediction tool development and validation.
Prediction tools in medicine are currently undergoing a renaissance. As computing speed has accelerated, complex statistical and machine learning prediction tools have proliferated, and the desire to apply these tools to common clinical scenarios has increased. We, as trauma clinicians, are rapidly approaching a future of prediction-first care, particularly in the prehospital arena, if a universal health care digital platform is developed and implemented. In this future state, the following scenario becomes possible. A patient is injured, and hemorrhage threatens their life. EMS technicians rapidly input point-of-care variables that estimate the severity of injury at the scene. These data are seamlessly integrated with traffic data, emergency department census information, and operating room availability. The output is then provided to the EMS technicians, giving them real-time recommendations on the fastest route to the center with the greatest availability to get the patient definitive hemorrhage control as expeditiously as possible. When minutes matter, the ability to rapidly triage injured patients to the care they need is a much-needed application of the prediction tool renaissance.
Published: January 31, 2022. doi:10.1001/jamanetworkopen.2021.45867
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2022 Choi J et al. JAMA Network Open.
Corresponding Author: Joseph D. Forrester, MD, MSc, Department of Surgery, Stanford University, 300 Pasteur Dr, H3638, Stanford, CA 94305 (email@example.com).
Conflict of Interest Disclosures: Dr Forrester reported receiving unrestricted research funding from Varian and receiving grant funding from the Surgical Infections Society outside the submitted work. No other disclosures were reported.
Choi J, Forrester JD. Clinical Prediction Tools in Trauma: Where Do We Go From Here? JAMA Netw Open. 2022;5(1):e2145867. doi:10.1001/jamanetworkopen.2021.45867