Figure 1. Model Discrimination for Kaiser Permanente Northern California Sample
Figure 2. Estimated Daily Alerts
Research Letter
Psychiatry
October 21, 2020

Estimates of Workload Associated With Suicide Risk Alerts After Implementation of Risk-Prediction Model

Author Affiliations
  • 1Division of Research, Kaiser Permanente Northern California, Oakland
  • 2Kaiser Permanente Washington Health Research Institute, Seattle
  • 3The Permanente Medical Group, Kaiser Permanente, Oakland, California
JAMA Netw Open. 2020;3(10):e2021189. doi:10.1001/jamanetworkopen.2020.21189
Introduction

From 1999 to 2018, the US suicide rate increased by 35% across age, sex, and geographic groups.1 Suicide risk–prediction models using data from electronic health records provide a promising approach for identifying and assisting individuals at risk.2 The Mental Health Research Network (MHRN) developed highly discriminative suicide risk–prediction models using data from 20 million mental health care visits across 7 health systems.3,4 However, the clinical and operational requirements for implementing the models in practice are unknown. We sought to externally validate the MHRN risk model and estimate the clinical workload associated with implementing it.

Methods

The Kaiser Permanente Northern California institutional review board approved this cohort study and waived the need for informed consent because the research involved no more than minimal risk to the participants, the research could not be practicably carried out without the waiver, and the waiver did not adversely affect the rights and welfare of the participants. As such, the study met the waiver criteria under 45 CFR 46.116(f). This report followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cohort studies. Kaiser Permanente Northern California is an integrated health care–delivery system serving 4.3 million members, 12% of whom are Medicaid beneficiaries and 58% of whom are members of racial/ethnic minority groups.5 We included all mental health encounters at Kaiser Permanente Northern California from October 1, 2016, to September 30, 2017. The MHRN suicide-attempt model used electronic health record measures, including demographic characteristics, Patient Health Questionnaire-9 item 9 scores,6 comorbidities, medications, mental health visits, and suicide attempts in the 5 years before the encounter date. Differences in data systems required minor modifications (ie, dropping census-derived information, combining race/ethnicity and insurance categories, and simplifying Patient Health Questionnaire-9 variables). We assessed discrimination of the MHRN model for suicide attempts within 90 days of an encounter using the area under the receiver operating characteristic curve. To examine the potential workload arising from risk alerts, we calculated the expected number of alerts at differing risk thresholds, ranging from the top 5% to the top 0.5% of scores. Analyses were conducted between January 2019 and March 2020. Stata/SE statistical software version 14.2 (StataCorp) and SAS statistical software version 9.4 (SAS Institute) were used to calculate summary statistics and the area under the receiver operating characteristic curve.
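
As an illustration only (the study itself used Stata and SAS, not Python), the discrimination and alert-volume calculations described above could be sketched as follows; the scored-encounter table and its column names (risk_score, attempt_90d) are hypothetical stand-ins, not the study's actual data structures:

# Illustrative sketch only; the study used Stata/SE 14.2 and SAS 9.4, not Python.
# Columns risk_score (model score) and attempt_90d (suicide attempt within 90 days) are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def evaluate_discrimination_and_workload(encounters: pd.DataFrame) -> None:
    # Discrimination: area under the ROC curve for suicide attempts within 90 days of a visit.
    auc = roc_auc_score(encounters["attempt_90d"], encounters["risk_score"])
    print(f"Area under the ROC curve: {auc:.2f}")
    # Expected alert volume at thresholds from the top 5% to the top 0.5% of scores.
    for pct in (95, 99, 99.5):
        cutoff = np.percentile(encounters["risk_score"], pct)
        n_alerts = int((encounters["risk_score"] >= cutoff).sum())
        print(f">= {pct:g}th percentile (top {100 - pct:g}%): {n_alerts} alerted visits over the year")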

Results

Over 1 year, we identified 1 408 683 mental health encounters (254 779 unique patients with mean [SD] age 40.7 [18.7] years, including 89 857 men [35.3%], 63 110 individuals who were Hispanic or Black [24.8%], and 35 267 individuals who had Medicare coverage in the previous year [13.8%]). Patients had a median (interquartile range [IQR]) of 11 (5-24) visits, and 9054 visits (0.6%) were followed by a suicide attempt within 90 days. Model discrimination (area under the curve, 0.82; 95% CI, 0.81-0.82; Figure 1) was comparable to that found by Simon et al3 using the original MHRN sample (area under the curve, 0.851). The 95th percentile cut point had a sensitivity of 41.3% (95% CI, 39.5%-43.3%) and a positive predictive value of 6.4% (95% CI, 6.2%-6.7%). The median number of daily risk alerts varied widely depending on the specified threshold (Figure 2).
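
For illustration, the sensitivity and positive predictive value at a percentile cut point could be computed along these lines (a sketch using the same hypothetical columns as in the Methods sketch; not the study's actual code):

import numpy as np
import pandas as pd

def operating_characteristics(encounters: pd.DataFrame, pct: float = 95):
    # Sensitivity and positive predictive value at the pct-th percentile cut point.
    cutoff = np.percentile(encounters["risk_score"], pct)
    flagged = encounters["risk_score"] >= cutoff          # visits generating an alert
    attempted = encounters["attempt_90d"].astype(bool)    # suicide attempt within 90 days
    sensitivity = (flagged & attempted).sum() / attempted.sum()  # alerted share of attempt visits
    ppv = (flagged & attempted).sum() / flagged.sum()            # attempt share of alerted visits
    return sensitivity, ppv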

Excluding weekend mental health visits (1875 visits [0.1%]), the median (IQR) daily number of visits with suicide risk alerts across the region was 162 (117-182) at the 95th percentile and above cutoff, 14 (10-18) at the 99th percentile and above cutoff, and 4 (2-6) at the 99.5th percentile and above cutoff. When alerts were limited to each patient's first visit during the study period, the median (IQR) daily number of alerts was markedly lower: 2 (0-4) visits at the 95th percentile and above cutoff and 0 visits at the 99th and 99.5th percentile and above cutoffs.
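
A comparable sketch of the daily alert summaries (median and IQR of alerted weekday visits, overall and restricted to each patient's first visit) is shown below; visit_date and patient_id are likewise hypothetical column names:

import numpy as np
import pandas as pd

def daily_alert_summary(encounters: pd.DataFrame, pct: float) -> None:
    # Median (IQR) number of weekday visits per day at or above the pct-th percentile cutoff.
    # visit_date is assumed to be a datetime column; patient_id identifies unique patients.
    cutoff = np.percentile(encounters["risk_score"], pct)
    weekdays = encounters[encounters["visit_date"].dt.dayofweek < 5]  # exclude weekend visits
    subsets = {
        "all visits": weekdays,
        "first visit per patient": weekdays.sort_values("visit_date").drop_duplicates("patient_id"),
    }
    for label, subset in subsets.items():
        alerts = subset["risk_score"] >= cutoff
        daily = alerts.groupby(subset["visit_date"].dt.date).sum()  # alerted visits per calendar day
        q1, med, q3 = np.percentile(daily, [25, 50, 75])
        print(f"{label}, >= {pct:g}th percentile: median {med:.0f} (IQR {q1:.0f}-{q3:.0f}) alerts/day")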

Discussion

The findings of this cohort study suggest that suicide risk models can accurately stratify risk but will generate additional workload for clinicians. The degree of added workload depends on the risk threshold selected, the strategy for responding to repeated alerts for the same patient, and the extent to which electronic alerts overlap with risks already identified by treating clinicians. Selecting a risk threshold also depends on the relative importance of avoiding false-negative and false-positive errors. Without estimates for these factors, efforts to design a system for responding to alerts will be hampered. Understanding alert characteristics alone, however, is insufficient: many key effectiveness, clinical, operational, ethical, and legal questions about implementing such programs remain.

This cohort study has several limitations. Findings may not generalize to all health care systems. Available electronic health record data did not include measures of relevant life events or patient-reported outcomes, except for data from the Patient Health Questionnaire-9. Additionally, the efficacy of interventions associated with suicide risk alerts remains uncertain.

This study adds to the evidence supporting the use of suicide risk–prediction models to augment traditional clinician assessment. These data further suggest that an appropriately chosen alert threshold could limit the burden on clinicians.

Article Information

Accepted for Publication: August 8, 2020.

Published: October 21, 2020. doi:10.1001/jamanetworkopen.2020.21189

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Kline-Simon AH et al. JAMA Network Open.

Corresponding Author: Andrea H. Kline-Simon, MS, Division of Research, Kaiser Permanente Northern California, 2000 Broadway, Third Floor, Oakland, CA 94612 (andrea.h.kline-simon@kp.org).

Author Contributions: Ms Kline-Simon and Dr Liu had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Kline-Simon, Sterling, Young-Wolff, Simon, Does, Liu.

Acquisition, analysis, or interpretation of data: Kline-Simon, Sterling, Young-Wolff, Simon, Lu, Liu.

Drafting of the manuscript: Kline-Simon, Sterling, Young-Wolff, Lu, Liu.

Critical revision of the manuscript for important intellectual content: Kline-Simon, Young-Wolff, Simon, Does, Liu.

Statistical analysis: Kline-Simon, Lu.

Obtained funding: Sterling.

Administrative, technical, or material support: Sterling, Young-Wolff, Does.

Supervision: Sterling, Liu.

Conflict of Interest Disclosures: Dr Simon reported receiving grants from the National Institute of Mental Health during the conduct of the study. No other disclosures were reported.

Funding/Support: This project was funded by Kaiser Foundation Hospitals and The Permanente Medical Group.

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: Agatha Hinman, BA (Division of Research, Kaiser Permanente) provided editorial assistance. Rebecca Ziebell, BS, and Eric Johnson, MS (Kaiser Permanente Washington Health Research Institute) provided technical assistance regarding adaptation of the Mental Health Research Network prediction models. Gabriel Escobar, MD (Division of Research, Kaiser Permanente Northern California) provided assistance in obtaining funding and guidance regarding model structure and data extraction. Deepmala Budhija, MS; Mei Lee, BS; Jamila Gul, BS; and Varun Sirikonda, MS (Division of Research Strategic Programming Group, Kaiser Permanente Northern California [KPNC]) located the data elements within KPNC and extracted the data. None of these individuals received compensation for their contributions.

References
1.
Hedegaard  H, Curtin  SC, Warner  M.  Increase in suicide mortality in the United States, 1999–2018. National Center for Health Statistics; 2020. NCHS Data Brief No 362. Accessed April 9, 2020. https://www.cdc.gov/nchs/data/databriefs/db362-h.pdf
2.
Brodsky  BS, Spruch-Feiner  A, Stanley  B.  The Zero Suicide model: applying evidence-based suicide prevention practices to clinical care.   Front Psychiatry. 2018;9:33. doi:10.3389/fpsyt.2018.00033
3.
Simon  GE, Johnson  E, Lawrence  JM,  et al.  Predicting suicide attempts and suicide deaths following outpatient visits using electronic health records.   Am J Psychiatry. 2018;175(10):951-960. doi:10.1176/appi.ajp.2018.17101167
4.
Simon  GE, Shortreed  SM, Coley  RY.  Positive predictive values and potential success of suicide prediction models.   JAMA Psychiatry. 2019;76(8):868-869. doi:10.1001/jamapsychiatry.2019.1516
5.
Gordon  NP; Kaiser Permanente Division of Research.  Similarity of the adult Kaiser Permanente membership in Northern California to the insured and general population in Northern California: statistics from the 2011-2012 California Health Interview Survey. Accessed March 24, 2020. https://divisionofresearch.kaiserpermanente.org/projects/memberhealthsurvey/SiteCollectionDocuments/chis_non_kp_2011.pdf
6.
Kroenke  K, Spitzer  RL, Williams  JB, Löwe  B.  The Patient Health Questionnaire somatic, anxiety, and depressive symptom scales: a systematic review.   Gen Hosp Psychiatry. 2010;32(4):345-359. doi:10.1016/j.genhosppsych.2010.03.006