eMethods. Predictive Modeling Details
eTable. Reference Codes From International Classification of Diseases (ICD), Version 10, Clinical Modification
Walsh CG, Johnson KB, Ripperger M, et al. Prospective Validation of an Electronic Health Record–Based, Real-Time Suicide Risk Model. JAMA Netw Open. 2021;4(3):e211428. doi:10.1001/jamanetworkopen.2021.1428
How well do electronic health record–based suicide risk models perform in the clinical setting, and is performance generalizable?
This cohort study of 30-day suicide attempt risk among 77 973 patients showed good performance in nonpsychiatric clinical settings at scale and in real time. Numbers needed to screen were reasonable for an algorithmic screening test that required no additional data collection or face-to-face screening to calculate.
Suicide attempt risk models can be implemented with accurate performance at scale, but performance is not equal in all clinical settings, which requires model recalibration and updating prior to deployment in new settings.
Numerous prognostic models of suicide risk have been published, but few have been implemented outside of integrated managed care systems.
To evaluate performance of a suicide attempt risk prediction model implemented in a vendor-supplied electronic health record to predict subsequent (1) suicidal ideation and (2) suicide attempt.
Design, Setting, and Participants
This observational cohort study evaluated implementation of a suicide attempt prediction model in live clinical systems without alerting. The cohort comprised patients seen for any reason in adult inpatient, emergency department, and ambulatory surgery settings at an academic medical center in the mid-South from June 2019 to April 2020.
Main Outcomes and Measures
Primary measures assessed external, prospective, and concurrent validity. Manual medical record validation of coded suicide attempts confirmed incident behaviors with intent to die. Subgroup analyses were performed based on demographic characteristics, relevant clinical context/setting, and presence or absence of universal screening. Performance was evaluated using discrimination (number needed to screen, C statistics, positive/negative predictive values) and calibration (Spiegelhalter z statistic). Recalibration was performed with logistic calibration.
The system generated 115 905 predictions for 77 973 patients (42 490 [54%] men, 35 404 [45%] women, 60 586 [78%] White, 12 620 [16%] Black). Numbers needed to screen in the highest risk quantiles were 23 for suicidal ideation and 271 for suicide attempt. Performance was maintained across demographic subgroups. Numbers needed to screen for suicide attempt were 256 for men and 323 for women and, by race, 373 for White patients, 176 for Black patients, and 407 for non-White/non-Black patients. Model C statistics were 0.836 (95% CI, 0.836-0.837) across the health system, 0.77 (95% CI, 0.77-0.772) in the adult hospital, 0.778 (95% CI, 0.777-0.778) in the emergency department, and 0.634 (95% CI, 0.633-0.636) in psychiatric inpatient settings. Predictions were initially miscalibrated (Spiegelhalter z = −3.1; P = .001) with improvement after recalibration (Spiegelhalter z = 1.1; P = .26).
Conclusions and Relevance
In this study, this real-time predictive model of suicide attempt risk showed reasonable numbers needed to screen in nonpsychiatric specialty settings in a large clinical system. Assuming that research-valid models will translate without performing this type of analysis risks inaccuracy in clinical practice, misclassification of risk, wasted effort, and missed opportunity to correct and prevent such problems. The next step is careful pairing with low-cost, low-harm preventive strategies in a pragmatic trial of effectiveness in preventing future suicidality.
Suicide prevention begins with risk identification and prognostication. The standard of care remains face-to-face screening and routine clinical interaction. Yet rates of suicidal ideation, attempts, and deaths continue to rise internationally despite increased monitoring and intervention efforts.1 The coronavirus disease 2019 (COVID-19) pandemic exacerbated contributing factors for suicide and will continue to do so in the post–COVID-19 era.2-4 Numerous prognostic models of suicide risk have been published.5 Few have been implemented into real-world clinical systems outside of integrated managed care settings.5-7 In some settings, universal screening might reduce risk of downstream suicidality.8 But in-person screening takes time and attention and can be conducted with variable quality.9 Concealed distress also subverts risk identification in face-to-face screening.10 Furthermore, those at risk might not be identified despite health care encounters as recently as the day they die from suicide.11-13
Linking scalable, automated risk prognostication with real-world clinical processes might improve suicide prevention.14 The most prominent example of an operational suicide risk prediction with implemented prevention is REACH VET (Recovery Engagement and Coordination for Health—Veterans Enhanced Treatment) from the Veterans Health Administration.6 Similarly, Army STARRS (Study to Assess Risk and Resilience in Servicemembers) demonstrated algorithmic potential in active duty service members.15,16 A number of groups, including ours, have published modeling studies for civilians both nationally (eg, the Mental Health Research Network)5,17-20 and internationally.21 A recent brief report7 estimated the increased potential workload of a suicide risk prediction model to generate alerts in an integrated managed care setting, Kaiser Permanente. In Europe, linking mobile health and predictive modeling for suicide prevention has been described,22 as have predictive modeling studies developed for national and single-payer cohorts.21,23
While some risk models rely on face-to-face screening data (eg, the Patient Health Questionnaire–9) to calculate risk,17 generating these important predictors relies on existing or changing clinical workflow—a difficult task. In some hospitals, universal screening occurs in the emergency department alone. A model reliant solely on routine, passively collected clinical data, such as medication and diagnostic data, might scale to any clinical setting regardless of screening practices. Few real-world data exist on successes and pitfalls of translating such models into operational clinical systems in the presence or absence of universal screening.7
Like any prognostic test, such as radiographic imaging and laboratory studies, electronic health record (EHR)–based risk models serve as an additional data point for clinical decision-making. When linked to guideline-informed and evidence-based education along with actionable, user-centered decision support, they might improve provision of suicide prevention. Such systems might prompt care outside of routine health encounters, eg, a prioritized telephone call to a high-risk patient who missed an appointment or guidance on assessing means to a primary care clinician who does not do so regularly. Ideally, these systems would improve quality of care while reducing burden on clinicians to respond appropriately at the right times.
Part of a larger technology-enabled suicide prevention program, our work applied the multiphase framework for action-informed artificial intelligence24 to suicide attempt prognostication. We completed phases 1 and 2 in initial model development20 followed by phase 3, replicative studies.18,25 The fourth phase includes design, usability, and feasibility testing for the operational platform before effectiveness testing and practice improvement in the final phase.
We evaluate prospectively the real-time EHR risk prediction platform here (fourth phase) to answer the question, “How well do EHR-based suicide risk models perform in the clinical setting, and is performance generalizable?” Models that fail to validate at this phase, or those not studied in this fashion prior to implementation, might covertly hinder clinical decision-making. Predictive models might be evaluated similarly to any novel prognostic data point (eg, a laboratory or imaging result).26 This validation should account for clinical context, setting, and the presence of universal screening.8
We studied an observational, prospective cohort of clinical inpatient, emergency department, and ambulatory surgery encounters at a major academic medical center in the mid-South, Vanderbilt University Medical Center (VUMC), from June 2019 to April 2020. Predictions were prompted by the start of routine clinical visits in the EHR. Because model validity was untested outside of research systems,27 model predictions did not trigger EHR alerts or decision support.
The VUMC Institutional Review Board approved 2 protocols with waiver of consent given the infeasibility of consenting these EHR-driven analyses across a health system. Only clinical production-grade systems were used to protect privacy and demonstrate feasibility. This study followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines.28
In this study, the predictive model trained on suicide attempt risk was used to predict both suicide attempt (primary) and suicidal ideation (secondary) within 30 days of discharge.25 Outcomes were ascertained via reference codes (International Classification of Diseases, Tenth Revision, Clinical Modification [ICD-10-CM]29; eTable in the Supplement). Because we and others have shown imperfect ascertainment from ICD codes for suicide attempt,20,30 our team (C.G.W., J.H., S.S.) manually reviewed coded suicide attempts and verified suicidal behaviors with intent to die.
Our previously published approach was internally valid at multiple time points (eg, 30 days vs 90 days).20 Thirty-day outcomes were selected as the prediction target with input from behavioral health experts involved in local suicide prevention.
We included all adult patients seen in inpatient, emergency department, and ambulatory surgery settings. Individuals with death dates in the Social Security Death Index were right-censored if deaths occurred within 30 days of discharge. Cause-of-death data were not available across the enterprise, so deaths from suicide were not included as prediction targets.
Full modeling details have been published20 and are in the eMethods in the Supplement. Briefly, we trained random forests, a nonparametric ensemble machine learning algorithm, on a heterogeneous, retrospective group of adult cases and controls prior to 2017 stored in a deidentified research repository, the Synthetic Derivative.31 These models were validated with a variant of bootstrapping with optimism adjustment, in which each bootstrap iteration was tested against a true holdout set to lessen overfitting.32 Model performance in training to predict suicide attempt within 30 days showed area under the receiver operating characteristic curve (AUROC) of 0.9 (95% CI, 0.9-0.91) on a deidentified data set of 3250 cases of manually validated suicide attempts and 12 695 adults with no history of suicide attempt.20
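As a hedged illustration of the optimism-adjusted bootstrap described above, the sketch below trains a random forest on synthetic data, estimates optimism as the gap between apparent (in-resample) and holdout AUROC across bootstrap iterations, and subtracts it from the apparent AUROC of the final model. Sample sizes, hyperparameters, and the holdout scheme are illustrative assumptions, not the published pipeline.

```python
# Sketch: optimism-adjusted bootstrap validation of a random forest.
# All data are synthetic; names and settings are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.5).astype(int)

# A true holdout set, kept apart from every bootstrap iteration
X_dev, X_hold, y_dev, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

apparent_aucs, holdout_aucs = [], []
for b in range(20):  # many more iterations in practice
    idx = rng.integers(0, len(X_dev), len(X_dev))  # bootstrap resample
    model = RandomForestClassifier(n_estimators=100, random_state=b)
    model.fit(X_dev[idx], y_dev[idx])
    # Apparent performance on the resample vs performance on the holdout
    apparent_aucs.append(
        roc_auc_score(y_dev[idx], model.predict_proba(X_dev[idx])[:, 1]))
    holdout_aucs.append(
        roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1]))

optimism = np.mean(apparent_aucs) - np.mean(holdout_aucs)
final = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_dev, y_dev)
apparent = roc_auc_score(y_dev, final.predict_proba(X_dev)[:, 1])
adjusted_auc = apparent - optimism  # optimism-corrected estimate
```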
Predictors included the following:
Demographic data (age, sex, race)
Diagnostic data grouped to Centers for Medicare & Medicaid Services Hierarchical Condition Categories (HCC) (eg, schizophrenia-related ICD codes mapped to HCC 57)
Medication data grouped to the Anatomic Therapeutic Classification, level IV (eg, citalopram N06AB04 [level V] maps to Selective Serotonin Reuptake Inhibitors N06AB [level IV])
Past health care utilization (counts of inpatient, emergency department, and ambulatory surgery visits over the preceding 5 years)
Area Deprivation Indices33 by patient zip code
At registration for inpatient, emergency department, or ambulatory surgery visits, the modeling pipeline used 5 years of historical data to build a vector of predictors. Preliminary analyses showed the 5-year lookback window to perform similarly to models using all historical data. The predictive model then generated a probability of subsequent suicide attempt in 30 days. Here, we validate that probability to predict encounters for suicide attempt or suicidal ideation in the subsequent 30 days.
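A minimal sketch of assembling such a predictor vector at registration follows; the feature groups mirror the list above, but the record format, category labels, and the tiny example history are hypothetical, not the study's actual data model.

```python
# Sketch: build a predictor vector from a 5-year lookback at registration.
# Event kinds and codes are hypothetical stand-ins for HCC groups, ATC
# level IV classes, and visit counts described in the text.
from collections import Counter
from datetime import date, timedelta

LOOKBACK = timedelta(days=5 * 365)  # 5-year lookback window

def build_features(registration_date, demographics, events):
    """events: list of (date, kind, code); kind in {'hcc', 'atc4', 'visit'}."""
    window = [(d, k, c) for d, k, c in events
              if registration_date - LOOKBACK <= d <= registration_date]
    counts = Counter((k, c) for _, k, c in window)
    feats = dict(demographics)  # age, sex, race, etc.
    for (kind, code), n in counts.items():
        feats[f"{kind}:{code}"] = n  # count of each grouped code
    return feats

feats = build_features(
    date(2020, 1, 15),
    {"age": 47, "sex_male": 1},
    [(date(2019, 6, 1), "hcc", "57"),            # schizophrenia-related HCC
     (date(2018, 3, 2), "atc4", "N06AB"),        # SSRI medication class
     (date(2012, 1, 1), "visit", "inpatient")],  # outside 5-year window, dropped
)
```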
Calibration measures how well predicted probabilities reflect real-world outcome (eg, a 1% risk of suicide attempt means 1 of 100 similar individuals from that population should have the outcome). Miscalibrated models hamper clinical decision-making.34
To enrich signal, the research-grade model20 was trained with a sample of the larger population, increasing potential miscalibration. We anticipated miscalibration and corrected it with logistic calibration,35,36 a univariate logistic regression model with uncalibrated predictions trained on outcomes from June to October to recalibrate predictions from November to April.
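Logistic calibration as described above can be sketched as follows: a univariate logistic regression is fit on the logit of the uncalibrated predictions against observed outcomes from an earlier window, then applied to rescale later predictions. The synthetic data and variable names are illustrative assumptions, not the authors' implementation.

```python
# Sketch: logistic (re)calibration of an overconfident risk model.
# Synthetic low-prevalence data; a real pipeline would use the earlier
# (Jun-Oct) window to recalibrate the later (Nov-Apr) predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
true_p = rng.uniform(0.001, 0.05, size=5000)   # true low-prevalence risks
y_early = rng.binomial(1, true_p)              # observed outcomes, earlier window
p_uncal = np.clip(true_p * 10, 0, 0.99)        # model overstates risk 10-fold

def logit(p):
    return np.log(p / (1 - p))

# Univariate logistic regression: outcome ~ logit(uncalibrated prediction)
recal = LogisticRegression().fit(logit(p_uncal).reshape(-1, 1), y_early)

# Apply the fitted mapping to later predictions
p_uncal_late = np.clip(rng.uniform(0.001, 0.05, size=3000) * 10, 0, 0.99)
p_recal = recal.predict_proba(logit(p_uncal_late).reshape(-1, 1))[:, 1]
```

The recalibrated probabilities are pulled back toward the observed outcome prevalence while preserving the model's rank ordering (discrimination is unchanged by a monotone mapping).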
Performance evaluation included discrimination (AUROC, sensitivity, specificity, positive predictive value [PPV], risk concentration), calibration (calibration slope and intercept, Spiegelhalter z statistic, for which P > .05 indicates the model is well calibrated37), and usefulness (number needed to screen [NNS], the reciprocal of PPV). Evaluation accounted for the presence or absence of universal screening. To generate CIs and analyze sensitivity to temporal length of EHRs, we varied the minimum length of medical record required per performance analysis, from no minimum (ie, including new patients) up to records at least 2 years in length. Statistical analyses were conducted in Python, version 3.7 (Python Software Foundation), and in R, version 3.6.1 (R Foundation).
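As a hedged illustration of two of these metrics, the sketch below computes the Spiegelhalter z statistic and NNS on simulated predictions; the data, threshold, and function names are synthetic assumptions, not the study's evaluation code.

```python
# Sketch: Spiegelhalter z statistic (calibration) and NNS (usefulness).
# Simulated data only; thresholds and names are illustrative.
import numpy as np

def spiegelhalter_z(y, p):
    """z near 0 (|z| < 1.96, P > .05) is consistent with good calibration."""
    num = np.sum((y - p) * (1 - 2 * p))
    den = np.sqrt(np.sum((1 - 2 * p) ** 2 * p * (1 - p)))
    return num / den

def nns(y, p, threshold):
    """Number needed to screen above a risk threshold: 1 / PPV."""
    flagged = p >= threshold
    ppv = y[flagged].mean()
    return np.inf if ppv == 0 else 1.0 / ppv

rng = np.random.default_rng(2)
p = rng.uniform(0.001, 0.1, size=20000)
y = rng.binomial(1, p)        # outcomes drawn from the model's own probabilities
z_cal = spiegelhalter_z(y, p) # well calibrated by construction: z near 0
z_miscal = spiegelhalter_z(y, np.clip(p * 3, 0, 0.99))  # inflated risks: large |z|
```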
The study included 115 905 predictions for 77 973 patients (42 490 [54%] men, 35 404 [45%] women, 60 586 [78%] White, 12 620 [16%] Black) over 296 days, approximately 392 predictions per day (Table 1). Our analysis right-censored 1326 patients for all-cause death within 30 days of the preceding encounter. Because patients might be directly admitted without emergency department care, the subtotals per setting sum to greater than enterprisewide totals.
Recorded outcomes numbered 129 suicide attempts across 85 individuals (sex: 39 men [46%], 46 women [54%]; race: 64 White [75%]; 18 Black [21%], 3 non-White/non-Black [4%]; 23 repeat attempters) and 946 encounters for suicidal ideation across 395 individuals (sex: 222 men [56%], 156 women [39%]; race: 287 White [73%], 78 Black [20%], 30 non-White/non-Black [8%]; 170 repeat ideators). Manual medical record review of coded suicide attempts had PPV greater than 0.9 in ICD-10-CM with an interrater agreement κ of 1, notably higher than the PPV of 0.58 for ICD-9 in a medical record validation of 5543 medical records in prior work at VUMC.20
Cohort criteria affect model performance, as we and others have shown.38 Analyses considered duration of EHRs per patient, clinical settings (eg, inpatient vs emergency department), and universal screening. Demographic characteristics of sex (not gender, which lacks reliable identification in most EHRs) and race were also considered. Performance by length of EHR is shown in aggregate for each outcome (Table 2).
Risk concentration plots for all encounters are shown (Figure 1) with NNS, the reciprocal of PPV, per quantile. The highest risk quantiles have an NNS of 23 and 271 for suicidal ideation and suicide attempt, respectively.
Metrics by predicted risk quantile are shown for suicide attempt risk (Table 3). In settings with universal screening, the lowest risk quantile (n = 6795) with predicted risk threshold near 0 had a PPV of 0.1% for suicidal ideation and approximately 0 for suicide attempt. The highest risk quantile (n = 5457) above a threshold of 3.2% had a PPV of 3% for suicidal ideation and 0.3% for suicide attempt.
In settings without universal screening, the highest risk quantile (n = 4220) above a threshold of 3.2% had a PPV of 4.3% for suicidal ideation and 0.4% for suicide attempt. The lowest risk quantile (n = 23 589) of predicted risk near 0 had a PPV of 0.1% for suicidal ideation and 0 for suicide attempt.
The NNS for suicide attempt in the highest risk quantiles for men and women in the medical center–wide cohort were 256 and 323, respectively. By race, as coded in the EHR (White, Black), the NNS was 373 for White patients, 176 for Black patients, and 407 for non-White and non-Black patients.
In the first 5 months, predictions were miscalibrated (Spiegelhalter z = −3.1; P = .001). We applied logistic recalibration using those first 5 months, with improved calibration (Spiegelhalter z = 1.1; P = .26) in the subsequent 5-month study period.
This study validated performance of a published suicide attempt risk model20 using real-time clinical prediction in the background of a vendor-supplied EHR. Primary findings include accuracy at scale regardless of face-to-face screening in nonpsychiatric settings. We note feasible NNS in the highest predicted risk quantiles with potential for reduced screening workload for those at lowest risk. Overall performance was not sensitive to temporal length of EHRs. The decision of minimum length of EHR to display an alert or prediction for an individual patient, however, will be the subject of future decision support testing.
This work has potential implications for screening practices, clinical decision-making, and care coordination. Regarding screening, both false negatives and false positives have been considered weaknesses of suicide-focused risk models in systematic review.5 Here, we note very low false-negative rates in the lowest risk tiers in settings both with (0.02%) and without (0.008%) universal screening (Table 3). Assuming that face-to-face screening takes, on average, 1 minute to conduct, automated screening for the lowest quantile alone would release 50 hours of clinician time per month. Regarding false positives, the NNS of 271 was feasible in the highest risk group. Suicidal ideation, even more common, had a better NNS of 23. For context, NNS was 418 for screening for dyslipidemia to prevent cardiac death when it was introduced.39 The present study provides further evidence that current models might be best suited to direct prevention to suicidal ideation and attempts—more common yet still in the causal pathway for death from suicide.5 A representative screening protocol is shown in Figure 2.
Regarding clinical decision-making, this study suggests clinical utility in identifying those who might not otherwise be assessed for new symptoms, symptomatic worsening, or life stressors not captured in the EHR. Moreover, linking automated risk stratification with evidence-based education on imminent risk, means assessment, and appropriate clinical triage might prove impactful even in the setting of low PPV.40
Regarding care coordination, risk models of rare outcomes enable longitudinal monitoring for those who might be at longer-term risk of attempts (eg, 1-2 years, not 30 days).40 Current care coordination in many systems relies on manually curated patient tracking and local workflows by individual clinics or clinicians. Automated risk stratification with decision support might ameliorate challenges such as responding to messages from patients unfamiliar to the nurse or covering clinician, prompting telephone calls to those identified at risk who miss scheduled appointments, and facilitating coordinated care across disparate clinical departments.
A useful model must be well calibrated to reflect reality. Published models often originate from data sets that sample from a larger population to reduce case imbalance given rare outcomes, such as suicide attempts. Once trained, such models risk poor calibration where real-world outcome prevalence does not reflect research. Our risk model is one such example. Calibration of our implemented model was improved with facile logistic recalibration using earlier data to recalibrate later predictions. More sophisticated means of diagnosing and correcting miscalibration and drift merit consideration.41-43
One model does not fit all. Model performance was lower in psychiatric compared with nonpsychiatric settings. This model was trained on a heterogeneous mix of medical records prior to 2017. First, prevalent mental health–related risk factors in psychiatric settings might worsen model discrimination compared with the training sample. Second, outcomes were rare in those settings, and cohorts were concomitantly smaller. Third, because psychiatric care is likely to address suicidality, care in those settings confounds pure prognostic accuracy. Such therapeutic differences might be well suited to counterfactual prediction in the future.44 Future work should include development and validation of site-specific predictive models—or models that will be “site aware” in deployment.
Without attention to these differences and analyses conducted here, intervention might be linked to misspecified models. Moreover, it becomes difficult to assess pure model performance once an intervention is prompted by it. Future iterations of these models (1) might be updated based on site-specific cohorts to improve performance and (2) should include the interventions available to prevent suicide within models themselves to prevent model drift even when the care delivered is accomplishing its intended purpose.27
Strengths of this study include a large, real-world vendor-supplied EHR setting. It incorporates prospective validation on natural cohorts of individuals receiving routine care over the study period. The study included visits across the breadth of a major academic medical center, which improves generalizability. These results complete only part of the fourth phase of action-informed artificial intelligence24 to help prevent suicide. We have designed our models with usability and feasibility in mind, but these have not yet been tested. Our modeling requires no additional screening (eg, the Patient Health Questionnaire–9 or Columbia Suicide Severity Rating Scale), although future versions might incorporate them to improve risk prognostication. Yet, impact will not be achieved without careful attention to the people and clinical processes to leverage these predictions to prevent suicide.
Limitations of this work include a single-center study with low outcome prevalence, particularly for suicide attempt. Predictors included in this model were chosen to optimize scalability and potential generalizability. They rely on structured data ubiquitous in EHRs (diagnostic codes, medications, past utilization, demographic characteristics). However, they also limit the model’s ability to predict suicide attempt risk by failing to capture important predictors recorded in unstructured free-text notes, for example, or outside the EHR. Implemented risk models were initially trained on a noncomprehensive subsample of the medical center population. Ascertainment is limited to care at a single medical center, so events occurring at external health systems were not captured. However, this bias is conservative for model performance analyses: suicide attempts occurring outside the study site count as false negatives, which, given the low number of cases, are far more likely to worsen apparent performance metrics than to improve them. Deaths from suicide were not ascertained in this study. Future work to improve ascertainment and continuously evaluate these models in production is paramount.
Multiple opportunities to expand this work remain. Better understanding of misclassification of risk will improve model performance and potential impact. Novel means of ascertaining suicidality both in and out of individual health systems through health information exchange—such as that available in the Veterans Health Administration; in large health systems, such as HCA Healthcare, Tenet Healthcare, or Kaiser Permanente; or in states, such as Connecticut45—might lead to improved model evaluation and improved performance. Through partnerships such as the Tennessee Department of Health–VUMC Experience,46 we are beginning to devise a system that would bridge the current gap preventing ascertainment of death from suicide.
Suicide prevention will not be achieved through a predictive model alone, regardless of its analytic performance. Pragmatic trials to study real-world effectiveness of these predictive models in concert with thoughtful, user-centered clinical decision support remain the path to achieving clinical impact in suicide prevention.
In this study, implementation of validated predictive models of suicide attempt risk showed reasonable performance at scale and feasible NNS for subsequent suicidal ideation or suicide attempt in a large clinical system. Calibration performance of research-derived models was improved with logistic calibration. Scalable, real-time automated prediction of risk of suicidality through multidisciplinary collaboration is an achievable goal for the growing number of predictive models in this domain. It requires careful pairing with low-cost, low-harm preventive strategies in a pragmatic trial to be evaluated for effectiveness in preventing suicidality in the future.
Accepted for Publication: January 20, 2021.
Published: March 12, 2021. doi:10.1001/jamanetworkopen.2021.1428
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2021 Walsh CG et al. JAMA Network Open.
Corresponding Author: Colin G. Walsh, MD, MA, Department of Biomedical Informatics, Vanderbilt University Medical Center, 2525 W End Ave, Ste 1475, Nashville, TN 37203 (email@example.com).
Author Contributions: Dr Walsh had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Walsh, Johnson, Clark, Novak, Robinson, Stead.
Acquisition, analysis, or interpretation of data: Walsh, Ripperger, Sperry, Harris, Fielstein.
Drafting of the manuscript: Walsh, Ripperger, Robinson.
Critical revision of the manuscript for important intellectual content: Walsh, Johnson, Sperry, Harris, Clark, Fielstein, Novak, Stead.
Statistical analysis: Walsh, Sperry.
Obtained funding: Walsh, Johnson, Stead.
Administrative, technical, or material support: Walsh, Ripperger, Sperry, Novak, Robinson.
Supervision: Walsh, Johnson, Clark, Stead.
Conflict of Interest Disclosures: Dr Walsh reported receiving grants from National Institutes of Health (NIH) during the conduct of the study; and receiving grants (research support for unrelated work) from IBM Watson Health, personal fees from the Southeastern Home Office Underwriters Association and Hannover Re, and equity from Sage AI, LLC, outside the submitted work. Dr Johnson reported receiving personal fees (member of scientific advisory board; honorarium) from Perception Health and Taubman Institute and personal fees (national advisory committee; stipend and travel reimbursement) from Robert Wood Johnson Foundation during the conduct of the study; and personal fees (Council of Councils) from NIH; personal fees (chair, Board of Scientific Counselors; stipend and travel reimbursement) from the National Library of Medicine; personal fees (chair, Informatics Advisory Committee; stipend and travel reimbursement) from the American Board of Pediatrics; and personal fees (member of the Leadership Consortium; meeting travel reimbursement) from the National Academy of Medicine outside the submitted work. Mr Ripperger reported receiving grants from NIH during the conduct of the study. Dr Novak reported receiving grants from the Stead Foundation during the conduct of the study; grants from the Military Suicide Research Consortium outside the submitted work; and salary support from IBM Corporation for research unrelated to this project. Ms Robinson reported receiving grants from NIH during the conduct of the study. 
Dr Stead reported receiving grants from NIH during the conduct of the study; serving as a member of a journal oversight committee and receiving meeting travel reimbursement from the American Medical Association; serving as a member of planning committee and receiving meeting travel reimbursement from the Computer Research Association; receiving personal fees (chair, National Committee on Vital & Health Statistics; paid as special government employee; and reimbursed for travel to committee meetings) from US Department of Health and Human Services/Centers for Disease Control and Prevention; receiving personal fees (member of board of directors, restricted stock grants and director fees) from HealthStream; serving as a member of National Academy of Sciences and National Academy of Medicine governing and study committees and receiving meeting travel reimbursement from National Academy of Sciences, Engineering and Medicine; receiving personal fees for grant reviews from Chan Zuckerberg Biohub; and receiving personal fees (member of strategic planning panel and scientific director review committee; received stipend and reimbursed for travel) from Health and Human Services/National Library of Medicine outside the submitted work. No other disclosures were reported.
Funding/Support: This work was supported by Evelyn Selby Stead Fund for Innovation, Vanderbilt University Medical Center (R01 MH121455: Distinguishing Clinical and Genetic Risk of Suicidal Ideation From Attempts to Inform Prevention; R01 MH116269: Leveraging Electronic Health Records for Pharmacogenomics of Psychiatric Disorders).
Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: This study required multidisciplinary collaboration of many experts, including a team from Vanderbilt Health Information Technology. We are grateful to the many leaders, project managers, developers, and staff that contributed to this work and the overall operational effort, including Stephan Heckers, MD, MSc, chair of the Department of Psychiatry and Behavioral Sciences at VUMC, and Jameson Norton, MBA, chief executive officer of Vanderbilt Psychiatric Hospital.