Importance
Communication about end-of-life care is a core clinical skill. Simulation-based training improves skill acquisition, but effects on patient-reported outcomes are unknown.
Objective
To assess the effects of a communication skills intervention for internal medicine and nurse practitioner trainees on patient- and family-reported outcomes.
Design, Setting, and Participants
Randomized trial conducted with 391 internal medicine and 81 nurse practitioner trainees between 2007 and 2013 at the University of Washington and Medical University of South Carolina.
Intervention
Participants were randomized to an 8-session, simulation-based, communication skills intervention (N = 232) or usual education (N = 240).
Main Outcomes and Measures
Primary outcome was patient-reported quality of communication (QOC; mean rating of 17 items rated from 0-10, with 0 = poor and 10 = perfect). Secondary outcomes were patient-reported quality of end-of-life care (QEOLC; mean rating of 26 items rated from 0-10) and depressive symptoms (assessed using the 8-item Personal Health Questionnaire [PHQ-8]; range, 0-24, higher scores worse) and family-reported QOC and QEOLC. Analyses were clustered by trainee.
Results
There were 1866 patient ratings (44% response) and 936 family ratings (68% response). The intervention was not associated with significant changes in QOC or QEOLC. Mean values for postintervention patient QOC and QEOLC were 6.5 (95% CI, 6.2 to 6.8) and 8.3 (95% CI, 8.1 to 8.5), respectively, compared with 6.3 (95% CI, 6.2 to 6.5) and 8.3 (95% CI, 8.1 to 8.4) for control conditions. After adjustment, comparing intervention with control, there was no significant difference in the QOC score for patients (difference, 0.4 points [95% CI, −0.1 to 0.9]; P = .15) or families (difference, 0.1 [95% CI, −0.8 to 1.0]; P = .81). There was no significant difference in QEOLC score for patients (difference, 0.3 points [95% CI, −0.3 to 0.8]; P = .34) or families (difference, 0.1 [95% CI, −0.7 to 0.8]; P = .88). The intervention was associated with significantly increased depression scores among patients of postintervention trainees (mean score, 10.0 [95% CI, 9.1 to 10.8]) compared with 8.8 (95% CI, 8.4 to 9.2) for control conditions; the adjusted model showed an intervention effect of 2.2 (95% CI, 0.6 to 3.8; P = .006).
Conclusions and Relevance
Among internal medicine and nurse practitioner trainees, simulation-based communication training compared with usual education did not improve quality of communication about end-of-life care or quality of end-of-life care but was associated with a small increase in patients’ depressive symptoms. These findings raise questions about skills transfer from simulation training to actual patient care and the adequacy of communication skills assessment.
Trial Registration
clinicaltrials.gov Identifier: NCT00687349
Observational studies have suggested that communication about end-of-life care is associated with decreased intensity of care, increased quality of life, and improved quality of dying.1,2 In addition, interventions that focus on communication about palliative and end-of-life care, using palliative care specialists, have demonstrated improved quality of life, decreased symptoms of depression, and reduced intensity of care at the end of life.3-5 Whether similar benefits can be obtained by training clinicians other than palliative care specialists in communication about palliative and end-of-life care remains unclear.
Simulation to learn skills for communicating bad news to patients with cancer forms the basis of a 4-day workshop for medical oncology fellows.6 This workshop has been associated with significant improvement in participants’ ability to deliver bad news and discuss transitions to palliative care. Clinicians can learn skills for communicating about palliative care in small-group facilitated settings using simulated patients and family members.6-10 A systematic review of communication skills interventions noted the effectiveness of interventions using simulation but observed that no studies have shown an effect on patient-reported outcomes.11
We conducted a randomized trial to examine whether a communication skills–building workshop aimed at internal medicine and nurse practitioner trainees, using simulation during which trainees practiced skills associated with palliative and end-of-life care communication, had any effect on patient-, family-, and clinician-reported outcomes. Our hypothesis was that this workshop would increase the discussion of palliative and end-of-life care by trainees and improve patient, family, and clinician ratings of the quality of communication about end-of-life care as well as the quality of end-of-life care.
Internal medicine residents, subspecialty fellows, and nurse practitioner trainees were randomized to the simulation-based intervention vs usual education. Randomization was at the level of the trainee, but the primary outcome was assessed at the level of patients clustered under trainees. Randomization was stratified by site, year of training, and profession and occurred in blocks of 4. Outcomes were assessed by surveying 3 types of evaluators: patients, families, and clinicians. Evaluators’ encounters with trainees occurred before or after the time of the intervention. Trainees could not be blinded to group assignment, but outcome evaluators and staff collecting evaluations were.
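For readers who want a concrete picture of the allocation scheme, the sketch below generates 1:1 assignments in permuted blocks of 4 within a single stratum. The permuted-block mechanism, function name, and stratum labels are illustrative assumptions; the trial's actual randomization procedure is described only at the level of detail given above.

```python
import random

def blocked_assignments(n_trainees, block_size=4, seed=None):
    """Generate 1:1 assignments within one stratum using permuted blocks.

    Each block of `block_size` holds an equal number of intervention and
    control slots in random order, keeping the groups balanced within the
    stratum as trainees enroll.
    """
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_trainees:
        block = ["intervention"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_trainees]

# Hypothetical stratum defined by site, year of training, and profession.
stratum = ("University of Washington", "PGY-1", "physician")
print(stratum, blocked_assignments(10, seed=42))
```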
Human subjects approval was obtained from the University of Washington and Medical University of South Carolina institutional review boards. Trainees provided written consent. Evaluators were provided an information sheet; we obtained a waiver for written documentation of consent. Race/ethnicity, an important predictor of attitudes toward end-of-life care, was based on self-reports using fixed categories.
Trainees were recruited from University of Washington and Medical University of South Carolina between 2007 and 2012. Eligible trainees included all internal medicine residents and fellows in pulmonary and critical care, oncology, geriatrics, nephrology, and palliative medicine subspecialties. Nurse practitioners were eligible if they were currently enrolled in, or had recently completed, training programs that included care for adults with life-threatening or chronic illnesses.
Patients were identified by screening medical records, identifying those who had encounters with an enrolled trainee. Encounters occurred between trainees and patients in primary care clinics or on prespecified inpatient services (eg, general medicine, medical intensive care unit, hematology-oncology). Eligible patients had a high likelihood of having a discussion about end-of-life care, and eligibility criteria included median survival of approximately 1 to 2 years: life-limiting illness (eg, metastatic or stage IV cancer, oxygen-dependent chronic obstructive pulmonary disease, stage III or IV heart failure, Child-Pugh class C liver disease) or comorbidities suggesting severe illness (score ≥5 on the Charlson Comorbidity Index12). We also included patients with documentation of communication about end-of-life care (palliative care consult or do not resuscitate order), an intensive care unit stay of 72 hours or longer, or age 80 years or older with a hospital stay of 72 hours or longer. For outpatients, we required 3 or more visits with the trainee to enhance opportunity to discuss end-of-life care.
We also required that all evaluators remember the trainee well enough to evaluate his or her communication skills; to aid recall, all surveys included the trainee’s photograph. We used in-person and mail-based recruitment procedures, with 3 contacts for nonrespondents. Patients could be contacted to evaluate up to 2 trainees and provided ratings between October 2007 and January 2013.
Family members were identified in 1 of 3 ways: by participating patients, who identified family involved with their care; as family of noncommunicative but otherwise eligible patients; or as family of eligible patients who died. Families could be contacted to evaluate up to 2 trainees and provided ratings between November 2007 and January 2013.
Clinician-evaluators included nurses and attending physicians who observed care provided by the trainee. Nurse-evaluators were identified through screening patient medical records and review of unit schedules. Physician-evaluators were faculty members identified through patient medical records or clinical schedules. Clinician-evaluators were not limited in the number of trainees they could evaluate and provided ratings between April 2008 and January 2013.
Timing of Evaluation Survey Distribution
Surveys were distributed to evaluators based on documented encounters between trainee and evaluator. Encounters for the preintervention phase occurred in the 6-month period preceding the workshop/control phase; encounters for the postintervention phase occurred in the 10 months following the workshop/control phase. The surveys did not reference a specific encounter but asked evaluators to assess trainees across all encounters. We did not require that the encounters include discussion of end-of-life care, because we hypothesized that the intervention would activate trainees to initiate such discussions.
The intervention was adapted from a residential workshop associated with improved communication skills for oncology fellows.6,13 Our intervention comprised eight 4-hour sessions led by 2 faculty: a physician and a nurse. A content outline and facilitator guide were developed. Each session included (1) a brief didactic overview, including a demonstration role-play by faculty; (2) skills practice using simulation (simulated patients, family, or clinicians); and (3) reflective discussions. Each session addressed a specific topic (eg, building rapport; giving bad news; talking about advance directives; nurse-physician conflict; conducting a family conference; do-not-resuscitate status and hospice; and talking about dying).6,14 The intervention used 2 patient stories that unfolded sequentially, starting with diagnosis of serious illness and ending with death. In a before-after analysis of these intervention trainees, the course was associated with significant improvements in communication skills regarding giving bad news and responding to emotion, as assessed by standardized patient encounters.15
Primary Outcome—Quality of Communication
The quality of communication (QOC) questionnaire was developed from qualitative interviews and focus groups with patients, families, and clinicians and is available online.16-18 It is a multi-item survey (18 items for patients and clinicians; 19 for family): 1 item measures the overall quality of communication, and the remaining items measure specific aspects of communication. Each item is rated from 0 (“poor”) to 10 (“absolutely perfect”). The instrument has acceptable internal consistency, and construct validity was supported through correlations with conceptually related measures (eg, number of discussions with the clinician about end-of-life care and extent to which the clinician knows the patient’s treatment preferences).17
For this study, we used a previously validated composite score constructed as the respondent’s mean score for valid responses to all ratings, after first recoding responses of “clinician didn’t do this” to 0.17 If the respondent omitted rating an item, this item contributed to neither numerator nor denominator. For example, a patient who indicated that the trainee had not performed 6 of 17 items, had a rating of 0 on 4 items, a rating of 6 on 5 items, and a rating of 10 on 2 items would receive a composite score of 2.94 ({[0 · 10]+[6 · 5]+[10 · 2]}/17). Although a minimal clinically significant difference (MCID) is not known for the quality of communication questionnaire, a 7-item subscale was responsive in a prior randomized trial of a communication intervention, showing a significant but small improvement (0.6 points, effect size = 0.21).19 In addition to the composite measure, we examined a single-item rating of overall communication.
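As a minimal illustration of this scoring rule (the function and input encoding are ours, not part of the instrument), the sketch below recodes “clinician didn’t do this” to 0, drops omitted items from both numerator and denominator, and reproduces the worked example above.

```python
def qoc_composite(responses):
    """Compute a QOC-style composite score from per-item responses.

    Each entry is an integer rating from 0-10, the string "didnt_do" for
    "clinician didn't do this" (recoded to 0), or None for an omitted item
    (excluded from both numerator and denominator).
    """
    valid = [0 if r == "didnt_do" else r for r in responses if r is not None]
    return sum(valid) / len(valid) if valid else None

# Worked example from the text: 6 items not performed (recoded to 0),
# 4 items rated 0, 5 items rated 6, and 2 items rated 10 -> 50/17 = 2.94.
example = ["didnt_do"] * 6 + [0] * 4 + [6] * 5 + [10] * 2
print(round(qoc_composite(example), 2))  # 2.94
```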
Secondary Outcomes—Quality of End-of-Life Care
The quality of end-of-life care (QEOLC) questionnaire is a multi-item (26 items for patients and families; 10 for clinicians) survey developed through qualitative studies for assessing the quality of clinician skill at providing end-of-life care.20-23 The instrument has acceptable internal consistency.24 Construct validity was supported through correlation with conceptually related measures: physician knowledge of palliative care; patient and family satisfaction with care; and nurse ratings of physician’s care.24 We used a composite measure constructed as the respondent’s mean for valid responses from all items, similar to the QOC questionnaire described above.
Symptoms of depression were measured using the 8-item Personal Health Questionnaire (PHQ-8), a widely used measure of depressive symptoms, appropriate for populations with chronic medical conditions.25,26 The PHQ-8 has excellent reliability, test-retest stability, and sensitivity and specificity,27 as well as demonstrated validity28 and responsiveness to interventions.29 PHQ-8 scores sum component symptoms (on a 4-point scale) and can range from 0 (no symptoms during the preceding 2 weeks) to 24 (8 symptoms experienced nearly every day), with high scores reflecting greater depression. A score was computed for all respondents who answered at least 7 items, with scores for patients answering only 7 items weighted to compensate for the missing item. The score was defined as missing if fewer than 7 items were answered. The MCID for the PHQ-8 is a 5-point change.30,31
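The scoring rule can be sketched as follows, assuming each item is coded 0 to 3; the proration for respondents answering exactly 7 items is implemented here by scaling the sum by 8/7, which is one plausible reading of “weighted to compensate for the missing item” rather than the study’s exact procedure.

```python
def phq8_score(items):
    """Score the PHQ-8 from 8 item responses (each 0-3, or None if skipped).

    All 8 items answered: simple sum (range 0-24). Exactly 7 answered: the
    sum is prorated by 8/7 to compensate for the missing item (assumed rule).
    Fewer than 7 answered: the score is treated as missing.
    """
    answered = [x for x in items if x is not None]
    if len(answered) < 7:
        return None
    total = sum(answered)
    if len(answered) == 7:
        total *= 8 / 7  # proration for the single missing item (assumption)
    return total

print(phq8_score([1, 2, 0, 3, 1, 1, 2, 0]))     # 10
print(phq8_score([1, 2, 0, 3, 1, 1, 2, None]))  # 10 * 8/7, about 11.43
```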
Functional status was measured with the 12-item Short-Form Health Survey (SF-12), which has been used with patients with chronic illness32 and older populations33 and provides a standard composite measure with good psychometric characteristics including internal reliability, test-retest stability, validity,34,35 and responsiveness.36 Valid responses to all 12 items are required for computation of the composite score. Scores on the physical component of the SF-12 can range from 10.5 to 70.1; for the mental component, the potential range is 7.8 to 72.0. For both components, higher scores represent better health. The MCID has been estimated between 4 and 7 points.37
The association of the intervention with all outcomes of interest was tested using regression models. Because trainee randomization was stratified, site and randomization strata (trainee type and level of training) were included as covariates in all models.
Data provided by patient-, family-, and clinician-evaluators were cross-classified, with some evaluators providing ratings for multiple trainees, and trainees receiving ratings from multiple evaluators. Because there was minimal clustering of trainees under patient- or family-evaluators, 1 survey was selected per evaluator, with selections favoring surveys maximizing the number of trainees evaluated. This allowed analysis using simple clustered models, with patient- and family-evaluators clustered under trainees. Each model included only trainees for whom there was at least 1 valid response on the outcome for both the preintervention and postintervention periods. Models regressed each outcome on study period (preintervention or postintervention), randomization group, and the primary predictor of interest: an interaction term for study period and randomization group. For patients and families, scores on the QOC and QEOLC questionnaires demonstrated ceiling effects; therefore, these scores were modeled as censored variables using Tobit regression. All other outcomes were modeled with robust linear regression. All patient and family models were based on restricted maximum likelihood estimation. In addition to the primary analyses, we performed 2 post hoc analyses on patient QOC scores, restricting the samples to patients whose care was provided in the outpatient setting or patients who rated their own health status as “poor” on a single health-status question.
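To make the censored-regression step concrete, the sketch below writes a Tobit likelihood for an outcome with a ceiling at the top of the 0-10 scale and fits it to simulated data with a period-by-group interaction as the intervention effect. It is a simplified single-level illustration using generic SciPy optimization; it does not reproduce the study's clustered, restricted maximum likelihood models or its software.

```python
import numpy as np
from scipy import stats, optimize

def tobit_ceiling_nll(params, X, y, upper=10.0):
    """Negative log-likelihood for a Tobit model with right-censoring at `upper`.

    params = [coefficients..., log_sigma]; X includes an intercept column.
    Observations below the ceiling contribute a normal density term; those at
    the ceiling contribute the probability that the latent score exceeds it.
    """
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    at_ceiling = y >= upper
    ll = np.where(
        at_ceiling,
        stats.norm.logcdf((mu - upper) / sigma),
        stats.norm.logpdf((y - mu) / sigma) - log_sigma,
    )
    return -ll.sum()

# Simulated (not study) data: outcome regressed on period, group, and their
# interaction; the interaction coefficient is the effect of interest.
rng = np.random.default_rng(0)
n = 500
period = rng.integers(0, 2, n)   # 0 = preintervention, 1 = postintervention
group = rng.integers(0, 2, n)    # 0 = control, 1 = intervention
latent = 6.0 + 0.2 * period + 0.1 * group + 0.4 * period * group + rng.normal(0, 2.5, n)
y = np.minimum(latent, 10.0)     # ceiling effect at the top of the scale
X = np.column_stack([np.ones(n), period, group, period * group])
start = np.r_[y.mean(), 0.0, 0.0, 0.0, np.log(y.std())]
fit = optimize.minimize(tobit_ceiling_nll, start, args=(X, y), method="BFGS")
print(fit.x[:-1])  # intercept, period, group, and period-by-group coefficients
```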
We retained the cross-clustered design of clinician data, given the greater clustering of trainees under evaluators. For these analyses, a clinician-evaluator could evaluate trainees in one or both randomization groups. Each level-1 model regressed the outcome on study period; the level-2 model regressed the level-1 intercept on randomization group and the covariates for randomization strata and regressed the level-1 slope on randomization group only. Of primary interest was the coefficient for the level-1 slope regressed on randomization group. We modeled all clinician outcomes with robust linear regression, using full maximum likelihood. Models were based on surveys with complete data on all predictors and the outcome of interest.
We conducted an additional analysis using propensity scoring to weight patient scores on the primary outcome to examine for potential nonresponse bias. This model was weighted to make surveys used in the analysis representative of all surveys requested from patients (eAppendix in Supplement). These analyses showed no evidence that nonresponse or exclusion of surveys from the analysis produced bias in the primary study finding (eAppendix and eTable 1 in Supplement).
Sample size was determined by the number of trainees in the 2 institutions. Power to find a 2-point change and large effect size (γ = 0.80) on the QOC questionnaire was estimated as 0.80, assuming 200 trainees per group; intraclass correlation coefficient estimates of 0.11 for patients and 0.35 for families; 4 or 5 evaluators per trainee; and 2-sided α = .05. The 2-point change on the QOC questionnaire was based on the hypothesis that the intervention would improve at least 2 QOC items by 1 point.
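For context on how the intraclass correlation coefficients and the number of evaluators per trainee enter a clustered power calculation, the sketch below computes the standard design effect and the resulting effective number of evaluations per group. This is a generic illustration using the figures quoted above; it is not a reconstruction of the study's actual power analysis.

```python
def design_effect(icc, evaluators_per_trainee):
    """Variance inflation from clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (evaluators_per_trainee - 1) * icc

def effective_n(trainees_per_group, evaluators_per_trainee, icc):
    """Evaluations per group after discounting for clustering under trainees."""
    total = trainees_per_group * evaluators_per_trainee
    return total / design_effect(icc, evaluators_per_trainee)

# Figures from the text: 200 trainees per group, 4 to 5 evaluators per
# trainee, ICC 0.11 for patient ratings and 0.35 for family ratings.
for icc, label in [(0.11, "patients"), (0.35, "families")]:
    low, high = effective_n(200, 4, icc), effective_n(200, 5, icc)
    print(f"{label}: effective n per group ~{low:.0f} to {high:.0f}")
```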
All inferential statistics were based on 2-sided tests, with P < .05 considered statistically significant. We used IBM SPSS version 19, Mplus version 7 (http://www.statmodel.com), and HLM version 7.0 (Scientific Software International Inc) for all analyses.
We approached 1068 eligible trainees, of whom 472 (44%) were randomized (Figure 1). Participation rates were higher for physicians than for nurse practitioners (55% vs 18%; P < .001). Among physicians, participation rates were higher for first-year residents than for those in later postgraduate years (81% vs 39%; P < .001) and for women than for men (60% vs 52%; P = .04). Participation rates were also higher for non-Hispanic whites compared with racial/ethnic minorities (61% vs 52%; P = .03). Of the 406 trainees who completed the study, 184 (45%) were randomized to the intervention. Characteristics of the randomized trainees are shown in Table 1.
We received 1866 patient evaluations completed by 1717 patients evaluating 345 trainees: 1569 patients evaluated 1 trainee and 148 patients evaluated 2 trainees. We received 936 surveys completed by 898 family respondents, evaluating 295 trainees: 861 evaluating 1 trainee and 37 evaluating 2 trainees. We also received 2756 surveys completed by 890 clinicians evaluating 325 trainees: 360 evaluating 1 trainee, 176 evaluating 2 trainees, 345 evaluating 3 to 15 trainees, and 9 evaluating 16 to 27 trainees. Table 2 shows characteristics of patient-, family-, and clinician-evaluators.
Evaluator response rates were calculated based on the surveys sent rather than individual participants and excluded from the denominator respondents who indicated that they did not recognize the trainee. The rates were 44% of patient surveys, 68% of family surveys, and 57% of clinician surveys. If it is further assumed that the same proportions of nonrespondents and respondents did not recognize the trainee, the estimated response rates are 56% of patient surveys, 74% of family surveys, and 64% of clinician surveys.
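Because the raw survey counts are not given here, the arithmetic behind these two rates can be illustrated with placeholder numbers: the observed rate drops respondents who did not recognize the trainee from the denominator, while the adjusted rate assumes nonrespondents failed to recognize the trainee in the same proportion as respondents. The counts below are hypothetical and chosen only to show the calculation.

```python
def response_rates(sent, returned, returned_not_recognized):
    """Observed and adjusted response rates for one evaluator group.

    Observed: completed evaluations / (surveys sent - respondents who said
    they did not recognize the trainee). Adjusted: assumes the same fraction
    of all recipients, respondents and nonrespondents alike, did not
    recognize the trainee.
    """
    completed = returned - returned_not_recognized
    observed = completed / (sent - returned_not_recognized)
    frac_not_recognized = returned_not_recognized / returned
    adjusted = completed / (sent * (1 - frac_not_recognized))
    return observed, adjusted

# Hypothetical counts for illustration only (not the study's actual counts).
obs, adj = response_rates(sent=1000, returned=500, returned_not_recognized=100)
print(f"observed {obs:.0%}, adjusted {adj:.0%}")  # observed 44%, adjusted 50%
```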
Among patients, response rates differed according to the eligibility criteria, with significantly lower rates for patients in hospice care (27% vs 42%; P < .001), those who had documented communication about end-of-life care (34% vs 42%; P = .002), inpatients older than 80 years (31% vs 42%; P < .001), and those with cancer (36% vs 42%; P = .002) or end-stage liver disease (32% vs 41%; P = .01). Response rates were also lower for minority groups (self-reported) compared with white/non-Hispanic (38% vs 44%; P < .001) and for patients recruited from the inpatient setting compared with the outpatient setting (37% vs 66%; P < .001). The remaining eligibility criteria, patient sex, and study period were not associated with response rates (eTable 2 in Supplement).
Family members were significantly less likely to complete surveys as a result of the patient’s death (29% vs 78%; P < .001) or if the patient was a member of a racial/ethnic minority group (60% vs 69%; P = .003). Family member response rates were not associated with any other patient characteristics or study period (eTable 3 in Supplement).
Among clinician-evaluators, physicians were significantly more likely to return surveys than nurses (65% vs 52%; P < .001). There was no evidence of differential response rates by sex of clinician-evaluator or trainee, trainee type, or setting (eTable 4 in Supplement).
Primary Outcome—QOC Scores
The mean QOC score was 6.5 (95% CI, 6.2 to 6.8) on postintervention patients’ surveys for intervention trainees, compared with 6.3 (95% CI, 6.2 to 6.5) on surveys for all other patient groups (ie, patients of control trainees from both periods and preintervention patients of intervention trainees). After covariate adjustment, there was no significant association between the intervention and QOC score (Table 3 and eTable 5 in Supplement). For the single-item rating of overall QOC, mean scores were 8.4 (95% CI, 8.1 to 8.7) on postintervention ratings of intervention trainees and 8.5 (95% CI, 8.3 to 8.6) for all other ratings. After covariate adjustment, there were no significant differences associated with the intervention. Scores on the QOC questionnaire (both the total score and single-item rating) were significantly higher at the Medical University of South Carolina than at the University of Washington and significantly lower for first-year residents than for other trainees (Table 3).
To explore potential subgroups for whom end-of-life discussions might be more feasible or relevant, we performed 2 post hoc analyses, restricting the sample to outpatients and to patients who rated their health status as “poor” on a single-item health-status question. The intervention was not associated with improvement in QOC score among outpatients (b = 0.041 [95% CI, −1.36 to 1.44]) but was associated with significant improvement in QOC score among patients who rated their health status as “poor” (b = 1.430 [95% CI, 0.28 to 2.58]).
The family- and clinician-rated QOC scores were not associated with the intervention (Table 3). For family surveys, single-item ratings were significantly higher at the Medical University of South Carolina than at the University of Washington, but there was no association with training year; neither site nor training year was associated with differences in QOC scores (eTable 6 in Supplement). For clinician surveys, first-year residents had significantly lower scores on both the QOC score and single-item rating, but site was not associated with either (eTable 7 in Supplement). Propensity modeling showed no evidence that nonresponse or exclusion of surveys from the analysis produced bias in the primary study finding (eTable 1).
Secondary Outcome—QEOLC Scores
Findings for the QEOLC score showed similar results (Table 3). For patient ratings, the mean score for trainees after the intervention was 8.3 (95% CI, 8.1 to 8.5), compared with 8.3 (95% CI, 8.1 to 8.4) for all other surveys. After covariate adjustment there was no association with the intervention, but there was a significant association with study site (higher at Medical University of South Carolina) and training year (lowest for first-year residents). Family ratings showed no association with the intervention and also showed no association with study site or training year. Clinician ratings showed no association with the intervention and no association with study site; however, first-year residents had significantly lower scores.
Patients’ depressive symptoms were significantly associated with the intervention (Table 4). The mean score on the PHQ-8 for patients of trainees who had received the intervention was 10.0 (95% CI, 9.1 to 10.8), compared with 8.8 (95% CI, 8.4 to 9.2) for patients of control trainees and preintervention trainees in the intervention group. After covariate adjustment, the intervention was associated with a significant increase in depressive symptoms, with a preintervention-to-postintervention increase in the intervention group of 2.2 PHQ-8 points (95% CI, 0.6 to 3.8) compared with the control group (less than the MCID of 5 points). There was no association with study site, but depression scores for patients of the most senior trainees were significantly lower than those for patients of first-year residents. The intervention was not associated with depression scores in family respondents.
Patients’ SF-12 physical and mental component scores were not associated with the intervention. The mean physical status score for patients of postintervention trainees was 30.8 (95% CI, 29.4 to 32.1), compared with 29.7 (95% CI, 29.0 to 30.4) for patients of control trainees and patients of preintervention trainees. Means for the mental component scores for the 2 groups were 43.7 (95% CI, 42.1 to 45.2) and 43.7 (95% CI, 42.9 to 44.4). Adjusted models showed no significant intervention effect (Table 4).
Communication skills can be taught using simulation, but to our knowledge no previous studies have examined patient-reported outcomes of such training.6-11 We conducted a randomized trial of a simulation-based communication skills-building workshop for internal medicine residents, subspecialty fellows, and nurse practitioners that assessed the effects of this intervention on patient-, family-, and clinician-reported outcomes. In another publication, we showed that this intervention was associated with acquisition of new skills in delivering bad news and responding to emotion, as assessed by standardized-patient encounters.15 In this study, we found there was no significant change in ratings of QOC or QEOLC as assessed by patients, family, or clinicians. We found significant improvement in ratings of QOC for patients who assessed their health status as “poor,” for whom communication about palliative care may be particularly relevant; however, as a post hoc subgroup analysis, this must be interpreted with caution.
A possible explanation for the absence of change in patient and family ratings of QOC and QEOLC may be linked to the difficulties that untrained or unprompted patients or family have in accurately rating clinician communication or end-of-life care. Although an intervention to identify and provide feedback related to patient-specific barriers to communication about end-of-life care was associated with a significant increase in patient-rated quality of end-of-life communication, the effect size was small.19 These measures of communication and care are relatively new, and their responsiveness, sensitivity, and MCID are not known.17,24 Ratings by trained standardized patients are more reliable for assessing communication skills than ratings by untrained patients.38,39 Similarly, a randomized trial of a communication skills workshop for oncologists showed improvement in communication skills as assessed by trained raters but no improvement in patient ratings.7,40 Therefore, our findings may not negate the value of using simulation for communication skills training (which appears to have improved trainees’ communication skills15) but suggest that patients and family members may require training or prompting to provide accurate assessment of these skills. It is also possible that the time lag between evaluators’ working with the trainee and completing the evaluation affected evaluators’ ability to rate accurately or that patient contact with multiple clinicians diluted the effect of a trained clinician. It is also possible that the intervention was not effective despite improved scores with standardized patients15 or that improvement in communication skills in a standardized-patient encounter does not translate to actual patient care.
The increase in patients’ depressive symptoms associated with the intervention is noteworthy. Although statistically significant, the 2.2-point change in PHQ-8 scores is less than the MCID and is the result of one among multiple comparisons. However, patients could experience depressive symptoms or feelings of sadness as a result of discussion about end-of-life care. An observational study showed that patients’ understanding of an incurable prognosis was associated with lower patient ratings of their physicians’ communication,41 supporting the possibility that increasing patients’ awareness of prognosis may trigger negative experiences. Our finding that the increase in patients’ depressive symptoms was significantly greater for first-year residents suggests this increase might be associated with the skill level of the clinician having the discussion. Future studies should explore the effect of discussing end-of-life care on patients’ psychological symptoms and satisfaction with care. If these findings are substantiated, studies should also consider ways to mitigate negative effects while achieving the positive effects of these discussions.3-5
The randomized design of our study and the number of participants are important strengths, but additional limitations should be considered. First, the participation rates were fairly high for physicians but lower for nurse practitioners, which may affect the generalizability of the findings. In addition, the generalizability of these findings to training at other institutions is not certain. Second, participation rates for evaluators may have introduced nonresponse bias. Sicker patients were less likely to participate, limiting our ability to assess the intervention among patients most likely to have an end-of-life discussion. Third, because evaluations were completed up to 10 months after the intervention, there could be shorter-term benefits that were not identified.
Among internal medicine and nurse practitioner trainees, simulation-based communication skills training compared with usual education did not improve quality of communication about end-of-life care or quality of end-of-life care but was associated with a small increase in patients’ depressive symptoms. These findings raise questions about skills transfer from simulation training to actual patient care and the adequacy of communication skills assessment.
Corresponding Author: J. Randall Curtis, MD, MPH, Division of Pulmonary and Critical Care, Box 359762, Harborview Medical Center, University of Washington, Seattle, WA 98104 (jrc@u.washington.edu).
Author Contributions: Drs Curtis and Engelberg had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Curtis, Back, Shannon, Doorenbos, Kross, Engelberg.
Acquisition of data: Curtis, Back, Ford, Shannon, Doorenbos, Kross, Edlund, Arnold, O’Connor, Engelberg.
Analysis and interpretation of data: Curtis, Back, Ford, Downey, Shannon, Doorenbos, Reinke, Feemster, Arnold, Engelberg.
Drafting of the manuscript: Curtis, Back, Downey, Kross, Engelberg.
Critical revision of the manuscript for important intellectual content: Curtis, Back, Ford, Downey, Shannon, Doorenbos, Reinke, Feemster, Edlund, Arnold, O’Connor, Engelberg.
Statistical analysis: Downey.
Obtained funding: Curtis, Shannon, Engelberg.
Administrative, technical, or material support: Curtis, Back, Ford, Downey, Shannon, Doorenbos, Kross, Reinke, Edlund, O’Connor, Engelberg.
Study supervision: Back, Ford, Shannon, Engelberg.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Drs Feemster and Engelberg reported receiving salary support from a career development award from the National Heart, Lung, and Blood Institute. Dr Reinke reported receiving grants or grants pending from the Department of Veterans Affairs and the National Palliative Care Research Center and receiving payment for development of educational presentations from the European Respiratory Society. No other authors reported disclosures.
Funding/Support: This study was supported by the National Institute of Nursing Research of the National Institutes of Health (R01NR009987).
Role of the Sponsor: The National Institutes of Health had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Disclaimer: The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Correction: This article was corrected online February 11, 2014, for incorrect language in Figure 1.
1. Wright AA, Zhang B, Ray A, et al. Associations between end-of-life discussions, patient mental health, medical care near death, and caregiver bereavement adjustment. JAMA. 2008;300(14):1665-1673.
2. Zhang B, Wright AA, Huskamp HA, et al. Health care costs in the last week of life: associations with end-of-life conversations. Arch Intern Med. 2009;169(5):480-488.
3. Temel JS, Greer JA, Muzikansky A, et al. Early palliative care for patients with metastatic non-small-cell lung cancer. N Engl J Med. 2010;363(8):733-742.
4. Greer JA, Pirl WF, Jackson VA, et al. Effect of early palliative care on chemotherapy use and end-of-life care in patients with metastatic non-small-cell lung cancer. J Clin Oncol. 2012;30(4):394-400.
5. Bakitas M, Lyons KD, Hegel MT, et al. Effects of a palliative care intervention on clinical outcomes in patients with advanced cancer: the Project ENABLE II randomized controlled trial. JAMA. 2009;302(7):741-749.
6. Back AL, Arnold RM, Baile WF, et al. Efficacy of communication skills training for giving bad news and discussing transitions to palliative care. Arch Intern Med. 2007;167(5):453-460.
7. Fallowfield L, Jenkins V, Farewell V, Saul J, Duffy A, Eves R. Efficacy of a Cancer Research UK communication skills training model for oncologists: a randomised controlled trial. Lancet. 2002;359(9307):650-656.
8. Szmuilowicz E, Neely KJ, Sharma RK, Cohen ER, McGaghie WC, Wayne DB. Improving residents’ code status discussion skills: a randomized trial. J Palliat Med. 2012;15(7):768-774.
9. Clayton JM, Adler JL, O’Callaghan A, et al. Intensive communication skills teaching for specialist training in palliative medicine: development and evaluation of an experiential workshop. J Palliat Med. 2012;15(5):585-591.
10. Alexander SC, Keitz SA, Sloane R, Tulsky JA. A controlled trial of a short course to improve residents’ communication with patients at the end of life. Acad Med. 2006;81(11):1008-1012.
11. Gysels M, Richardson A, Higginson IJ. Communication training for health professionals who care for patients with cancer: a systematic review of training methods. Support Care Cancer. 2005;13(6):356-366.
12. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.
13. Fryer-Edwards K, Arnold RM, Baile W, Tulsky JA, Petracca F, Back A. Reflective teaching practices: an approach to teaching communication skills in a small-group setting. Acad Med. 2006;81(7):638-644.
14. Jackson VA, Back AL. Teaching communication skills using role-play: an experience-based guide for educators. J Palliat Med. 2011;14(6):775-780.
15. Bays A, Engelberg RA, Back AL, et al. Interprofessional communication skills training for serious illness: evaluation of small group, simulated patient interventions [published online November 1, 2013]. J Palliat Med. doi:10.1089/jpm.2013.0318
16. Curtis JR, Engelberg RA, Nielsen EL, Au DH, Patrick DL. Patient-physician communication about end-of-life care for patients with severe COPD. Eur Respir J. 2004;24(2):200-205.
17. Engelberg R, Downey L, Curtis JR. Psychometric characteristics of a quality of communication questionnaire assessing communication about end-of-life care. J Palliat Med. 2006;9(5):1086-1098.
18. Wenrich MD, Curtis JR, Shannon SE, Carline JD, Ambrozy DM, Ramsey PG. Communicating with dying patients within the spectrum of medical care from terminal diagnosis to death. Arch Intern Med. 2001;161(6):868-874.
19. Au DH, Udris EM, Engelberg RA, et al. A randomized trial to improve communication about end-of-life care among patients with COPD. Chest. 2012;141(3):726-735.
20. Curtis JR, Wenrich MD, Carline JD, Shannon SE, Ambrozy DM, Ramsey PG. Understanding physicians’ skills at providing end-of-life care: perspectives of patients, families, and health care workers. J Gen Intern Med. 2001;16(1):41-49.
21. Curtis JR, Wenrich MD, Carline JD, Shannon SE, Ambrozy DM, Ramsey PG. Patients’ perspectives on physician skill in end-of-life care: differences between patients with COPD, cancer, and AIDS. Chest. 2002;122(1):356-362.
22. Wenrich MD, Curtis JR, Ambrozy DM, et al. Dying patients’ need for emotional support and personalized care from physicians: perspectives of patients with terminal illness, families, and health care providers. J Pain Symptom Manage. 2003;25:236-246.
23. Carline JD, Curtis JR, Wenrich MD, Shannon SE, Ambrozy DM, Ramsey PG. Physicians’ interactions with health care teams and systems in the care of dying patients: perspectives of dying patients, family members, and health care professionals. J Pain Symptom Manage. 2003;25(1):19-28.
24. Engelberg RA, Downey L, Wenrich MD, et al. Measuring the quality of end-of-life care. J Pain Symptom Manage. 2010;39(6):951-971.
25. Martin A, Rief W, Klaiberg A, Braehler E. Validity of the Brief Patient Health Questionnaire Mood Scale (PHQ-9) in the general population. Gen Hosp Psychiatry. 2006;28(1):71-77.
26. Löwe B, Gräfe K, Kroenke K, et al. Predictors of psychiatric comorbidity in medical outpatients. Psychosom Med. 2003;65(5):764-770.
27. Löwe B, Spitzer RL, Gräfe K, et al. Comparative validity of three screening questionnaires for DSM-IV depressive disorders and physicians’ diagnoses. J Affect Disord. 2004;78(2):131-140.
28. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613.
29. Ell K, Xie B, Quon B, Quinn DI, Dwight-Johnson M, Lee PJ. Randomized controlled trial of collaborative care management of depression among low-income patients with cancer. J Clin Oncol. 2008;26(27):4488-4496.
30. Löwe B, Unützer J, Callahan CM, Perkins AJ, Kroenke K. Monitoring depression treatment outcomes with the Patient Health Questionnaire-9. Med Care. 2004;42(12):1194-1201.
31. Kroenke K, Spitzer RL, Williams JB, Löwe B. The Patient Health Questionnaire Somatic, Anxiety, and Depressive Symptom Scales: a systematic review. Gen Hosp Psychiatry. 2010;32(4):345-359.
32. McBurney CR, Eagle KA, Kline-Rogers EM, et al. Health-related quality of life in patients 7 months after a myocardial infarction: factors affecting the Short Form-12. Pharmacotherapy. 2002;22(12):1616-1622.
33. Reeder BA, Chad KE, Harrison EL, et al. Saskatoon in motion: class- versus home-based exercise intervention for older adults with chronic health conditions. J Phys Act Health. 2008;5(1):74-87.
34. Ware J Jr, Kosinski M, Keller SD. A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity. Med Care. 1996;34(3):220-233.
35. Haywood KL, Garratt AM, Fitzpatrick R. Quality of life in older people: a structured review of generic self-assessed health instruments. Qual Life Res. 2005;14(7):1651-1668.
36. Frosch DL, Rincon D, Ochoa S, Mangione CM. Activating seniors to improve chronic disease care: results from a pilot intervention study. J Am Geriatr Soc. 2010;58(8):1496-1503.
37. Hopman WM, Harrison MB, Coo H, Friedberg E, Buchanan M, VanDenKerkhof EG. Associations between chronic disease, age and physical and mental health status. Chronic Dis Can. 2009;29(3):108-116.
38. Fiscella K, Franks P, Srinivasan M, Kravitz RL, Epstein R. Ratings of physician communication by real and standardized patients. Ann Fam Med. 2007;5(2):151-158.
39. Roter DL, Hall JA, Kern DE, Barker LR, Cole KA, Roca RP. Improving physicians’ interviewing skills and reducing patients’ emotional distress: a randomized clinical trial. Arch Intern Med. 1995;155(17):1877-1884.
40. Shilling V, Jenkins V, Fallowfield L. Factors affecting patient and clinician satisfaction with the clinical consultation: can communication skills training for clinicians improve satisfaction? Psychooncology. 2003;12(6):599-611.
41. Weeks JC, Catalano PJ, Cronin A, et al. Patients’ expectations about effects of chemotherapy for advanced cancer. N Engl J Med. 2012;367(17):1616-1625.