Key Points
Can a low-cost, easily scaled, end-of-life conversation game motivate underserved African American individuals to engage in advance care planning?
This national mixed-methods cohort study reaching 384 underserved African American individuals found that high rates of advance care planning behavior were associated with participation in game events at community venues.
The end-of-life conversation game may be a useful tool for engaging underserved African American communities in advance care planning, a step toward reducing health disparities related to end-of-life care.
Less than 25% of African American individuals have completed advance directives and are thus vulnerable to poor end-of-life care. Low-cost interventions are needed to increase engagement in advance care planning (ACP).
To investigate whether an end-of-life conversation game motivates African American attendees to engage in ACP and to assess whether the game is well received and endorsed.
Attendance at an end-of-life conversation game (Hello) played in groups of 4 to 6 participants for 60 minutes.
Design, Setting, and Participants
Prospective, mixed-methods cohort study conducted from 2018 to 2019 with a 3- to 11-month follow-up interview. Game events were held in 53 community venues across the US; 15 were purposively sampled for onsite research procedures. Of 428 attendees at purposively sampled sites, 386 (90%) consented to research procedures (6 attendees were removed from analysis for protocol deviation). Of 367 attendees who provided accurate contact information, 232 (63%) were contacted, and 220 were included in follow-up analyses.
Main Outcomes and Measures
The primary outcome was advance directive completion rates after the intervention. Secondary outcomes included rates of other ACP behaviors, ACP engagement, conversation satisfaction and realism, and participants’ Net Promoter Score (a measure of endorsement). Follow-up telephone interviews explored the game experience and relevant ACP behaviors of attendees.
Of 380 individuals who participated (mean [SD] age, 62.2 [13.8] years; 304 [80%] were female, and 348 [92%] were African American), none withdrew because of an adverse event. After the intervention, 91 of 220 attendees (41%) completed a new advance directive; 176 of 220 attendees (80%) discussed end-of-life wishes with loved ones, and 214 of 219 attendees (98%) completed at least 1 ACP behavior. There was a moderate increase in the self-efficacy domain on the ACP Engagement Survey (mean [SD] change from before to after the game, 0.54 [0.98]; P < .001). The mean (SD) conversation satisfaction score was 6.21 (0.93) (range, 1-7, with 7 being highest satisfaction), and the overall Net Promoter Score was 57.89 (range, −100 to 100, with 100 being highest endorsement). Interviews revealed 5 themes about the game: (1) it was a useful forum for ACP; (2) it provided new information and perspective; (3) it was emotionally beneficial; (4) it increased appreciation for ACP; and (5) it empowered and motivated participants to perform ACP. Mixed-methods integration showed convergence across data sets.
Conclusions and Relevance
Among a nationwide sample of African American individuals, the end-of-life conversation game appeared to be well received and was associated with high rates of ACP behavior. This low-cost and scalable tool may help reduce health disparities associated with end-of-life care.
Underserved populations, particularly African American communities, are vulnerable to low-quality end-of-life care.1 Compared with white individuals living in the United States, African American individuals are less likely to receive end-of-life care aligned with their preferences2-4 and are less likely to receive hospice services.5,6 Such disparities can be addressed in part by advance care planning (ACP)—a process involving conversations about values and preferences for end-of-life care, documentation in advance directives (ADs), and periodic reviews or updates.7 The completion of an AD is associated with reduced unwanted end-of-life medical interventions,8-10 increased hospice use,11 and decreased psychological distress9,12 and may reduce end-of-life costs.13-15
While the percentage of individuals in the US engaging in ACP has nearly doubled to approximately 60% in the last decade, among African American individuals, it remains stagnant at less than 25%.1,16-19 Most strategies for increasing ACP involve resource-intensive 1-to-1 encounters with clinicians, an approach not easily scaled.20,21 Furthermore, traditional approaches to ACP neglect the 2 most well-documented barriers among underserved populations: mistrust of the health care system22-24 and reluctance to discuss dying.22,24 Our team sought to address these issues by evaluating an inexpensive and easily disseminated intervention—a serious game that promotes ACP conversations by combining an important topic with an enjoyable activity to help overcome reluctance to discuss death and dying.25-27
In prior research, participants have reported that the game’s open-ended questions prompted in-depth discussions of values and preferences about end-of-life care,25-29 with 98% subsequently performing at least 1 ACP behavior (eg, AD completion or discussing end-of-life issues with loved ones).25,26,28 However, these studies were conducted primarily in white and South Asian communities. The present study examined the feasibility and acceptability of the game in underserved African American populations and explored whether the game empowered them to complete ACP.
To overcome barriers associated with skepticism about ACP and distrust of the health care system, we developed a pragmatic, community-based delivery model leveraging social networks. We hypothesized that the game would be highly endorsed and engaging for underserved African American communities.
This was a nationally scaled, prospective, mixed-methods cohort study. The primary outcome was completion of a new AD or review or an update of an existing AD within 3 to 11 months after finishing the game. Although a randomized clinical trial may permit conclusions regarding causation, project organizers expressed preferences for giving all participating communities access to the intervention and thus adopted the present mixed-methods cohort design. This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cohort studies and COREQ guidelines for qualitative data and was registered at ClinicalTrials.gov (NCT03456921). The Penn State Hershey Institutional Review Board approved the protocol. Participants provided verbal informed consent, which fulfilled the consent criteria set forth by the institutional review board for this minimal risk study, after reviewing a Summary Explanation of Research. Participants received a $20 gift card for completing the study activities after finishing the game.
Community-Based Delivery Model and Sampling
The Hello Project was a national initiative that engaged geographically diverse individuals from underserved communities. Partnering with the nonprofit Hospice Foundation of America, we used a community-based delivery model to host game events. Influential community organizations (eg, places of worship and community centers) were recruited as hosts if they had experience engaging underserved communities. Of 63 applicants recruited through press releases and email listservs and interviewed via telephone, 53 were selected based on community connections and demographic considerations. All hosts represented underserved communities defined by the National Institutes of Health as “including black/African Americans, socioeconomically disadvantaged, and rural communities.”30 On the basis of funder priority and the desire to address unmet ACP needs in these communities, 17 sites in African American communities were purposively sampled for onsite research based on geographic region and hosts’ prior outreach success. Hosts underwent training on running game events and managing logistics (eg, inviting participants, arranging venues, and introducing the game). Two sites were unable to schedule events within the project timeline. Research staff traveled to the remaining 15 sites to obtain informed consent and collect data. Taking a conservative approach, we excluded data from nonpurposively sampled sites because potential procedural variations could not be rigorously ruled out.
Onsite data were collected via paper forms and entered into a secure, electronic REDCap database. Telephone interviews were audiorecorded, and responses were entered into REDCap.
The full scope of the project involved 15 purposively sampled sites and 38 nonpurposively sampled sites in 27 states. In total, 1122 individuals participated from 4 US regions: the Northeastern (n = 8), Southern (n = 24), Midwestern (n = 11), and Western (n = 10) regions. Urban (n = 32) and rural (n = 21) sites were included. Recruitment occurred from May to November 2018; follow-up calls were completed by September 2019. Differences between purposively and nonpurposively sampled sites were procedural in nature because purposively sampled sites had an onsite research assistant (eAppendix 1 in the Supplement). Only data and procedures from purposively sampled sites are reported hereafter.
Hosts advertised events using institutional review board–approved fliers and newsletters. Participants were research eligible if they attended a game event and self-reported being 18 years of age or older. Those who did not speak English or self-reported difficulties with hearing or speaking were excluded from the analysis but were invited to play the game absent research questionnaires.
Before the intervention, the onsite research team administered questionnaires on demographic characteristics, health status, experience with medical decision-making, and ACP engagement. Hosts opened the event with a scripted greeting providing background about ACP and explaining the event agenda and game rules (eTable 1 in the Supplement). The game was then played for 60 minutes in groups of 4 to 6 participants, using a booklet of 32 open-ended questions (published previously).27,29,31 A player read aloud a question (eg, “In order to provide you with the best care possible, what 3 nonmedical facts should your doctor know about you?”). Players wrote answers and took turns sharing with the group (or were allowed to pass). Answers typically prompted free-flowing conversation. Players controlled how long they shared, what they shared, and when they were ready to proceed to the next question. Players could give others a game chip to acknowledge a particularly thoughtful comment. To promote lighthearted competition, a “winner” was named at the end, with a pregame coin flip determining whether the “winner” would be the player with the most or fewest chips revealed at the end.
Immediately after the game, participants completed a questionnaire assessing game conversation satisfaction (8-item mean score ranging from 1 to 7, with 7 representing the highest satisfaction).32 Participants also completed a 5-item, validated questionnaire measuring conversation realism (5-item mean score ranging from 1 to 7, with 7 indicating most realistic).33 Intervention endorsement was assessed with the Net Promoter Score (NPS), a validated measure used widely in marketing research to estimate product uptake and recommendation.34,35 The single-item question asked “How likely is it that you would recommend the game to friends or family?” on a scale from 1 (not likely at all) to 10 (extremely likely).
Follow-up telephone calls were made 3 to 11 months after each game event to administer questionnaires and conduct an audiorecorded interview about perceptions of the game and relevant actions taken after participating in the game (eAppendix 2 in the Supplement). Participants were contacted starting 3 months after the intervention; calls continued until the participant was interviewed or declined to be interviewed, 10 calls had been made without contact, or 11 months had elapsed since the event. Participants not reached within these limits were considered lost to follow-up. The primary outcome was self-reported completion of an AD (defined as any legal document that provides guidance on medical decision-making). Self-report was necessary because review of medical records for AD documentation was not feasible for a study of this scope. Participants who completed ADs prior to the event were asked whether they had reviewed or updated their existing AD because periodic review is considered an essential ACP behavior.36 Secondary outcomes included (1) completion of other ACP behaviors (eg, discussing end-of-life wishes with loved ones or clinicians and reviewing ACP resources) and (2) change in score on the 34-item validated ACP Engagement Survey from before the intervention to 3 to 11 months after the intervention.37 The ACP Engagement Survey can detect change in response to ACP interventions.37,38 We defined a moderate, clinically meaningful increase to be a score change of 0.50 to 0.79.38,39
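The follow-up stopping rule described above (contact until the participant is interviewed or declines, up to 10 call attempts or 11 months) can be expressed as a simple classification; this is an illustrative sketch, and the function and argument names are hypothetical, not drawn from the study protocol:

```python
def follow_up_status(interviewed, declined, call_attempts, months_elapsed):
    """Classify a participant under the follow-up stopping rule.

    Calls continue until the participant is interviewed or declines,
    or until 10 attempts or 11 months have passed, whichever comes first.
    """
    if interviewed:
        return "completed"
    if declined:
        return "declined"
    if call_attempts >= 10 or months_elapsed >= 11:
        return "lost to follow-up"
    return "pending"

# A participant reached and interviewed on the third call
print(follow_up_status(True, False, 3, 5))    # completed
# A participant never reached after 10 attempts
print(follow_up_status(False, False, 10, 6))  # lost to follow-up
```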
The sample size was chosen in collaboration with the sponsor to reach a diverse, national sample of underserved individuals in the US, with emphasis on African American communities. The target sample size was 50 communities (20-50 participants per site event, anticipating 10% attrition). Owing to resource and staff limitations, 15 of the 53 sites were purposively selected to deploy research staff onsite for data collection. Site enrollment was stratified according to urban or rural area and US region (Table 1).
Data and Statistical Methods
The AD completion rates and completion of other ACP behaviors were calculated. The ACP Engagement Survey consists of 34 items measured on a 5-point Likert scale, with an overall mean and 4 domain scores37: knowledge (2 items), contemplation (3 items), self-efficacy (12 items), and readiness (17 items). Scores have strong internal consistency, test-retest reliability, and construct validity and have shown the ability to detect change in ACP behavior.37,40 The Wilcoxon signed rank test was used to assess changes in ACP Engagement Survey scores as the difference between the time of the follow-up call and immediately prior to the event, analyzing only respondents with scores at both time points. The NPS uses a 10-point Likert scale to classify respondents as detractors (score, 1-6), passives (7-8), or promoters (9-10).41 The NPS is calculated as the difference between the percentage of promoters and the percentage of detractors (range, −100 to 100); scores greater than 0 indicate positive endorsement. Conversation satisfaction scores were calculated by averaging 8 items rated on a 7-point Likert scale.32 Conversation realism is a 5-item mean score on a 7-point Likert scale.33
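The study's analyses were run in SAS, but the NPS scoring rule described above can be sketched in Python to make the arithmetic concrete (a minimal illustration, not the study's analysis code):

```python
def net_promoter_score(ratings):
    """Compute the Net Promoter Score from raw 1-10 ratings.

    Detractors score 1-6, passives 7-8, promoters 9-10;
    NPS = %promoters - %detractors, ranging from -100 to 100.
    """
    if not ratings:
        raise ValueError("ratings must be non-empty")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Example: 6 promoters, 1 passive, 1 detractor among 8 ratings
example = [9, 10, 9, 10, 9, 9, 8, 4]
print(net_promoter_score(example))  # 62.5
```

Note that passives count toward the denominator but toward neither percentage, which is why a site with many 7 and 8 ratings can have a modest NPS despite high mean ratings.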
For all calculated scores, missing items resulted in a missing composite score. Follow-up time was calculated as the number of days between the event and the telephone interview (eAppendix 3 in the Supplement). All analyses were conducted using SAS, version 9.4 (SAS Institute Inc), and 2-sided tests with α = .05 were considered statistically significant.
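The complete-case composite rule stated above (any missing item yields a missing composite score) can likewise be sketched in Python; this is an assumption-free illustration of the stated rule, not the study's SAS code:

```python
def composite_score(items):
    """Mean of Likert-scale items; None if any item is missing.

    Mirrors the rule that a missing item results in a missing
    composite score (complete-case scoring).
    """
    if any(i is None for i in items):
        return None
    return sum(items) / len(items)

# An 8-item conversation satisfaction response on a 1-7 scale
complete = [7, 6, 6, 7, 5, 6, 7, 6]
partial = [7, 6, None, 7, 5, 6, 7, 6]
print(composite_score(complete))  # 6.25
print(composite_score(partial))   # None
```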
Thematic analysis was applied to transcribed interviews. Two analysts (L.J.V.S. and A.R.L.) independently reviewed 20% of responses and created categories from the data to form a preliminary codebook. Codes within each category were defined and used to analyze another 20% of responses via the constant comparison method.42 After responses were independently coded, conflicts were reconciled through discussion, and the codebook was finalized. Two analysts (L.J.V.S. and A.R.L.) coded the remaining data and then organized the codes into themes.
Table 1 gives host site locations, venues, demographic characteristics, and consent rates at each purposively sampled site. One site was excluded from analysis due to low turnout (6 attendees) and resultant protocol deviation. Of the 1122 event attendees, 428 participated at purposively sampled sites. Of those, 386 attendees (90%) consented to participate in research (minus the 6 removed from analysis; Figure), and 232 of 367 attendees (63%) who provided accurate contact information completed follow-up telephone interviews. The mean (SD) follow-up for the 220 participant interviews (Figure) was 5.4 (1.8) months (median, 4.8 months; interquartile range, 4.0-6.5 months).
Characteristics of Participants
Participants’ mean (SD) age was 62.2 (13.8) years, with 304 of 380 participants (80%) being female and 348 of 380 (92%) being African American (Table 2). The characteristics of the participants are also shown by site (eTable 2 in the Supplement), demographic characteristics (urban vs rural), and region (eTable 3 in the Supplement).
The calculated NPS was positive for all sites, ranging from 5.88 to 90.91, with an overall score of 57.89 (Table 3). The overall mean (SD) raw NPS score was 8.76 (2.02), with the mean score by sites ranging from 7.21 to 9.75. The mean (SD) conversation satisfaction score was 6.21 (0.93), and mean (SD) site scores ranged from 5.58 (0.98) to 6.88 (0.32). The mean (SD) conversation realism score was 5.20 (1.01), and the mean (SD) site scores ranged from 4.86 (1.27) to 5.40 (1.54) (eTable 4 in the Supplement).
AD Completion Rates and ACP Behaviors
Table 3 gives rates of ACP behaviors reported at the follow-up telephone call. Of 220 participants, 68 (31%) reported having had an AD prior to the game, 91 (41%) completed a new AD, and 106 (48%) completed a new AD or revised an existing AD. Furthermore, 176 (80%) discussed end-of-life issues with loved ones, 214 of 219 (98%) completed at least 1 ACP behavior, and 145 of 215 (67%) completed 3 or more ACP behaviors. Scores on all domains of the ACP Engagement Survey increased (Table 4 and eTable 5 in the Supplement). There was a moderate and significant increase in the self-efficacy domain (mean [SD] difference before and after the game, 0.54 [0.98]; P < .001) and a small but significant increase in the knowledge (difference, 0.38 [1.24]; P < .001) and readiness (difference, 0.33 [0.98]; P < .001) domains as well as in the total score (difference, 0.40 [0.74]; P < .001). Behavioral rates and scores on the ACP Engagement Survey had no consistent patterns by site, demographic characteristics, or region (eTable 6 and eTable 7 in the Supplement).
Themes From Telephone Interviews
Five major themes emerged from interviews with participants who played the game (eTable 5 in the Supplement): (1) it was a safe, fun, and enjoyable context for engaging in ACP conversations; (2) it offered new information and perspectives; (3) it was emotionally beneficial; (4) it increased appreciation for both the value of and the need for ACP; and (5) it empowered and motivated performance of ACP behaviors. Additional subthemes and representative quotes are given in eTable 5 and eTable 8 in the Supplement.
A joint display that aligns the quantitative and qualitative results in accordance with this convergent, mixed-methods study is given in eTable 5 in the Supplement.43 We found that the data consistently converged in all 3 constructs of interest: satisfaction with the game, acceptability and endorsement of the experience, and self-efficacy and motivation for behavioral change.
This national study showed that a low-cost ($2.50/participant) and scalable game intervention may offer a feasible and acceptable approach for engaging underserved African American populations in ACP. Finding new and innovative ways to engage this hard-to-reach community in ACP is a critical first step toward reducing health disparities associated with end-of-life care for underserved populations.1 To our knowledge, this is the largest community-based dissemination of an ACP intervention among underserved African American communities. Our data indicated that the game events were well attended and highly endorsed. These data suggest that the game intervention was not only feasible to implement but also acceptable in African American communities, in which reticence about discussing end-of-life issues has been well documented.7,18,22
Unlike traditional approaches involving in-person interactions with health care professionals,20 we used a pragmatic delivery model that leveraged community networks. Such an approach is particularly appropriate in communities that may distrust or are less likely to use the health care system. Our qualitative data suggest that participants appreciated having the activity hosted within their social and faith-based communities, which in turn provided opportunities to share and learn from the experiences of trusted peers. Both of these findings highlight the value of addressing ACP within community networks via a trusted community venue (eg, hosting events in places of worship and community centers). This may be particularly salient within African American communities with strong reliance on social networks for information dissemination. The high levels of community engagement may be explained by our community-based delivery model because it sidestepped the need to interact with a distrusted health care system, an approach that has been successfully modeled in other health care initiatives using barbershops.44 Thus, our model using trusted community organizations might be used for health care initiatives beyond ACP whose goal is to engage underserved communities in important health behaviors.
Our qualitative data also suggested that the game itself may be associated with the success observed in this project. Numerous studies have reported that discussions about death and dying are perceived as unpleasant, uncomfortable, or intimidating.21,45,46 The game overcomes this barrier by reframing these discussions as an enjoyable activity in which players share stories, laugh, and learn from one another’s experiences. Using a social, conversation game helps establish psychological safety—the shared belief that individuals in a group can bring up risky topics or ideas.47 Players consistently report that the game creates a safe, nonthreatening environment that supports sensitive conversations.27-29 Furthermore, because the game is engaging and enjoyable and promotes positive reinforcement from the group, it may motivate players to follow through with subsequent ACP behaviors.
It is notable that 80% of participants had end-of-life conversations with loved ones because, even in populations where ACP is more prevalent, rates of end-of-life discussions are only 40% to 60%.16,19,48,49 Although our study was not designed to assess the effectiveness of the intervention with regard to behavior, our finding that 41% of participants completed a new AD is encouraging given the less than 25% baseline rate of AD completion among African American individuals1,16-19,48,50-52 and the much lower rates (13%) reported in other studies with underserved populations.49 That said, the secondary outcome of change in score on the ACP Engagement Survey was small in some domains, and the effect sizes on this instrument were small to moderate. Furthermore, in mixed-methods data reported separately, participants reported low levels of skepticism and positive attitudes about ACP in general (unpublished data, 2020).
Although the study was not designed to compare findings across sites, similar rates of behavioral performance and levels of satisfaction and endorsement were observed regardless of site, demographic characteristics, and region. This suggests that use of a serious game may translate well across varied geographic settings.
Limitations and Strengths
Given the national scope and the community-based nature of data collection, outcomes relied on self-report, leaving open the potential for social desirability bias and overreporting of ACP behaviors. Visual verification of AD completion would be useful in future studies. Furthermore, in the absence of a randomized clinical trial, it was not possible to infer causation between the game and the ACP behaviors that followed. To manage potential researcher bias, research assistants with no relationship with the game’s producer collected the data. Finally, our study included predominantly female participants and African American participants; thus, findings may not be generalizable to other populations.
Despite these limitations, this study has several strengths. First, this was a national sample with a high rate of recruitment in a population that is traditionally hard to reach. Second, to our knowledge, this project is among the largest to evaluate an ACP intervention in so many communities and regions of the US. Third, the study protocol and analyses closely followed the National Institutes of Health best practices for mixed-methods health research, and our qualitative procedures adhered to published guidelines of methodologic rigor.53-56 Fourth, the consistent and highly convergent quantitative and qualitative data integration increased the validity and reliability of findings.57-59
This project successfully engaged a nationwide audience of underserved communities in ACP. The present findings suggest that a serious game may be a feasible and well-received intervention in African American communities. As a low-cost and pragmatic intervention for increasing ACP engagement in underserved African American communities, such a game may help reduce health disparities associated with end-of-life care. Randomized clinical trials are needed to assess its effect on ACP behavioral performance and actual end-of-life care.
Accepted for Publication: February 20, 2020.
Published: May 8, 2020. doi:10.1001/jamanetworkopen.2020.4315
Correction: This article was corrected on June 4, 2020, to change the degree of the sixth author, Irene Putzig, from BA to BS.
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Van Scoy LJ et al. JAMA Network Open.
Corresponding Author: Lauren Jodi Van Scoy, MD, Department of Medicine, Penn State Milton S. Hershey Medical Center, Penn State College of Medicine, 500 University Dr, H-041, Hershey, PA 17033 (email@example.com).
Author Contributions: Dr Van Scoy had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Van Scoy, B. H. Levi, Witt, Bramble, Richardson, Putzig, Chinchilli, Tucci.
Acquisition, analysis, or interpretation of data: Van Scoy, B. H. Levi, Witt, A. R. Levi, Wasserman, Chinchilli, Tucci, Green.
Drafting of the manuscript: Van Scoy, B. H. Levi, Witt, A. R. Levi, Wasserman, Chinchilli.
Critical revision of the manuscript for important intellectual content: Van Scoy, B. H. Levi, Witt, Bramble, Richardson, Putzig, A. R. Levi, Tucci, Green.
Statistical analysis: Van Scoy, Witt, A. R. Levi, Wasserman, Chinchilli.
Obtained funding: Van Scoy, Tucci.
Administrative, technical, or material support: Van Scoy, B. H. Levi, Witt, Bramble, Richardson, Putzig, A. R. Levi, Tucci.
Supervision: Van Scoy, B. H. Levi, Tucci, Green.
Conflict of Interest Disclosures: Dr Van Scoy reported serving as an unpaid scientific advisor to Common Practice, LLC, which is the company that produces and sells the game used in this study; receiving funding from the National Institutes of Health (NIH), the Canadian Institute of Health Research, the Society for Critical Care Medicine, the Francis Family Foundation, and the Association for Clinical Pastoral Education. Dr B. H. Levi reported receiving funding from the National Institute of Nursing Research and the Children’s Miracle Network during the conduct of the study. Drs B. H. Levi and Green reported receiving personal fees as consultants for Vital Decisions, the parent company that owns the electronic version of the advance directive used in this study (My Living Voice). Drs B. H. Levi and Green are the co-creators of the decision aid, Making Your Wishes Known, which was developed for research purposes and continues to be available free of charge. Any research involving Making Your Wishes Known or My Living Voice is monitored by the Institutional Review Board and Conflict of Interest Review Committee of Penn State. Drs Bramble and Richardson and Ms Putzig reported receiving grants from the Hospice Foundation of America during the conduct of the study. Dr Chinchilli reported receiving funding from the NIH and the Patient Centered Outcomes Research Institute. Dr Green reported receiving funding from the NIH. No other disclosures were reported.
Funding/Support: Research reported in this project was funded by the John and Wauna Harman Foundation, a private family foundation focused on improving end-of-life care in the US. The REDCap database used in this project was funded by the Penn State Clinical and Translational Science Institute, Pennsylvania State University Clinical and Translational Science Award Network (UL1 TR002014), and the National Center for Advancing Translational Sciences grant UL1 TR000127.
Role of the Funder/Sponsor: The John and Wauna Harman Foundation was involved in the design of the study but was not involved in the collection, management, analysis, or interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication. No other funder had any role in the design and conduct of the study; the collection, management, analysis, or interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.
Additional Contributions: The authors thank The Hospice Foundation of America, the Hello Project Advisory Board, all the host organizations who participated in the Hello Project nationwide, and Common Practice, LLC, for permission to use their game Hello, which is available online. Data collection, entry, and management assistance were provided by Katherine Callahan, PhD candidate, Kayla Confer, MEd, James Harness, BA, Margaret Hopkins, DEd, Yining Ma, MS, Sara Marlin, MS, Timothy Sheehan, MS, and Xingran Wren, DrPH candidate, all affiliated with Penn State University, Penn State College of Medicine; and Lindsay Currin, BA, Meghan Lee, MA, and Nicole Matluck, MS, all affiliated with Hospice Foundation of America. Andrew Foy, MD, Penn State Milton S. Hershey Medical Center, provided critical review of the manuscript. All assistants received financial compensation except Drs Foy and Hopkins, Katherine Callahan, Yining Ma, and Xingran Wren. Vital Decisions provided permission to reproduce paper copies of My Living Voice.
Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. The Institute of Medicine; 2014.
JA. Predictors of family conflict at the end of life: the experience of spouses and adult children of persons with lung cancer. Gerontologist. 2010;50(2):215-225. doi:10.1093/geront/gnp121
BH. Can playing an end-of-life conversation game motivate people to engage in advance care planning? Am J Hosp Palliat Care. 2017;34(8):754-761. doi:10.1177/1049909116656353
MJ. End-of-life conversation game increases confidence for having end-of-life conversations for chaplains-in-training. Am J Hosp Palliat Care. 2018;35(4):592-600. doi:10.1177/1049909117723619
N. Advocacy Drives Growth: Customer Advocacy Drives UK Business Growth. London School of Economics; 2005.
DE. Effect of the PREPARE website vs an easy-to-read advance directive on advance care planning documentation and engagement among veterans: a randomized clinical trial. JAMA Intern Med. 2017;177(8):1102-1109. doi:10.1001/jamainternmed.2017.1607
J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Lawrence Erlbaum Associates; 1988.
JW. Integrating quantitative and qualitative results in health science mixed methods research through joint displays. Ann Fam Med. 2015;13(6):554-561. doi:10.1370/afm.1865
JW. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Sage Publications, Inc; 1998.
YS. Effective Evaluation: Improving the Usefulness of Evaluation Results Through Responsive and Naturalistic Approaches. Jossey-Bass; 1981.
JD. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Sage Publications; 2017.