How effective are the DECIDE (decide the problem; explore the questions; closed or open-ended questions; identify the who, why, or how of the problem; direct questions to your health care professional; enjoy a shared solution) patient and clinician interventions for improving shared decision making and quality of care for ethnic/racial minorities?
In a randomized clinical trial of 312 dyads that included 74 behavioral health clinicians and 312 patients, the clinician intervention significantly improved shared decision making. Patients perceived higher quality of care when patients and clinicians received the recommended dosage of each intervention.
The clinician intervention could improve shared decision making with minority populations, and the patient intervention could improve patient-reported quality of care by incorporating patient preferences in health care.
Few randomized clinical trials have been conducted with ethnic/racial minorities to improve shared decision making (SDM) and quality of care.
To test the effectiveness of patient and clinician interventions to improve SDM and quality of care among an ethnically/racially diverse sample.
Design, Setting, and Participants
This cross-level 2 × 2 randomized clinical trial included clinicians at level 2 and patients (nested within clinicians) at level 1 from 13 Massachusetts behavioral health clinics. Clinicians and patients were randomly selected at each site in a 1:1 ratio for each 2-person block. Clinicians were recruited starting September 1, 2013; patients, starting November 3, 2013. Final data were collected on September 30, 2016. Data were analyzed based on intention to treat.
The clinician intervention consisted of a workshop and as many as 6 coaching telephone calls to promote communication and therapeutic alliance to improve SDM. The 3-session patient intervention sought to improve SDM and quality of care.
Main Outcomes and Measures
SDM was assessed by blinded coders based on clinical session recordings; additional outcomes were patient-perceived SDM, patient-perceived quality of care, and clinician-perceived SDM.
Of 312 randomized patients, 212 (67.9%) were female and 100 (32.1%) were male; mean (SD) age was 44.0 (15.0) years. Of 74 randomized clinicians, 56 (75.7%) were female and 18 (24.3%) were male; mean (SD) age was 39.8 (12.5) years. Patient-clinician pairs were assigned to 1 of the following 4 design arms: patient and clinician in the control condition (n = 72), patient in intervention and clinician in the control condition (n = 68), patient in the control condition and clinician in intervention (n = 83), or patient and clinician in intervention (n = 89). All pairs underwent analysis. The clinician intervention significantly increased SDM as rated by blinded coders using the 12-item Observing Patient Involvement in Shared Decision Making instrument (b = 4.52; SE = 2.17; P = .04; Cohen d = 0.29) but not as assessed by clinician or patient. More clinician coaching sessions (dosage) were significantly associated with increased SDM as rated by blinded coders (b = 12.01; SE = 3.72; P = .001; Cohen d = 0.78). The patient intervention significantly increased patient-perceived quality of care (b = 2.27; SE = 1.16; P = .05; Cohen d = 0.19). There was a significant interaction between patient and clinician dosage (b = 7.40; SE = 3.56; P = .04; Cohen d = 0.62), with the greatest benefit when both obtained the recommended dosage.
Conclusions and Relevance
The clinician intervention could improve SDM with minority populations, and the patient intervention could augment patient-reported quality of care.
clinicaltrials.gov Identifier: NCT01947283
According to the Institute of Medicine, clinicians could narrow the quality chasm in health care services by improving communication and seeking patients’ perspectives on shared power and responsibility.1 Shared decision making (SDM) is a form of patient-clinician communication in which both parties bring expertise to the process and work in partnership to make a decision,2 thereby facilitating improved outcomes3 and quality of health care.3-6 However, few clinicians have the skills to encourage patient involvement or adjust to preferences.7,8 Randomized clinical trials of SDM have mostly targeted primary care8,9 and have included few ethnic/racial minorities.10 Minority patients are less likely than white patients to state concerns, seek information, or feel trust, thereby missing opportunities to improve outcomes.11-13
Implementing SDM in behavioral health care poses challenges. Clinicians are trained as clinical experts; SDM represents a paradigmatic shift in acknowledging patients as experts of their illness experience and asking them to voice visit agendas. Clinician stereotyping, bias, and lack of skills to address power differentials are additional barriers.12,14-16 A collaborative relationship involves the patient’s problem formulation and joint solution development, which may lengthen visits.17 Differences exist in the meaning of SDM depending on patients’ race, ethnicity, or educational level.18-20 Ethnic/racial minorities may also hesitate to question their clinician.21
A previous randomized clinical trial22-25 found that the patient-focused DECIDE intervention (decide the problem; explore the questions; closed or open-ended questions; identify the who, why, or how of the problem; direct questions to your health care professional; enjoy a shared solution) (DECIDE-PA) improved patient activation and self-management in behavioral health care. Nonetheless, minority patients were more likely than white patients to express concern that becoming activated threatened their relationships with their clinicians.23 Activation is defined as the acquisition of knowledge, skills, and beliefs to enable thoughtful action and active participation in decisions.26
Although the intervention was developed to incorporate cultural, linguistic, and socioeconomic characteristics central to treatment participation and collaborative client-clinician relationships,27-30 the following clinician actions were observed that impeded communication and SDM: (1) lack of perspective taking to understand circumstances and perceptions,31 (2) erroneous dispositional inferences (ie, attributing negative patient behaviors to character traits),27,32 and (3) low receptivity to collaboration in decision making.33
We tested the effectiveness of DECIDE-PA and DECIDE clinician (DECIDE-PC) interventions in improving SDM and patient-perceived quality of care among white, black, Latino, and Asian patients. We examined patient-perceived quality of communication and a working alliance as mediators. We also explored whether ethnic/racial or language matching moderated the intervention effect. We hypothesized that SDM would increase after both interventions, as would patient-perceived quality of care.
This study was a cross-level 2 × 2 randomized clinical trial with clinicians at level 2 and patients nested within clinicians at level 1 to assess the effectiveness of patient and clinician interventions. The study protocol is found in Supplement 1. The study was approved by the institutional review boards at Massachusetts General Hospital, Boston, and the 13 participating clinics, including Cambridge Health Alliance Windsor Street Health Center, Beth Israel Cognitive Neurology Unit, South Cove Community Health Center, Cambridge Health Alliance Central Street Care Center, Cambridge Health Alliance Macht Clinic, Edward M. Kennedy Community Health Center, Cambridge Health Alliance Outpatient Counseling and Treatment Consult-Liaisons, Beth Israel Outpatient Psychiatry, South End Community Health Center, Center for Behavioral Health/Family Services of Greater Boston, Massachusetts General Hospital–Chelsea Community Health Care, Massachusetts General Hospital Outpatient Psychiatry, and Massachusetts General Hospital Depression Clinic. All participants provided written informed consent.
Eligible clinicians were from participating clinics with at least 4 current patients. Eligible patients were initially those in treatment with a participating clinician; aged 18 to 75 years; English, Spanish, or Mandarin speaking; and with no previous exposure to the DECIDE-PA intervention. Patient exclusion criteria included positive screening for mania, psychosis, suicidal ideation, or cognitive impairment (screened in patients 65 years or older).34 Eligibility criteria were adjusted based on input from institutional review boards and clinics to exclude patients with severe mental health conditions. The age inclusion criterion was expanded to a range of 18 to 80 years, because several clinicians had few patients.
Participants were linked to 1 of 13 behavioral health clinics in Massachusetts that serve low-income minority patients. Clinics offered individual and group psychotherapy and pharmacologic services.
Procedures and Interventions
After enrollment, clinicians completed a 30-minute, in-person baseline assessment (with compensation of $50) and were randomized to intervention or control conditions (Figure 1). Clinicians received continuing education credits and $50 per completed assessment. Intervention clinicians participated in a 12-hour DECIDE-PC workshop (with compensation of $300). The training required examples of clinicians’ baseline therapeutic style. Research assistants (RAs) blinded to clinician assignment enrolled 1 to 2 training patients per clinician and audio recorded a clinical session (95 training patients, each of whom received a $20 gift card).
Blinded RAs next enrolled as many as 9 patients per clinician (mean [SD], 4.2 [2.3] patients), who were randomized by one of us (A.A.-B.) to intervention or control conditions (Figure 2). Patients received $25 for the first 2 assessments and $40 for the final one. Intervention patients participated in as many as 3 hours of training during approximately 5 months (with a $10 gift card for transportation).
Based on previous studies,22 the clinician intervention targeted the following 3 areas of patient-centered communication in promoting SDM: (1) perspective taking, (2) attributional errors, and (3) receptivity to patient participation and collaboration. The clinician intervention (eAppendix 1 in Supplement 2) was delivered by behavioral health professionals and communication experts (coaches) trained by one of us (M.A.). Clinicians first attended a 12-hour workshop (2-3 days) of lectures, videos, role-play, and feedback based on their audio-recorded clinical sessions, highlighting strengths and opportunities for improvement. Coaches coded behaviors (eAppendix 2 in Supplement 2) from 1 to 6 additional patient-clinician recorded sessions and offered individual feedback in 30- to 45-minute telephone calls. A final 45- to 60-minute telephone call summarized concepts and elicited feedback (eAppendix 3 in Supplement 2). Uptake of the optional coaching varied (mean [SD] number of sessions, 3.0 [2.0]), with 6 of the 40 intervention clinicians (15.0%) having 0 coaching sessions, 8 (20.0%) having 1 to 2, 17 (42.5%) having 3 to 4, and 9 (22.5%) having 5 or more. Six coaching sessions constituted the prespecified recommended dosage. Coaches provided telephone feedback (eAppendix 4 in Supplement 2).
The 34 clinicians in the usual care condition had sessions audio recorded and completed the assessments. Patients continued with usual treatment, completed 3 assessments, and had a recorded clinical session.
The patient training22,23,35 consisted of three 60-minute sessions balancing didactics with opportunities to engage, role-play, and reflect on activation (eAppendix 5 in Supplement 2). Bachelor’s-level care managers delivered the intervention under supervision from licensed, bilingual clinicians (including two of us [N.C. and A.P.]). The first session (decisions and agency) educated patients about their role, choices, and agency in clinical visits. The second session (role, process, and reason) taught skills to understand treatment decisions. The third session (self-efficacy and consolidation) encouraged patients to ask questions about conditions and treatment options.23 Overall attendance was 135 of 157 (86.0%) for session 1, 120 of 157 (76.4%) for session 2, and 112 of 157 (71.3%) for session 3.
Supervision, Fidelity, and Adherence to Intervention
Care managers attended supervisor-led training, role-play, and weekly supervision. Supervisors evaluated care managers’ fidelity to the patient intervention using a random sample of recorded trainings and a 58-item checklist of components. Fidelity of care managers for training sessions 1 to 3 was excellent for patients who spoke English (scored by supervisors as 87.2% to 91.4% of the maximum score), Spanish (80.9% to 86.7%), and Mandarin (74.3% to 87.1%). Supervisors provided biweekly feedback to care managers.
Six trained, blinded coders assessed SDM in each session using the 12-item Observing Patient Involvement in Shared Decision-Making (OPTION) instrument.10,36,37 Coders listened to audio-recorded therapy sessions and rated them on a 5-point Likert scale from 0 (behavior not observed) to 4 (behavior exhibited to a high standard). Final scores were summed and transformed to a scale ranging from 0 (lowest SDM) to 100 (highest SDM). Good intercoder reliability was observed (intraclass correlation coefficient, 0.53) across 10 sessions using 2-way mixed, absolute agreement. Coders rated a mean (SD) of 70.6 (67.3) sessions (eMethods 1 in Supplement 2). The patient38 and clinician39 versions of the 9-item Shared Decision Making Questionnaire were used to evaluate patient and clinician SDM. Ratings are summed (range, 0-45) and transformed to a scale ranging from 0 (lowest) to 100 (highest). These measures have been frequently used with English and Spanish speakers and show good psychometric properties (α = .88 for patients; α = .89 for clinicians).38,40-42 We administered the patient Perceptions of Care Survey–Global Evaluation of Care Scale (POC)43 to evaluate the patient’s subjective rating of care and whether they would recommend that clinician. The 3 items of the POC are rated on a 4-point Likert scale ranging from 1 (never) to 4 (always), summed and transformed to a score from 0 (lowest quality) to 100 (highest quality).24 Patients also completed the Communication subscale of the Kim Alliance Scale (α = .77)44-46 and the Working Alliance Inventory (α = .90).47,48 Clinician and patient outcomes were assessed at baseline, approximately 2 months after baseline, and 4 to 6 months after baseline (Figure 3).
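Each of these instruments uses the same linear rescaling of a raw item sum onto a 0-to-100 scale. A minimal sketch of that arithmetic (the item ratings below are illustrative examples, not study data):

```python
def rescale_0_100(raw_total, raw_max):
    """Linearly map a summed raw score onto a 0 (lowest) to 100 (highest) scale."""
    return 100.0 * raw_total / raw_max

# OPTION: 12 items, each rated 0-4, so the raw sum ranges from 0 to 48.
option_items = [2, 3, 1, 0, 4, 2, 2, 1, 3, 2, 0, 1]  # illustrative coder ratings
option_score = rescale_0_100(sum(option_items), 12 * 4)

# 9-item Shared Decision Making Questionnaire: raw sum ranges from 0 to 45.
sdmq9_items = [4, 3, 5, 2, 4, 3, 5, 4, 3]            # illustrative ratings
sdmq9_score = rescale_0_100(sum(sdmq9_items), 45)
```

With these example ratings, the OPTION sum of 21 rescales to 43.75 of 100, and the questionnaire sum of 33 rescales to 73.3 of 100.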
Sample Size Determination
Clinician sample size (n = 74) was determined by clinician availability. Patient sample size (n = 300) was based on power analysis assuming analysis of covariance. This design provides approximate power of 80% (Cohen d = 0.30) to 90% (Cohen d = 0.35), assuming no clinician cluster effects. Evaluation of assumptions is found in eMethods 2 in Supplement 2.
The project coordinator randomized clinicians to intervention or control conditions and randomized patients within clinicians by site in a 1:1 ratio for each 2-person block. When recruitment was uneven per site, the last clinician was assigned to the intervention condition. Patient randomization streams were stratified by site and clinician using Stata-generated random numbers.49
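The 1:1, 2-person-block randomization described above can be sketched as follows. This is a simplified illustration only: the trial used Stata-generated random-number streams stratified by site and clinician, and the function name and identifiers here are ours.

```python
import random

def block_randomize(ids, seed=0):
    """1:1 assignment within 2-person blocks. An odd leftover participant
    defaults to the intervention arm, mirroring the rule applied when site
    recruitment was uneven."""
    rng = random.Random(seed)
    arms = {}
    for i in range(0, len(ids), 2):
        block = ids[i:i + 2]
        if len(block) == 2:
            labels = ["intervention", "control"]
            rng.shuffle(labels)  # randomize order within the block
            arms[block[0]], arms[block[1]] = labels
        else:
            arms[block[0]] = "intervention"  # last unpaired participant
    return arms

assignments = block_randomize([f"clinician_{i}" for i in range(1, 8)], seed=42)
```

Because every complete block contributes exactly one participant to each arm, the design stays balanced within site regardless of the shuffle outcomes.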
Research staff enrolled 79 eligible clinicians; 74 agreed to randomization. Research assistants approached patient participants in clinic waiting rooms; 312 eligible patients were randomized (Figure 1). Research assistants were blinded to patient and clinician study condition when administering assessments, and patients and clinicians were blinded to the other’s intervention status.
We first examined descriptive information on clinical and demographic characteristics and determined whether missing value patterns varied by study condition. To account for missing data, we applied multiple imputation using Stata chained equations49-51 (eMethods 2 in Supplement 2). Primary outcomes included (1) independent-coder OPTION assessment of SDM based on clinical recording at follow-up, (2) clinician perception of SDM, (3) patient perception of SDM, and (4) patient-reported quality of care (POC score) at final assessment.
We estimated a multilevel, mixed-effects model following intent-to-treat principles, with participants assigned to their randomization condition regardless of treatment receipt (eMethods 3 and eFigures 1-4 in Supplement 2). Differences were minimal between patients or clinicians with and without missing data; 52 (33.1%) of intervention patients did not complete at least 1 follow-up assessment and/or had no final session recorded (Figure 1 and eTables 1-2 in Supplement 2). Missing outcome data ranged from 48 (15.4%) for patient-reported POC to 68 (21.8%) for the OPTION assessment, with no significant differences across the 4 design arms for the OPTION assessment (χ2 = 2.17; P = .54), patient perception (χ2 = 7.11; P = .07), clinician perception (χ2 = 2.96; P = .40), and patient-reported POC (χ2 = 6.65; P = .08). The intervention variables were effect coded (ie, −0.5 assigned to control, +0.5 assigned to intervention), and the coefficients estimated differences between the treatment and control groups. We included the interaction term DECIDE-PA × DECIDE-PC. To correct for multiplicity of tests, we calculated an omnibus test with 3 df. The hierarchical nature of models and robust standard errors accounted for nonindependence of patients seeing the same clinician.52
We also examined whether outcomes varied with treatment dosage, a continuous variable calculated as the number of coaching sessions divided by the number of intended treatment sessions (3 for patients and 6 for clinicians). This variable ranged from 0 (no dosage) to 1 (intended dosage or more). Dosage was calculated separately for 2-month and final assessments and was fixed to 0 for the control group. We reduced the dosage by half if clinicians did not have recorded sessions and centered dosage at the mean number of training sessions. Dosage varied in part because RAs were blinded to study assignment and scheduled assessments per set time frames, resulting in 29 intervention patients (18.5%) completing follow-up before the intervention. We adjusted for possible confounders such as the patients’ race, sex, educational level, and age and the clinicians’ race, sex, and age (eTable 3 in Supplement 2). Adjusted results did not differ from those of the model with no covariates.
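The construction of the dosage variable can be sketched as follows (a hypothetical helper reflecting the rules in this paragraph; the function name and arguments are ours, and mean-centering would be applied afterward, before model fitting):

```python
def dosage(sessions_completed, sessions_intended, in_intervention=True,
           has_recorded_session=True):
    """Treatment dosage: completed/intended sessions, right-censored at 1
    (the recommended dosage), fixed to 0 for the control group, and halved
    when a clinician had no recorded sessions."""
    if not in_intervention:
        return 0.0
    d = min(sessions_completed / sessions_intended, 1.0)
    if not has_recorded_session:
        d /= 2.0
    return d

# Clinicians had 6 intended coaching sessions; patients had 3 training sessions.
full = dosage(6, 6)                                   # recommended dosage -> 1.0
censored = dosage(8, 6)                               # capped at 1.0
half = dosage(3, 6)                                   # 0.5
penalized = dosage(4, 6, has_recorded_session=False)  # (4/6) / 2
control = dosage(5, 3, in_intervention=False)         # 0.0 by construction
```

The right-censoring at 1 corresponds to the main analysis; the sensitivity analysis without censoring (eTable 4 in Supplement 2) would simply omit the `min(..., 1.0)` cap.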
We estimated the role of patient-clinician communication and working alliance as prespecified mediators using Stata49 and Mplus53 software following the approach of Baron and Kenny54 (eMethods 2 in Supplement 2). We also included the main effects of racial/ethnic groups (and separately, language preference), racial/ethnic discordance, and the interaction terms of discordance with interventions. We computed P values for differences in demographic and clinical characteristics between intervention and control groups using the Pearson χ2 test for categorical variables55 and unpaired t tests for continuous variables. In the regression analysis, we computed P values with t tests adjusted for multiple imputation56 and small sample size.57
Recruitment of clinicians began September 1, 2013; recruitment of patients, November 3, 2013. The final follow-up interview was conducted September 30, 2016. Of 312 randomized patients, 212 (67.9%) were female and 100 (32.1%) were male; mean (SD) age was 44.0 (15.0) years. Of 74 randomized clinicians, 56 (75.7%) were female and 18 (24.3%) were male; mean (SD) age was 39.8 (12.5) years. Table 1 presents participants’ sociodemographic and clinical characteristics; there were no significant differences between intervention and control groups.
Table 2 presents the results of intention-to-treat analysis (312 patient-clinician dyads). The omnibus test of patient and clinician interventions on SDM was not significant (F3,1104.7 = 1.85; P = .14), indicating no overall combined intervention effect on SDM ratings at follow-up. However, the clinician intervention affected SDM (b = 4.52; SE = 2.17; P = .04; Cohen d = 0.29) as rated by blinded coders (eFigure 5 in Supplement 2). Intervention clinicians were rated 4.52 points higher on the OPTION assessment (overall mean [SD] score, 33.00 [15.43]). For quality of care, the omnibus test (F3,10515.7 = 3.15; P = .02) and specific effect of the patient intervention (b = 2.27; SE = 1.16; P = .05; Cohen d = 0.19) were significant, indicating that the intervention increased patient-perceived quality of care by 2.27 points on a 100-point scale (overall mean [SD], 89.9 [12.00]). We found no significant intervention effects for patient-perceived SDM (F3,4171.4 = 0.64; P = .59) or clinician-perceived SDM (F3,1339.8 = 0.66; P = .57) and no individually significant coefficients.
To examine whether the intention-to-treat findings were owing to variability in the number of sessions received, we examined the association of the primary outcomes with training dosage (Table 2). For blinded-coder SDM, the omnibus test for training dosage was significant (F3,1696.3 = 3.45; P = .02). Blinded-coder SDM ratings at follow-up increased significantly by 12.01 points when clinicians received the recommended 6 sessions compared with clinicians without coaching (b = 12.01; SE = 3.72; P = .001; Cohen d = 0.78).
We also found an overall association of dosage with global evaluation of care at the final assessment (F3,5153.1 = 8.66; P < .001). Patient dosage was statistically significant (b = 3.33; SE = 1.17; P = .004; Cohen d = 0.28), and although clinician dosage was not (b = 0.16; SE = 2.64; P = .95; Cohen d = 0.01), the combined effect of patient and clinician dosage was significant (b = 7.40; SE = 3.56; P = .04; Cohen d = 0.62). Maximum benefit occurred when clinicians and patients obtained the recommended dosage (eFigure 6 in Supplement 2). We found no association jointly or individually between dosage and patient-reported SDM (F3,3888.7 = 1.40; P = .24) or between dosage and clinician-reported SDM (F3,1510.0 = 1.42; P = .24). Results are similar when the dosage is not right-censored at the recommended dosage (eTable 4 in Supplement 2).
We explored whether patient evaluation of communication (Kim Alliance Scale Communication subscale) or working alliance (Working Alliance Inventory) served as mediators in the association between the patient intervention and global evaluation of care. We found evidence of partial mediation of the patient intervention effect through communication for the intention-to-treat effect (original, 2.87; indirect, 0.97; 95% CI, 0.06-2.27; Cohen d = 0.09) and the patient dosage effect (original, 3.13; indirect, 1.11; 95% CI, 0.12-2.25; Cohen d = 0.10). In addition, we found evidence of partial mediation of the patient intervention effect through working alliance for the intention-to-treat effect (original, 2.73; indirect, 1.23; 95% CI, 0.06-2.76; Cohen d = 0.11) and the patient dosage effect (original, 3.03; indirect, 1.37; 95% CI, 0.11-2.96; Cohen d = 0.13). We found no evidence that communication or working alliance mediated the effect of the clinician intervention on SDM OPTION assessment, suggesting that the observed intervention effect on SDM is not owing to improved perceptions of communication or working alliance within dyads.
In moderation analyses (eTable 5 in Supplement 2), we found that the intervention effects on SDM were robust to patient-clinician racial/ethnic and linguistic discordance because omnibus tests were not significant (F3,982.2 = 0.69 [P = .56] and F3,1373.6 = 1.13 [P = .34], respectively). Similarly, we found no moderation effects for global evaluation of care with respect to racial/ethnic or linguistic discordance, with nonsignificant omnibus tests (F3,3445.6 = 0.74 [P = .53] and F3,1732.7 = 1.39 [P = .25], respectively). Nevertheless, the individual coefficient suggested that the clinician intervention affected patient global evaluation of care more strongly when patients and clinicians did not share a primary language (b = 4.91; SE = 2.38; P = .04; Cohen d = 0.41).
Our study was, to our knowledge, the first to investigate the effectiveness of patient and clinician interventions to improve SDM in behavioral health care among an ethnically and racially diverse patient sample. The clinician intervention improved SDM with small to moderate effect sizes, and more strongly when the patient and clinician had different primary languages.58 Training clinicians in SDM can facilitate identification of patient preferences.59 The intervention may have a stronger effect on patient global evaluation of care in linguistically discordant patient-clinician relationships, which require greater clinician effort and may produce a subjectively different, more apparent SDM experience for the patient. Non–English-speaking patients may also have greater cultural distance from English-speaking clinicians and, under distress, may be less able to express preferences.60
Our assumption that the patient intervention would lead to increased SDM was not supported, perhaps owing to some training being conducted by telephone. The patient intervention did not teach patients about SDM per se but rather focused on asking questions, identifying resources, and communicating preferences. Preparing patients for SDM during the clinical session should be made explicit. The finding of no association, jointly or individually, between dosage and patient- or clinician-reported SDM might reflect the difficulty of detecting changed interactions during the clinical visit. Overall, clinicians may fail to recognize the advantages of SDM and the improvements from coaching because much of the clinician behavior change across interactions was observed only by the blinded coder. Clinicians may need to hear audiotaped sections from before and after coaching to recognize their changes in attributional errors, perspective taking, and receptivity to collaboration and how these affect their patients. Similarly, video training for patients in SDM could help them recognize these behaviors.
The patient intervention increased perception of quality of care by increasing patient opportunities to voice concerns or topics through inquiry. We hypothesized synergistic effects of the combined intervention on quality of care, and maximum benefit was observed when patients and clinicians obtained the recommended training dosage.8,61 This observation suggests the importance of preparing for changes in power dynamics that require clinician receptivity to patient activation and patient trust.62,63 In these instances, clinicians might be more receptive to patient queries and patients more trusting and confident in testing their skills.64,65
One limitation of our study is the low intervention dosage for clinicians, which may have yielded a conservative estimate of the clinician intervention effect. The low to medium clinician participation in coaching before the follow-up (mean [SD], 1.6 [0.8] sessions) and final (mean [SD], 3.2 [1.6] sessions) assessments suggests that time constraints can hinder engagement. Intervention effects could be strengthened by incentivizing adequate training dosage.
Another limitation was assessment of multiple measures of SDM and testing both interventions simultaneously. Although we analyzed multivariate significance tests, only replication studies on independent samples can prevent false-positive findings entirely. We used an inclusive definition of decision, which could dilute the observed effect. Training clinicians and patients to explicitly discuss common decisions (eg, next appointment date or use of decision aids) could strengthen SDM. Finally, because the clinician intervention varied by patient needs, we could not standardize adherence.
The study’s heterogeneous sample of participants speaking diverse languages at different clinics with different coaches expands its generalizability. Our findings reveal how professional knowledge and collaborative dialogue can coexist. Results suggest that an adequate threshold of SDM training promotes a gradual philosophical transformation for clinicians, whereby patient preferences, choices, and agency come to the forefront, all of which are important components of achieving better health outcomes.
Accepted for Publication: December 17, 2017.
Corresponding Author: Margarita Alegria, PhD, Disparities Research Unit, Department of Medicine, Massachusetts General Hospital, 50 Staniford St, Ste 830, Boston, MA 02114 (firstname.lastname@example.org).
Published Online: February 21, 2018. doi:10.1001/jamapsychiatry.2017.4585
Author Contributions: Dr Alegria had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Alegria, Nakash, Ault-Brutus, Freeman, Rosenbaum, Epelbaum, LaRoche, Carrasco, Shrout.
Acquisition, analysis, or interpretation of data: Alegria, Nakash, Johnson, Ault-Brutus, Carson, Fillbrunn, Wang, Cheng, Harris, Polo, Lincoln, Bostdorf, Okpokwasili-Johnson, Shrout.
Drafting of the manuscript: Alegria, Nakash, Ault-Brutus, Carson, Fillbrunn, Cheng, Lincoln, Bostdorf, Shrout.
Critical revision of the manuscript for important intellectual content: Alegria, Nakash, Johnson, Carson, Fillbrunn, Wang, Harris, Polo, Freeman, Bostdorf, Rosenbaum, Epelbaum, LaRoche, Okpokwasili-Johnson, Carrasco, Shrout.
Statistical analysis: Alegria, Johnson, Ault-Brutus, Fillbrunn, Wang, Bostdorf, Shrout.
Obtained funding: Alegria, Ault-Brutus, Bostdorf.
Administrative, technical, or material support: Alegria, Johnson, Ault-Brutus, Carson, Cheng, Harris, Polo, Bostdorf, Rosenbaum, Epelbaum, Okpokwasili-Johnson.
Study supervision: Alegria, Nakash, Johnson, Ault-Brutus, Carson, Polo, Bostdorf, Epelbaum, LaRoche.
Conflict of Interest Disclosures: None reported.
Funding/Support: This study was supported by award CD-12-11-4187 from the Patient-Centered Outcomes Research Institute (PCORI).
Role of the Funder/Sponsor: The sponsor had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The content of this article is solely the responsibility of the authors and does not necessarily represent the views of the PCORI, its board of governors, or its methodology committee.
Meeting Presentation: Data from this study were presented at the 144th Annual Meeting of the American Public Health Association; November 1, 2016; Denver, Colorado.
Additional Contributions: Susan Essock, PhD, Columbia University Medical Center, reviewed an earlier version of this article and provided comments and was not compensated. Naomi Ali, BS, Karissa DiMarzio, BA, and Sheri Lapatin Markle, MIA, Disparities Research Unit, contributed to manuscript revision and were not compensated. Naihua Duan, PhD, Columbia University Medical Center, contributed to reviewing and refining the statistical analyses of data and was compensated for his work. The following individuals provided collaboration in patient recruitment and retention: Mark Albanese, MD, David Bor, MD, Marshall Forstein, MD, and Sara Kleinberg, PhD, Cambridge Health Alliance, were not compensated; and Claudia Epelbaum, MD, and Pamela Peck, PsyD, Beth Israel Deaconess Medical Center; Mary Fierro, MD, Edward M. Kennedy Community Health Center; Mary Lyons-Hunter, PsyD, Massachusetts General Hospital; France Neff, PhD, Center for Behavioral Health/Family Services of Greater Boston; Ebele Okpokwasili-Johnson, MD, South End Community Health Center; and Albert Yeung, MD, South Cove Community Health Center and Massachusetts General Hospital, were compensated for their work. We thank all the patients who generously gave their time to the study.
KC. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: Institute of Medicine, National Academy Press; 2001.
New Freedom Commission on Mental Health. Achieving the Promise: Transforming Mental Health Care in America: Final Report. Rockville, MD: Department of Health and Human Services; 2003. DHHS publication SMA-03-3832.
C. Recent advances in shared decision making for mental health. Curr Opin Psychiatry. 2008;21(6):606-612.
DE. Effect of mental health care and shared decision making on patient satisfaction in a community sample of patients with depression. Med Care Res Rev. 2007;64(4):416-430.
M. Integrating decision making and mental health interventions research: research directions. Clin Psychol (New York). 2006;13(1):9-25.
et al. Assessments of the extent to which health-care providers involve patients in decision making: a systematic review of studies using the OPTION instrument. Health Expect. 2015;18(4):542-561.
H, Painchaud Guérard F. Training health professionals in shared decision making: update of an international environmental scan. Patient Educ Couns. 2016;99(11):1753-1758.
et al. Do interventions designed to support shared decision-making reduce health inequalities? a systematic review and meta-analysis. PLoS One. 2014;9(4):e94670.
et al. Medical decision-making among Hispanics and non-Hispanic whites with chronic back and knee pain: a qualitative study. BMC Musculoskelet Disord. 2011;12(1):78.
JN. Disparities and distrust: the implications of psychological processes for understanding racial disparities in health and health care. Soc Sci Med. 2008;67(3):478-486.
TW. Are African Americans really less willing to use health care? Soc Probl. 2005;52(2):255-271.
Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care, Board on Health Sciences Policy, Institute of Medicine. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington, DC: National Academy Press; 2003.
D, van Ryn S. Reducing racial bias among health care providers: lessons from social-cognitive psychology. J Gen Intern Med. 2007;22(6):882-887.
M. Research on the provider contribution to race/ethnicity disparities in medical care. Med Care. 2002;40(1)(suppl):I140-I151.
K. Voices of dialogue and directivity in family therapy with refugees: evolving ideas about dialogical refugee care. Fam Process. 2012;51(3):391-404.
B. Integrating client and clinician perspectives on psychotropic medication decisions: developing a communication-centered epistemic model of shared decision making for mental health contexts. Health Commun. 2016;31(6):707-717.
S. Some problems in health communication in a multicultural clinical setting: a South African experience. Health Commun. 1996;8(2):153-170.
B, El Ansari K. Communication and cultural issues in providing reproductive health care to immigrant women: health care providers’ experiences in meeting the needs of Somali women living in Finland [published correction appears in J Immigr Minor Health. 2012;14(2):344]. J Immigr Minor Health. 2012;14(2):330-343.
RC. Cancer and communication in the health care setting: experiences of older Vietnamese immigrants, a qualitative study. J Gen Intern Med. 2008;23(1):45-50.
et al. Evaluation of a patient activation and empowerment intervention in mental health care. Med Care. 2008;46(3):247-256.
M. Patient-provider communication: understanding the role of patient activation for Latinos in mental health treatment. Health Educ Behav. 2009;36(1):138-154.
et al. Activation, self-management, engagement, and retention in behavioral health care: a randomized clinical trial of the DECIDE intervention. JAMA Psychiatry. 2014;71(5):557-565.
M. Examining implementation of a patient activation and self-management intervention within the context of an effectiveness trial. Adm Policy Ment Health. 2014;41(6):777-787.
et al. How missing information in diagnosis can lead to disparities in the clinical encounter. J Public Health Manag Pract. 2008;14(6)(suppl):S26-S35.
et al. Patient-clinician ethnic concordance and communication in mental health intake visits. Patient Educ Couns. 2013;93(2):188-196.
M. Interpersonal complementarity in the mental health intake: a mixed-methods study [correction appears in J Couns Psychol. 2012;59(2):196]. J Couns Psychol. 2012;59(2):185-196.
M. Preferences for relational style with mental health clinicians: a qualitative comparison of African American, Latino and non-Latino white patients. J Clin Psychol. 2011;67(1):31-44.
M. Examination of the role of implicit clinical judgments during the mental health intake. Qual Health Res. 2013;23(5):645-654.
M. The clinical encounter as local moral world: shifts of assumptions and transformation in relational context. Soc Sci Med. 2009;68(7):1238-1246.
A. The Mini-Cog: a cognitive “vital signs” measure for dementia screening in multi-lingual elderly. Int J Geriatr Psychiatry. 2000;15(11):1021-1027.
JT. Increasing the engagement of Latinos in services through community-derived programs: the Right Question Project–Mental Health. Prof Psychol Res Pr. 2012;43(3):208-216.
et al. The OPTION scale: measuring the extent that clinicians involve patients in decision-making tasks. Health Expect. 2005;8(1):34-42.
et al. Shared decision making: a model for clinical practice. J Gen Intern Med. 2012;27(10):1361-1367.
M. The 9-item Shared Decision Making Questionnaire (SDM-Q-9): development and psychometric properties in a primary care sample. Patient Educ Couns. 2010;80(1):94-99.
M. Development and psychometric properties of the Shared Decision Making Questionnaire–physician version (SDM-Q-Doc). Patient Educ Couns. 2012;88(2):284-290.
BH. Randomized trial of a telephone care management program for outpatients starting antidepressant treatment. Psychiatr Serv. 2006;57(10):1441-1445.
De las Cuevas P. Attitudes toward concordance in psychiatry: a comparative, cross-sectional study of psychiatric patients and mental health professionals. BMC Psychiatry. 2012;12(1):53.
et al. Dutch translation and psychometric testing of the 9-item Shared Decision Making Questionnaire (SDM-Q-9) and Shared Decision Making Questionnaire–Physician Version (SDM-Q-Doc) in primary and secondary care. PLoS One. 2015;10(7):e0132158.
B. Assessing consumer perceptions of inpatient psychiatric treatment: the Perceptions of Care Survey. Jt Comm J Qual Improv. 2002;28(9):510-526.
D. The quality of therapeutic alliance between patient and provider predicts general satisfaction. Mil Med. 2008;173(1):85-90.
et al. Psychometrics of shared decision making and communication as patient centered measures for two language groups. Psychol Assess. 2016;28(9):1074-1086.
L. Development and validation of the Working Alliance Inventory. J Couns Psychol. 1989;36(2):223-233.
Y. “What should we talk about?” the association between the information exchanged during the mental health intake and the quality of the working alliance. J Couns Psychol. 2015;62(3):514-520.
StataCorp LP. STATA Statistical Software, Release 14 [computer program]. College Station, TX: StataCorp LP; 2015.
JM, Van Hoewyk P. A multivariate technique for multiply imputing missing values using a sequence of regression models. Surv Methodol. 2001;27(1):85-96.
D. Multiple Imputation for Nonresponse in Surveys. Hoboken, NJ: John Wiley & Sons, Inc; 2004.
AS. Hierarchical Linear Models: Applications and Data Analysis Methods. Thousand Oaks, CA: Sage Publications; 2002.
BO. Mplus User’s Guide. 6th ed. Los Angeles, CA: Muthén & Muthén; 1998-2012.
DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic, and statistical considerations. J Pers Soc Psychol. 1986;51(6):1173-1182.
WJ. Practical Nonparametric Statistics. 3rd ed. New York, NY: Wiley; 1999.
DB. Significance levels from repeated P values with multiply imputed data. Stat Sin. 1991;1:65-92.
DB. Small-sample degrees of freedom with multiple imputation. Biometrika. 1999;86:948-955.
AM. Cultural challenges to engaging patients in shared decision making. Patient Educ Couns. 2017;100(1):18-24.
M. The effects of a shared decision-making intervention in primary care of depression: a cluster-randomized controlled trial. Patient Educ Couns. 2007;67(3):324-332.
JL. Beyond the critical period: processing-based explanations for poor grammaticality judgment performance by late second language learners. J Mem Lang. 2006;55(3):381-401.
et al. Enhancing shared decision making through carefully designed interventions that target patient and provider behavior. Health Aff (Millwood). 2016;35(4):605-612.
et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49(9):796-804.
DL. Patient participation in the patient-provider interaction: the effects of patient question asking on the quality of interaction, satisfaction and compliance. Health Educ Monogr. 1977;5(4):281-315.
S. Adapting shared decision making for individuals with severe mental illness. Psychiatr Serv. 2014;65(12):1483-1486.
A. Knowledge is not power for patients: a systematic review and thematic synthesis of patient-reported barriers and facilitators to shared decision making. Patient Educ Couns. 2014;94(3):291-309.