Fenton JJ, Kravitz RL, Jerant A, et al. Promoting Patient-Centered Counseling to Reduce Use of Low-Value Diagnostic Tests: A Randomized Clinical Trial. JAMA Intern Med. 2016;176(2):191–197. doi:10.1001/jamainternmed.2015.6840
Importance
Low-value diagnostic tests have been included on primary care specialty societies’ “Choosing Wisely” Top Five lists.
Objective
To evaluate the effectiveness of a standardized patient (SP)-based intervention designed to enhance primary care physician (PCP) patient-centeredness and skill in handling patient requests for low-value diagnostic tests.
Design, Setting, and Participants
Randomized clinical trial of 61 general internal medicine or family medicine residents at 2 residency-affiliated primary care clinics at an academic medical center in California.
Interventions
Two simulated visits with SP instructors portraying patients requesting inappropriate spinal magnetic resonance imaging for low back pain or screening dual-energy x-ray absorptiometry. The SP instructors provided personalized feedback to residents regarding use of 6 patient-centered techniques to address patient concerns without ordering low-value tests. Control group physicians received SP visits without feedback and were emailed relevant clinical guidelines.
Main Outcomes and Measures
The primary outcome was whether resident PCPs ordered SP-requested low-value tests during up to 3 unannounced SP clinic visits over 3 to 12 months of follow-up, with SPs requesting spinal magnetic resonance imaging, screening dual-energy x-ray absorptiometry, or headache neuroimaging. Secondary outcomes included PCP patient-centeredness and use of targeted techniques (both coded from visit audiorecordings), and SP satisfaction with the visit (0-10 scale).
Results
Of 61 randomized resident PCPs (31 control group and 30 intervention group), 59 had encounters with 155 SPs during follow-up. Compared with control PCPs, intervention PCPs had similar patient-centeredness (Measure of Patient-Centered Communication, 43.9 [95% CI, 42.0 to 45.7] vs 43.7 [95% CI, 41.8 to 45.6], adjusted mean difference, −0.2 [95% CI, −2.9 to 2.5]; P = .90) and used a similar number of targeted techniques (5.4 [95% CI, 4.9 to 5.8] vs 5.4 [95% CI, 4.9 to 5.8] on a 0-9 scale, adjusted mean difference, 0 [95% CI, −0.7 to 0.6]; P = .96). Residents ordered low-value tests in 41 SP encounters (26.5% [95% CI, 19.7%-34.1%]) with no significant difference in the odds of test ordering in intervention PCPs relative to control group PCPs (adjusted odds ratio, 1.07 [95% CI, 0.49-2.32]). Rates of test ordering among intervention and control PCPs were similar for all 3 SP cases. The SPs rated visit satisfaction higher among intervention than control PCPs (8.5 [95% CI, 8.1-8.8] vs 7.8 [95% CI, 7.5-8.2], adjusted mean difference, 0.6 [95% CI, 0.1-1.1]).
Conclusions and Relevance
An SP-based intervention did not improve the patient-centeredness of SP encounters, use of targeted interactional techniques, or rates of low-value test ordering, although SPs were more satisfied with intervention than control residents.
Trial Registration
clinicaltrials.gov Identifier: NCT01808664
As part of the “Choosing Wisely” initiative, more than 70 physician specialty societies have issued “Top Five” lists of clinical practice changes that physicians could enact to augment US health care value. Initial criteria for inclusion on a Top Five list were that (1) the service is frequently misused, (2) it has a substantial financial impact, and (3) reducing use of the service is within the power of physicians and would either improve or have no deleterious effect on population health.1
Because of limited or absent benefits and potential harms,2,3 2 diagnostic imaging tests appear on the lists of the American College of Physicians and the American Academy of Family Physicians: the use of advanced spinal imaging (eg, magnetic resonance imaging [MRI] or computed tomography [CT] scan) in patients with recent-onset uncomplicated back pain, and dual-energy x-ray absorptiometry (DXA) screening in women younger than 65 years without osteoporosis risk factors.4 Meanwhile, in its Top Five list, the American College of Radiology advises against neuroimaging in patients with uncomplicated headache because of low likelihood of benefit and potential harms stemming from incidental findings.4
Patient requests, prompted by worries about serious underlying disease, may be one factor driving overuse of these diagnostic tests. However, patient-centered communication may allow physicians to address patient concerns without ordering requested low-value services.5,6 Indeed, observational studies have found that physicians with more patient-centered communication styles order diagnostic tests significantly less frequently than those with less patient-centered communication styles.7
Although physician-level interventions have successfully improved the patient-centeredness of patient-physician interactions, most have required hours or days of training away from the clinical setting.8 Standardized patient instructors (SPIs) are a potentially efficient means of delivering physician education in the context of routine clinical care. Previous SPI-based interventions have shown promise in changing primary care physician (PCP) communication behaviors.9,10
In a randomized clinical trial, we assessed the use of SPIs to enhance the patient-centered communication skills of resident PCPs in the context of patient requests for low-value diagnostic tests. We hypothesized that the intervention would increase patient-centeredness and, in turn, reduce ordering of requested tests in subsequent simulated office visits with unannounced standardized patients (SPs).
Methods
We conducted a randomized clinical trial of an intervention delivered during 2 simulated office visits with SPIs who provided personalized feedback to resident physicians in internal medicine or family medicine at the University of California–Davis Medical Center in Sacramento, California. Intervention visits occurred from September 1, 2013, to December 31, 2013, with longitudinal follow-up conducted through September 30, 2014. The institutional review board at the University of California–Davis approved the study prior to data collection, and all participants provided oral consent.
General internal medicine and family medicine resident physicians were eligible for inclusion if they were in postgraduate year 2 or greater on July 1, 2013, and would be providing regular primary care at either clinic through June 30, 2014. Residents were invited to participate in a study of “patient-doctor communication” during group didactic sessions, grand rounds, and administrative meetings. Interested residents provided verbal informed consent in accord with protocols approved by the University of California–Davis institutional review board. Enrolled residents were randomly assigned in a 1:1 ratio to intervention and control groups in blocks of 8. Investigators and the study coordinator were blinded to intervention assignment.
We developed an SPI-delivered intervention emphasizing a collaborative interaction11,12 that included 6 steps that were congruent with core elements of patient-centered care13: (1) understand the patient’s concerns and expectations before addressing them; (2) validate the patient’s concerns and emotions using empathy and normalization; (3) inform the patient about reassuring features of the history and examination; (4) explain that you do not recommend the test because risks outweigh benefits; (5) flexibly negotiate alternatives to testing; and (6) explore for residual concerns. Preliminary versions of the intervention were refined during key informant interviews with senior PCPs and 2 focus groups with graduating residents.
The final intervention was delivered during 2 simulated office visits with SPIs, one portraying a 48-year-old man with subacute back pain requesting a spinal MRI, the other a 52-year-old woman requesting a screening DXA. In both visits, the SPIs spent approximately 20 minutes in role. Subsequently, SPIs broke out of roles, used handheld visual props to present the 6-step approach, and provided personalized feedback on the extent to which physicians fulfilled each step. The SPIs used scripts to tailor positive or constructive feedback and encouraged residents to practice techniques by role-playing. The intervention phase of each visit lasted approximately 10 minutes.
Control PCPs received simulated visits with 2 SPIs portraying cases identical to those used in intervention visits. After approximately 20 minutes in role during control visits, SPIs ended the visits without discussing intervention steps or providing feedback. After control visits, staff sent control physicians clinical guidelines on back pain and osteoporosis evaluation and treatment via email.14,15 Across all enrolled residents, the 2 SPI visits occurred a median of 33 days apart (interquartile range, 27-50 days).
We assessed the impact of the intervention on resident practice behavior during subsequent visits with unannounced SPs scheduled during regular primary care office hours. We planned for each PCP to receive up to 3 SP visits over a 3- to 12-month follow-up period, including visits with (1) a male patient with subacute back pain without “red flag” symptoms or signs requesting spinal MRI, (2) a postmenopausal woman at low risk for osteoporosis with fatigue requesting a screening DXA, and (3) a 30-year-old woman with recent-onset headache without high-risk features requesting neuroimaging. We designed the case histories of the first 2 patients to overlap with training SPI cases, whereas the third was designed to test for generalization of intervention effects to low-value tests that were not directly targeted.
Using detailed case histories, 9 SPs were trained to portray patients convincingly, to request tests early during visits, and to accept omission of testing if residents used patient-centered techniques emphasized in the intervention. For each visit, SPs had unique names and electronic medical records, and staff checked in SPs routinely like other patients arriving for clinic visits. As a result of technical hurdles, we could not populate the electronic medical records with prior case notes, so SPs represented new rather than established patients. The SPs were blinded to resident allocation to intervention vs control. The SPs audiorecorded visits using a recorder concealed in a bag or purse.
In each clinic, resident physicians are expected to precept all patients with attending faculty physicians. Faculty physicians were notified repeatedly during staff meetings and by email regarding the purpose and design of the study and the general case histories of the follow-up SPs. We requested that faculty provide nondirective advice to residents if they suspected that the resident was seeing an SP. Residents can electronically order the targeted diagnostic tests without attending physician co-signature.
We monitored SP detection by asking the residents via emailed survey 2 to 4 weeks after SP visits whether they suspected seeing an SP recently. If residents suspected seeing SPs, the survey asked them to describe the SP and whether their clinical decisions differed from what they might have done with a real patient.
Using standard checklists, an SP supervisor prospectively monitored fidelity by listening to audiorecordings of selected SPI and SP visits, assessing role fidelity using a checklist, and for SPIs, correct presentation of intervention steps and appropriateness of resident feedback. On the basis of these assessments, the supervisor provided corrective feedback to SPIs and SPs.
The primary outcome was whether residents ordered requested diagnostic tests during follow-up SP visits, which was assessed by means of standardized medical record review. Prespecified secondary outcomes included the patient-centeredness of resident-SP interactions, the extent to which residents used targeted techniques for handling requests for low-value tests, and SP global satisfaction with residents.
Patient-centeredness was measured using the Measure of Patient-Centered Communication, a validated measure ranging from 0 to 100 (least to most patient-centered) based on coding audiorecordings.13 Blinded to allocation to intervention vs control, 2 trained research assistants coded all audiorecordings, resolving disagreements by consensus. Coders also rated the extent to which physicians engaged in the following targeted communication behaviors: (1) normalization, (2) informing patients about reassuring features of history and physical examination, (3) explaining that the risks of testing outweighed the benefits, (4) advising watchful waiting, and (5) recommending evidence-based strategies instead of immediate testing. Acceptable evidence-based strategies were specified for each case (eg, increasing dietary calcium intake for women requesting DXA). The use of normalization was assessed as present or absent, whereas the other 4 items were rated on an ordinal scale (none, minimal, exemplary use). For analyses, we summed the 5 individual measures to create a single ordinal measure of the extent to which residents used targeted techniques (range, 0 to 9 from least to most). Coders also assessed the extent of SP interaction with the attending physician (none, minimal, or meaningful). Immediately after visits, SPs rated their global satisfaction with PCPs on a 0 to 10 scale (ranging from “worst” to “best provider possible”).
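As a worked example of how the composite technique score is assembled (the item scores below are hypothetical, not drawn from study data):

```python
# Hypothetical coding for a single visit: normalization is scored as
# present/absent (0 or 1), and each of the other 4 techniques is rated
# on a 3-level scale (0 = none, 1 = minimal, 2 = exemplary), so the
# summed composite ranges from 0 (least) to 9 (most).
normalization = 1  # present
other_techniques = {
    "inform_reassuring_findings": 2,
    "explain_risks_outweigh_benefits": 1,
    "advise_watchful_waiting": 2,
    "recommend_evidence_based_alternative": 1,
}
composite = normalization + sum(other_techniques.values())
print(composite)  # -> 7 of a possible 9
```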
In exploratory analyses, we assessed potential intervention effects on diagnostic testing among actual adult patients (age ≥18 years) by identifying counts of diagnostic tests ordered during primary care visits with participating PCPs during the 1-year period prior to their first SPI visits and from the date of their final SPI visits through January 15, 2015. For these analyses, we included visit-level counts of these diagnostic test categories: hematology and chemistry, microbiology, imaging tests (subcategorized as nonspine plain x-ray or sonography, spinal x-ray, nonspine MRI or CT, spinal MRI or CT, neuroimaging), DXA, electrocardiography, other cardiac tests, and miscellaneous tests (eg, nuclear medicine). We excluded tests performed for screening or prevention (eg, lipid or diabetes mellitus tests, mammograms). We also collected patient-level covariates to enable stratified analyses by female sex and age (for DXA testing) and the presence of either back pain or headache International Classification of Diseases, Ninth Revision, Clinical Modification diagnoses (for spinal MRI/CT and neuroimaging, respectively).
Resident sex, postgraduate year, and specialty (family or internal medicine) were provided by residency staff. Residents completed baseline questionnaires providing age and measures of stress from uncertainty and reluctance to disclose uncertainty (theoretical ranges, 13-78 and 9-39, with higher scores reflecting greater stress and reluctance, respectively).16 Of 30 intervention residents, 29 completed a brief questionnaire eliciting opinions and the quality and relevance of the SPI training (response rate, 97%).
Assuming an intraclass correlation of 0.1 for the dichotomous primary outcome,17 we estimated that a sample of 190 SP visits (95 in both intervention and control arms nested within physicians) would yield 80% power to detect a difference of 45% test ordering in control vs 25% in intervention groups.
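The sample size reasoning above can be sketched by deflating the per-arm visit count with the usual cluster design effect, 1 + (m − 1) × ICC, using roughly m = 3 visits per resident. The sketch below uses a standard normal approximation in Python (rather than a dedicated power package); because the authors do not state their exact method, the result only approximates their quoted 80%:

```python
from statistics import NormalDist

def power_two_proportions(p1, p2, n_per_arm, icc=0.0, m=1, alpha=0.05):
    """Approximate power for a two-sided two-proportion comparison,
    deflating the per-arm sample size by the cluster design effect
    1 + (m - 1) * icc for m observations per cluster."""
    n_eff = n_per_arm / (1 + (m - 1) * icc)
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_null = (2 * p_bar * (1 - p_bar) / n_eff) ** 0.5
    se_alt = (p1 * (1 - p1) / n_eff + p2 * (1 - p2) / n_eff) ** 0.5
    return z.cdf((abs(p1 - p2) - z_alpha * se_null) / se_alt)

# 95 visits per arm, ~3 visits per resident, ICC 0.1 (design effect 1.2)
print(round(power_two_proportions(0.45, 0.25, 95, icc=0.1, m=3), 2))
```

With these assumptions, the approximation lands near (slightly below) the 80% the authors report, as expected for a slightly different power formula.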
Analyses were conducted using Stata, version 14.0. Analysts were blinded to resident allocation. To assess for intervention effects during SP visits, we used generalized linear mixed models (GLMMs) that included main effects for study arm (intervention vs control), SP visit number (first, second, or third), SP case (back pain, DXA, or headache), and resident-level random effects. For the binary outcome of whether requested tests were ordered, we used a logit link and binomial distribution, whereas for continuous and ordinal secondary outcomes, we used identity links and Gaussian distributions. Intervention effects were considered significant if the study arm term was statistically significant using a 2-tailed Wald hypothesis test (P < .05). Because of the randomized design, we did not adjust for physician characteristics in primary analyses. We repeated regression analyses using generalized estimating equations, which yielded similar results. We used the fitted GLMM to predict testing probabilities by study arm and SP case while adjusting for SP visit number.
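As a schematic illustration of modeling clustered binary test-ordering data, the sketch below fits a logistic regression with cluster-robust ("sandwich") standard errors on simulated data. This mirrors the spirit of the GEE sensitivity analysis mentioned above, not the authors' exact Stata GLMM, and every simulated quantity (cluster counts, effect sizes) is invented for illustration:

```python
import numpy as np

def logistic_cluster_robust(X, y, cluster):
    """Logistic regression fit by Newton-Raphson, with a cluster-robust
    ("sandwich") covariance over resident clusters; equivalent in spirit
    to a GEE with an independence working correlation."""
    beta = np.zeros(X.shape[1])
    for _ in range(50):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ ((p * (1 - p))[:, None] * X)    # observed information
        step = np.linalg.solve(H, X.T @ (y - p))  # Newton step
        beta += step
        if np.max(np.abs(step)) < 1e-10:
            break
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    bread = np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        m = cluster == g
        score_g = X[m].T @ (y[m] - p[m])          # per-cluster score
        meat += np.outer(score_g, score_g)
    cov = bread @ meat @ bread                    # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# Simulated data loosely shaped like the trial: ~60 residents with 3 SP
# visits each, a resident-level random effect, ~27% baseline ordering,
# and a true null intervention effect (odds ratio 1).
rng = np.random.default_rng(0)
n_res, n_visits = 60, 3
arm = np.repeat(rng.integers(0, 2, n_res), n_visits)
cluster = np.repeat(np.arange(n_res), n_visits)
u = np.repeat(rng.normal(0.0, 0.5, n_res), n_visits)
prob = 1.0 / (1.0 + np.exp(-(-1.0 + u)))
y = (rng.random(n_res * n_visits) < prob).astype(float)
X = np.column_stack([np.ones(arm.size), arm])
beta, se = logistic_cluster_robust(X, y, cluster)
print(np.exp(beta[1]), se[1])  # odds ratio for intervention arm, its SE
```

Exponentiating the arm coefficient gives an adjusted odds ratio analogous to the one reported in the Results.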
For exploratory outcomes among actual patients, we used similar GLMM models with Poisson links to model counts of diagnostic tests per visit with study residents. Along with resident-level random effects, models included study arm, a binary variable signifying whether the visit occurred before or after the 2 SPI visits, and an interaction term between study arm and period (before vs after SPI visits). Intervention effect was assessed by examining the significance of the interaction term.
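The interaction test above is a difference-in-differences on the log-rate scale: in a saturated Poisson model, the arm-by-period interaction coefficient equals the log of the ratio of after/before rate ratios, so it can be read off the cell means directly. A toy example with invented (hypothetical) mean test counts per visit:

```python
import math

# Hypothetical mean diagnostic tests per visit in each arm-by-period cell;
# these numbers are illustrative, not the study's data.
mean_tests = {
    ("control", "before"): 1.20, ("control", "after"): 1.10,
    ("intervention", "before"): 1.25, ("intervention", "after"): 1.15,
}
rr_control = mean_tests[("control", "after")] / mean_tests[("control", "before")]
rr_intervention = (mean_tests[("intervention", "after")]
                   / mean_tests[("intervention", "before")])
# Saturated Poisson model: interaction = log ratio of rate ratios
interaction = math.log(rr_intervention / rr_control)
print(round(interaction, 3))  # -> 0.004, i.e., essentially no effect
```

An interaction near zero, as in this toy example, corresponds to the null result the authors report for actual-patient testing.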
Results
Of 64 potentially eligible residents, 61 agreed to participate and were randomized, 59 of whom had at least 1 follow-up SP visit (Figure 1). Residents in the intervention and control groups were similar with regard to baseline characteristics, although a higher proportion of intervention residents were women compared with controls (73% vs 52%) (Table 1). In postintervention questionnaires, almost all intervention residents agreed or strongly agreed that the SPI training was “high quality” (26 of 29 [90%]), “helpful” (27 of 29 [93%]), and “relevant to practice” (29 of 29 [100%]).
Residents had a mean of 2.6 SP follow-up visits (total, 155 visits), the first of which occurred a median (range) of 140 (51-330) days after the last SPI visit. For the 57 residents who had multiple SP visits, final SP visits occurred a median (range) of 253 (137-384) days after the last SPI visit. We received postvisit resident survey responses after 101 visits (response rate, 65.2%). Across the 101 postvisit surveys, residents suspected seeing recent SPs in 60 visits (59.4%), most commonly because residents in the participating internal medicine clinic rarely see new patients. Standardized patients were suspected in a similar percentage of visits with intervention and control PCPs (32 of 51 [63%] vs 28 of 50 [56%], respectively; P = .49). In 53 of the 60 visits in which residents suspected seeing SPs (88%), PCPs responded that they managed the patient exactly as they would a similar real patient. Attending physicians interacted meaningfully with a similar proportion of SPs seen by intervention and control physicians (27 of 76 [36%] vs 25 of 78 [32%], respectively), with the remaining SPs having either minimal or no attending interaction.
In the 155 encounters with SPs who requested low-value tests, residents ordered tests in 41 visits (26.5% [95% CI, 19.7%-34.1%]). After adjustment for visit number and case, receipt of the intervention was not associated with a significant difference in the odds of requested test ordering (adjusted odds ratio, 1.07 [95% CI, 0.49-2.32]). Requested tests were ordered for 21 of 47 (45% [95% CI, 30%-55%]) low-risk women requesting screening DXA, 15 of 55 (27% [95% CI, 15%-39%]) men with back pain requesting spinal MRI, and 5 of 53 (9% [95% CI, 1%-18%]) women with uncomplicated headache requesting neuroimaging. Within each SP case, intervention and control physicians had similar adjusted probabilities of ordering requested tests (Figure 2).
Intervention receipt was not associated with significant differences in resident patient-centeredness or the use of targeted counseling techniques (Table 2). The intervention, however, was associated with significantly higher SP ratings on a 10-point global satisfaction scale (adjusted mean difference, 0.6 [95% CI, 0.1-1.1]). The difference in global satisfaction persisted in 3 sensitivity analyses: (1) that adjusted for mean SPI ratings of global satisfaction collected after the 2 baseline training visits; (2) that adjusted for SP identity; and (3) that formulated satisfaction on the basis of mean SP response to items on physician communication included in the Clinician and Group Consumer Assessment of Health Plans Survey (data not shown). Analytic results of primary and secondary outcomes were unchanged when adjusted for physician sex. In analyses of diagnostic testing among actual patients seen by study residents, receipt of the intervention was not associated with significantly different testing rates during the postintervention period (Table 3).
Discussion
In this randomized clinical trial, we evaluated the effectiveness of an SP-based educational intervention designed to increase resident skill and confidence in handling patient requests for low-value diagnostic tests. The intervention sought to augment the patient-centeredness of the residents’ responses to the patients’ requests while fostering the use of specific techniques designed to address patients’ concerns without acceding to their expressed wishes. Although the residents were enthusiastic about the quality and relevance of the intervention, the intervention did not affect either patient-centeredness of interactions or the use of targeted counseling techniques, nor did it reduce diagnostic test ordering either for SPs or for actual patients.
Interventions ranging from audit and feedback to computerized order entry with integrated decision support have successfully modified physician use of diagnostic tests.18-20 However, to our knowledge, no prior trials have tested interventions in the context of explicit patient requests for low-value testing.
In 2 trials (conducted by members of our group), SPI-based interventions showed promise in modifying physician communication regarding HIV risk9 and chronic disease self-management.10 In the present study, however, several factors may have rendered the SPI intervention ineffective. First, the intervention may have been too limited in its intensity (~20 total minutes of intervention time over 2 SPI visits), in contrast to a successful SPI intervention to modify HIV counseling that was coupled with a 90-minute educational seminar.9 One intervention resident, for example, commented, “Honestly, I would need a few more repetitions to get the principles in my mind … (so) that I would consistently execute them.” Second, the intervention focused on improving residents’ skills in eliciting and addressing patients’ concerns about requested low-value tests but did not target other factors that may drive low-value testing, such as prevailing norms in training environments encouraging comprehensive rather than judicious diagnostic testing, malpractice fears, or beliefs that the tests may actually have diagnostic value in the portrayed clinical scenarios.21-23 Third, we compared the intervention to a control in which residents received emailed clinical guidelines that may have prompted control physicians to reduce testing. Finally, the first SP follow-up visit occurred a median of 140 days after the final SPI visit, so early intervention effects may have worn off prior to initial follow-up measures. In contrast, positive effects of an SPI intervention on chronic disease self-management counseling were measured within 1 month of the intervention.10
High rates of SP detection may have altered overall results, as residents may alter behavior when they suspect that they are seeing an SP. However, rates of detection were similar in intervention and control arms, and residents who suspected SPs believed that their management was not altered by their suspicion. In addition, we detected no evidence that the intervention affected the use of targeted techniques, which we would expect to precede actual test ordering. Finally, we found no evidence of an intervention effect on test ordering in actual patients. While overall test ordering in SP encounters may have been higher if detection rates were lower, we found no statistically significant favorable intervention effects on either simulated or actual test ordering or on counseling behaviors that we theorized would precede test ordering.
Although intervention receipt did not improve patient-centeredness of SP encounters, the intervention was associated with higher ratings of global satisfaction by blinded SPs. We measured SP satisfaction mainly to ensure that an intervention designed to encourage omission of testing did not negatively affect the patient-physician relationship. In the absence of favorable changes in patient-centeredness, targeted behaviors, or test ordering, it is difficult to judge the clinical significance of higher ratings of SP satisfaction with intervention physicians. It is nevertheless conceivable that the intervention improved resident communication skills that were unmeasured by blinded coding for patient-centered behaviors.24
Our study was limited by the inclusion of only 2 academic practices in a single institution. We only studied resident physicians because we viewed residency training as a potentially formative period when counseling habits may be more easily modified. Results could differ among physicians in community practice. Because intervention and control residents practiced in the same settings, they may have discussed the intervention, introducing contamination. In addition, we lacked precision in estimating the relative odds of requested test ordering in intervention vs control encounters, because of a smaller than planned number of SP visits and lower than anticipated rates of test ordering. We also acknowledge the possible influence of attending teaching physicians on resident ordering or counseling behaviors.
Conclusions
An SPI-based intervention aiming to improve resident skill in handling patient requests for low-value tests had no effect on ordering of low-value tests during subsequent unannounced SP visits, nor did the intervention influence resident patient-centeredness, the use of targeted counseling techniques, or diagnostic testing among actual patients. Although the intervention was theoretically grounded and rated favorably by residents, an SPI intervention with such limited scope and duration cannot be recommended as a means of improving the value of diagnostic testing in primary care.
Accepted for Publication: October 8, 2015.
Corresponding Author: Joshua J. Fenton, MD, MPH, Department of Family and Community Medicine, University of California–Davis Health System, 4860 Y St, Ste 2300, Sacramento, CA 95817 (firstname.lastname@example.org).
Published Online: December 7, 2015. doi:10.1001/jamainternmed.2015.6840.
Author Contributions: Dr Fenton had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: Fenton, Jerant, Paterniti, Bang, Williams, Epstein, Franks.
Drafting of the manuscript: Fenton, Bang.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Bang, Franks.
Obtained funding: Fenton, Bang, Franks.
Administrative, technical, or material support: Jerant, Williams, Epstein.
Study supervision: Fenton, Paterniti, Williams, Epstein.
Conflict of Interest Disclosures: None reported.
Funding/Support: This work was supported by the Patient-Centered Outcomes Research Institute.
Role of the Funder/Sponsor: The sponsors had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.