Figure. Antibiotic prescribing rates at primary care office visits over time for each intervention are marginal predictions from hierarchical regression models of intervention effects, adjusted for concurrent exposure to other interventions and clinician and practice random effects. Error bars indicate 95% CIs. Interventions started at day 0 and ended at day 540. The plot in panel A differs slightly during the intervention period from Figure 2 of the study by Meeker et al3 due to attrition of 5 clinicians, who were not included in this analysis.
Linder JA, Meeker D, Fox CR, et al. Effects of Behavioral Interventions on Inappropriate Antibiotic Prescribing in Primary Care 12 Months After Stopping Interventions. JAMA. 2017;318(14):1391–1392. doi:10.1001/jama.2017.11152
Inappropriate antibiotic prescribing contributes to antibiotic resistance and leads to adverse events.1 A cluster-randomized trial of 3 behavioral interventions2 intended to reduce inappropriate prescribing found that 2 of the 3 interventions were effective.3 This study examines the persistence of effects 12 months after stopping the interventions.
We randomized 47 primary care practices in Boston, Massachusetts, and Los Angeles, California, and enrolled 248 clinicians to receive 0, 1, 2, or 3 interventions for 18 months. All clinicians received education on antibiotic prescribing guidelines. Two behavioral interventions were electronic health record (EHR)–based: (1) suggested alternatives presented order sets that offered nonantibiotic treatments when clinicians attempted to prescribe antibiotics for acute respiratory infections (ARIs) and (2) accountable justification prompted clinicians to enter free-text written justifications for prescribing antibiotics for ARIs. The third behavioral intervention, peer comparison, sent monthly emails to clinicians comparing their inappropriate antibiotic prescribing rates for ARIs with those of the clinicians with the lowest rates.3
Interventions began between November 1, 2011, and October 1, 2012. Measurement of antibiotic prescribing began 18 months before the start of the intervention and ended 12 months after the intervention stopped. The primary outcome was the rate of inappropriate antibiotic prescribing among office visits by adult patients for nonspecific upper respiratory tract infections, acute bronchitis, and influenza.2 In the study, accountable justification and peer comparison significantly reduced inappropriate antibiotic prescribing at the end of the intervention period.3 As a prespecified secondary objective, data were collected for 12 months postintervention, ending on April 1, 2015. During the postintervention period, 5 clinicians left the study and were excluded from this analysis.
The analysis used a piecewise hierarchical logistic model, with random effects for practices and clinicians and knots demarcating the intervention start and stop dates for each practice. This model measured the persistence of effects of each intervention during the postintervention period compared with practices that did not receive the intervention, adjusting for exposure to other interventions and practice-level and clinician-level effects. We used Stata (StataCorp), version 14.0, and considered 2-tailed P values less than .05 significant, unless otherwise specified. The institutional review board of each participating institution approved the study and waived patient informed consent.
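The piecewise structure above can be illustrated with a minimal sketch of the linear-spline time decomposition it implies: knots at the intervention start (day 0) and stop (day 540) split study time into baseline, intervention, and postintervention segments, each of which receives its own slope in the regression. The function below is illustrative only (the actual model was fit in Stata with random effects not shown here).

```python
def piecewise_terms(day, start=0, stop=540):
    """Decompose study time (days relative to intervention start) into
    three linear-spline segments with knots at `start` and `stop`.
    Each returned term would get its own slope coefficient in the model.
    Illustrative sketch only; not the authors' actual Stata code."""
    pre = min(day, start)                      # baseline-period time
    during = max(0, min(day, stop) - start)    # intervention-period time
    post = max(0, day - stop)                  # postintervention time
    return pre, during, post

# The three segments always sum back to the original time value:
print(piecewise_terms(-100))  # baseline visit
print(piecewise_terms(300))   # during intervention
print(piecewise_terms(600))   # postintervention
```

Because the segments partition elapsed time, the model's predicted trend is continuous at each knot while allowing the slope to change when an intervention starts or stops.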
There were 14 753 visits for antibiotic-inappropriate ARIs during the baseline period, 16 959 during the intervention period, and 7489 during the postintervention period. During the postintervention period, the rate of inappropriate antibiotic prescribing decreased in control clinics from 14.2% to 11.8% (absolute difference, −2.4%); increased from 7.4% to 8.8% (absolute difference, 1.4%) for suggested alternatives (difference-in-differences, 3.8% [95% CI, −10.3% to 17.9%]; P = .55); increased from 6.1% to 10.2% (absolute difference, 4.1%) for accountable justification (difference-in-differences, 6.5% [95% CI, 4.2% to 8.8%]; P < .001); and increased from 4.8% to 6.3% (absolute difference, 1.5%) for peer comparison (difference-in-differences, 3.9% [95% CI, 1.1% to 6.7%]; P < .005) (Figure). During the postintervention period, peer comparison remained lower than control (P < .001; 1-tailed test), whereas accountable justification was not different from control (P = .99; 1-tailed test).
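The difference-in-differences values reported above follow directly from the rate changes: each intervention's postintervention change minus the control clinics' change. A brief arithmetic check, using the published percentages:

```python
# Verify the reported difference-in-differences estimates
# (intervention change minus control change, in percentage points).
control_change = 11.8 - 14.2  # control clinics: -2.4

changes = {
    "suggested alternatives": 8.8 - 7.4,      # +1.4
    "accountable justification": 10.2 - 6.1,  # +4.1
    "peer comparison": 6.3 - 4.8,             # +1.5
}

for name, change in changes.items():
    did = round(change - control_change, 1)
    print(f"{name}: difference-in-differences = {did}%")
# → 3.8%, 6.5%, and 3.9%, matching the reported estimates
```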
In the 12 months after removing behavioral interventions, inappropriate antibiotic prescribing for ARIs increased relative to control practices—whose inappropriate prescribing rates continued to decrease. However, there was still a statistically significant difference between peer comparison and control practices 12 months after the interventions were removed, possibly because this intervention did not rely on EHR prompts whose absence might have been quickly noted by clinicians. Peer comparison might also have led clinicians to make judicious prescribing part of their professional self-image. Although these findings differ from a prior antibiotic-prescribing feedback intervention that did not have persistent effects,4 peer comparison–induced improvements have been durable in other nonmedical domains.5
Limitations of this study are that it included only volunteer clinicians from selected practices and that postintervention follow-up lasted only 12 months. Persistence of effects might diminish further as more time passes.
These findings suggest that institutions exploring behavioral interventions to influence clinician decision making should consider applying them long-term.
Corresponding Author: Jason N. Doctor, PhD, Leonard D. Schaeffer Center for Health Policy and Economics, School of Pharmacy, University of Southern California, 635 Downey Way, Verna and Peter Dauterive Hall, Los Angeles, CA 90089-3333 (email@example.com).
Accepted for Publication: July 24, 2017.
Author Contributions: Drs Meeker and Doctor had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: All authors.
Acquisition, analysis, or interpretation of data: Linder, Meeker, Fox, Friedberg, Persell, Doctor.
Drafting of the manuscript: Linder, Goldstein, Doctor.
Critical revision of the manuscript for important intellectual content: Linder, Meeker, Fox, Friedberg, Persell, Doctor.
Statistical analysis: Linder, Meeker, Doctor.
Obtained funding: All authors.
Administrative, technical, or material support: Linder, Meeker, Goldstein.
Supervision: Linder, Doctor.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Linder reported an honorarium from the Society of Healthcare Epidemiology of America (SHEA) as part of the SHEA Antimicrobial Stewardship Research Workshop Planning Committee, an educational activity supported by Merck. Dr Persell reported grant funding from Pfizer and personal fees from Omron Healthcare. Dr Doctor reported consulting fees from Precision Health Economics. No other disclosures were reported.
Funding/Support: This study was supported by grants RC4 AG039115 (Dr Doctor) and R01 HS19913-01 (Dr Ohno-Machado) of the American Recovery and Reinvestment Act of 2009 from the National Institutes of Health and National Institute on Aging and Agency for Healthcare Research and Quality. Data for the project were collected by the University of Southern California's Medical Information Network for Experimental Research, which participates in the Patient-Centered Scalable National Network for Effectiveness Research supported by contract CDRN-1306-04819 (Dr Ohno-Machado) from the Patient-Centered Outcomes Research Institute.
Role of Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.