Prescribing rates for each intervention are marginal predictions from hierarchical regression models of intervention effects, adjusted for concurrent exposure to other interventions and clinician and practice random effects. Error bars indicate 95% CIs. Model coefficients are available in eTable 3 in Supplement 2.
Study Protocol and Changes to Analysis Plan
eFigure 1. Study Timeline
eTable 1. Qualifying Revisit Diagnostic Codes
eTable 2. Sample Characteristics During the 18-month Baseline Period
eTable 3. Coefficient Estimates From Main Hierarchical Logistic Model to Estimate Intervention Effects on Inappropriate Antibiotic Prescribing Trajectories
eTable 4A. Coefficient Estimates for the Fully Interacted Logistic Model Estimated With Robust Standard Errors to Estimate Intervention Effects on Inappropriate Antibiotic Prescribing Trajectories, for Wald Test
eTable 4B. Coefficient Estimates for the Main Effects Only Logistic Model Estimated With Robust Standard Errors to Estimate Intervention Effects on Inappropriate Antibiotic Prescribing Trajectories, for Wald Test
eTable 5. Coefficient Estimates From Sensitivity Analysis: Simplified Hierarchical Logistic Model to Estimate Intervention Effects on Inappropriate Antibiotic Prescribing Rates, Comparing 18-Month Baseline to 18-Month Intervention Period
eTable 6. Coefficient Estimates From Logistic Model to Estimate Intervention Effects on Proportion of all Acute Respiratory Infections Coded as Antibiotic-Appropriate Diagnoses
eTable 7. Rate of Return Visit With Diagnosis of Concern, When Antibiotics Were Not Prescribed at the Index Visit, by Study Arm
eTable 8. Specific Diagnoses of Concern Associated With Return Visits for Which Antibiotics Might Have Been Helpful in the Initial Visit
Meeker D, Linder JA, Fox CR, et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562–570. doi:10.1001/jama.2016.0275
Interventions based on behavioral science might reduce inappropriate antibiotic prescribing.
To assess effects of behavioral interventions on rates of inappropriate (not guideline-concordant) antibiotic prescribing during ambulatory visits for acute respiratory tract infections.
Design, Setting, and Participants
Cluster randomized clinical trial conducted among 47 primary care practices in Boston and Los Angeles. Participants were 248 enrolled clinicians randomized to receive 0, 1, 2, or 3 interventions for 18 months. All clinicians received education on antibiotic prescribing guidelines on enrollment. Interventions began between November 1, 2011, and October 1, 2012. Follow-up for the latest-starting sites ended on April 1, 2014. Adult patients with comorbidities and concomitant infections were excluded.
Three behavioral interventions, implemented alone or in combination: suggested alternatives presented electronic order sets suggesting nonantibiotic treatments; accountable justification prompted clinicians to enter free-text justifications for prescribing antibiotics into patients’ electronic health records; peer comparison sent emails to clinicians that compared their antibiotic prescribing rates with those of “top performers” (those with the lowest inappropriate prescribing rates).
Main Outcomes and Measures
Antibiotic prescribing rates for visits with antibiotic-inappropriate diagnoses (nonspecific upper respiratory tract infections, acute bronchitis, and influenza) from 18 months preintervention to 18 months afterward, adjusting each intervention’s effects for co-occurring interventions and preintervention trends, with random effects for practices and clinicians.
There were 14 753 visits (mean patient age, 47 years; 69% women) for antibiotic-inappropriate acute respiratory tract infections during the baseline period and 16 959 visits (mean patient age, 48 years; 67% women) during the intervention period. Mean antibiotic prescribing rates decreased from 24.1% at intervention start to 13.1% at intervention month 18 (absolute difference, −11.0%) for control practices; from 22.1% to 6.1% (absolute difference, −16.0%) for suggested alternatives (difference in differences, −5.0% [95% CI, −7.8% to 0.1%]; P = .66 for differences in trajectories); from 23.2% to 5.2% (absolute difference, −18.1%) for accountable justification (difference in differences, −7.0% [95% CI, −9.1% to −2.9%]; P < .001); and from 19.9% to 3.7% (absolute difference, −16.3%) for peer comparison (difference in differences, −5.2% [95% CI, −6.9% to −1.6%]; P < .001). There were no statistically significant interactions (neither synergy nor interference) between interventions.
Conclusions and Relevance
Among primary care practices, the use of accountable justification and peer comparison as behavioral interventions resulted in lower rates of inappropriate antibiotic prescribing for acute respiratory tract infections.
clinicaltrials.gov Identifier: NCT01454947
Overuse of antibiotics exposes patients to unnecessary risk of adverse drug events, increases health care costs, and increases the prevalence of antibiotic-resistant bacteria.1,2 Most antibiotics prescribed in the United States are for acute respiratory tract infections, and roughly half of these prescriptions are intended to treat diagnoses for which antibiotics have no benefit.3,4 Despite published clinical guidelines and decades of efforts to change prescribing patterns, antibiotic overuse persists.1,2,5,6 Interventions such as physician and patient education, computerized clinical decision support, and financial incentives have historically produced modest reductions in antibiotic prescription rates for targeted acute respiratory tract infections.7,8
Changing clinician decision making has been challenging in acute respiratory visits and other care domains. For example, pay-for-performance has yielded mixed results,9,10 and traditional alerts and reminders, which can contribute to information overload, are often disruptive and ignored.11 Two recent studies favored an intervention that did not disrupt workflow and that appealed to clinicians’ pride in their own performance.12,13 With this in mind, there is increasing interest in use of behavioral science, including psychology and behavioral economics, to affect policy.14 Researchers are beginning to apply models from these disciplines to identify new social and cognitive devices to gently nudge clinician decision making while preserving freedom of choice.15 Such approaches are well matched to the goal of encouraging uptake of effective evidence-based treatments in health care, with appropriate antibiotic prescribing being one example.
We applied insights from behavioral science to design 3 interventions to reduce the rate of unnecessary antibiotic prescribing during ambulatory visits for acute respiratory tract infections in a multisite cluster randomized trial.
Details of the study design, randomization scheme, and interventions are summarized below and presented in detail in the original protocol (available in Supplement 1). The institutional review board of each participating institution approved all study procedures and waived informed consent for patients.
We recruited 49 primary care practices from 3 health systems using 3 different electronic health records (EHRs) in 2 geographically distinct regions: Massachusetts (Partners HealthCare: 22 practices affiliated with Brigham and Women’s Hospital or Massachusetts General Hospital) and Southern California (AltaMed Medical Group, 22 practices; The Children’s Clinic, 5 practices [the latter also sees a high volume of adult patients]). Participating practices served different patient populations. The majority of patients at the Southern California practices were Hispanic, with a high proportion living at or below 200% of the federal poverty level, whereas the patients served by the Massachusetts practices were predominantly white/non-Hispanic, with a wider income range. Race/ethnicity was assessed as part of the standard collection of demographic data from EHRs, and harmonized using Observational Medical Outcomes Partnership standards for race/ethnicity reporting.16
Practices were excluded prior to randomization if none of their clinicians had at least 5 antibiotic-inappropriate acute respiratory tract infection visits annually. Two of the practices did not meet inclusion criteria, leaving 47 practices for randomization. All sites required clinicians to prescribe medications through their EHRs.
Clinicians were recruited via email and enrolled through an online education module covering acute respiratory tract infection diagnosis and treatment guidelines, information about the interventions, and any expected changes to their EHRs (details in the original protocol, available in Supplement 1); clinicians provided electronic informed consent at module completion. Some organizations allowed payment to individual clinicians for their participation; others instead requested payment to practices. Payment for study participation was $1200 per clinician, regardless of intervention assignment or antibiotic prescribing rates.
The primary study outcome was the antibiotic prescribing rate for visits with antibiotic-inappropriate acute respiratory tract infection diagnoses and no concomitant reason for antibiotic prescribing. Antibiotic-inappropriate diagnoses included nonspecific upper respiratory tract infections, acute bronchitis, and influenza (International Classification of Diseases, Ninth Revision [ICD-9] codes 460, 464, 464.0, 464.00, 464.1, 464.10, 464.2, 464.20, 464.4, 464.50, 465, 465.0, 465.8, 465.9, 466, 466.0, 466.1, 466.11, 466.19, 487, 487.1, 487.8, 490). We excluded visits with diagnosis codes for acute pharyngitis or acute rhinosinusitis because guidelines permit antibiotic prescription when certain criteria are met, and we lacked data necessary to identify this antibiotic-appropriate subset.17,18
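The outcome definition above reduces, in implementation terms, to a membership test against the enumerated ICD-9 code set. A minimal illustrative sketch (not the study's actual code; the function name is ours):

```python
# Antibiotic-inappropriate ICD-9 diagnosis codes enumerated in the text
# (nonspecific upper respiratory tract infection, acute bronchitis, influenza).
INAPPROPRIATE_ICD9 = {
    "460", "464", "464.0", "464.00", "464.1", "464.10", "464.2", "464.20",
    "464.4", "464.50", "465", "465.0", "465.8", "465.9", "466", "466.0",
    "466.1", "466.11", "466.19", "487", "487.1", "487.8", "490",
}

def is_antibiotic_inappropriate(icd9_code: str) -> bool:
    """Return True if a visit diagnosis code is in the antibiotic-inappropriate set."""
    return icd9_code.strip() in INAPPROPRIATE_ICD9
```

Note that exact-string matching mirrors the explicit enumeration in the text; codes outside the set (eg, acute pharyngitis, 462) fall out of the outcome denominator.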
A visit for an antibiotic-inappropriate acute respiratory tract infection was eligible for outcome inclusion if (1) the patient was 18 years or older, (2) the clinician and practice were enrolled in the study, (3) the visit occurred during the 18-month baseline or 18-month intervention period, and (4) the patient had no visit for acute respiratory tract infection within the prior 30 days. Visits were excluded when patients had medical comorbidities that were acute respiratory tract infection guideline exclusions (eg, chronic lung disease; for full list of excluded diagnoses, see original protocol [Appendix E: Code Set Definitions] in Supplement 1) or patients had concomitant visit diagnoses indicating presence of other, potentially antibiotic-appropriate, infections (eg, cellulitis, acute sinusitis).
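The four inclusion criteria can be expressed as a short eligibility filter. The sketch below is illustrative only; the record fields are hypothetical stand-ins for the study's data model, and comorbidity/concomitant-infection exclusions (handled via the protocol's code sets) are omitted:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Visit:
    """Hypothetical visit record; field names are illustrative."""
    patient_age: int
    clinician_enrolled: bool
    visit_date: date
    prior_ari_visit_dates: list  # earlier acute respiratory tract infection visits

def eligible_for_outcome(v: Visit, study_start: date, study_end: date) -> bool:
    """Apply the four inclusion criteria described in the text."""
    if v.patient_age < 18:
        return False                                    # criterion 1: adult
    if not v.clinician_enrolled:
        return False                                    # criterion 2: enrolled
    if not (study_start <= v.visit_date <= study_end):
        return False                                    # criterion 3: study window
    # criterion 4: no acute respiratory tract infection visit in prior 30 days
    return all(v.visit_date - d > timedelta(days=30) for d in v.prior_ari_visit_dates)
```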
Each participating site created an extract from its EHR or billing records of the data elements necessary to compute the outcome measures for all patients with acute respiratory tract infections. These records were transferred to the University of Southern California, where study staff checked data quality and transformed the data into a standard model (Observational Medical Outcomes Partnership Common Data Model, version 3).16
Suggested alternatives was an EHR-based intervention most closely resembling traditional clinical decision support and order sets. Diagnoses of acute respiratory tract infection triggered a pop-up screen stating that “Antibiotics are not generally indicated for [this diagnosis]. Please consider the following prescriptions, treatments, and materials to help your patient,” followed by a list of alternatives (see original protocol [Appendix F: Example of Suggested Alternatives Order Set] in Supplement 1), each with streamlined order entry options for over-the-counter and prescription medications (eg, decongestants) and letter templates excusing patients from work. The suggested alternatives intervention drew from the behavioral insight that prescribers may infer that a suggested (nonantibiotic) alternative ought to be considered, thus reducing the likelihood that an antibiotic would be prescribed.19
Accountable justification was also an EHR-based intervention. An EHR prompt asked each clinician seeking to prescribe an antibiotic to explicitly justify, in a free text response, his or her treatment decision. The prompt also informed clinicians that this written justification would be visible in the patient’s medical record as an “antibiotic justification note” and that if no justification was entered, the phrase “no justification given” would appear. Encounters could not be closed without the clinician’s acknowledgment of the prompt, but clinicians could cancel the antibiotic order to avoid creating a justification note, if they chose. The accountable justification alert was triggered for both antibiotic-inappropriate diagnoses and potentially antibiotic-appropriate acute respiratory tract infection diagnoses (eg, acute pharyngitis).
The accountable justification intervention was based on prior findings that accountability improves decision making accuracy and that public justification engenders reputational concerns.20-23 To preserve their reputations, clinicians should be more likely to act in line with injunctive norms24—that is, what one “ought to do” as recommended by clinical guidelines.25
Peer comparison was an email-based intervention. Clinicians were ranked from highest to lowest inappropriate prescribing rate within each region using EHR data. Clinicians with the lowest inappropriate prescribing rates (the top-performing decile) were told via monthly email they were “Top Performers” (see original protocol [Appendix G: Sample Peer Comparison Email Text] in Supplement 1). The remaining clinicians were told that they were “Not a Top Performer” in an email that included the number and proportion of antibiotic prescriptions they wrote for antibiotic-inappropriate acute respiratory tract infections, compared with the proportion written by top performers.
Peer comparison was distinct from traditional audit-and-feedback interventions in its comparison with top-performing peers instead of average-performing peers and its delivery of positive reinforcement to top performers—a strategy shown elsewhere to sustain performance.26-28
The suggested alternatives and accountable justification interventions were triggered by clinician entry of antibiotic orders for antibiotic-inappropriate acute respiratory tract infections and those for which antibiotics were potentially (but not necessarily) appropriate: acute rhinosinusitis and acute pharyngitis. The peer comparison intervention included only antibiotic-inappropriate acute respiratory tract infections. All 3 interventions were suppressed for patients who had comorbidities constituting guideline exclusions.
We randomized practices to receive 0, 1, 2, or 3 interventions in a 2 × 2 × 2 factorial design to avoid within-practice contamination between clinicians, blocking on geographic region (in part, to balance study allocation by EHR) to achieve balance for evaluation of main effects.29 This factorial design enabled investigation of potential interactions between intervention effects, while preserving the ability to estimate the main effects of each intervention individually. Randomization programming was conducted with the R statistical programming package (R Foundation for Statistical Computing; https://www.r-project.org/foundation/).
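The logic of blocked 2 × 2 × 2 factorial allocation can be sketched as follows. This is an illustrative stand-in for the study's R randomization code (the round-robin balancing scheme and function names are our assumptions, not the protocol's):

```python
import itertools
import random

def randomize_practices(practices_by_region, seed=0):
    """Assign each practice to one of the 8 cells of a 2 x 2 x 2 factorial
    design (suggested alternatives, accountable justification, peer comparison),
    blocking on region: within each region, practices are shuffled and then
    assigned round-robin so cells stay balanced within the block."""
    rng = random.Random(seed)
    cells = list(itertools.product([0, 1], repeat=3))  # 8 intervention combinations
    assignment = {}
    for region, practices in practices_by_region.items():
        shuffled = practices[:]
        rng.shuffle(shuffled)
        for i, practice in enumerate(shuffled):
            assignment[practice] = cells[i % len(cells)]
    return assignment
```

With 8 practices per block, each intervention combination appears exactly once per region, which is the balance property the blocking is meant to deliver.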
Dates for intervention implementation differed between practice organizations, owing to differences in clinician recruitment procedures and EHR-specific development times for intervention features. Interventions began between November 2011 and October 2012 and lasted for 18 months in each practice. For the latest-starting practices, follow-up ended on April 1, 2014. A study timeline is presented in eFigure 1 in Supplement 2.
Our primary analysis was based on the approach of Gerber et al12: we used a piecewise hierarchical model with a knot at month 0 (the intervention start date) to model trajectories of the log odds of the main outcome for the control group and each intervention group, starting 18 months before each intervention began and ending after 18 months of intervention exposure, and including random effects for practices and clinicians. The estimates of interest were intervention × time interaction terms, which represented changes in prescribing trajectories, relative to contemporaneous controls, that occurred when each intervention began. This model measured the effects of each intervention in comparison with all practices that did not receive the intervention, adjusting for exposure to other interventions and practice- and clinician-level effects to account for time-invariant characteristics (eg, specific EHR product used). We did not adjust for patient characteristics, which were measured after clinicians were exposed to the interventions and could be concomitant with outcome. Instead, we relied on block randomization to equate groups on patient characteristics.
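The piecewise structure with a knot at month 0 can be made concrete with a simplified linear predictor. This sketch is illustrative (coefficient names are ours, and random effects are omitted); the intervention × time interaction, `b_trt_post`, is the estimate of interest described above:

```python
def trajectory_logodds(month, treated, b0, b_pre, b_post, b_trt, b_trt_post):
    """Log odds of inappropriate prescribing under a piecewise-linear model
    with a knot at month 0 (intervention start). Before the knot, treated and
    control groups may differ only by a constant offset (b_trt); after the
    knot, the treated group's slope additionally changes by b_trt_post."""
    post = max(month, 0.0)          # 0 during baseline, rises after the knot
    return (b0 + b_pre * month      # shared baseline trajectory
            + b_post * post         # secular slope change at intervention start
            + treated * (b_trt + b_trt_post * post))  # intervention effect
```

A negative `b_trt_post` corresponds to the intervention group's prescribing trajectory bending downward relative to contemporaneous controls at month 0.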
To display findings on the original scale of the data, we generated monthly marginal predictions and confidence intervals from this model corresponding to the control condition and each intervention individually. Confidence intervals for differences between control and intervention (ie, for intervention effects) were bootstrapped with 1000 replications.
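A percentile bootstrap with 1000 replications, as used for the intervention-effect intervals, can be sketched on raw binary outcomes. This is a simplified stand-in for the study's model-based bootstrap (resampling visits directly rather than refitting the hierarchical model):

```python
import random

def bootstrap_diff_ci(control, intervention, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the difference in prescribing rates between
    control and intervention samples of 0/1 outcomes, using 1000 replications
    as in the text. Illustrative only: the study bootstrapped model-based
    marginal predictions, not raw proportions."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        c = [rng.choice(control) for _ in control]          # resample controls
        t = [rng.choice(intervention) for _ in intervention]  # resample treated
        diffs.append(sum(t) / len(t) - sum(c) / len(c))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```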
We performed sensitivity analyses to test for interactions between interventions by expanding the main effects model to include interaction terms for each combination of interventions and comparing this fully interacted model to the original main model using a Wald test.30 We also evaluated interaction terms individually using a similar approach.
With an expected 2252 or more visits per intervention group, a priori calculations indicated 80% power to detect a 7% absolute reduction in antibiotic prescribing (smaller than the 8.9% median effect reported in prior antibiotic quality improvement efforts)8 at the .05 level of significance, assuming a baseline prescribing rate of 50% and intrapractice correlation coefficient of 0.05. These expected visit counts and correlation coefficients were based on preintervention analyses, using data from the Boston and Los Angeles sites.
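A textbook approximation of this cluster-adjusted power calculation is sketched below using the planning values from the text (50% baseline rate, 7-point absolute reduction, ICC 0.05). The cluster size argument is hypothetical, since the text does not report the assumed visits per practice, so the resulting counts are illustrative rather than a reconstruction of the study's 2252-visit figure:

```python
from statistics import NormalDist

def cluster_sample_size(p1, p2, icc, cluster_size, alpha=0.05, power=0.80):
    """Approximate per-group visit count for comparing two proportions,
    inflated by the cluster design effect 1 + (m - 1) * ICC.
    Standard two-proportion formula; illustrative, not the study's code."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for 2-sided test
    z_b = NormalDist().inv_cdf(power)           # critical value for target power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n_individual = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
    deff = 1 + (cluster_size - 1) * icc         # variance inflation from clustering
    return n_individual * deff
```

The design effect term shows why clustered designs need larger visit counts than individually randomized ones for the same detectable difference.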
To investigate the possibility that interventions led to diagnosis shifting (ie, changes in clinicians’ diagnostic coding habits), we used the approach of our main models to test whether potentially antibiotic-appropriate acute respiratory tract infection diagnoses (eg, pneumonia, chronic sinusitis) increased as a proportion of all acute respiratory tract infection diagnoses.31
In sensitivity analyses, we fit a simple difference-in-differences model estimating changes in the primary outcome associated with each intervention, treating the entire 18-month intervention period as a binary variable, without accounting for prescribing trajectories.
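The core arithmetic of this sensitivity analysis is the classic difference-in-differences contrast, sketched here on the rate scale (the study's model worked on the log-odds scale with random effects; this is a simplification). The example values are the control and suggested-alternatives rates reported in the abstract:

```python
def did_estimate(pre_ctrl, post_ctrl, pre_trt, post_trt):
    """Difference-in-differences on prescribing rates:
    (treated change over time) minus (control change over time)."""
    return (post_trt - pre_trt) - (post_ctrl - pre_ctrl)
```

For example, with control rates falling 24.1% to 13.1% and suggested-alternatives rates falling 22.1% to 6.1%, the contrast recovers the reported −5.0 percentage-point difference in differences.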
Elements of our analytic approach (specifically, using an 18-month baseline period and piecewise hierarchical modeling technique in the main analyses, performing the interaction effect sensitivity analysis, and performing the simple difference-in-differences model sensitivity analyses) were modified from our original analysis plan based on feedback received during the peer review process.
Under the direction of an independent data and safety monitoring board, we evaluated patient safety. For antibiotic-inappropriate visits in which no antibiotic was prescribed, we assessed return visits within 30 days for the presence of complications potentially attributable to untreated bacterial infections (eTable 1 in Supplement 2). We conducted chart reviews on a 20% random sample of such cases to determine whether prescription of antibiotics at the initial visit would have prevented the complication.
We performed a complete case analysis. There were no missing values of the main outcome, and we did not impute missing covariate values (which were missing in approximately 3% of records). We analyzed data using Stata MP version 12.1 (StataCorp) and considered 2-sided P values less than .05 significant.
Of 353 clinicians invited, 248 (70%) agreed to participate (Figure 1). On average, enrolled clinicians were 48 years old, and most were women (Table 1). Among patients who had a qualifying visit during the 18-month intervention period, the mean age was 48 years, 33% were men, 87% were white, 32% were Hispanic, and 59% had private insurance (patient characteristics during the baseline period are available in eTable 2 in Supplement 2).
During the study, there were 125 333 visits for any diagnosis of acute respiratory tract infection. Of these, 31 712 visits (14 753 during baseline period, 16 959 during intervention period) met criteria for outcome evaluation.
Mean antibiotic prescribing rates decreased from 24.1% at intervention start to 13.1% at intervention month 18 (absolute difference, −11.0%) for control practices; from 22.1% to 6.1% (absolute difference, −16.0%) for suggested alternatives (difference in differences, −5.0% [95% CI, −7.8% to 0.1%]; P = .66 for differences in trajectories); from 23.2% to 5.2% (absolute difference, −18.1%) for accountable justification (difference in differences, −7.0% [95% CI, −9.1% to −2.9%]; P < .001); and from 19.9% to 3.7% (absolute difference, −16.3%) for peer comparison (difference in differences, −5.2% [95% CI, −6.9% to −1.6%]; P < .001) (Figure 2; full regression results available in eTable 3 in Supplement 2).
There were no statistically significant interactions between interventions, and the fully interacted model failed to improve goodness-of-fit over the main effects model (eTable 4A and 4B in Supplement 2). The results of sensitivity analyses treating the entire 18-month baseline and intervention periods as dummy variables (eTable 5 in Supplement 2) were similar to the main analysis (Figure 2; eTable 3 in Supplement 2). Unadjusted qualifying visit counts and prescribing rates before and during the intervention for each study group are available in Table 2.
Relative to control, none of the interventions was statistically significantly associated with changes over time in the proportion of antibiotic-appropriate acute respiratory tract infection diagnoses among visits for any acute respiratory tract infection diagnosis (ie, there was no evidence of diagnosis shifting; full regression results available in eTable 6 in Supplement 2).
The rate of return visits for possible bacterial infections within 30 days following visits for acute respiratory tract infection (both antibiotic-inappropriate and potentially antibiotic-appropriate) in which antibiotics were not prescribed was 0.43% (95% CI, 0.25% to 0.70%) among control practices. There was a statistically significantly higher rate of such return visits in the accountable justification plus peer comparison group (1.41% [95% CI, 1.06% to 1.85%]) (eTable 7 in Supplement 2). No other intervention group (including the group applying all 3 interventions simultaneously) had a statistically significantly higher rate of such return visits.
Among return visits, study physicians and an independent data safety and monitoring board reviewed a random sample of 33 cases across all study groups. For 12 cases, it was unlikely that an antibiotic would have been helpful if prescribed at the index visit (eg, “cold symptoms” with clear chest and no fever at return visit); for 8 cases, there was uncertainty (eg, patient returned with a diagnosis of pneumonia, but no chest radiograph was obtained at the index or return visit); and for 13 cases, antibiotics might have been helpful (eg, a patient with influenza diagnosis at initial visit was hospitalized for pneumonia 1 week later; return-visit diagnoses associated with these 13 cases are available in eTable 8 in Supplement 2).
We designed, implemented, and evaluated 3 behavioral interventions to curtail overuse of antibiotics for acute respiratory tract infections. We found that 2 socially motivated interventions—accountable justification and peer comparison—resulted in statistically significant reductions in inappropriate antibiotic prescribing, while suggested alternatives, which lacked a social component, had no statistically significant effect. There were no statistically significant interactions between interventions; therefore, applying these interventions simultaneously might have additive effects on antibiotic prescribing.
The intervention effects that we observed represent reductions in inappropriate prescribing beyond those attributable to an educational module or observation alone (the Hawthorne effect), which were both applied in the control condition. We believe these effect sizes (5.2 to 7.0 percentage points) are clinically significant, especially when measured against control clinicians who were motivated to join a trial, knew they were being monitored, and who had relatively low antibiotic prescribing rates at baseline.32-34
Insights from behavioral science may have enhanced the effectiveness of interventions. Our peer comparison intervention performed favorably in comparison with traditional audit-and-feedback.35 Previous studies have shown modest effects36 or null effects37 when justification interventions lacked public accountability (ie, lacked mechanisms to publicly display clinicians’ written justifications). In contrast, our justification intervention included such a mechanism: the “antibiotic justification note.”
All 3 interventions involved modest changes to the practice environment (ie, “nudges”); none restricted clinicians’ choice of treatment or changed how clinicians were paid. For some primary care practices, the peer comparison intervention might be the simplest and most pragmatic of these interventions, since it requires no modification of the EHR. However, the peer comparison intervention depends on producing valid performance measures, which can be challenging when data are unavailable or individual-clinician sample sizes are small. The accountable justification intervention, although it requires EHR modification, does not have these drawbacks and can, in theory, be applied to any clinical decision (since clinicians should always be able to articulate a decision-making rationale).
There was little evidence of potential harm associated with any of the 3 interventions. Although the study group applying both the accountable justification and peer comparison interventions had a modestly higher rate of return visits for diagnoses of concern (approximately 1.4% of visits, compared with 0.4% in the control group), no other intervention group (including the group applying all 3 interventions simultaneously) was associated with higher rates of such return visits. We did not assess potential harms associated with prescribing antibiotics unnecessarily; such harms are most likely when inappropriate prescribing rates are highest (ie, when no intervention is applied). However, as with other efforts to change clinical practice, continual monitoring of patient outcomes is advisable.
Our study has limitations. First, the number of clinicians within each cluster was small. Although a high proportion of invited clinicians chose to participate, some did not, which may limit generalizability. Similarly, trial findings might not generalize to primary care practices dissimilar to those enrolled. Second, results are dependent on EHR and billing data, which are imperfect for performance measurement—although in the present context they have demonstrated validity.38
Third, logistic and other nonlinear regression methods may lead to effect sizes biased toward the null in randomized studies when important covariates are omitted.39,40 Fourth, although interactions were not significant, combining nudges in some cases could have attenuated the effects we observed (eg, if prescribing rates hit a “floor”). Therefore, our results might underestimate the independent effects of each individual intervention.
Fifth, our safety analyses were limited to return visits to the clinical organizations studied (possibly underestimating intervention risks) and did not investigate harms caused by unnecessary antibiotics (possibly underestimating risks of not applying the interventions). Sixth, we did not distinguish between narrower- and broader-spectrum antibiotics.
Seventh, we did not measure heterogeneity of intervention effects by type of practice, clinician, or EHR product. Eighth, we could not directly measure differences in clinicians’ coding habits between types of practice settings. Ninth, persistence of effects is unknown. Last, elements of our analytic approach were determined post hoc, based on guidance during the peer review process. Effect estimates from these post hoc analyses had the same direction and statistical significance as our prespecified analyses.
Among primary care practices, the use of accountable justification and peer comparison as behavioral interventions resulted in lower rates of inappropriate antibiotic prescribing for acute respiratory tract infections.
Corresponding Author: Jason N. Doctor, PhD, Leonard D. Schaeffer Center for Health Policy and Economics, School of Pharmacy, University of Southern California, Verna & Peter Dauterive Hall (VPD), 635 Downey Way, Los Angeles, CA 90089-3333 (email@example.com).
Correction: This article was corrected online on May 9, 2016, to correct a typographic error in the text and data errors in eTables 4 and 6.
Author Contributions: Drs Meeker and Doctor had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Meeker, Linder, Fox, Friedberg, Persell, Goldstein, Hay, Doctor.
Acquisition, analysis, or interpretation of data: Meeker, Linder, Fox, Friedberg, Goldstein, Knight, Hay, Doctor.
Drafting of the manuscript: Meeker, Linder, Friedberg, Goldstein, Knight, Doctor.
Critical revision of the manuscript for important intellectual content: Meeker, Linder, Fox, Friedberg, Persell, Goldstein, Hay, Doctor.
Statistical analysis: Meeker, Linder, Knight, Hay, Doctor.
Obtained funding: Meeker, Linder, Fox, Doctor.
Administrative, technical, or material support: Meeker, Linder, Persell, Knight, Hay.
Study supervision: Meeker, Linder, Doctor.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Funding/Support: This study was supported by the American Recovery & Reinvestment Act of 2009 (RC4 AG039115) from the National Institutes of Health/National Institute on Aging and Agency for Healthcare Research and Quality (Dr Doctor, University of Southern California). The project also benefited from technology funded by the Agency for Healthcare Research and Quality through the American Recovery & Reinvestment Act of 2009 (R01 HS19913-01) (Dr Ohno-Machado, University of California, San Diego). Data for the project were collected by the University of Southern California's Medical Information Network for Experimental Research (Med-INFER) which participates in the Patient Scalable National Network for Effectiveness Research (pSCANNER) supported by the Patient-Centered Outcomes Research Institute (PCORI), Contract CDRN-1306-04819 (Dr Ohno-Machado).
Role of the Funders/Sponsors: The funders/sponsors had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Additional Contributions: We gratefully acknowledge Dana Goldman, PhD (University of Southern California), for providing consultation; Laura Pearlman, SB (University of Southern California), for assistance with programming and data management; Kensey Pease, BA (University of Southern California), for general project support; Elisha Friesema, BA (Northwestern University), for supporting protocol development; Alan Rothfeld, MD (COPE Health Solutions), Felix Carpio, MD (AltaMed Health Services), Michael Hochman, MD (AltaMed Health Services), Sarita Mohanty, MD (LA Care), and Maria Chandler, MD (The Children’s Clinic of Long Beach and Memorial Care Hospital), for clinical management at the participating Los Angeles practices; and Caroline Birks, MD (Partners Healthcare System and Massachusetts General Hospital), for assisting with clinical management of the participating Boston practices. Dr Goldman, Ms Pearlman, Ms Friesema, Dr Mohanty, Dr Chandler, Dr Birks, and Dr Rothfeld received compensation for their roles in the study.