Review
September 6, 2006

Instruments for Evaluating Education in Evidence-Based Practice: A Systematic Review

Author Affiliations: Department of Medicine, University of Alabama School of Medicine, and Department of Veterans Affairs Medical Center, Birmingham (Drs Shaneyfelt and Houston); Department of Medicine, University of Minnesota Medical School, Minneapolis (Dr Baum); Department of Medicine, Division of General Internal Medicine, David Geffen School of Medicine at University of California, Los Angeles (Dr Bell); Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison (Dr Feldstein); Henry Ford Hospital, Detroit, Mich (Dr Kaatz); Department of Medicine, University of Chicago, Chicago, Ill (Dr Whelan); and Department of Medicine, Yale University School of Medicine, New Haven, Conn (Dr Green).

JAMA. 2006;296(9):1116-1127. doi:10.1001/jama.296.9.1116

Context Evidence-based practice (EBP) is the integration of the best research evidence with patients' values and clinical circumstances in clinical decision making. Teaching of EBP should be evaluated and guided by evidence of its own effectiveness.

Objective To appraise, summarize, and describe currently available EBP teaching evaluation instruments.

Data Sources and Study Selection We searched the MEDLINE, EMBASE, CINAHL, HAPI, and ERIC databases; reference lists of retrieved articles; EBP Internet sites; and 8 education journals from 1980 through April 2006. For inclusion, studies had to report an instrument evaluating EBP, contain sufficient description to permit analysis, and present quantitative results of administering the instrument.

Data Extraction Two raters independently abstracted information on the development, format, learner levels, evaluation domains, feasibility, reliability, and validity of the EBP evaluation instruments from each article. We defined 3 levels of instruments based on the type, extent, methods, and results of psychometric testing and suitability for different evaluation purposes.

Data Synthesis Of 347 articles identified, 115 were included, representing 104 unique instruments. The instruments were most commonly administered to medical students and postgraduate trainees and evaluated EBP skills. Among EBP skills, acquiring evidence and appraising evidence were most commonly evaluated, but newer instruments evaluated asking answerable questions and applying evidence to individual patients. Most behavior instruments measured the performance of EBP steps in practice, but newer instruments documented the performance of evidence-based clinical maneuvers or patient-level outcomes. At least 1 type of validity evidence was demonstrated for 53% of instruments, but 3 or more types of validity evidence were established for only 10%. High-quality instruments were identified for evaluating the EBP competence of individual trainees, determining the effectiveness of EBP curricula, and assessing EBP behaviors with objective outcome measures.

Conclusions Instruments with reasonable validity are available for evaluating some domains of EBP and may be targeted to different evaluation needs. Further development and testing are required to evaluate EBP attitudes, behaviors, and more recently articulated EBP skills.
