Welch TR, Bullen MJ. The Effect of a Teaching Award on the Quality of Continuing Medical Education Participant Evaluations. Arch Pediatr Adolesc Med. 2000;154(1):81–82. doi:10.1001/pubs.Pediatr Adolesc Med.-ISSN-1072-4710-154-1-pei90185
Objective
To improve compliance with the completion of speakers' evaluation forms in a pediatric hospital continuing medical education program.
Design
Preintervention and postintervention analysis.
Setting
Pediatric hospital in Cincinnati, Ohio.
Participants
Attendees at pediatric grand rounds programs.
Main Outcome Measure
Analysis of speaker evaluation forms for each of 20 pediatric grand rounds programs before and after participant evaluations were made the basis for speakers' awards.
Results
Spontaneous written comments were found on a mean of 7.3 evaluations per preintervention program and 13.5 evaluations per postintervention program (P<.01). The distribution of objective scores on the 3 items examined was wider postintervention than preintervention (P<.01).
Conclusion
When participants in continuing medical education programs know that their evaluations of an activity are used as the basis for an educational award, they may be more reflective in completing such evaluations.
THE FIFTH essential of the Essentials and Guidelines for Accreditation of Sponsors of Continuing Medical Education (CME)1 of the Accreditation Council for CME mandates the evaluation of CME activities by participants.2 Many programs satisfy this requirement by incorporating a questionnaire-format evaluation form into the attendance record, so that participants must complete an evaluation of the activity to claim credit.
It has been our experience, especially with recurring programs, that such forms frequently are completed with minimal reflection. This, in turn, has minimized their utility in providing helpful feedback to speakers and program planners. To address this problem, and to offer recognition for outstanding CME providers, we instituted a system by which participants' evaluations were tied to a teaching award.
Children's Hospital Medical Center, Cincinnati, Ohio, is a 330-bed tertiary care pediatric hospital serving a referral region with a population of 1½ million. The hospital's CME department provides programs for virtually all of the pediatricians in the region, as well as for many family physicians and general practitioners who care for children.
The major ongoing CME activity for this group is a series of weekly pediatric grand rounds programs. The program is attended by an average of 140 participants, both at the base hospital and at multiple off-campus, satellite-networked sites.
The participant evaluation instrument for this program includes an objective rating of 5 items—(1) objectives met, (2) educational aids, (3) pertinence to practice, (4) presentation quality, and (5) knowledge gained—on a scale of 0 (lowest) to 5 (highest). Space for individual comments is also included. The CME office staff tabulates these evaluations, and the results are reported to individual speakers as well as to the Children's Hospital Medical Center CME committee.
Beginning in January 1998, it was announced to attendees of the grand rounds programs that participant evaluations would be used by the CME committee to choose the recipient of a quarterly CME teaching award. This information was also provided in a newsletter.
The hypothesis being tested by this intervention was that participants would be more reflective in completing evaluations after the intervention. Since reflection is a somewhat subjective quality, several surrogates were chosen.
Evaluations for 20 consecutive grand rounds programs prior to and following the intervention were reviewed. Next, for each item, the total number of scores for each of the 6 possible values (0-5) was tabulated. The assumption was that there would be a wider distribution of scores if participants were being more reflective in their evaluations. Finally, the number of evaluations that contained individual written comments was tabulated. The assumption was that such comments indicated more thorough consideration on the part of participants. Comparisons between the preintervention and postintervention periods were made by the χ2 test.
The preintervention series consisted of the first 20 weekly grand rounds programs starting in January 1997. The postintervention series consisted of the first 20 weekly grand rounds programs starting in January 1998.
Average attendance at the programs was not significantly different between the series (preintervention, 128 participants per week; postintervention, 119 per week). The 2 series did not differ significantly in obvious ways such as faculty locale (preintervention, 5 visiting speakers; postintervention, 7) or professional status (4 presentations in each group included nonphysician professionals).
In the preintervention group, an average of 7.3 evaluations per week included spontaneous written comments about the program. In the postintervention group, an average of 13.5 evaluations per week had such comments (P<.01).
Three of the 5 items on which objective evaluation of programs is based (objectives met, quality of presentation, and knowledge gained) were chosen for more detailed analysis. The other 2 (educational aids and pertinence to practice) tended to be less reliably completed and were very event specific.
For each of these items, the total number of scores for each of the 6 possible values (0-5) was tabulated. A 6 × 2 table was thus generated for the 2 groups (preintervention and postintervention), and a χ2 test was applied to each item. In each case the distribution of scores was significantly different in the postintervention period (P<.01).
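The comparison described above can be sketched as a Pearson χ2 test on a contingency table of score tallies, one row per period and one column per possible score. The counts below are hypothetical, for illustration only; they are not the study's data, and the study reports only the resulting P values.

```python
# Sketch of a Pearson chi-square test on a 6 x 2 table of score tallies,
# as described in the text. The tallies below are hypothetical
# illustrations, not the study's actual data.

def chi_square(table):
    """Pearson chi-square statistic for a contingency table,
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            # Expected count under independence of period and score.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical tallies of scores 0-5 pooled across 20 programs per period.
pre  = [1, 2, 5, 30, 90, 120]   # narrow distribution, clustered at the top
post = [4, 8, 20, 45, 70, 95]   # wider spread of scores

stat = chi_square([pre, post])
dof = (2 - 1) * (6 - 1)         # (rows - 1) x (columns - 1) = 5
print(f"chi2 = {stat:.2f}, df = {dof}")
```

The statistic is then compared with the critical value for 5 degrees of freedom (11.07 at α = .05); in practice a library routine such as SciPy's `chi2_contingency` would also return the P value directly.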
Evaluation of CME activities is a cornerstone of the Accreditation Council for CME accreditation process.1,2 Planning of future programs and selection of future speakers are typically based on the results of program evaluations. Additionally, summaries of comments from evaluation instruments are used to provide constructive feedback to teachers. All of these uses of evaluation instruments are predicated on their careful, reflective completion by participants. Although this has not, to the best of our knowledge, been studied systematically, a look at the audience in many CME meetings is revealing. Evaluation instruments are sometimes completed in a very cursory fashion, occasionally even before a program has finished.
It is easy to speculate on the reasons for such behavior. The regulatory requirements of contemporary medical practice, including those regarding CME, are increasingly viewed as onerous. Participants who perceive no direct benefit or outcome from completing a form may have minimal reason to do so with care. By tying the responses to these forms to a speakers' award system, however, participants are able to see a direct, highly visible outcome of their efforts in providing evaluations. To reinforce this further, the teaching awards are presented every 3 months at the grand rounds programs themselves. Thus, participants receive regular reinforcement of the importance of their behavior.
This system benefits many parties. Good teachers receive formal recognition in front of their peers, and other speakers receive more useful feedback regarding their teaching style. Our CME planning committee receives more thoughtful evaluations that, in turn, are useful in future program planning. Finally, the participants themselves benefit if the overall quality of educational programs is raised.
Editor's Note: Put gold at the end and the rainbow becomes brighter.—Catherine D. DeAngelis, MD
Accepted for publication July 29, 1999.
We thank Mead Johnson Pharmaceuticals Inc, Cincinnati, Ohio, and Dave Fulkerson for sponsoring the CME teaching award.
Reprints: Thomas R. Welch, MD, Division of Nephrology and Hypertension, Children's Hospital Research Foundation, 3333 Burnet Ave, TCHRF5, Cincinnati, OH 45229-3039 (e-mail: firstname.lastname@example.org).