Table 1. Clinical Teaching Rounds
Table 2. Comparison of Ratings for Faculty Attending and Not Attending Clinical Teaching Rounds (CTRs)
Educational Intervention
March 1998

Clinical Teaching Rounds: A Case-Oriented Faculty Development Program

Author Affiliations

From the Department of Pediatrics (Drs Lye and Wendelberger) and the Department of Educational Services (Drs Simpson and Bragg), the Medical College of Wisconsin, Milwaukee.

Arch Pediatr Adolesc Med. 1998;152(3):293-295. doi:10.1001/archpedi.152.3.293
Abstract

Objective  To improve clinical teaching, with an emphasis on the provision of feedback, through a faculty development series modeled on clinical rounds.

Method  Seven 1-hour conferences were held for the pediatric faculty during the 1994-1995 academic year. The conferences emulated clinical rounds, with a simulated learner functioning as the patient whose chief complaint was an instructional problem. The series progressed from discussion of teaching in a particular situation, to videotapes of clinical teaching, and finally to live clinical teaching. The conferences were evaluated using attendance records, participants' ratings of each conference, and a comparison of student and resident evaluations of faculty who attended 2 or more conferences with evaluations of faculty who did not attend. Ratings from the academic years before and after the conferences were compared using paired t tests.

Results  Forty percent of the faculty attended 2 or more conferences. Mean conference ratings were 4.00 to 4.35 (1 indicates poor; 5, excellent). Faculty who attended had a significant improvement in ratings for feedback (P=.01) and overall teaching effectiveness (P=.04); ratings for faculty who did not attend did not change.

Conclusion  These conferences were well received by the faculty and are an effective way to improve clinical teaching.

RESIDENT and medical student education has shifted into the ambulatory setting, where the clinician-teacher is constrained by time and financial incentives to provide efficient care.1 Formal inpatient rounds with the whole ward team have been replaced by encounters between a single preceptor and a single learner. Consequently, clinical teaching must be folded into time-limited ambulatory patient visits in a fast-paced environment.

Irby1 argues that teaching in the ambulatory setting places different demands on the teacher owing to the lack of opportunity for preparation and the limited time available for in-depth instruction. Based on his review of the English literature, Irby concludes that the preceptor in the ambulatory setting must provide brief, focused teaching. The ability of expert teachers to provide focused teaching depends on a complex and tightly interconnected set of medical and pedagogical knowledge that, like diagnostic knowledge, is stored in memory in the form of prototypes or scripts. These teaching scripts include disease-specific goals for instruction, key teaching points, teaching methods, and a knowledge of the common problems that learners have at the different levels of their training. Through activation of these scripts, clinical teachers can quickly diagnose the learners' problems and employ instructional strategies consistent with that assessment.2 For example, a group of pediatric clerkship directors were asked what common errors they would expect when in July a third-year medical student evaluated a 3-year-old child with an exacerbation of asthma.3 Expected common errors were a disorganized or incomplete presentation of the acute illness; failure to obtain important family, social, or environmental history; and difficulty assessing the severity of the patient's condition. Teaching would then address each of these common errors.

The aim of this study was to improve clinical teaching through strategies designed to enhance clinical teachers' scripts. We sought to target a critical teaching effectiveness variable that was particularly in need of improvement, giving specific feedback, which is 1 of 7 recommendations by Irby1 for improving teaching in the ambulatory setting. There are readily available guidelines for providing feedback that highlight its instructional value4; however, studies examining feedback in clinical medicine indicate that learners fail to recognize feedback despite the faculty's emphasis on it.5 A possible explanation for this phenomenon is offered by Ende et al,6 who observed that the strategies preceptors used for correcting interns were indirect and minimized exposure of the interns' errors. As a result, residents given this indirect feedback failed to recognize their errors.

Building on these concepts, we sought to implement Irby's recommendations for teaching in the ambulatory setting by offering sessions that deliberately targeted the process of developing clinical teachers' scripts. In response to the need to enhance faculty teaching with particular emphasis on providing feedback in the challenging environment of the ambulatory setting, this article describes the development, implementation, and effect of a series of clinical teaching rounds (CTRs) on preceptors' provision of feedback to learners.

PARTICIPANTS AND METHODS
PARTICIPANTS

The program focused on systematic approaches to the components of teaching scripts: assessing the learner, selecting appropriate objectives (given time and resource constraints), choosing methods of delivery, and maintaining ongoing strategies for providing feedback to the learner. Three variables guided the design of our faculty development activity: (1) use an existing faculty meeting time to maximize attendance; (2) minimize the perceived threat to faculty of teaching in front of their colleagues; and (3) use a familiar format (clinical rounds) to build faculty's teaching scripts specific to feedback.

The Department of Pediatrics meets weekly over lunch for an academic conference. During the 1994-1995 academic year, 7 of these conferences were reserved for CTRs, thereby maximizing the number of clinical faculty participants, including those who might not otherwise have sought to improve their teaching skills.

As with any faculty development program, improving one's skills as a teacher requires active learning that includes practice with feedback. In piloting the CTR format, concerns arose about the perceived threat to faculty of demonstrating their clinical teaching skills in front of their colleagues. To address this issue, the CTRs progressed from a low-threat environment of discussion about how one would teach in a particular situation, to videotaped encounters of a faculty member's clinical teaching, and finally to live clinical teaching demonstrations with volunteer faculty and simulated learners.7

Clinical rounds are commonly used in medicine to enhance clinicians' knowledge. Because clinicians are familiar with this format, we emulated its structure to build faculty feedback scripts that contain both medical and pedagogical knowledge. A simulated learner functioned as the patient, with a chief complaint consisting of a common instructional problem typically encountered with third-year medical students (eg, giving a disorganized presentation).

Because clinical teaching rarely occurs in front of one's faculty colleagues, early rounds used interactive lectures to introduce the concept of teaching scripts as recommended by Irby2 and the concept of explicit feedback as suggested by Ende.4 Subsequent sessions used case-based, small-group discussion to focus on anticipated learner errors, drawing on faculty's existing knowledge of learners and disease, and on strategies for providing feedback. Critical discussion and reflection8 focused on the anticipated learner errors, the actual learner errors, the selection of 1 or 2 teaching points, and the simulated learner's comments on the feedback received during the simulated case (Table 1).

Faculty's ability to anticipate common learner errors and 1 or 2 teaching points, using their pedagogical knowledge, was underscored by presenting some basic information about the learner (eg, third-year medical student on the required pediatrics clerkship, second clinical rotation of the year) and the patient's primary medical problem. The list of anticipated errors and teaching points generated by attendees was then compared with the actual learner errors and teaching points presented or demonstrated in the teaching case.

Given the demands of the clinical setting, full attendance at these sessions was not anticipated. Each session built on the previous sessions but included enough redundancy that faculty who had not attended earlier sessions could still learn from it alone.

DATA SOURCES

Three sources of data were used to evaluate the effectiveness of the CTRs. The noon academic conference, during which we presented the CTRs, is not a required faculty activity; faculty typically attend based on the relevance of the session to their work. To determine whether faculty valued the CTRs, attendance records were kept. All attendees were asked to complete an evaluation form for each CTR, which provided a second source of data. The third source was the ongoing evaluation of the faculty's clinical teaching provided by students and residents, who routinely rate faculty using a standardized clinical teaching evaluation form that has high reliability and validity.9 Using attendance records, ratings of faculty who attended CTRs were compared with those of faculty who did not attend. Comparisons were made using the feedback and overall teaching effectiveness items from the clinical teaching evaluation form.

RESULTS

The CTRs were well attended with 58 (58%) of the 101 pediatric faculty attending at least 1 of the conferences and 40% attending 2 or more of the conferences. Faculty rated each CTR on content, presentation, quality of audiovisual materials, relevance to their practice, and overall quality. Mean ratings on these categories were 4.00 to 4.35 (SD = 0.75-0.86); 1 is poor and 5 is excellent. Typical comments from the attendees included: "A nice method (compare/contrast to demonstrate teaching styles and attempts at feedback)." "Excellent reality situations, concrete suggestions made." "This series should be weekly—lose momentum." "This type of forum is useful and constructive." "Overview by facilitators at end was very helpful."

Individual faculty ratings on feedback and overall teaching effectiveness were examined before (academic year 1993-1994) and after (academic year 1995-1996) the intervention. To have sufficient data for analysis, a cutoff of 2 or more CTR sessions was used; only 25% attended 3 or more, and 15% attended 4 or more. Those who attended 2 or more of the CTR sessions and who were on the faculty from 1993 to 1996 made up the treatment group (n=25); faculty who did not attend served as controls (n=31). Paired t tests of the within-group change scores revealed that the treatment group had a statistically significant improvement in both feedback (P=.01) and overall teaching effectiveness (P=.04). There was no significant change on either variable for the controls (Table 2).
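For readers who wish to reproduce this kind of analysis, the sketch below illustrates a within-group change-score comparison with a paired t test. The ratings are hypothetical values invented for illustration only (the study's individual faculty ratings are not reproduced here); with real data, each faculty member would contribute a mean rating from the year before and the year after the intervention.

```python
# Minimal sketch of a within-group change-score analysis using a paired t test.
# The pre/post ratings below are hypothetical (1 = poor, 5 = excellent); they
# stand in for each attendee's mean feedback rating before (1993-1994) and
# after (1995-1996) the CTR series.
import numpy as np
from scipy.stats import ttest_rel

pre_attend = np.array([3.8, 4.0, 3.6, 4.1, 3.9, 3.7, 4.2, 3.5])
post_attend = np.array([4.1, 4.3, 3.9, 4.4, 4.0, 4.1, 4.5, 3.8])

# Paired t test on the same faculty members' pre- and post-intervention ratings.
t_stat, p_value = ttest_rel(post_attend, pre_attend)
mean_change = (post_attend - pre_attend).mean()
print(f"Mean change = {mean_change:.2f}, t = {t_stat:.2f}, P = {p_value:.3f}")

# The same test would be run separately for the nonattending (control) group;
# in the study, only the attendees showed a significant improvement.
```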

COMMENT

Excellent clinical teachers use teaching scripts that include assessment of the learner, critical objectives (by disease/clinical problem), methods for instruction (eg, short lectures, feedback, and others), and availability of resources. By highlighting the components of the teaching script, the investigators were able to systematically design and present a successful clinical teaching development program emphasizing feedback. Students' and residents' ratings of feedback for CTR attendees improved during the study period. Although faculty participation was not randomized, the inference that improvement in learner ratings is due to the intervention is strengthened by the lack of improvement in ratings in nonattendees over the same period. Selection bias by participants is possible but was minimized by holding the CTRs during a regularly scheduled conference time that is not specifically focused on medical education. In addition, comparisons of faculty ratings over time (including CTR participants) demonstrate no significant changes prior to this intervention.

Clinical teaching rounds were favorably received by the pediatric faculty who consistently attended, and attendees provided positive formative evaluations. The most difficult aspect of the program was overcoming the reluctance of faculty to teach in front of their colleagues; participation was facilitated by using videotapes and by enlisting the help of recognized experts. A major strength of the program was the use of medical students who simulated actual teaching cases and gave faculty feedback about what they actually learned vs what the faculty thought they had taught.

The CTRs were easy to implement and inexpensive; invited speakers were the only major expense. Having a realistic learner with an appropriate level of knowledge, who could compare what was learned during the session with what the faculty thought they had taught, proved invaluable. Clinical teaching rounds may prove to be a viable, cost-effective approach to a nemesis of clinical teaching: providing constructive and timely feedback.

Back to top
Article Information

Accepted for publication September 11, 1997.

This project was partially supported by a grant from the Learning Resources Subcommittee of the Curriculum and Evaluation Committee, Medical College of Wisconsin, Milwaukee.

Presented as a poster at the Annual Meeting of the Association of American Medical Colleges, Innovations in Medical Education Exhibits, Washington, DC, November 1, 1995.

Editor's Note: I never cease to be amazed at the reluctance of faculty to learn how to teach. A 40% show rate is hardly stellar, but it's probably par or above par for medical faculty everywhere.—Catherine D. DeAngelis, MD

Reprints: Patricia Lye, MD, MS, Medical College of Wisconsin Department of Pediatrics, 8701 Watertown Plank Rd, Milwaukee, WI 53226.

References
1. Irby DM. Teaching and learning in ambulatory care settings: a thematic review of the literature. Acad Med. 1995;70:898-931.
2. Irby DM. How attending physicians make instructional decisions when conducting teaching rounds. Acad Med. 1992;67:630-638.
3. Wendelberger K, Simpson D. The use of teaching scripts by pediatric faculty. In: Proceedings of the 33rd Annual Research in Medical Education Conference. Boston, Mass: Association of American Medical Colleges; 1994. Abstract 48.
4. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777-781.
5. Irby DM, Evans J, Larson L. Trends in clinical evaluation. In: Morgan MR, Irby DM, eds. Evaluating Clinical Competence in the Health Professionals. St Louis, Mo: CV Mosby Co; 1978:20-29.
6. Ende J, Pomeranz A, Erickson F. Preceptors' strategies for correcting residents in an ambulatory care medicine setting: a qualitative analysis. Acad Med. 1995;70:224-229.
7. Simpson DE, Lawrence SL, Krogull SR. Using standardized ambulatory teaching situations for faculty development. Teach Learn Med. 1992;4:58-61.
8. Harris I. New expectations for professional competence. In: Curry L, Wergin J, eds. Educating Professionals: Responding to New Expectations and Accountability. San Francisco, Calif: Jossey-Bass Publishers Inc; 1993:17-52.
9. Schum TR, Yindra K, Koss R, Nelson D. Students' and residents' ratings of teaching effectiveness in a department of pediatrics. Teach Learn Med. 1993;5:128-132.