January 2007

Effect of Multisource Feedback on Resident Communication Skills and Professionalism: A Randomized Controlled Trial

Author Affiliations

Author Affiliations: Department of Pediatrics, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio.

Arch Pediatr Adolesc Med. 2007;161(1):44-49. doi:10.1001/archpedi.161.1.44

Objective  To determine whether augmenting standard feedback on resident performance with a multisource feedback intervention improved pediatric resident communication skills and professionalism.

Design  Randomized controlled trial.

Setting  Children's Hospital Medical Center, Cincinnati, Ohio, from June 21, 2004, to July 7, 2005.

Participants  Thirty-six first-year pediatric residents.

Interventions  Residents assigned to the multisource feedback group (n = 18) completed a self-assessment, received a feedback report about baseline parent and nurse evaluations, and participated in a tailored coaching session in addition to receiving standard feedback. Residents in the control group (n = 18) received standard feedback only. The control group and their residency directors were blinded to parent and nurse evaluations until the end of the study.

Main Outcome Measures  Residents' specific communication skills and professional behaviors were rated by parents and nurses of pediatric patients. Both groups were evaluated at baseline and after 5 months. Scores were calculated on each item as percentage in the highest response category.

Results  Both groups had comparable baseline characteristics and ratings. Parent ratings increased for both groups. While parent ratings increased more for the multisource feedback group, differences between groups were not statistically significant. In contrast, nurse ratings increased for the multisource feedback group and decreased for the control group. The difference in change between groups was statistically significant for communicating effectively with the patient and family (35%; 95% confidence interval, 11.0%-58.0%), timeliness of completing tasks (30%; 95% confidence interval, 7.9%-53.0%), and demonstrating responsibility and accountability (26%; 95% confidence interval, 2.9%-49.0%).

Conclusion  A multisource feedback intervention positively affected communication skills and professional behavior among pediatric residents.

Trial Registration  Clinicaltrials.gov Identifier: NCT00302783

Communication and professionalism form the foundation of the patient-physician relationship and are essential to quality health care. Twenty percent of patient dissatisfaction results from problems in communication and 10% arises from some form of perceived disrespect.1 Improved patient-clinician communication not only increases patient satisfaction and decreases malpractice claims,2,3 but it also improves patient outcomes.4 In addition, disciplinary action by a medical board while in practice is strongly associated with unprofessional behavior as a trainee.5 Traditional methods of assessing communication skills and professionalism among resident physicians are inadequate.6 The Accreditation Council for Graduate Medical Education Outcomes Project requires that residency training programs evaluate 6 core competencies and promote improved performance.7 Among these are interpersonal and communication skills that “result in effective information exchange and teaming with patients, their families, and other health professionals”7 and professionalism “as manifested through a commitment to carrying out professional responsibilities, adherence to ethical principles, and sensitivity to a diverse patient population.”7 While innovative, valid, and reliable assessment methods are available, their value as performance improvement tools is not well established. To be effective, methods must foster learning, inspire confidence, and enhance residents' ability to self-monitor.8

Multisource feedback has been used to reliably evaluate communication skills and professionalism among practicing physicians and residents in a variety of medical settings.9-11 Also known as 360° feedback, multisource feedback is a questionnaire-based assessment that gathers perspectives from several people within one's sphere of influence, as well as self-assessment.12 To varying degrees, physicians are willing to accept feedback from multiple sources and contemplate or initiate change as a result.11,13-16 Studies that have assessed the effect of multisource feedback interventions in medicine have been limited by reliance on physicians to self-report changes in behavior. One study of internal medicine residents demonstrated significant improvement among low-performing residents in response to structured feedback on patient satisfaction ratings.17 Feedback was beneficial in the targeted group. It is unclear whether multisource feedback would be beneficial in a cohort of residents with varying levels of baseline competency. In the absence of a randomized controlled trial, there is considerable uncertainty about the benefits of multisource feedback as a formative tool for residency training programs.

The purpose of this study was to test whether multisource feedback, including self-assessment and tailored coaching, improves resident communication skills and professionalism. We hypothesized that the performance of residents who were assigned to receive multisource feedback in addition to standard feedback would improve substantially more than that of residents who received standard feedback alone, as measured by parent and nurse ratings of specific behaviors over time.

Study participants

Residents were recruited between June 21, 2004, and August 21, 2004, at Cincinnati Children's Hospital Medical Center, University of Cincinnati, Cincinnati, Ohio. Residents were eligible if they were entering their first year of training and were scheduled for 2 pediatric inpatient rotations in which parent and nurse evaluations would be collected. Residents from combined internal medicine–pediatric training programs were excluded to ensure that all participants had similar training and exposure to feedback over the course of the study. Potential participants were identified by viewing the resident schedule produced by the chief residents. Forty-four residents were exclusively scheduled for pediatric rotations. Of the 44, 8 were ineligible because they were not scheduled to rotate on the selected wards during the study. All residents meeting eligibility criteria were invited to participate by either a residency director or a chief resident. Residents provided consent to allow their parent ratings, nurse ratings, and self-assessments to be analyzed and anonymously reported for research purposes. Residents could opt to exclude their evaluation data from study analysis, but all residents would still be evaluated by parents and nurses and receive feedback on their performance as mandated by the residency training program. Potential participants were not informed of the study design or the location and timing of parent and nurse evaluations. Eligible residents who agreed to participate provided verbal consent. Participants received no incentive to participate. This study was approved by our institutional review board.

Evaluation settings

We selected parents and nurses to survey from 1 general pediatric inpatient rotation and 1 subspecialty pediatric inpatient rotation. The outpatient setting was excluded owing to feasibility concerns because study initiation coincided with the implementation of an electronic medical record system.

Randomization and study design

We designed a stratified randomization scheme to balance the setting and timing of the evaluations. This was essential because of the context-dependent nature of ratings and the tendency for residents to improve with experience. One of us (J.C.K.) used computer-generated random numbers to assign residents to intervention and control groups. All residents underwent 2 evaluations separated by a mean of 5 months. Parent and nurse evaluators were unaware of the residents' study assignment.
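A stratified allocation of this kind can be sketched in a few lines of Python. The stratum labels, group names, and seed below are illustrative assumptions; the article does not publish the actual allocation code, only that one investigator used computer-generated random numbers.

```python
import random

def randomize_stratified(strata, seed=42):
    """Assign residents to the multisource feedback (MSF) or control group,
    balancing group sizes within each stratum.

    strata: dict mapping a stratum label (e.g., a rotation/timing block --
    an illustrative assumption) to a list of resident identifiers.
    """
    rng = random.Random(seed)  # fixed seed only to make the sketch reproducible
    assignments = {}
    for label, residents in strata.items():
        pool = list(residents)
        rng.shuffle(pool)                  # computer-generated random order
        half = len(pool) // 2
        for resident in pool[:half]:
            assignments[resident] = "MSF"      # multisource feedback group
        for resident in pool[half:]:
            assignments[resident] = "control"  # odd strata get one extra control
    return assignments
```

Shuffling and splitting within each stratum guarantees that the setting and timing of evaluations are balanced across the two arms, which is the property the authors cite as essential given context-dependent ratings and improvement with experience.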


Self-Assessment

After collection of baseline evaluations, residents in the multisource feedback group completed a self-assessment that contained 24 items. Ten items mirrored the parent evaluation items and 14 items mirrored the nurse evaluation items (see the “Outcome Measures” section).

Feedback Reports

Feedback reports were generated to summarize the results of parent and nurse evaluations. For the behavior described by each item, residents in the multisource feedback group could view their parent and nurse ratings as a median with rating range and compare their performance with that of their peers on the same rotation and with their self-assessment. Feedback reports also included resident-specific qualitative comments made by parents and nurses. Reports were distributed to residents and residency directors in advance of coaching sessions.

Coaching Sessions

The coaching sessions were designed to encourage the residents in the multisource feedback group to identify strengths and weaknesses, develop specific goals for improvement, and discuss strategies to attain those goals. Sessions were approximately 30 minutes long and used a format adapted from the Center for Creative Leadership Coaching for Development Workshop. Two residency directors served as coaches (J.A.G. and Mia Mallory, MD) and were unaware of the residents' study assignment until after baseline evaluations were collected. Coaches were trained using a mock feedback report and a role-playing resident. In addition, coaches followed an outline during each session to ensure that the key components were included. Sessions started with the coach eliciting the resident's general response to the feedback report, including whether the sources of evaluation were perceived as credible and whether evaluations were thought to be representative of performance. The coach reinforced behaviors that the resident identified as viewed positively by parents and nurses. For those behaviors that residents identified as viewed negatively by parents or nurses, the coach explored the perceived costs and benefits of changing the behavior. Then the coach asked the resident to set specific behavioral goals and discuss strategies that might be used to attain those goals.

Standard Feedback

Residents assigned to the control group received only standard feedback. This included a minimum of monthly written evaluations by their supervisory attending physician and senior resident. The residency directors were also blinded to parent and nurse evaluations for the residents assigned to the control group until the end of the study.

Outcome measures

Parent and nurse evaluation instruments were adapted from American Board of Internal Medicine surveys of communication skills and humanistic qualities,18 described in detail elsewhere.19 The Patient Satisfaction Questionnaire, which was designed for adult patients to rate their physicians, consists of 10 behavior-specific questions. The items relate to being friendly, using plain language, being respectful, being truthful, showing interest, communicating effectively during the physical examination, sharing decisions, explaining problems, encouraging questions, and listening carefully. Patients rate the performance of the physician by choosing among 5 ordinal responses (poor, fair, good, very good, and excellent). This survey tool was chosen as the primary outcome measure for the trial because of its known psychometric properties and the availability of published data from past applications. For 3 of the 10 items, we substituted “your child” for “you,” whenever appropriate, so that the Patient Satisfaction Questionnaire would be applicable to parent raters. The survey retained a high level of internal consistency with these modifications (Cronbach α coefficient, .95).19,20 In addition, parents were asked to provide qualitative comments about what they liked most and what they would change if they could change one thing about the care they received from the resident during the hospital stay.

The nurse evaluation was also adapted from an American Board of Internal Medicine instrument and consists of items different from those of the Patient Satisfaction Questionnaire. The items relate to communicating effectively with patients and families, timeliness of completing tasks, demonstrating responsibility and accountability, planning the course of care effectively, being sensitive and empathetic, establishing rapport with patients and families, respecting confidentiality, demonstrating honesty and integrity, communicating effectively with staff, treating staff with respect, completing tasks reliably, accepting suggestions graciously, anticipating postdischarge needs, and being a good team member. This evaluation was modified to use a reporting format that asked nurses to report whether or how often a particular experience occurred by choosing among 5 ordinal responses (never, rarely, sometimes, usually, or always) rather than rating how good the performance was. Internal consistency for this modified tool is high (Cronbach α coefficient, .96).19 Qualitative comments were also collected from nurses.

Data collection methods

A trained research assistant surveyed parents by using a standardized technique on the day of anticipated discharge or the last day of the resident's rotation. The research assistant informed parents that the evaluations would be reviewed by the resident and a coach to help guide improvement efforts. The parent was informed that the evaluation was anonymous and confidential. Parents who wished to participate gave verbal consent. The name and picture of the resident being evaluated were shown, and recognition of the resident was confirmed. The research assistant read the items to parents, showed answer options using a laminated card, and recorded responses on a laptop computer. Parents received no incentive to participate.

Before study initiation, nurses received an e-mail informing them of the study and requesting their assistance in evaluating residents. In addition, the project was discussed at nursing staff meetings, where the evaluation forms were reviewed and questions about the project were answered. Nurses were not specifically trained about evaluation procedures. Like parents, nurses were informed that the purpose was to help residents identify strengths and areas for improvement. On the last day of the rotation, nurses received 1 e-mail for each resident to be evaluated. Nurses had the opportunity to evaluate 2 to 4 (mean, 3) residents per month. Nurses were informed that the evaluations were anonymous and confidential and were instructed to rate residents based on behavior that had been directly observed. Participants entered responses using a Web-based evaluation instrument. Nurses received no direct incentive to participate.

Residents were contacted by e-mail to complete the Web-based self-assessment survey. In addition, residents provided demographic information including age, sex, and race or ethnicity. Race and ethnicity options were defined by the investigator and collected to describe the participants.

Sample size

Thirty-six residents met eligibility requirements for inclusion in the study. Prestudy power calculations were based on this fixed sample size (18 per group) and the standard deviation observed in a past application of the Patient Satisfaction Questionnaire.20 There was 80% power to detect a 21% difference in ratings between the groups at α = .05 using a 1-sided test. No reliable estimate of intervention effect size was available at the outset of the study, but we believed that proceeding with the trial was worthwhile because anything less than a 21% difference might not be educationally meaningful.
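The detectable-difference calculation can be reproduced with the normal approximation to the two-sample t test. The standard deviation of roughly 25 percentage points is an assumption chosen here to illustrate the arithmetic (the article cites, but does not restate, the SD from the prior questionnaire application); with that value, 18 residents per group, one-sided α = .05, and 80% power, the minimum detectable difference lands near the reported 21%.

```python
from statistics import NormalDist

def detectable_difference(n_per_group, sd, alpha=0.05, power=0.80):
    """Smallest between-group difference detectable with a 1-sided test,
    using the normal approximation to the two-sample t test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha)  # 1-sided alpha = .05 -> ~1.645
    z_power = z.inv_cdf(power)      # 80% power -> ~0.842
    return (z_alpha + z_power) * sd * (2 / n_per_group) ** 0.5

# Assumed SD of ~25 percentage points reproduces a detectable
# difference close to the 21% stated in the article.
delta = detectable_difference(18, 25)
```

Because the sample size was fixed at 36 by the residency schedule, the calculation runs "backwards": rather than choosing n to detect a target effect, it reports the smallest effect the fixed n can detect.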

Statistical methods

We used χ2 and t tests to test differences in participant characteristics between groups. On each behavior-specific item, we calculated ratings as percentage in the highest response category. For example, if 10 parents evaluated a resident and 7 of the 10 marked “excellent” in response to the item related to using understandable language, the percentage of highest response was 70%. This approach, which is commonly used by analysts in the business sector to manage positive-response bias, has been described elsewhere.19 In essence, respondents were either completely satisfied or not. The percentage of highest response captures whether any element of the performance was unacceptable.21 In addition, this approach eliminates the need to convert ordinal responses to mean scores, which creates problems both in terms of measurement and in interpretation of results.22,23 Subsequently, for each item, we calculated the change between baseline and follow-up evaluations and the difference in this change in scores between groups. These differences were compared using t tests.
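The scoring described above can be made concrete with a short sketch. The helper names and the toy data are illustrative, but the logic follows the article: score each item as the percentage of evaluators choosing the highest category, compute each resident's change from baseline to follow-up, and compare the mean change between groups.

```python
from statistics import mean

def pct_highest(responses, top="excellent"):
    """Score one item for one resident: the percentage of evaluators
    who chose the highest response category."""
    return 100.0 * sum(r == top for r in responses) / len(responses)

def difference_in_change(baseline, followup, msf_ids, control_ids):
    """Per-resident change scores, then the between-group difference in
    mean change -- the quantity the article compares with t tests.

    baseline/followup: dict mapping resident id -> item score (pct highest).
    """
    change = {rid: followup[rid] - baseline[rid] for rid in baseline}
    msf_change = mean(change[r] for r in msf_ids)
    control_change = mean(change[r] for r in control_ids)
    return msf_change - control_change

# The article's worked example: 7 of 10 parents marked "excellent" -> 70%.
score = pct_highest(["excellent"] * 7 + ["good"] * 3)  # 70.0
```

Collapsing the 5-point ordinal scale to "highest category or not" is what lets the analysis avoid treating ordinal labels as interval-scaled means, the measurement problem the authors cite.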

Participant characteristics

All 36 eligible residents agreed to participate and were subsequently randomized (Figure). The multisource feedback and control groups were similar (Table 1). Residents in the multisource feedback group completed the self-assessment survey a mean (SD) of 13.8 (16.7) days after the collection of baseline evaluations. One resident did not complete the study because of a scheduling change; although that resident's baseline evaluations were collected and the intervention was completed, no follow-up evaluations were obtained, so the resident was excluded from subsequent analyses. At the end of the study, both groups had been evaluated by similar numbers of parents and nurses. Parent and nurse ratings were similar for both groups at baseline (Table 2 and Table 3).

Figure. Flow of residents through trial.

Table 1. Baseline Participant Characteristics*

Table 2. Parent Ratings*

Table 3. Nurse Ratings*
Parent Ratings

Parent ratings increased from baseline in both groups (Table 2): the percentage of parents rating performance as excellent rose on items related to using plain language, communicating effectively during the physical examination, sharing decisions, encouraging questions, and listening carefully. In the multisource feedback group, there were statistically significant increases on 4 additional items: being friendly, being respectful, showing interest, and explaining problems. However, the differences in the change in ratings between the multisource feedback and control groups did not reach statistical significance.

Nurse Ratings

Nurse ratings tended to increase or stay the same for residents in the multisource feedback group and to decrease for residents in the control group (Table 3). The difference in the change in scores between the groups was statistically significant and favored the intervention for items about communicating effectively with the patient and family, timeliness of completing tasks, and demonstrating responsibility and accountability. Similar trends were seen in nurse ratings for planning the course of care effectively, being sensitive and empathetic, establishing rapport with patients and families, respecting confidentiality, demonstrating honesty and integrity, communicating effectively with staff, treating staff with respect, completing tasks reliably, accepting suggestions, anticipating postdischarge needs, and being a good team member. For these items, however, the differences in the change in scores between multisource feedback and control groups did not reach statistical significance.


To our knowledge, this is the first randomized controlled study of multisource feedback in a cohort of residents with varying levels of baseline competency. This builds on the work of Cope et al,17 who demonstrated the potential for feedback on patient satisfaction to modify the behavior of low-performing residents. Our multisource feedback intervention, which included a self-assessment exercise, receipt of a feedback report, and participation in a tailored coaching session, was more efficacious than standard feedback alone in improving selected areas of resident communication and professionalism as perceived by nurses but not by parents. Nurse ratings showed statistically significant differences that favored the multisource feedback group in how often residents communicated effectively with patients and families, how often they completed tasks in a timely manner, and how often they demonstrated responsibility and accountability. Parent ratings increased in both groups; while ratings increased more for the multisource feedback group on all items, the differences between groups were not statistically significant.

The decrease in nurse ratings for residents in the control group was unexpected. While it is possible that resident skill deteriorated over the course of the study and the multisource feedback process protected residents from this decline, it seems more likely that the decline in nurse ratings represents an expectation that resident communication skills and professionalism would improve during training. Alternatively, the improvements of the residents in the multisource feedback group may have led to a heightened expectation for all residents. Regardless, nurses were able to detect significant differences between the groups and may be more discriminating as a result of relevant experiences with residents in training.24 Research exploring nurse expectations and performance criteria and how these change over time is warranted.

This study has potential limitations. While we had a sufficient number of participants to detect differences based on nurse ratings, we may have been underpowered to find statistically significant differences based on parent ratings. Also, the generalizability of our findings may be limited by the single-institution design. In addition, it is possible that, despite concealment of the study design, residents became aware of the timing and location of parent and nurse evaluations; awareness that one is the subject of evaluation can affect behavior (the Hawthorne effect).25 It is also not known whether residency directors diffused coaching behaviors to the control group. Either effect would diminish our ability to detect a difference favoring the multisource feedback group. While the current study used a research assistant to collect data, there is evidence to suggest that physician self-selection of raters does not bias results.9 Construction of an economical and efficient data collection infrastructure is essential if residency programs are to have sustained success with multisource feedback.

Many questions about use of multisource feedback in residency training remain. While parents did not detect a difference between the groups, what role did parent feedback have in motivating the changes that were detected by nurses? Will residents in other disciplines experience similar benefits? How might peer evaluation affect the outcome? Would multisource feedback be enhanced if it were conducted over all years of training? While research in business settings suggests that coaching is crucial in maximizing the effectiveness of multisource feedback,26 which elements of the intervention are essential to promote improvement among residents? To answer these questions and realize the directive of the Institute of Medicine to improve the quality of medical education research,27 multisite studies using multifactorial designs are needed.28 The absence of dedicated funding for medical education research, however, makes testing innovations across institutions challenging.29 Calls to create an ongoing fund for competitive research grants financed by existing government sources of health education program support have not been answered.27,30

Although questions remain, our findings support continued inclusion of multisource (or 360°) feedback in the Accreditation Council for Graduate Medical Education Toolbox of Assessment Methods7 and offer evidence of benefits that accrue with its use. The rigorous design used in this study indicates that augmenting standard evaluation and feedback with an innovative multisource feedback intervention has a positive effect on communication skills and professional behavior in pediatric residents.

Article Information

Correspondence: William B. Brinkman, MD, MEd, Department of Pediatrics, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave, MLC 7035, Cincinnati, OH 45229-3039 (Bill.Brinkman@cchmc.org).

Accepted for Publication: August 3, 2006.

Author Contributions: Dr Brinkman had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Brinkman, Geraghty, Lamphear, and DeWitt. Acquisition of data: Brinkman, Geraghty, and Gonzales del Rey. Analysis and interpretation of data: Brinkman, Geraghty, Lamphear, Khoury, DeWitt, and Britto. Drafting of the manuscript: Brinkman and Gonzales del Rey. Critical revision of the manuscript for important intellectual content: Brinkman, Geraghty, Lamphear, Khoury, DeWitt, and Britto. Statistical analysis: Brinkman, Khoury, and Britto. Obtained funding: Brinkman and Lamphear. Administrative, technical, and material support: Brinkman, Geraghty, and Lamphear. Study supervision: Brinkman, Geraghty, Gonzales del Rey, DeWitt, and Britto.

Financial Disclosure: None reported.

Funding/Support: This project was supported by the Ambulatory Pediatric Association Young Investigator Grant Program and Cincinnati Children's Hospital Medical Center. Dr Brinkman received a National Research Service Award Primary Care Research fellowship (2 T32 HP 10027-08).

Role of the Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

Previous Presentation: This research was presented at the University of Cincinnati in partial fulfillment of a master's degree in education.

Acknowledgment: We thank Azadeh Namakydoust, MS, who served as a part-time research assistant for this study, for data collection.

References

1. Pichert JW, Miller CS, Hollo AH, Gauld-Jaeger J, Federspiel CF, Hickson GB. What health professionals can do to identify and resolve patient dissatisfaction. Jt Comm J Qual Improv. 1998;24:303-312.
2. Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician-patient communication: the relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997;277:553-559.
3. Hickson GB, Federspiel CF, Pichert JW, Miller CS, Gauld-Jaeger J, Bost P. Patient complaints and malpractice risk. JAMA. 2002;287:2951-2957.
4. Institute of Medicine. Health Professions Education: A Bridge to Quality. Washington, DC: National Academies Press; 2003.
5. Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med. 2005;353:2673-2682.
6. Ginsburg S, Regehr G, Hatala R, et al. Context, conflict, and resolution: a new conceptual framework for evaluating professionalism. Acad Med. 2000;75(10 suppl):S6-S11.
7. Accreditation Council for Graduate Medical Education Outcomes Project. http://www.acgme.org/outcome/comp/compMin.asp. Accessed October 11, 2006.
8. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226-235.
9. Ramsey PG, Wenrich MD, Carline JD, Inui TS, Larson EB, LoGerfo JP. Use of peer ratings to evaluate physician performance. JAMA. 1993;269:1655-1660.
10. Hall W, Violato C, Lewkonia R, et al. Assessment of physician performance in Alberta: the physician achievement review. CMAJ. 1999;161:52-57.
11. Lipner RS, Blank LL, Leas BF, Fortna GS. The value of patient and peer ratings in recertification. Acad Med. 2002;77(10 suppl):S64-S66.
12. Bracken D, Timmreck C, Church A. The Handbook of Multisource Feedback: The Comprehensive Resource for Designing and Implementing MSF Processes. San Francisco, Calif: Jossey-Bass; 2001.
13. Fidler H, Lockyer JM, Toews J, Violato C. Changing physicians' practices: the effect of individual feedback. Acad Med. 1999;74:702-714.
14. Lockyer J, Violato C, Fidler H. Likelihood of change: a study assessing surgeon use of multisource feedback data. Teach Learn Med. 2003;15:168-174.
15. Rees C, Shepherd M. The acceptability of 360-degree judgements as a method of assessing undergraduate medical students' personal and professional behaviours. Med Educ. 2005;39:49-57.
16. Sargeant J, Mann K, Ferrier S. Exploring family physicians' reactions to multisource feedback: perceptions of credibility and usefulness. Med Educ. 2005;39:497-504.
17. Cope DW, Linn LS, Leake BD, Barrett PA. Modification of residents' behavior by preceptor feedback of patient satisfaction. J Gen Intern Med. 1986;1:394-398.
18. PSQ Project Co-Investigators. Final Report on the Patient Satisfaction Questionnaire Project. Philadelphia, Pa: American Board of Internal Medicine; 1989.
19. Brinkman WB, Geraghty SR, Lanphear BP, et al. Evaluation of resident communication skills and professionalism: a matter of perspective? Pediatrics. 2006;118:1371-1379.
20. Tamblyn R, Benaroya S, Snell L, McLeod P, Schnarch B, Abrahamowicz M. The feasibility and value of using patient satisfaction ratings to evaluate internal medicine residents. J Gen Intern Med. 1994;9:146-152.
21. Jones TO, Sasser WE Jr. Why satisfied customers defect. Harv Bus Rev. 1995;73:88-99.
22. Jamieson S. Likert scales: how to (ab)use them. Med Educ. 2004;38:1217-1218.
23. Peterson R, Wilson W. Measuring customer satisfaction: fact and artifact. J Acad Marketing Sci. 1992;20:61-71.
24. Misch DA. Evaluating physicians' professionalism and humanism: the case for humanism “connoisseurs”. Acad Med. 2002;77:489-495.
25. Mayo E. The Human Problems of an Industrial Civilization. New York, NY: Macmillan; 1933.
26. Smither J, London M, Flautt R, Vargas Y, Kucine I. Can working with an executive coach improve multisource feedback ratings over time? A quasi-experimental field study. Personnel Psychol. 2003;56:23-44.
27. Institute of Medicine. Academic Health Centers: Leading Change in the 21st Century. Washington, DC: National Academies Press; 2003.
28. Reed DA, Kern DE, Levine RB, Wright SM. Costs and funding for published medical education research. JAMA. 2005;294:1052-1057.
29. Carline JD. Funding medical education research: opportunities and issues. Acad Med. 2004;79:918-924.
30. Wartman SA. Revisiting the idea of a national center for health professions education research. Acad Med. 2004;79:910-917.