Figure. Applicant criteria were ranked on a scale of 1 to 20, with 1 being most important and 20 being not important at all. Numbers indicate the mean rank that criterion received; error bars, standard deviation; asterisk, significant difference between the 2 groups with the associated P value (Wilcoxon rank sum test). USMLE indicates US Medical Licensing Examination.
Puscas L, Sharp SR, Schwab B, Lee WT. Qualities of Residency Applicants: Comparison of Otolaryngology Program Criteria With Applicant Expectations. Arch Otolaryngol Head Neck Surg. 2012;138(1):10–14. doi:10.1001/archoto.2011.214
Objectives To evaluate the criteria used by otolaryngology programs in ranking residency candidates and to compare residency candidate ranking criteria among otolaryngology programs and applicant expectations.
Design Cross-sectional, anonymous survey administered during the 2009 and 2010 match cycles.
Setting Otolaryngology residency programs.
Participants Otolaryngology residency program applicants (PAs) and otolaryngology program directors (PDs).
Main Outcome Measures The PDs were asked to rank the importance of 10 criteria in choosing a residency candidate on a 20-point scale (with 1 indicating utmost importance; 20, not important at all). The PAs were asked to express their expectations of how candidates should be ranked using those same criteria.
Results The interview and personal knowledge of the applicant (mean ranks, 2.63 and 3.63, respectively) were the most important criteria to PDs, whereas the interview and letters of recommendation (mean ranks, 2.55 and 3.65, respectively) were the most important criteria among PAs. Likelihood to rank program highly and ethnicity/sex were the least valued by PDs and PAs.
Conclusions Although PDs and PAs agree on the least important criteria for ranking otolaryngology residency candidates, they disagree on the most important criteria. This information provides insight into how programs select residency candidates and how this compares with applicant expectations. Furthermore, this information will assist applicants in understanding how they might be evaluated by programs. Improved understanding of the match process may increase the likelihood of having a good fit between otolaryngology programs and matched applicants.
The match administered by the National Resident Matching Program is one of the most important events in the professional life of a physician. Because the match determines the type, length, and location of postgraduate training and adherence to the results of the match is binding for both applicants and programs, the importance of the match result cannot be overstated. This is especially true for competitive specialties, such as otolaryngology–head and neck surgery (OHNS), in which the number of applicants exceeds the number of training spots. In the 2010 match, for example, 395 applicants vied for 280 positions in OHNS.1
Both prospective residents and residency programs have much at stake in achieving a successful outcome in which matched programs and applicants are a good fit with one another. This is reflected in the number of publications across multiple specialties that deal with resident selection criteria.2-4 This study complements a recent publication5 by our group that investigated the criteria applicants used in selecting an OHNS residency program. This study found significant differences between what applicants chose as the most important criteria in selecting a residency program and what those programs expected.
Using the same respondents and methods, this study addresses the other component of the match process: selecting residency applicants. The objectives of this study were (1) to assess the criteria used by otolaryngology programs in selecting residency candidates and (2) to compare residency candidate ranking criteria among otolaryngology programs and applicant expectation of how candidates should be selected.
Exemption status was granted by the Duke University School of Medicine Institutional Review Board to conduct an anonymous survey study of otolaryngology residency program applicants (PAs) and otolaryngology program directors (PDs) during the 2009 and 2010 match cycles. During the time frame of the study, there were 105 otolaryngology residency programs in the United States.1,6 Because each residency program has its own PD, the PDs were used as proxies for the otolaryngology programs themselves. Using the mailing list maintained by the Society of University Otolaryngologists Program Directors' Association, a solicitation to complete the anonymous survey was sent to every PD. To study the PAs, a solicitation to participate along with a link to the survey was posted on the Web site OtoMatch.com, which is popular among otolaryngology applicants.
After demographic data were obtained, the survey asked respondents to rank in order of importance the following criteria in evaluating prospective residents: ethnicity/sex, extracurricular activities, interview, letters of recommendation, likelihood to rank program highly, medical school grades, personal knowledge of the applicant, reputation of applicant's medical school, research experience, and US Medical Licensing Examination (USMLE) scores. The PDs were asked to rank the importance of these qualities in evaluating prospective residents. The PAs were asked to rank each of these criteria according to how they believed OHNS programs should evaluate applicants. These 10 factors were ranked on a scale of 1 to 20, with 1 indicating most important; 5, very important; 10, important; 15, less important; and 20, not important at all. Ranks were mutually exclusive, so no 2 factors could receive the same rating. We used a 1- to 20-point scale rather than a 1- to 10-point scale to allow for greater spread among the individual items. Because of this approach, respondents had greater flexibility to assign relative value to different criteria. For example, a respondent who thought that 2 items were very important could rate those factors as 1 and 2 and begin rating the others at 7. The 10 criteria were drawn from our own residency program's experience and review of the literature.
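The constraint just described (10 criteria, ratings 1 to 20, no ties allowed) can be captured as a small validity check. This is an illustrative sketch, not part of the study's actual instrument:

```python
def is_valid_ranking(ranks):
    """Check one respondent's ratings of the 10 criteria: each must be an
    integer from 1 to 20, and no 2 criteria may share a rating."""
    return (
        len(ranks) == 10
        and all(isinstance(r, int) and 1 <= r <= 20 for r in ranks)
        and len(set(ranks)) == 10  # mutually exclusive ranks
    )

# The example from the text: 2 criteria rated 1 and 2, the rest from 7 up.
print(is_valid_ranking([1, 2, 7, 8, 9, 10, 11, 12, 13, 14]))  # True
print(is_valid_ranking([1, 1, 7, 8, 9, 10, 11, 12, 13, 14]))  # False (tie)
```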
In collaboration with the Duke Department of Biostatistics and Bioinformatics, comparisons between PD criteria and PA expectations were analyzed using the 2-sided Wilcoxon rank sum test. Although this test is nonparametric, results are presented as means and standard deviations because the data were distributed in a pattern acceptable for parametric summary. Subanalysis of data included the following stratifications: program size, city population, geographic location, US News & World Report ranking,7 and availability of protected research time. Comparison of subanalysis groups was performed using a t test for unequal variances to determine whether there were any differentiating trends among PD attitudes toward ranking criteria based on these various program characteristics. Statistical evaluation was completed using SAS statistical software, version 9.2 (SAS Institute Inc).
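The per-criterion comparison described above can be sketched in a few lines. The following is an illustrative pure-Python version of the 2-sided Wilcoxon rank sum test using the large-sample normal approximation; the study itself used SAS, and the rank data below are invented for demonstration, not drawn from the survey:

```python
import math

def wilcoxon_rank_sum(x, y):
    """2-sided Wilcoxon rank sum test via the normal approximation.
    Ties receive the average rank; no continuity or tie-variance
    correction is applied, so P values are approximate."""
    data = sorted([(v, "x") for v in x] + [(v, "y") for v in y])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(data):                       # average ranks over ties
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2         # mean of positions i+1..j
        i = j
    w = sum(r for r, (_, g) in zip(ranks, data) if g == "x")
    n1, n2 = len(x), len(y)
    mean = n1 * (n1 + n2 + 1) / 2              # null mean of the rank sum
    var = n1 * n2 * (n1 + n2 + 1) / 12         # null variance
    z = (w - mean) / math.sqrt(var)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, z, p

# Invented ranks for one criterion from 6 PDs and 6 PAs.
pd_ranks = [2, 3, 1, 4, 5, 2]
pa_ranks = [6, 5, 7, 4, 8, 6]
w, z, p = wilcoxon_rank_sum(pd_ranks, pa_ranks)
print(f"W = {w}, z = {z:.2f}, P = {p:.3f}")
```

With real survey data, the 41 PD ranks and 83 PA ranks for a given criterion would replace the two lists above.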
A total of 41 PDs and 84 PAs completed the survey. One PA survey was excluded because of frivolous responses (age older than 84 years, more than 100 publications), leaving 83 for inclusion in the analyses. No duplicate entries were identified. The PA demographic data are given in Table 1, whereas residency program data are reported in Table 2. The Figure reveals the overall mean rank assigned by the PDs and PAs to individual criteria. (Criteria were ranked using a scale of 1-20, with 1 being most important and 20 being not important at all.) The interview and personal knowledge of the applicant were most important to the PDs (mean [SD] rank, 2.63 [2.72] and 3.63 [3.27], respectively), whereas the interview and letters of recommendation were most important among the PAs (mean [SD] rank, 2.55 [1.92] and 3.65 [2.34], respectively). The PDs and PAs agreed on the least important criteria: likelihood to rank program highly (PDs: mean [SD] rank, 14.28 [4.30]; PAs: mean [SD] rank, 13.48 [4.96]) and ethnicity/sex (PDs: mean [SD] rank, 17.15 [4.63]; PAs: mean [SD] rank, 16.31 [5.02]). However, given the standard deviations, these factors were not uniformly ranked as least important by all respondents.
For PDs, the top 3 in descending order of importance were the interview (mean [SD] rank, 2.63 [2.72]), personal knowledge of the applicant (mean [SD] rank, 3.63 [3.27]), and USMLE scores (mean [SD] rank, 4.63 [2.78]). For PAs, the top 3 in descending order of importance were the interview (mean [SD] rank, 2.55 [1.92]), letters of recommendation (mean [SD] rank, 3.65 [2.34]), and grades (mean [SD] rank, 5.06 [2.86]).
Significant differences between the 2 groups were found among 4 criteria: personal knowledge of the applicant, letters of recommendation, extracurricular activities, and reputation of the applicant's medical school. The PDs believed that personal knowledge of the applicant was the second most important criterion in assessing a future resident (mean [SD] rank, 3.63 [3.27]), whereas the PAs rated this factor as the fifth most important (mean [SD] rank, 5.65 [4.11]; P = .005). Letters of recommendation were ranked second among the PAs (mean [SD] rank 3.65 [2.34]) and ranked fourth among the PDs (mean [SD] rank, 5.07 [3.10]; P = .005). The PAs believed that extracurricular activities were more important (mean [SD] rank, 8.45 [4.22]) than did the PDs (mean [SD] rank, 10.00 [4.06]; P = .01). Finally, reputation of the applicant's medical school was more important to PDs (mean [SD] rank, 8.54 [3.36]) than it was to PAs (mean [SD] rank, 10.96 [4.83]; P = .009). Significant differences were not found among the relative importance to PDs and PAs of interview, USMLE scores, medical school grades, research experience, likelihood to rank program highly, or ethnicity/sex.
Subgroup analyses were performed to determine whether there were any differentiating trends in programs' attitudes toward applicants based on various program characteristics. Programs were compared by size: those with 1 to 3 residents per year vs those with 4 to 5 residents per year. Smaller programs ranked medical school grades as significantly more important (mean rank, 4.81) than did larger programs (mean rank, 7.80; P = .03). Programs in the top 20 hospitals rated reputation of the applicant's medical school as significantly more important (mean rank, 6.00) than did programs not in the top 20 (mean rank, 9.25; P = .009). When we examined subgroups based on geographic area, programs differed in several comparisons. Programs in the West valued extracurricular activities significantly more (mean rank, 3.50) than did programs located in the Midwest (mean rank, 11.07; P = .03) or the East (mean rank, 9.96; P = .02). Programs in the West also valued medical school grades significantly more (mean rank, 3.50) than did programs in the East (mean rank, 6.04; P = .02). No differences were found when programs were compared according to whether they had a dedicated research rotation.
The ultimate aim of the process of reviewing, interviewing, and ranking applicants is to match residents who will perform well in the respective OHNS training programs. The great challenge is to identify those traits in applicants that predict good outcomes—not only while in training but also after residents leave the supervision and structure of the training program. Several articles8,9 on this topic have been written pertaining to OHNS. Unfortunately, this prediction is difficult at this time given the conflicting information available on common performance measures, such as medical school grades and USMLE scores.10-13 In the absence of widely accepted and validated predictors of success, OHNS residency programs must use the criteria available to them to identify applicants best suited for their training programs.
The goal of this study was to survey residency programs to identify which factors they considered important in evaluating potential residents. A further, novel aim was to determine whether these factors differed from how applicants expected programs to assess prospective residents. This information provides insight into how programs rank residency candidates and how this compares with applicant expectations. Furthermore, these data will assist applicants in understanding how they may be evaluated by programs and help direct their application efforts.
The interview was considered most important by both the PAs and the PDs. The interview allows PDs and PAs to get a “feel” for each other and to easily disseminate information and ask and answer questions. It also gives each party the opportunity to decide whether one can work with the other for the duration of the training period. Numerous other studies14-19 have also shown the prime importance given to the interview in determining the rank list. McCaffrey20 found that medical students believe the interview is the most important criterion by which programs rank applicants, and our study showed that PAs believe it should be the most important factor used to assess an applicant. However, it is still undetermined whether the interview is able to predict subsequent performance during residency.21,22
Interestingly, personal knowledge of the applicant was rated second by the PDs, whereas the PAs ranked this criterion as fifth most important. Our survey did not specifically ask how this personal knowledge was obtained, but in the survey this criterion listed “rotations or research” as an example. The PAs ranked letters of recommendation second, whereas the PDs ranked this criterion fourth most important. Perhaps PAs believed that because the letters are written by people who knew them well, the letters could be trusted to be an accurate representation of the applicant.
Given the utmost importance placed on personal interaction (either through the interview or personal knowledge of the applicant) by PDs, PAs interested in a particular residency program may want to consider doing a rotation at that site. For any applicant completing an OHNS rotation, whether at home or away, our survey confirms that the rotation is an audition and that performance on it will weigh heavily in the ranking process. This has been found to be true in another competitive specialty, namely, orthopedic surgery, in which away rotations increased the chances of matching.23 Because PAs believed that letters of recommendation were second only to the interview in importance, PDs may want to consider contacting the authors of those letters to gain more information regarding the applicant.
The reputation of a medical school is admittedly a highly subjective assessment. Although some may use the amount of research dollars in the budget or the level of National Institutes of Health funding as a guide, these are not reflective of any individual medical student's performance or potential. Perhaps this is why PAs ranked this criterion lower than PDs did, leading to a significant difference in its ranking. One can also consider an applicant's class rank, but this information may not be available for every applicant. In addition, the use of an individual's class rank is fraught with difficulty because there is no way to verify this information until after the applicant graduates, by which time the match is already completed.
Our study has several limitations. The response rate from programs was 39%, but the response rate from applicants cannot be assessed because there is no way to know how many OHNS applicants saw or followed the link to the survey on OtoMatch.com. This is an inherent limitation of any study that solicits anonymous survey responses; to help increase the number of responses, 2 match cycles were studied. Although a larger response pool would further validate these findings, the data obtained from this cross-sectional study provide insight into the match process of selecting applicants.
Because PDs were used as a proxy to evaluate the attitudes of residency programs, it is possible that the beliefs of the PDs do not accurately reflect the beliefs of the faculty within their programs. Multiple comparisons are also a concern because 10 different criteria were assessed, and the mutually exclusive ranks introduce dependence among the factors. Some respondents may have believed that 2 factors were equally important but were forced by the survey design to assign 1 rank per factor. To minimize this issue, we used a scale of 1 to 20 rather than 1 to 10 so that those completing the survey would have greater flexibility to discriminate among ranks.
Currently, there is no validated method for identifying and assessing a resident who is the ideal candidate for a residency program. Personality testing, task-based questions related to the responsibilities of a resident, evaluation under simulated stressful conditions, and character assessment are all important facets of applicant review that complement each other, and some have been formally investigated in selecting residents.8,24-26 Ideally, a method can be developed that incorporates all of these things to allow for an accurate judgment of a potential resident's performance in a given program.
In conclusion, this study provides insight into the criteria PDs currently use to rank PAs and how those criteria compare with PAs' expectations of how they should be evaluated. The top 5 criteria for both groups were medical school grades, interview, letters of recommendation, personal knowledge of the applicant, and USMLE scores. However, PDs and PAs significantly differed in the order of these criteria.
Correspondence: Walter T. Lee, MD, Division of Otolaryngology–Head and Neck Surgery, Duke University Medical Center (DUMC 3805), Durham, NC 27710 (firstname.lastname@example.org).
Submitted for Publication: July 21, 2011; accepted September 27, 2011.
Author Contributions: All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Sharp and Lee. Acquisition of data: Puscas, Sharp, and Lee. Analysis and interpretation of data: Puscas, Sharp, Schwab, and Lee. Drafting of the manuscript: Puscas and Lee. Critical revision of the manuscript for important intellectual content: Sharp, Schwab, and Lee. Statistical analysis: Sharp, Schwab, and Lee. Administrative, technical, and material support: Schwab. Study supervision: Lee.
Financial Disclosure: None reported.
Additional Information: Drs Puscas and Sharp contributed equally to this work.
Additional Contributions: Maragatha Kuchibhatla, PhD, Department of Biostatistics and Bioinformatics, Duke University, performed data analysis. Ramon M. Esclamado, MD, provided intellectual contributions to this project.