Figure 2. Satisfaction with match vs match success (P<.001, 2-sided t test). Diamonds encompass the 95% confidence intervals, with the center horizontal line of each diamond marking the group mean (4.2 and 6.0). Overall mean satisfaction is 5.9 (horizontal line).
Figure 3. Satisfaction with match vs how highly respondents matched (P<.001, indicating a significant correlation). The center line represents the least-squares regression line, and the outer lines represent the 95% confidence intervals. A circle represents a single data point, and multiple data points are shown by the number of radiating lines.
Figure 4. Satisfaction with match on a 7-point scale vs the number of programs associated with apparent match rule noncompliance (P = .71, indicating a nonsignificant correlation). The center line represents the least-squares regression line, and the outer lines represent the 95% confidence intervals. A circle represents a single data point, and multiple data points are shown by the number of radiating lines.
Figure 5. How highly the applicant matched vs the number of programs associated with reported match rule noncompliance (P = .29, indicating a nonsignificant correlation). The center line represents the least-squares regression line, and the outer lines represent the 95% confidence intervals. A circle represents a single data point, and multiple data points are shown by the number of radiating lines.
Lansford CD, Fisher SR, Ossoff RH, Chole RA. Otolaryngology–Head and Neck Surgery Residency Match: Applicant Survey. Arch Otolaryngol Head Neck Surg. 2004;130(9):1017–1023. doi:10.1001/archotol.130.9.1017
Objective
To examine satisfaction with the match process and reported failures to comply with the match rules among applicants of the January 2002 Otolaryngology–Head and Neck Surgery match.
Design
A survey was mailed to all applicants completing the 2002 San Francisco Matching Program match.
Participants
Surveys were mailed to 312 applicants; the 151 returned surveys were entered into a database, which was then subjected to statistical analysis.
Main Outcome Measures
Survey questions asked whether the applicant matched and how highly, how well the applicant considers the match to fulfill its goals, how many interviews the applicant attended, and how many of these included perceived noncompliance with San Francisco Matching Program rules by region of the country.
Results
Among respondents who matched successfully, satisfaction with the match correlated significantly (P<.001) with how highly they matched. Satisfaction among matching applicants was significantly higher (P<.001) than among those who did not match. The 151 respondents attended a total of 970 interviews and reported perceiving noncompliance with the match rules in 42 (4.3%) of these encounters. Most (87%) respondents reported full adherence to the match rules, and the degree of adherence did not correlate significantly with applicants' satisfaction (P = .71).
Conclusions
Applicants' satisfaction with the match process depended significantly on their match outcome. Rule noncompliance was rare and not significantly related to applicant satisfaction. This study suggests that otolaryngology applicants perceive high levels of satisfaction with the match and infrequent breaches of the stated match rules.
Since 1983, medical students applying to US residency positions in otolaryngology have done so through the San Francisco Matching Program (SFMatch) under the direction and authority of the Association of Academic Departments of Otolaryngology.1 This service uses a computer-run algorithm to mediate confidential rank-ordered preferences of applicants and residency programs and results in training contracts on "Match Day," which typically occurs in January.
Development of this matching system and those like it, such as the National Resident Matching Program (NRMP), was undertaken to protect medical students from several problems associated with the ad hoc fashion in which residency positions were filled before a centralized and coordinated match existed. "Exploding offers," in which an applicant had a limited time to accept an offer before it was revoked, pressured some students into accepting their first offer before interviewing at other, potentially more desirable, programs. "Insiderism" also played a prominent role in resident selection, placing personal contacts ahead of individual merit. In an effort to check these practices, several medical specialties began using the NRMP in 1952.2 Motivated by the same problems, the field of otolaryngology–head and neck surgery adopted the SFMatch in 1983.
Recently, a class-action lawsuit against the Accreditation Council for Graduate Medical Education and several teaching hospitals has gained national attention. The thrust of this lawsuit is that the NRMP violates US antitrust laws by unfairly restricting competition among residency programs for residents' services, thereby lowering residents' salaries.3,4 Although this lawsuit remains unsettled, the SFMatch might also become a target of similar litigation.
The purpose of the present survey was to determine the efficacy of and satisfaction with the SFMatch from the viewpoint of applicants for specialty training in otolaryngology–head and neck surgery. Observance of the match rules, as perceived and reported by the applicants, was examined to clarify its effect on match satisfaction and function.
On April 22, 2002, surveys were mailed to the 312 individuals who had submitted rank lists for postgraduate year 2 residency positions in otolaryngology–head and neck surgery, starting in 2004, through the SFMatch. Only applicants with addresses within the United States and Puerto Rico were surveyed. Three applicants (1 who matched and 2 who did not) with mailing addresses outside the United States and Puerto Rico were excluded because the additional postage required for their responses would have compromised their anonymity. The 312 individuals surveyed consisted of 246 applicants who successfully matched with an otolaryngology–head and neck surgery training program and 66 who did not match. The survey was anonymous, and respondents were asked not to identify themselves. The surveys included postage-paid addressed return envelopes. The institutional review board at Duke University allowed this study by exemption under 45 CFR §46.101(b).
The survey included a verbatim reproduction of the match rules (Table 1),5 which were provided to applicants in the application packet. Questions included in the survey are given in Figure 1. The last question of the survey showed a map of the United States and Puerto Rico divided into regions as follows: Northeast: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont; Southeast: Alabama, Delaware, District of Columbia, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Puerto Rico, South Carolina, Tennessee, Virginia, and West Virginia; Midwest: Illinois, Indiana, Iowa, Michigan, Minnesota, Missouri, Ohio, and Wisconsin; West: Arizona, Arkansas, California, Colorado, Idaho, Kansas, Montana, Nebraska, Nevada, New Mexico, North Dakota, Oklahoma, Oregon, South Dakota, Texas, Utah, Washington, and Wyoming. For each region, this question asked applicants to indicate the number of programs at which they interviewed and the number of those programs perceived to have violated match rules.
Data from the survey as well as the postmark state and date were entered into a FileMaker Pro (Claris Corporation, Santa Clara, Calif) computer database. Statistical analysis was then performed using JMP IN (SAS Institute, Cary, NC) and SPSS (SPSS Incorporated, Chicago, Ill) software. Responses were grouped by postmark location into 1 of the 4 regions outlined above.
A total of 151 responses were received, with postmark dates ranging from April 25 to June 14, 2002, yielding a response rate of 48%. Of these respondents, 142 said they had matched and 9 said they had not. The number of responses received was comparable among regions: 38 from the Northeast (all matched successfully), 37 from the Southeast (3 unmatched), 30 from the Midwest (2 unmatched), and 35 from the West (3 unmatched), with 11 postmarks not present or not legible (1 unmatched). These proportions are comparable with those of all 312 SFMatch applicants: 80 from the Northeast (11 unmatched), 73 from the Southeast (17 unmatched), 83 from the Midwest (19 unmatched), and 76 from the West (19 unmatched) (written communication from Doug Perry, MSW, Director of SF Match, November 20, 2003, and December 18, 2003). Of the survey respondents, 94% matched, compared with 79% of SFMatch applicants completing a rank list. Those survey respondents who matched did so at a mean ± SD position of 2.3 ± 1.9 (median, 2) on their rank list. One respondent who matched did not indicate how highly. This compares with data from the SFMatch for all 246 matching applicants in otolaryngology in 2002, who matched at a mean ± SD position of 2.8 ± 2.5 (median, 2) on their rank list (written communication from Doug Perry, Director of SF Match, November 20, 2003, and December 18, 2003).
Responses to the 7-point scale assessing satisfaction that the match fulfilled its goals ranged from 1 (unsatisfied) to 7 (fully satisfied), with a mean ± SD of 5.9 ± 1.3 (median, 6). Satisfaction that the match fulfills its purpose was significantly lower among those who did not match (mean ± SD, 4.2 ± 0.4) than among those who did (mean ± SD, 6.0 ± 0.1) (n = 151; P<.001, 2-sided t test) (Figure 2). Similarly, among matching respondents, satisfaction with the match correlated significantly with how highly on the applicant's rank list he or she matched (n = 141; P<.001). The slope of the fit line was –0.27, indicating that as the match result worsened (larger number on an applicant's rank list), satisfaction decreased fairly steeply (Figure 3). Applicants' satisfaction with the match did not correlate significantly with the number of interviews with reported rule noncompliance (P = .71) (Figure 4).
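The slope reported above comes from an ordinary least-squares fit of satisfaction on rank-list position. As a minimal sketch of that computation, the closed-form slope formula can be written as follows; the rank and satisfaction values below are hypothetical illustrations, not the survey's actual responses.

```python
# Closed-form ordinary least-squares slope, as used for the fit line in the
# satisfaction-vs-rank analysis. The example data are hypothetical, NOT the
# study's data.

def least_squares_slope(x, y):
    """Slope of the least-squares fit of y on x: cov(x, y) / var(x)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    return sxy / sxx

# Hypothetical rank positions (1 = first choice) and 7-point satisfaction scores:
rank = [1, 1, 2, 3, 4, 6]
satisfaction = [7, 6, 6, 5, 5, 4]
print(least_squares_slope(rank, satisfaction))  # negative: satisfaction falls as rank worsens
```

A negative slope, as in the study's –0.27, indicates that satisfaction declines as the matched position moves down the rank list.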
Aggregate data for perceived match rule noncompliance experienced by the applicants at programs visited for interviews in each region are as follows: Northeast, 14 (5.4%) of 258 interviews; Southeast, 12 (4.7%) of 254; Midwest, 5 (2.2%) of 223; and West, 11 (4.7%) of 235. Across all regions, 42 (4.3%) of 970 interviews involved reported match rule noncompliance. These 42 interviews with reported rule noncompliance were attended by 19 applicants; the remaining 132 applicants reported no rule noncompliance. Individual responses ranged from 0 noncompliant interviews of 18 attended to 12 of 12, as in the case of 1 outlying respondent. The median number of interviews with reported rule noncompliance was 0 and the mean was 0.3, compared with a median of 10 and a mean of 10.2 program interviews attended (Figure 4).
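The regional percentages above follow directly from the reported counts. As a quick arithmetic check, a short sketch using only the counts stated in the text:

```python
# Perceived match rule noncompliance by region, as (noncompliant interviews,
# total interviews), using the counts reported in the survey results.
regions = {
    "Northeast": (14, 258),
    "Southeast": (12, 254),
    "Midwest": (5, 223),
    "West": (11, 235),
}

for name, (bad, total) in regions.items():
    print(f"{name}: {bad} ({100 * bad / total:.1f}%) of {total}")

total_bad = sum(bad for bad, _ in regions.values())
total_all = sum(total for _, total in regions.values())
# Prints "All regions: 42 (4.3%) of 970", matching the aggregate in the text.
print(f"All regions: {total_bad} ({100 * total_bad / total_all:.1f}%) of {total_all}")
```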
When how highly an applicant matched (the position on his or her rank list) is plotted against the number of interviews with rule noncompliance events perceived, the correlation is nonsignificant (P = .29) (Figure 5). Similarly, no significant relationship was found between the applicant's match success (matched vs unmatched) and the number of times rule noncompliance was perceived (P = .52).
The total number of interviews attended did not correlate significantly (P = .49) with satisfaction with the match. Likewise, the total number of interviews attended did not correlate significantly with how highly matching applicants matched on their rank list (P = .24). Respondents who did not match appear to have attended fewer interviews, but this trend was not statistically significant (P = .08).
Applicants to the SFMatch are instructed that "observed violations of the matching rules must be reported to the program coordinator and match director." No such reports were received for this group of applicants.
The data produced by the present survey have several limitations and must be interpreted carefully. The survey responses appear to be reasonably representative of the entire group participating in the otolaryngology SFMatch. The response rate (48%) was typical for similar single-mailing surveys,6 the proportion of matched and unmatched survey respondents was comparable with that of matched and unmatched applicants, and the mean and median values for how highly applicants matched were comparable between survey respondents and the match overall. Geographic distributions of matched and unmatched respondents and SFMatch participants were similar. One might postulate that the anonymity of responses could compromise the data by sacrificing accountability for potential candor, thus inflating perceived rule noncompliance rates. Alternatively, distrust in the anonymity of the survey could cause underreporting, since applicants may fear discovery as a "whistleblower." The respondents were, however, slightly more successful in matching than nonrespondents. Survey respondents who matched did so at a mean ± SD position of 2.3 ± 1.9 (median, 2) on their rank list, while all SFMatch candidates who matched did so at a mean ± SD position of 2.8 ± 2.5 (median, 2) on their rank list. Among the minority of unmatched respondents, however, the respondent sample appears somewhat less representative of the overall group. Given that only 9 (6.0%) of the 151 respondents did not match, compared with the overall 2002 otolaryngology match in which 66 (21.2%) of 312 applicants failed to match, the respondents represent a selection bias toward successful applicants.
Since the data presented here suggest that better success in matching correlates with better satisfaction that the match performed its goal (Figure 2 and Figure 3), this selection bias toward successful applicants might therefore generate fewer reports of problems, such as rule noncompliance, than if the unmatched and less satisfied counterparts had responded more proportionally. Lastly, an applicant may perceive a breach of match rules when no rule was explicitly broken. In fact, intentionally misleading communications appear common, as shown in the NRMP and urology matches.7,8 Furthermore, reported rule noncompliance might not necessarily implicate interviewing faculty, since the survey data could in some cases reflect the applicant or even a third party breaking the stated rules. A third party might include a resident or ancillary staff member giving implicit or explicit feedback to the applicant.
Overall, applicant satisfaction with the otolaryngology residency matching program was high (mean ± SD, 5.9 ± 1.3 of 7). This study demonstrates that applicants' satisfaction that the match fulfills its purpose decreases significantly with worse match results, either not matching or not matching highly, but neither satisfaction nor match success was affected by perceived match rule noncompliance. The data presented suggest potential applicant bias in that those left unmatched or those matching lower on their lists may consider the match function unmet. This suggests that unless all applicants match at their first choice, some dissatisfaction with the match will follow. Thus, in a competitive match with many more applicants than open positions, some applicant dissatisfaction appears inevitable, regardless of whether the match process was carried out well. Indeed, this sorting process is the fair intention of the match. Alternatively, a poorly functioning match—one with rule noncompliance, insiderism, or any other shortcoming—could result in low applicant satisfaction that is justifiable and unbiased by degree of success. Further delineation of the causes for dissatisfaction is beyond the scope of this study. This study did not attempt to compare satisfaction with a centralized match process against satisfaction without one, since appointments without a match were last made prior to 1983 and no comparable satisfaction data exist from that era, although opinions from authors in other medical fields suggest that the centralized match represents an improvement in the process.9,10
Applicants reported noncompliance with the match rules in 4.3% of interviews. By historical standards, this rate is low. A 1990 questionnaire given by the Association of American Medical Colleges to candidates11 demonstrated that 4.8% of otolaryngology applicants were asked to commit to 1 or more programs before the match. In that survey, 18.8%, 17.8%, and 12.4% of applicants to radiology, anesthesiology, and neurology, respectively, recalled being asked to commit early. In contrast, a survey of 1999 urology match participants showed that 47% of program directors and 61% of residency applicants asked the other party how they would rank.7 These actions breach stated NRMP match rules, as they would SFMatch rules, and it is ironic that despite these transgressions of the rules, the information gleaned may be of poor quality: in the 1999 urology match study, 31% of urology program directors and 44% of urology residency applicants acknowledged dishonesty in communicating match list ranking.7 Unfortunately, once discussion of match list ranking begins, the classical ethical problem of the "Prisoner's Dilemma" ensues, whereby anticipated dishonesty among one's competitors (in the form of inflated commitments) increases the motivation for one also to be dishonest.12 For this reason, discussion of how highly one party will rank the other is unlikely to yield reliable information, and a moratorium on this topic may help steer the interview toward more productive dialogue.
The SFMatch process appears to work well, with high levels of applicant satisfaction. Applicant perception of otolaryngology match rule noncompliance appears to be low in both an absolute and a relative sense. The data presented suggest that applicants' satisfaction that the match fulfills its purpose decreases significantly with worse match results, either not matching or not matching highly. Perceived breaches of match rules did not appear to affect candidates' match success or satisfaction positively or negatively. Nevertheless, premature discussion of rankings appears likely to be nonproductive given the frequency of dishonest comments during interviews shown in the urology match.7 Although applicant satisfaction prior to 1983 is unavailable for comparison, we believe that the match process facilitates fair and informed decision making for medical students and protects them from unfair historical practices predating the centralized match. As national attention focuses on the fairness and utility of the centralized match process in general, this unattractive historical alternative must not be forgotten.
Correspondence: Christopher D. Lansford, MD, Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, Duke University Medical Center, Box 3805, Durham, NC 27710 (firstname.lastname@example.org).
Submitted for publication March 4, 2003; final revision returned January 20, 2004; accepted March 2, 2004.
We thank Jennifer Lansford, PhD, for her editing assistance.