Tamblyn R, Abrahamowicz M, Dauphinee D, Wenghofer E, Jacques A, Klass D, Smee S, Blackmore D, Winslade N, Girard N, Du Berger R, Bartman I, Buckeridge DL, Hanley JA. Physician Scores on a National Clinical Skills Examination as Predictors of Complaints to Medical Regulatory Authorities. JAMA. 2007;298(9):993-1001. doi:10.1001/jama.298.9.993
Author Affiliations: Departments of Medicine (Drs Tamblyn and Dauphinee) and Epidemiology & Biostatistics (Drs Abrahamowicz, Winslade, Buckeridge, and Hanley, and Mss Girard and Du Berger), McGill University, Montreal, Quebec, Canada; Ontario College of Physicians and Surgeons, Toronto, Ontario, Canada (Drs Wenghofer and Klass); Quebec College of Physicians, Montreal (Dr Jacques); and Medical Council of Canada, Ottawa, Ontario (Dr Blackmore and Mss Smee and Bartman).
Context Poor patient-physician communication increases the risk of patient complaints and malpractice claims. To address this problem, licensure assessment has been reformed in Canada and the United States, including a national standardized assessment of patient-physician communication and clinical history taking and examination skills.
Objective To assess whether patient-physician communication examination scores in the clinical skills examination predicted future complaints in medical practice.
Design, Setting, and Participants Cohort study of all 3424 physicians taking the Medical Council of Canada clinical skills examination between 1993 and 1996 who were licensed to practice in Ontario and/or Quebec. Participants were followed up until 2005, including the first 2 to 12 years of practice.
Main Outcome Measure Patient complaints against study physicians that were filed with medical regulatory authorities in Ontario or Quebec and retained after investigation. Multivariate Poisson regression was used to estimate the relationship between complaint rate and scores on the clinical skills examination and traditional written examination. Scores are based on a standardized mean (SD) of 500 (100).
Results Overall, 1116 complaints were filed for 3424 physicians, and 696 complaints were retained after investigation. Of the physicians, 17.1% had at least 1 retained complaint, of which 81.9% were for communication or quality-of-care problems. Patient-physician communication scores for study physicians ranged from 31 to 723 (mean [SD], 510.9 [91.1]). A 2-SD decrease in communication score was associated with 1.17 more retained complaints per 100 physicians per year (relative risk [RR], 1.38; 95% confidence interval [CI], 1.18-1.61) and 1.20 more communication complaints per 100 practice-years (RR, 1.43; 95% CI, 1.15-1.77). After adjusting for the predictive ability of the clinical decision-making score in the traditional written examination, the patient-physician communication score in the clinical skills examination remained significantly predictive of retained complaints (likelihood ratio test, P < .001), with scores in the bottom quartile explaining an additional 9.2% (95% CI, 4.7%-13.1%) of complaints.
Conclusion Scores achieved in patient-physician communication and clinical decision making on a national licensing examination predicted complaints to medical regulatory authorities.
Decades of research have confirmed that poor skills in patient communication are associated with lower levels of patient satisfaction, higher rates of complaints, an increased risk of malpractice claims, and poorer health outcomes.1-16 Medical schools have responded by incorporating training in patient communication and clinical skills into the curriculum. However, these skills were not systematically evaluated, nor was a minimum level of proficiency required for medical licensure.17 To address this problem, licensure reforms were undertaken in North America.18 The Medical Council of Canada (MCC) (1993),19 the Educational Commission for Foreign Medical Graduates (1998),20 and most recently the United States Medical Licensing Examination (USMLE) (2004)21 have all introduced a clinical skills examination (CSE)—a nationally standardized assessment of patient-physician communication, clinical history taking, and examination skills—as a requirement for licensure. All US and Canadian medical school graduates must now pass a multiple-case standardized patient assessment, in which patient and physician examiners observe and grade clinical and communication skills to predict a candidate's competence to practice.
While mandatory assessment of clinical and communication skills is supported by the general public,22 concerns have been raised about the cost of the examination and the lack of evidence that a 1-day assessment could predict future practice, particularly as it relates to deficiencies in patient-physician communication.23-27
Since instituting its CSE for all Canadian physicians, the MCC has tested more than 25 000 medical graduates using an examination format similar to the USMLE Step 2 clinical skills examination.19 We investigated the ability of CSEs to predict future complaints in medical practice. We tested the hypothesis that lower scores in patient-physician communication would be associated with a higher rate of patients' complaints about quality of care and communication. We also assessed whether the use of clinical examination scores improved the prediction of complaints beyond results from the traditional written examination.
In Canada and the United States, medical regulatory authorities (state medical boards and provincial colleges of physicians and surgeons) use a common framework to govern how physicians are trained, accepted into practice, regulated, disciplined, and removed from practice.28-32 A principal obligation of state and provincial medical regulatory authorities in both countries is to address and resolve public complaints against physicians. In accordance with a common set of principles and procedures, all complaints that are received in writing are investigated. A triage system is used to collect information from the patient and physician for each complaint, weed out frivolous or vexatious actions, and undertake informal steps to attain early resolution of minor issues. When these informal steps are either unsuccessful or deemed inappropriate, the complaint is managed by a more formal committee or panel process that determines further action. Most complaints are resolved through a graded series of regulatory actions, typically education, cautions, and warnings. For the most serious complaints, and for all complaints involving issues of sexual misconduct, formal disciplinary hearings of a quasi-judicial nature are convened. These hearings can result in a variety of sanctions up to loss of license. When a patient complaint about a physician is made directly to a hospital, the hospital in most state and provincial jurisdictions is required to report problems of professional misconduct to the medical regulatory authority.
The cohorts of physicians who took the MCC clinical skills examination between 1993 and 1996 and were licensed to practice in Ontario and/or Quebec were identified. Nearly two-thirds of the Canadian population and approximately 50% of all physicians reside in these 2 provinces. All complaints filed against these physicians with the medical regulatory authority in either province were retrieved between the date of licensure and March 2005. The MCC identified the 6677 physicians taking the examination during this period and provided the first and last name, sex, medical school, and year of graduation of each candidate to the medical regulatory authority in Ontario and Quebec. These 5 nominal fields were used to link to the registry of licensed physicians in each province. Physicians who matched on all fields were retained. Partial matches were manually inspected and adjudicated. Specialty, postgraduate training location and dates, and license year were obtained from the provincial medical regulatory files as well as from the national training registry of all physicians completing postgraduate medical training in Canada. Of the 6677 physicians, 8.6% could not be linked to Ontario/Quebec medical regulatory files or the national postgraduate training registry. Compared with linked physicians, unlinked physicians were more likely to be older (>45 years, 44.2% vs 11.4%; χ² P < .001) and male (73.6% vs 57.4%; χ² P < .001), to have trained outside Canada (83.4% vs 12.7%; χ² P < .001), to have not yet passed the CSE (15.7% vs 1.8%; χ² P < .001), and to have lower traditional written examination scores (495.4 vs 524.7; t test P < .001) and CSE scores (436.8 vs 517.8; t test P < .001).
Physician identity and confidential information were protected by replacing all nominal data with an MCC-generated study number, which was used to link demographic, score, and complaint files for each study physician. The McGill Faculty of Medicine institutional review board provided ethical approval. The provincial privacy commission, the Ontario and Quebec medical regulatory authorities, and MCC approved and oversaw data access, linkage, and anonymization procedures.
Provincial medical regulatory authorities collect standardized information for each written complaint against a physician. This information includes the names of the patients and physicians involved, and a description of the problem, circumstances, medical interventions, outcome, and the location of the incident. The investigation process includes a review of the letter with the complainant, the physician response, the patient's medical records, information from the hospital if applicable (eg, for surgical complications), and information from witnesses. All evidence is reviewed by physician investigators (Quebec) or a complaints committee (Ontario) who determine the legitimacy of the complaint, the type and seriousness of problem, and the recommended approach for resolution and subsequent action. Complaints are classified by investigators into 1 of 55 (Quebec) or 57 (Ontario) mutually exclusive categories (eg, complication due to medical or surgical error, breach of confidentiality, incomplete medical reports), along with the outcome (retained or not) and the action (warning, counseling/training, license withdrawal, suspension, or restriction).
All complaints recorded for study physicians were retrieved by medical regulatory personnel. Data included the physician study number, date of filing and closure, the classification of problem type, and the outcome (retention decision and action taken). Complaint classification codes from the respective regulatory authorities were grouped into 6 categories based on comparable groupings used by the Ontario and Quebec regulatory authorities: communication and attitude; quality of care; professionalism; office-related problem; physician health-related behavior problem (eg, mental illness); and other (eg, false advertising). Assignment of complaint classification codes was independently verified by medical regulatory investigators who arbitrated disagreements on final assignment.
The primary outcome was the complaint rate: the number of complaints retained as valid by the medical regulatory authority after investigation per year of practice time. Because judgment about the validity of a complaint may vary between provincial regulatory authorities, we conducted a sensitivity analysis including all complaints to assess whether our findings were influenced by retention decisions. The subset of retained complaints that were related to problems in communication and quality of care were assessed as secondary outcomes, as these problems should be more strongly associated with the competencies being assessed by the examination.
The complaint rate for each physician was calculated using as the denominator years in practice, defined as the number of years between the final year of postgraduate training exit date and the end of follow-up (March 2005). To assess the validity of using exit date from postgraduate training as the starting date for practice time, we retrieved for 1161 Quebec physicians a count of the number of years in which the physician billed for patient services to the provincial insurance agency. In comparison with actual billing data between 1993 and 2003, our approach modestly overestimates the number of practice-years (mean [SD] from billing, 4.2 [2.4] years; from training exit year estimate, 4.9 [2.2] years). However, there was very good agreement between the 2 methods (intraclass correlation, 0.67; 95% confidence interval [CI], 0.54-0.75) and no relationship between practice-years and communication score (Pearson r = −0.06). Thus, potential errors in measurement of practice-years should not confound the association between complaints and communication score.
Traditional Written Examination. This examination tests an individual's competence to enter postgraduate training. It is generally taken at the end of medical school and must be passed to be eligible for licensure. Medical knowledge is assessed using approximately 450 multiple-choice questions covering medicine, surgery, obstetrics-gynecology, psychiatry, pediatrics, and preventive medicine.33 Clinical decision making is assessed using key feature problems.34 Examinees are asked to respond to critical aspects of diagnosis or management in 36 to 40 clinical problems using write-in or menu-selection response formats.34 Unlike multiple-choice questions, key feature questions focus exclusively on the components of a case in which physicians are required to make critical decisions and in which errors could affect patient outcome. Grading is based on the relative quality of the response, rather than a single correct answer, and errors of both omission and commission are considered in scoring. The score is calculated as the weighted sum of the multiple-choice (weight = 0.75) and clinical decision-making (weight = 0.25) components, where the weights reflect the amount of testing time devoted to each component. A criterion-based passing score is established by a modified Nedelsky method,33,34 and scores for first-time takers are standardized to a mean (SD) of 500 (100). For the study population, the Cronbach α estimate of the reliability of the written examination varied from 0.90 to 0.92 for the multiple-choice component, and from 0.60 to 0.69 for the clinical decision-making component in different administrations.
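As a rough sketch of the scoring arithmetic described above (not the MCC's actual scoring procedure; the example scores and the use of the population SD are assumptions), the weighted composite and the mean-500/SD-100 standardization could be computed as follows:

```python
import math

def composite(mcq, cdm):
    """Weighted sum of the multiple-choice (0.75) and
    clinical decision-making (0.25) component scores."""
    return 0.75 * mcq + 0.25 * cdm

def standardize(scores, target_mean=500.0, target_sd=100.0):
    """Rescale raw scores to a mean of 500 and SD of 100.
    Population SD is assumed; the MCC's exact procedure may differ."""
    n = len(scores)
    m = sum(scores) / n
    s = math.sqrt(sum((x - m) ** 2 for x in scores) / n)
    return [target_mean + target_sd * (x - m) / s for x in scores]
```

After standardization, a candidate 1 SD below the cohort mean scores 400, and 2 SDs below scores 300, which is how score differences are expressed in the Results.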
Clinical Skills Examination. This examination tests competence in data collection (history, physical examination), patient communication, and problem solving (diagnosis and management) through a 20-case objective structured clinical examination, and can be taken after 1 year of postgraduate training.19 Most physicians take the examination in the second postgraduate year or the first half of the third postgraduate year (93% of physicians taking the examination between 1993 and 1996). Data collection is assessed in a 5- or 10-minute interaction with a standardized patient, by trained physician observers using case-specific checklists.19 Patient-centered communication is assessed in 3 to 4 cases, selected to represent situations where communication is required for effective management (eg, discuss refusal of treatment for a terminal illness, counsel an adolescent about birth control). Examples of patient-physician communication that would receive a low score include condescending, offensive, or judgmental behaviors, or ignoring patient responses during the encounter. Problem solving is assessed by postencounter written responses to short-answer questions on diagnosis, investigation, interpretation of test results, and management. Responses are scored by physician examiners using an answer key. The passing score for the overall examination is established using criterion-referenced methods,19,33-35 and scores for first-time takers are standardized to a mean (SD) of 500 (100). For the study population, the Cronbach α estimate of the reliability of the CSE scores ranged from 0.25 to 0.50 for communication, 0.59 to 0.75 for data acquisition, and 0.41 to 0.67 for problem solving in different administrations.
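The Cronbach α reliabilities reported for both examinations follow the standard formula α = k/(k − 1) × (1 − Σ item variances / total-score variance). A minimal sketch, with hypothetical item scores and sample (n − 1) variances assumed:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists (one list
    per item/case, all the same length). Uses sample (n-1) variance."""
    k = len(items)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col) for col in zip(*items)]  # each examinee's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

With only 3 to 4 communication cases, α is driven down by case-to-case variation, which is consistent with the low (0.25-0.50) communication reliabilities reported above.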
Physician characteristics that may be associated with communication ability or complaint rate were measured as potential confounders and effect modifiers.6,10 They included information on the sex of the physician, international medical graduate status, and specialty, which were retrieved from the MCC master file, postgraduate training registry, and the medical regulatory authorities. Practice province also was considered a potential confounder because differences may exist in health service delivery and the management of complaints between jurisdictions.
Correlations between examination scores were estimated by Pearson product-moment correlation coefficients. Score reliability was assessed using a weighted Cronbach α, where weights were based on the number of candidates taking the examination in each administration. Disattenuated correlations also were calculated to determine the expected correlation if both scores were measured with perfect reliability, using the standard correction for attenuation36: the observed correlation divided by the square root of the product of the 2 scores' reliability coefficients (r_disattenuated = r_xy/√(r_xx × r_yy)).
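The Spearman correction for attenuation divides the observed correlation by the geometric mean of the two reliability coefficients. A minimal sketch (the numeric inputs below are hypothetical, not the study's values):

```python
import math

def disattenuated_r(r_obs, rel_x, rel_y):
    """Spearman correction for attenuation: the correlation the two
    scores would show if both were measured with perfect reliability.
    rel_x and rel_y are the reliability (e.g., Cronbach alpha) estimates."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical example: observed r = 0.12, reliabilities 0.40 and 0.90
r_true = disattenuated_r(0.12, 0.40, 0.90)  # 0.12 / sqrt(0.36) = 0.20
```

Because the correction divides by a quantity less than 1, the disattenuated correlation is always at least as large as the observed one; a low corrected value (as in Table 2) therefore indicates genuinely distinct constructs rather than measurement noise.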
The relationship between the CSE scores and complaint rate was assessed using multivariate Poisson regression (SAS version 9.1, SAS Institute, Cary, North Carolina), adjusting for physician sex, specialty, country of training (Canada or international), and province. Statistical significance was assessed using 2-sided tests at the .05 level. Number of complaints was the dependent variable, and number of years in practice was used to measure person-time for each physician. The predictive ability of each examination score was assessed in a separate model that adjusted for sex, specialty, international medical graduate status, and practice province, using continuous scores as well as score quartiles. To determine if the relationship between examination scores and complaints was modified by characteristics that may be associated with communication scores, including practice jurisdiction, physician sex, and foreign training, we assessed interactions between the examination score and these characteristics and used the likelihood ratio test to determine if the interaction terms improved the model fit.
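A Poisson rate model of this kind sets log(expected complaints) equal to log(practice-years) plus a linear predictor, so that exp(coefficient) is a rate ratio. The study fit multivariate models in SAS; as an illustrative sketch only, a one-covariate version with a log practice-years offset can be fit by Fisher scoring on hypothetical grouped data:

```python
import math

def fit_poisson_rate(x, y, t, iters=50, tol=1e-10):
    """Fit log(mu_i) = log(t_i) + b0 + b1*x_i by Fisher scoring.
    x: covariate values, y: complaint counts, t: practice-years (offset).
    Returns (b0, b1); exp(b1) is the rate ratio per unit of x."""
    b0 = math.log(sum(y) / sum(t))  # start at the overall log rate
    b1 = 0.0
    for _ in range(iters):
        mu = [ti * math.exp(b0 + b1 * xi) for xi, ti in zip(x, t)]
        u0 = sum(yi - mi for yi, mi in zip(y, mu))                # score, intercept
        u1 = sum((yi - mi) * xi for xi, yi, mi in zip(x, y, mu))  # score, slope
        i00 = sum(mu)                                             # Fisher information
        i01 = sum(mi * xi for mi, xi in zip(mu, x))
        i11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = i00 * i11 - i01 * i01
        d0 = (i11 * u0 - i01 * u1) / det
        d1 = (i00 * u1 - i01 * u0) / det
        b0 += d0
        b1 += d1
        if abs(d0) + abs(d1) < tol:
            break
    return b0, b1

# Hypothetical grouped data: 30 complaints in 1000 practice-years (x = 0)
# vs 45 complaints in 1000 practice-years (x = 1) -> rate ratio 1.5.
b0, b1 = fit_poisson_rate([0, 1], [30, 45], [1000.0, 1000.0])
```

The offset term is what converts counts into rates: it fixes the coefficient of log practice-years at 1 so that physicians with longer follow-up are expected to accrue proportionally more complaints.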
Licensing examinations aim to assess a required level of proficiency, and thus minimum thresholds of communication ability may exist, below which the complaint rate is high and above which the rate is lower and relatively uniform. To assess whether a linear association provided an appropriate representation of the association between examination score and the complaint rate, we tested the multivariate Poisson models for nonlinearity using a generalized additive model (GAM), a nonparametric extension of Poisson regression.37 The adjusted effect of examination score was estimated using smoothing splines with 4 df, and the statistical significance of the nonlinear effect was tested by a nonparametric χ² test. All models were estimated separately for primary and secondary outcomes.
To determine if including the CSE communication score improved the prediction of complaints beyond the traditional written examination results, we first estimated the independent relationship between scores achieved in the traditional written examination and complaint rate. The CSE communication score was then added to the model that included the traditional written examination score, and improvement in the prediction of complaints was assessed by the likelihood ratio test. The explanatory power of the CSE communication score in predicting complaints was estimated by the population attributable fraction, the proportion of all complaints that were explained by physicians in the bottom communication score quartile,38 after adjustment for existing predictors.
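The population attributable fraction can be computed from the case-based (Miettinen) formula, PAF = p_c × (RR − 1)/RR, where p_c is the proportion of cases occurring in the exposed group. As a crude check against the published figures (the paper's estimate is model-based and adjusted, so only approximate agreement should be expected):

```python
def attributable_fraction(cases_exposed, cases_total, rr):
    """Miettinen's case-based population attributable fraction:
    PAF = p_c * (RR - 1) / RR, where p_c is the share of all cases
    arising in the exposed group."""
    p_c = cases_exposed / cases_total
    return p_c * (rr - 1.0) / rr

# Crude check with the paper's Results: 236 of 696 retained complaints
# involved bottom-quartile communicators, adjusted RR 1.43 vs the upper
# 3 quartiles; this gives roughly 0.10, near the reported 10.0%.
paf = attributable_fraction(236, 696, 1.43)
```

Intuitively, the PAF answers the policy question posed by the examination: what fraction of complaints would be expected to disappear if bottom-quartile communicators instead experienced the complaint rate of the other physicians.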
Power was estimated using the approach proposed by Signorini39 for Poisson regression. Based on a type I error of 5%, a baseline complaint rate of 3.1% in the study population, and 3424 physicians followed up for a mean 6.5 years, the study had a power of 95% to detect a relative rate difference of 12% per 2-SD decrease in score.
Among 6677 physicians taking the CSE between 1993 and 1996, 3424 (51.3%) were licensed to practice in Ontario and/or Quebec. At the time of the examination, 71.6% of study physicians were 25 to 30 years of age, 55.5% were men, and 12.3% were international medical graduates. Following the examination, 84% completed postgraduate training in primary care or medical subspecialties, and two-thirds entered practice in Ontario (Table 1). The mean score of the study population for both the clinical skills and traditional written examinations was approximately one-quarter of an SD above 500. However, the range was considerable—approximately 7 SDs for the CSE and 5 SDs for the traditional written examination. Overall, 230 physicians (6.7%) failed the CSE on the first attempt, and 52 of these physicians never passed the CSE but were licensed to practice during the transition to the new licensure requirements.
Correlations between the clinical skills and traditional written examinations overall scores and subscores varied between r = 0.10 and r = 0.40 (Table 2). The communication score had the lowest correlation with the traditional written examination scores and with other scores on the CSE. Even when corrected for unreliability, the correlation between the communication and traditional written examination scores was low (disattenuated r = 0.23). Communication ability previously has been shown to be a domain independent from more cognitive abilities that are assessed in traditional written examinations.40
Overall, 1116 complaints were filed in a total of 22 585 practice-years (4.9 complaints per 100 practice-years) (Table 3). The mean (SD) follow-up time per physician was 6.5 (2.4) years, corresponding to the first 2 to 12 years in practice. Of the 3424 physicians, 21.5% had at least 1 complaint filed, and 17.1% had complaint(s) retained in their file after investigation. The majority (81.9%) of retained complaints were for attitude/communication and quality-of-care problems. Communication problems in management and inappropriate treatment/follow-up were the most common causes of quality-of-care complaints. Among the 696 retained complaints, none led to an immediate loss of license, 71 (10.2%) led to recommendations for additional counseling/training or discipline, and the remainder led to verbal and written warnings.
Lower CSE communication scores were associated with a higher rate of retained complaints, particularly in the lowest quartile of these scores (Table 4). The 853 physicians in the bottom communication score quartile had 236 retained complaints filed in their combined total of 5542 practice-years. This yielded an overall rate of 4.26 complaints per 100 practice-years compared with 2.51 per 100 practice-years for physicians in the top communication score quartile (Table 4). In multivariate models that adjusted for other physician characteristics, significantly higher complaint rates also were found for male vs female physicians, surgeons and primary care physicians vs medical subspecialists, and physicians practicing in Ontario vs those practicing in Quebec (Table 4). Even after adjustment for these characteristics, physicians in the lowest communication score quartile had an excess complaint rate of 1.75 per 100 practice-years compared with physicians in the top score quartile (adjusted relative risk [RR], 1.52; 95% CI, 1.30-1.78), and an excess complaint rate of 2.15 per 100 practice-years compared with the upper 3 quartiles (adjusted RR, 1.43; 95% CI, 1.22-1.68). The population attributable fraction indicated that 10.0% (95% CI, 6.0%-13.9%) of all retained complaints were explained by physicians in the bottom communication score quartile.
There was no evidence of significant nonlinearity (P = .25 for the GAM nonparametric test). According to the linear model, a 2-SD decrease in communication score was associated with a relative 38% increase in the complaint rate (1.17 more complaints per 100 practice-years) (Table 4). The relationship between communication scores and complaint rate was significantly stronger in Quebec (RR, 1.84; 95% CI, 1.51-2.24) compared with Ontario (RR, 1.34; 95% CI, 1.25-1.49). Physician sex and international medical graduate status were not significant modifiers of the communication score effect. Sensitivity analysis incorporating all complaints (retained and not retained) showed the same significant increase in the relative rate of complaints with declining communication score (6.55 per 100 practice-years in the lowest quartile compared with 4.78, 4.46, and 4.05 in the third, second, and upper quartile, respectively); however, the risk was smaller for all complaints (RR, 1.30; 95% CI, 1.22-1.39).
Among the CSE scores, only the communication score was significantly associated with complaint rates (Table 5). The CSE data acquisition and problem-solving scores had no relationship to complaint rate, including quality-of-care complaints. The CSE communication score was most strongly associated with the risk of communication complaints. The traditional written examination also was significantly associated with complaint rate, with the strongest association being for the clinical decision-making (CDM) score. The association between multiple-choice scores and complaint rate was significant for overall retained complaints but not significant for communication or quality-of-care complaints. Statistically significant nonlinearity was found in the relationship between CDM scores and overall complaint rate (P = .02, for 3 df GAM test). The complaint rate increased with declining CDM scores between 600 and 450, with no systematic effect beyond this score range.
The CSE communication score, when added to a model that included traditional written examination CDM score, significantly improved the prediction of overall retained complaints and communication complaints, but not complaints about quality of care (Table 5). After adjustment for the traditional written examination CDM score, an additional 9.2% (95% CI, 4.7%-13.1%) of retained complaints and 11.2% (95% CI, 5.8%-16.9%) of communication complaints were explained by physicians in the bottom communication score quartile.
In a longitudinal study of physicians who took the MCC clinical skills examination and entered practice in Ontario and/or Quebec, scores obtained in patient-physician communication were statistically significant predictors of future complaints to medical regulatory authorities. The credibility of the association was strengthened by evidence of a linear relationship between complaint rates and communication scores, a slightly stronger association when the outcome was limited to communication complaints, consistency of the direction and statistical significance of the association in Ontario and Quebec, and the persistence of the association after adjustment for physician sex, specialty, international medical graduation status, and time in practice.
We observed a rate of 0.0491 complaints per physician per year of practice. This rate is within the range of US state medical boards, where the mean complaint rate for all licensed physicians (including those with no complaints) varied from 0.02 per physician in Wisconsin to 0.20 per physician in Alabama between 2001 and 2003.41 Similar to others, we found that communication problems were the most common reason for complaints42: 49.1% of complaints in our study compared with 55% of complaints to 1 US state medical board between 1989 and 200043 and 74.7% in an investigation of hospital complaints between 2001 and 2003.6
Our results provide some feedback for medical educators and licensing authorities. Our study supports the predictive validity of providing a standardized assessment of communication skills prior to entry into practice. Almost 1 in every 5 physicians had a retained complaint filed with the provincial medical regulatory authorities in the first 2 to 12 years of practice. The risk of complaints was significantly greater among physicians in the lowest quartile of communication scores. This result suggests that direct observation and assessment of patient communication skills may be useful in identifying trainees who are more likely to experience difficulties in practice. Assessment of communication could play a role at different stages in training—to select candidates for medical school admission44 or to identify trainees who may benefit from more intensive communication skill training, as these skills can be improved with training.45
In addition, our results suggest that a minimum passing standard should be established for communication on the CSE, as has been done in the US Step 2 Clinical Skills Examination.21 To do so, the number of cases in which communication is assessed would need to be increased from 3 to 4 cases to approximately 10 to 14 to obtain a score sufficiently reliable for pass-fail decisions.46 The MCC has already increased the number of cases in which communication is assessed to meet this reliability threshold.
Complaints were mainly associated with 2 subscores—clinical decision making and communication. Clinical decision-making assessment was specifically designed to select problems and test aspects of the decision-making process where physicians were more likely to make errors that would have an effect on patient outcome.34 This approach to the selection of test material may explain why this component of the examination was predictive of complaints, while the data collection and problem-solving components of the CSE were not. The key features approach to clinical decision-making assessment was first instituted by the MCC in 1992, and to our knowledge this is the first evaluation of its ability to predict future practice outcomes.47 It may be useful to increase the use of key feature problems in traditional written assessment, as this format appears to be more predictive of quality-of-care complaints than ordinary multiple-choice questions. Selecting case and test elements for the national CSE on the same basis as key feature written problems also may be beneficial. The discriminating ability of data acquisition and problem-solving assessment on the CSE may be improved by selecting aspects of data collection that are critical for a given clinical problem, and where physicians tend to make errors.
Our study had several limitations. The poor-to-moderate reliability of the communication score component of the examination likely led to an underestimation of the strength of the relationship between communication and complaints.48 The use of practice-years as a denominator for estimating the rate of complaints would not take into account differences between physicians in the frequency of patient contact, the type of patients, and the procedures performed, all of which may be associated with the risk of complaints. However, it seems unlikely that physicians with lower scores in communication would systematically seek out work activities and patient populations that are more likely to generate complaints.13 On the other hand, higher rates of complaints that we found for surgeons, family physicians, and male physicians, even after adjustment for lower scores in communication, may be related to higher practice volume or differences in work activities or practice populations. As higher complaint and malpractice claim rates also have been found for these physician subgroups in other studies,1,10 a better understanding of the contributing factors would be important. Finally, we did not have information on language of greatest proficiency for the physician or language in which the test was taken, and could not include these factors in the analyses.
In summary, we found that communication and clinical decision-making ability were important predictors of future complaints to regulatory authorities. Current examinations could be modified to test these attributes more efficiently and at earlier points in the training process. Future research should examine whether remediation of communication problems can reduce complaints, and whether other indicators of the quality of practice could be assessed by a clinical skills examination.
Corresponding Author: Robyn Tamblyn, PhD, McGill University, 1140 Pine Ave W, Montreal, QC Canada, H3A 1A3 (email@example.com).
Author Contributions: Dr Tamblyn had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Tamblyn, Abrahamowicz, Dauphinee, Jacques, Klass, Winslade.
Acquisition of data: Tamblyn, Dauphinee, Wenghofer, Jacques, Klass, Smee, Girard, Bartman.
Analysis and interpretation of data: Tamblyn, Abrahamowicz, Dauphinee, Wenghofer, Jacques, Klass, Blackmore, Girard, Du Berger, Bartman, Hanley, Buckeridge.
Drafting of the manuscript: Tamblyn, Abrahamowicz, Dauphinee, Smee, Blackmore, Girard, Bartman, Hanley.
Critical revision of the manuscript for important intellectual content: Tamblyn, Dauphinee, Wenghofer, Jacques, Klass, Winslade, Du Berger, Hanley, Buckeridge.
Statistical analysis: Tamblyn, Abrahamowicz, Girard, Du Berger, Bartman, Hanley, Buckeridge.
Obtained funding: Tamblyn, Dauphinee.
Administrative, technical, or material support: Tamblyn, Dauphinee, Wenghofer, Jacques, Klass, Smee, Blackmore, Winslade.
Study supervision: Tamblyn, Abrahamowicz, Dauphinee, Jacques, Klass.
Financial Disclosures: None reported.
Funding/Support: The Medical Council of Canada (MCC) and the Canadian Institutes of Health Research (CIHR) provided funding for the study. The CIHR provided operating funds for the study and fellowship support for Dr Winslade.
Role of the Sponsors: The CIHR had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript. The MCC, the College of Physicians and Surgeons of Ontario, and the College of Physicians of Quebec, as organizations, were not involved in the approval of this article. Coauthors from the Medical Council of Canada (Dr Blackmore and Mss Smee and Bartman), the College of Physicians and Surgeons of Ontario (Drs Klass and Wenghofer), and the College of Physicians of Quebec (Dr Jacques) provided oversight for the retrieval, linkage, and anonymization of data from their respective institutions, as well as input on the design and conduct of the study; collection, management, and interpretation of the data; and preparation, review, and approval of the manuscript. Mss Girard and Du Berger performed the statistical analysis.