Ward JE, Gattellari M, Solomon MJ. Management of Patients With Colorectal Cancer: Do Australian Surgeons Know the Scientific Evidence? Arch Surg. 2002;137(12):1389-1394. doi:10.1001/archsurg.137.12.1389
Hypothesis
Not all Australian surgeons are aware of the status of the current evidence for the management of colorectal cancer.
Design
Postal survey of Fellows of the Royal Australasian College of Surgeons.
Participants
One hundred ninety-five surgeons (127 general surgeons and 68 subspecialist colorectal surgeons), representing a response rate of 89%.
Main Outcome Measures
Overall awareness score for 23 clinical recommendations and a subscore for 10 of these for which evidence is compelling rather than inconclusive (9 for and 1 against incorporation in clinical practice).
Results
Although no surgeon indicated the status of the evidence correctly for all 23 items, 61% of respondents correctly identified 12 or more items. Surgeons who practiced in capital cities had significantly higher scores than those who practiced outside cities (β = .16; B = 1.01; 95% confidence interval [CI], 0.14-1.89; P = .02). Surgeons who had been in practice for relatively more years had significantly lower scores than younger surgeons (β = −.17; B = −0.059; 95% CI, −0.11 to −0.01; P = .02). Surgeons involved in research had significantly higher scores (β = .18; B = 1.11; 95% CI, 0.23-1.99; P = .01), as did those respondents who had been involved in guideline development (β = .18; B = 1.42; 95% CI, 0.24-2.63; P = .02). Subscores showed a significantly greater awareness of compelling (level I or level II) evidence (P<.001). There was no relationship between awareness of the evidence for adjuvant therapy and surgeons' perceptions of the usefulness of guidelines about this aspect of clinical management.
Conclusions
Our innovative preguidelines survey has shown that not all surgeons were aware of the evidence underpinning the management of colorectal cancer, affirming the need for guidelines. Predictors of low awareness could be used to target efforts to disseminate and implement guidelines.
Recently in the Archives, Flint1 wrote,
"Evidence-based surgical practice requires a commitment to attempting to master the surgical scientific literature."
With the burgeoning size and scope of the surgical literature, however, this is a daunting challenge. He further defines surgical judgment as "the ability to apply scientific knowledge and technical skills for each individual patient," specifically combining evidence and experience to tailor individual clinical decisions. Hence, awareness of the evidence is an essential attribute of the "thinking surgeon."
Despite increasing global interest in evidence-based health care, surprisingly little is understood about how to ensure that evidence from scientific research is incorporated into practice.2 Studies evaluating these processes, or the skills physicians require to understand and incorporate evidence, are rare. While evidence-based clinical practice guidelines are seen as a means to provide physicians with convenient summaries of the "best available scientific evidence," strategies to ensure their implementation remain elusive. It is widely accepted that guidelines must now be developed using methodologically defensible processes. Less recognition has been afforded the need for a better understanding of guideline implementation.3
As elsewhere,4,5 Australian health authorities agreed that there was a need to develop evidence-based guidelines for the management of colorectal cancer.6 In defending this initiative, key opinion leaders anticipated that these guidelines would raise awareness among surgeons of adjuvant chemotherapy for patients with resected node-positive colon cancer (ie, Dukes C), for example.7 Furthermore, they stated that " . . . there are some practices commonly used in the management of large bowel cancer for which there is no supportive evidence and which should be abandoned."7 Clinical practice guidelines were seen as a useful national initiative in response to variations in clinical practice.
In Australia, there was a unique opportunity to determine the extent to which colorectal surgeons were aware of the relevant scientific evidence before the guidelines were published. In taking this opportunity, we also considered it useful to determine the professional attributes of the surgeons least likely to already be aware of the scientific evidence. As resources for implementation of evidence-based guidelines in Australia are limited, such an approach could permit targeting of the guidelines toward those most likely to benefit. We further sought to explore surgeons' awareness of the evidence where it is particularly compelling: that is, evidence derived from a sound meta-analysis of randomized controlled trials (level I) or from at least 1 methodologically rigorous randomized controlled trial (level II).8
We surveyed all active Australian members of the Section of Colon and Rectal Surgery of the Royal Australasian College of Surgeons (n = 147) and of the Colorectal Surgical Society of Australia (n = 72) (excluding M.J.S.). The former are general surgeons with a self-nominated interest in colorectal surgery, while the latter are subspecialist surgeons who have completed postfellowship training in colorectal surgery. Membership of the 2 bodies is mutually exclusive. Questionnaires were mailed to surgeons in late 1998, with standardized follow-up strategies administered during 1999.9
We selected 23 key clinical recommendations for the management of colorectal cancer that we ascertained would be included in the guidelines. These addressed preoperative assessment (n = 1), surgery or perioperative care (n = 11), adjuvant therapy for rectal cancer (n = 4), adjuvant therapy for colon cancer (n = 2), treatment of advanced disease (n = 3), and protocols for follow-up (n = 2). In our questionnaire, we asked surgeons to indicate whether there was "sufficient evidence to support use," "sufficient evidence against use," or "inconclusive evidence" for each of these 23 recommendations. A fourth response option, "unsure," was provided for each statement. Our questionnaire also elicited professional information from respondents. As 2 preliminary draft versions of the guidelines had been circulated (although not widely) before our survey was administered, surgeons also were asked whether they had read either or both copies (yes or no) and whether they had been involved in the development of the draft guidelines (yes or no). To explore self-assessment of the need for evidence-based guidelines, we asked respondents specifically to indicate how useful they would find guidelines about "adjuvant therapy for rectal cancer," using a 4-point Likert scale ("very useful" to "not at all useful").
For each of the 23 recommendations, J.W. and M.G. independently classified the status of the current evidence as cited in the final guidelines,6 using an accepted taxonomy of levels of evidence.8 For the purposes of this study, those recommendations for which there was level I or level II evidence to support use were classified as having "sufficient evidence to adopt." Those recommendations for which there was level I or level II evidence against use were classified as having "sufficient evidence to abandon." While we accept that an alternative evaluation of surgical research might otherwise endorse as conclusive lower levels of scientific evidence, we classified those recommendations for which there was only level III evidence or lower as having an inconclusive evidence base. Similarly, those recommendations for which the guidelines specifically stated that more research was needed also were considered, ipso facto, to have an inconclusive evidence base.
The evidence for 19 (83%) of the 23 items was classified identically by J.W. and M.G. Our chance-adjusted κ statistic of 0.72 (P<.001) indicates substantial interrater reliability.10 Further, M.G. reclassified each recommendation 2 weeks after her initial coding. Intrarater agreement also was high (20 [87%] of 23) (chance-adjusted κ statistic = 0.80; P<.001). Differences between classifications were resolved subsequently by consensus before analysis.
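The chance-adjusted κ statistic used here can be computed directly from the 2 raters' labels: observed agreement minus the agreement expected from each rater's marginal label frequencies, rescaled. A minimal sketch, with hypothetical ratings rather than the study's actual classifications:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's chance-adjusted kappa for two raters labeling the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    # Agreement expected if each rater assigned labels independently
    # at their own marginal rates.
    p_exp = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))
    if p_exp == 1.0:  # both raters used a single identical label throughout
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical ratings of 23 recommendations into the study's 3 evidence categories.
rater_a = ["adopt"] * 10 + ["abandon"] * 2 + ["inconclusive"] * 11
rater_b = (["adopt"] * 9 + ["inconclusive"] + ["abandon"] * 2
           + ["inconclusive"] * 10 + ["adopt"])
kappa = cohen_kappa(rater_a, rater_b)  # 21/23 items agree; kappa ≈ 0.85
```

Note that κ discounts the agreement two raters would reach by chance alone, which is why 21 of 23 identical labels yields κ below the raw 91% agreement.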
All analyses were conducted using SPSS for Windows (SPSS Inc, Chicago, Ill). Surgeons received 1 point for every correct response given. To calculate respondents' overall awareness of the evidence, we first summed all correct answers per respondent to calculate a score out of 23 for each. Hence, a higher score indicated greater awareness of the overall evidence. We next computed a subscore for each respondent, using only their responses to a subset of 10 items: 9 recommendations for which there was sufficient evidence for and 1 item for which there was sufficient evidence against incorporation in routine practice.
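The scoring scheme just described is simple to make concrete. In the sketch below the item identifiers and their evidence statuses are hypothetical placeholders (the real answer key is the guideline classification), but the arithmetic mirrors the description: 1 point per correctly identified item, an overall score out of 23, and a subscore over the 10 compelling-evidence items.

```python
# Hypothetical answer key: evidence status per recommendation. The item names
# and the particular statuses are illustrative, not the actual guideline items;
# only the counts (9 "adopt" + 1 "abandon" + 13 "inconclusive") match the study.
ANSWER_KEY = {f"rec{i:02d}": status for i, status in enumerate(
    ["adopt"] * 9 + ["abandon"] + ["inconclusive"] * 13, start=1)}
COMPELLING = [f"rec{i:02d}" for i in range(1, 11)]  # level I/II items: 9 for, 1 against

def overall_score(responses):
    """1 point per item whose evidence status the surgeon identified (max 23)."""
    return sum(responses.get(item) == status for item, status in ANSWER_KEY.items())

def subscore(responses):
    """Score restricted to the 10 items backed by level I or II evidence (max 10)."""
    return sum(responses.get(item) == ANSWER_KEY[item] for item in COMPELLING)

# A surgeon who answers "adopt" for everything gets credit only for the 9 "adopt" items.
all_adopt = {item: "adopt" for item in ANSWER_KEY}
```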
Independent-samples t tests and 1-way analysis of variance were conducted to determine univariate associations between study variables and overall and subscore awareness of the evidence base. Multiple linear regression was undertaken to determine independent (adjusted) predictors of higher scores. Variables considered to be potential predictors were: involvement in research activities (yes or no), type of hospital where most operations are performed (teaching or other), location of practice (capital city or other), type of surgeon (general or colorectal subspecialist), nature of major appointment (consultant in private practice or salaried specialist), having read either draft of the guidelines (yes or no), involvement in the development of the guidelines, caseload (in terms of the number of new patients with colorectal cancer seen per month), and number of years since awarded Fellowship of the Royal Australasian College of Surgeons. The latter 2 variables were considered as categorical, created using quartile cutoffs if a nonlinear univariate relationship with the outcomes was observed.
Only variables univariately associated with the outcomes at an α level of at least .25 were included in multivariate analyses.11 A nonautomated backwards-elimination modeling strategy was used to select significant independent predictors of the outcome, with P = .05 determining significance. However, whether surgeons had read the guidelines and whether they had been involved in guideline development were retained in the models irrespective of statistical significance, so that the independent effects of demographic and practice characteristics could be determined after adjusting for surgeons' knowledge of the guidelines.
Finally, we examined the relationship between respondents' perceptions of the potential usefulness of guidelines about adjuvant therapy for rectal cancer ("very," "moderately," "slightly," and "not at all" useful) and their awareness of the relevant evidence by first calculating a subscore using respondents' answers to 4 items about adjuvant therapy in rectal cancer (possible range, 0-4). We next applied ordinal regression to determine any association between this subscore and surgeons' rating of the usefulness of future guidelines about this topic. The χ2 test was used to test for the proportional odds assumption, with a nonsignificant finding (P>.05), indicating that the analysis met this assumption.
From 219 eligible surgeons, we received 195 completed surveys (89% response rate).9 Participating surgeons had been Fellows of their professional college for a mean ± SD of 18.5 ± 8.58 years. Respondents consulted with a median number of 3.0 new patients with colorectal cancer per month (interquartile range, 1.5-6.0). Table 1 presents the characteristics of participating surgeons.
Table 2 presents respondents' awareness of the evidence for each of 23 clinical recommendations. The overall score per respondent ranged from 4 to 21 of 23. Surgeons correctly identified the evidence base for a mean ± SD of 12.3 ± 3.03 items (median, 13; interquartile range, 10-14; mode, 13). No surgeon correctly identified the evidence for all 23 recommendations. If 12 is arbitrarily considered a "passing grade," then 119 (61%) respondents passed.
Multivariate linear regression analysis demonstrated 4 independent predictors of a higher overall score after adjusting for other variables: location of practice, number of years in surgical practice, involvement in research, and involvement in guidelines development (Table 3). Specifically, surgeons who practiced in capital cities had significantly higher scores than those who practiced outside cities (β = .16; B = 1.01; 95% confidence interval [CI], 0.14-1.89; P = .02). Surgeons who had been in practice for relatively more years had significantly lower scores than younger surgeons (β = −.17; B = −0.059; 95% CI, −0.11 to −0.01; P = .02). Surgeons involved in research had significantly higher scores (β = .18; B = 1.11; 95% CI, 0.23-1.99; P = .01), as did those respondents who had been involved in guideline development (β = .18; B = 1.42; 95% CI, 0.24-2.63; P = .02) (Table 3). Having read any draft version of the guidelines was not associated with overall score at the multivariate level, however (β = .03; B = 0.16; 95% CI, −0.73 to 1.05; adjusted R2 = 0.16; P = .73).
The unshaded section of Table 2 presents respondents' awareness of the evidence for the 10 clinical recommendations considered at the time of guidelines development to be based on compelling evidence (ie, level I or level II evidence for [n = 9] or against [n = 1]). Nearly half (47.7%) indicated that there was sufficient evidence for adjuvant therapy in rectal cancer (Table 2). By contrast, 97.8% indicated there was sufficient evidence to support the use of prophylactic antibiotics at the time of resection. Further, for each of the 13 recommendations lacking compelling evidence, the proportion of respondents correctly indicating inconclusive evidence ranged from 16.9% to 67.7% (Table 2).
When subscores for the 10 recommendations based on level I or level II evidence were calculated, an average of 6.1 (SD = 1.66) items were correctly classified (range, 1-10; median, 6; interquartile range, 5-7; mode, 6). Only 2 respondents correctly classified the status of the evidence for all 10 items. Accepting a score of 5 as a passing grade, 165 (84.6%) surgeons met this criterion. Multivariate linear regression analysis demonstrated only 1 significant independent predictor of a higher subscore. Specifically, colorectal subspecialist surgeons demonstrated greater awareness of the compelling evidence base than general surgeons (β = .20; B = 0.69; 95% CI, 0.16-1.23; P = .01) (adjusted R2 = 0.05) (Table 4).
We next hypothesized that surgeons might be more likely to correctly identify the evidence base where it was compelling (based on level I or level II evidence; n = 10 recommendations) than where it was inconclusive (based on level III evidence; n = 13). For each surgeon, the percentage of items correctly identified was computed for each subset. On average, surgeons correctly identified the evidence base for 61% of the 10 items supported by compelling evidence, whereas fewer than half (48%) of the 13 items supported by level III evidence were correctly identified as inconclusive (paired t test, t193 = −7.79; P<.001).
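This within-surgeon comparison is a paired t test on each surgeon's 2 percent-correct figures. The sketch below recomputes such a statistic from the paired differences using only the standard library; the per-surgeon percentages are hypothetical, not the study's data.

```python
import math

def paired_t(x, y):
    """Paired t statistic for within-subject differences x_i - y_i (df = n - 1)."""
    assert len(x) == len(y) and len(x) > 1
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical per-surgeon percent-correct scores on the two item subsets.
pct_compelling = [70.0, 60.0, 50.0, 80.0, 60.0]
pct_inconclusive = [54.0, 46.0, 46.0, 62.0, 38.0]
# Negative t indicates lower accuracy on the inconclusive-evidence items.
t_stat, df = paired_t(pct_inconclusive, pct_compelling)  # t ≈ -4.92, df = 4
```

Pairing within surgeons removes between-surgeon variation in overall knowledge, which is why the test is far more sensitive here than an unpaired comparison of the two subset means would be.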
A small number of respondents correctly identified the evidence about the use of adjuvant therapy for rectal cancer for all 4 items (n = 27 [13.8%]). Almost half (n = 94 [48.2%]) correctly identified the status of the evidence for at least 3 items, however. Applying a cutoff of 3 correct responses as a passing grade, 72 surgeons (36.9%) scored less than 3 on this subscale. Just over one third (n = 71 [36.4%]) reported that guidelines summarizing the evidence for the use of adjuvant therapy for rectal cancer would be "very" useful. A similar number (n = 75 [38.5%]) reported that guidelines would be "moderately" useful. Just over one fifth (n = 45 [23.1%]) reported that guidelines would be only "slightly" or "not at all" useful. There was no statistically significant association between surgeons' awareness of the evidence about the use of adjuvant therapy and their perceptions of the usefulness of guidelines about it (cumulative odds ratio = 0.88; 95% CI, 0.67-1.33; P = .33).
Our research has provided unique insights into Australian surgeons' awareness of the scientific evidence underpinning the management of colorectal cancer, a prerequisite to evidence-based surgical practice. By assessing their awareness of the status of the evidence base for 23 clinical recommendations, we were able to calculate an overall score for each respondent. No surgeon correctly identified the status of the evidence for all 23 recommendations. Only 61% scored 12 or higher. Of these 23 recommendations, 9 were based on compelling evidence in support of their incorporation in routine surgical practice. One item was based on compelling evidence to abandon its use. These 10 items were used to calculate a subscore, encapsulating surgeons' awareness of level I or level II evidence. Fortunately, there was a significantly greater awareness of the evidence for these 10 items than for the remaining 13, suggesting that surgeons differentially recall evidence that is derived from methodologically rigorous studies.
As our study was conducted before national promulgation of evidence-based guidelines, our findings will be of interest to those charged with their implementation.12 Surgeons who had been in practice for the greatest length of time, those who practiced in rural regions, those not involved in research, and, perhaps more predictably, those not involved in the development of the national guidelines at the time of our survey represent those groups of surgeons most likely to gain from active dissemination of the guidelines once published. That our research also has shown that surgeons' perceptions of the usefulness of guidelines bear no relationship to their awareness of the evidence cautions against conventional strategies to promote guidelines only among those expressing an interest in receiving them. We explored this only in relation to adjuvant therapy, however.
Our results affirm the need for national evidence-based guidelines for the management of colorectal cancer. Evidence-based health care involves "the conscientious and judicious use of current best evidence in making decisions about the care of individual patients."13 Therefore, physicians must either be fully apprised personally of all relevant research studies or have access to valid evidence summaries to explain the benefits, harms, and adverse effects of treatment to patients. Incomplete or inaccurate perceptions of the evidence will compromise surgeons' efforts to inform patients accurately about their choices and the evidence for benefit and harm. With the incidence of colorectal cancer increasing, the potential for variation in management due to incomplete awareness of the evidence base could be alarming.
Methodologically, the usual caveats pertaining to the use of self-administered questionnaires hold.9 However, our high response rate (89%) minimized response bias.9 Our study was conducted at a time when the primary evidence was shifting. The lag time between research completion and publication might influence others' classifications of the evidence, although our own process achieved high interrater and intrarater reliability. We acknowledge the inevitable delays between the drafting of guidelines, broad consultation, and eventual publication. Constant replenishment of the evidence base from ongoing research means that an approach such as ours to establish preguidelines awareness may be time-limited. Subsequent refutations of primary research, or of the conclusions of meta-analyses based on a limited number of trials, also will confound the relationship between awareness at a specific point in time and routine practice.14 We further recognize that our classification of recommendations as having sufficient evidence for or against implementation on the basis of level I or level II evidence may have biased our results against those recommendations with lower levels of evidence that others would consider conclusive on the basis of clinical experience. Refinement of levels of evidence for surgical practice may assist the design of future studies.
Last, surgeons' awareness of evidence has not yet been shown empirically to predict their clinical behavior, although it is not unreasonable to expect that a rational relationship exists between the two. A validation study to examine the link between knowledge and behavior would reveal the nature of any relationships, whether linear or modifiable by other factors. As 2 audits of patterns of care currently are underway in Australia,15,16 such a study could be pursued. Future research also could monitor the effect of national guidelines on surgeons' awareness of this body of scientific evidence. Hence, awareness of the evidence has the potential to be a valid and feasible surrogate outcome measure with which to monitor implementation of guidelines.
Accepted for publication July 27, 2002.
We thank all surgeons who completed our survey; John Oakley, MBBS, FRACS; the Section of Colorectal Surgery, Royal Australasian College of Surgeons; and the Colorectal Surgical Society of Australasia for organisational support.
Our study was approved by the Research Ethics Committee (Royal Prince Alfred Hospital Zone).
Corresponding author: Jeanette E. Ward, MBBS, MHPEd, PhD, FAFPHM, Division of Population Health, Locked Bag 7008, Liverpool, New South Wales 1780, Australia (e-mail: firstname.lastname@example.org).