Author Affiliations: Centre for Statistics in Medicine, Oxford (Dr Altman) and BMJ, London (Dr Schroter), England; and Johns Hopkins School of Medicine, Baltimore, Md (Dr Goodman).
Context Investigation of the nature and frequency of statistician involvement
in medical research and its relation to the final editorial decision.
Methods Authors of original research articles who submitted to BMJ and Annals of Internal Medicine from May
through August 2001 were sent a short questionnaire at the time of manuscript
submission. Authors were asked if they received assistance from a person with
statistical expertise, the nature of any such contribution, and reasons why,
if no statistical input was received.
Results The response rate was 75% (704/943); methodological input was reported
for 514 (73%) of these papers. In 435 of the 514 papers (85%), such input was provided
by biostatisticians or epidemiologists and, if deemed significant, was typically
associated with authorship. A total of 33 of 122 methodologists (27%) whose
main contribution started at the analysis stage received neither acknowledgment
nor authorship. Research without methodological assistance was more likely
to be rejected without review (71% vs 57%; χ² = 10.6; P = .001) and possibly less likely to be accepted for publication
(7% vs 11%; χ² = 2.37; P = .12).
Conclusions Statistical input to medical research is widely recommended but inconsistently
obtained. Individuals providing such expertise are often not involved until
the analysis of data and many go unrecognized by either authorship or acknowledgment.
The statistical content and complexity of medical research have increased
steadily over recent decades.1-3
Although there is considerable evidence that methodological errors are common
in articles in medical journals,4-6
much published research lacks a substantive contribution from a statistician.
Anecdotal evidence suggests that many physicians have difficulty obtaining expert
advice or involvement in their research, and that statisticians are often brought
in only at the analysis stage or later.
In 1949, Luykx7 wrote, "It is now almost
inconceivable that a study of any dimensions, in medical science, can be planned
without the advice of a statistician." He clearly appreciated that the most
important contribution of the statistician to medical research is at the design
stage. We are unaware of any study of the degree to which persons with quantitative
expertise are involved in medical research. We report a survey of authors
submitting to 2 major medical journals to investigate the nature and frequency
of such involvement and its relation to the final editorial decision.
We sent a short questionnaire to authors of research articles immediately
after manuscript submission. Eligible papers were all those submitted to the BMJ and Annals of Internal Medicine
from May through August 2001. The only papers excluded were those not reporting
original research on humans. Questionnaires were sent by mail to BMJ authors and electronically to Annals authors.
Recognizing that several kinds of researchers have statistical expertise,
we asked authors the following question: "Was a statistical consultant (or
someone with graduate training or a qualification in quantitative research
methods) involved at any stage in your research?" If they answered yes, authors
were asked about the qualifications (degrees) of the person providing that
contribution and whether they were employed as a biostatistician, epidemiologist,
or other. For analysis, some answers of "other" (eg, statistician, clinical
epidemiologist) have been reclassified into one of the two named categories.
We also asked about the extent of involvement of the methodologist at
each stage of the project (none or minimal, moderate, or significant), how
they were credited in the paper (author, acknowledged, or neither), and whether
they received payment or salary support for the particular project. If there
were 2 such individuals, they were asked to supply information for the senior
person or the one whose contributions most affected the paper. Where there
had been no expert statistical input, authors were asked to indicate reasons
from a list of possibilities, whether they would have liked to have had a
statistical consultant involved in their project, and whether they had previously
worked with a statistician.
Authors were told that the processing of their paper by the journal
would not be affected by their answers or whether they responded. The cover
letter was signed by statistical advisors to the 2 journals (D.G.A. and S.N.G.)
and noted that the study was supported by the editors of the journals, Richard
Smith (BMJ) and Frank Davidoff (Annals). Authors were asked to send their completed questionnaire directly
to the relevant statistician (D.G.A. or S.N.G.) and their replies were blinded
to any editorial staff at the journals. For the BMJ,
reminders were sent after 3 to 5 weeks only to nonresponding authors who had
provided an e-mail address (around 90%). For the Annals, a reminder was sent by e-mail to all nonrespondents after 1 week.
Each abstract was read to classify papers according to their study design
as a randomized controlled trial (RCT), systematic review (meta-analysis),
observational study, economic study, or other (at the BMJ, the whole paper was examined in cases of uncertainty). The category
"observational" included epidemiologic studies of various designs, nonrandomized
clinical cohort studies, prognostic studies, qualitative studies, and surveys.
From each journal's editorial database, each paper was classified as rejected
without review, rejected after review, or accepted for publication. Proportions
were compared using χ² tests, with 1 degree of freedom unless
stated otherwise. Stata statistical software release 6.0 (StataCorp, College
Station, Tex) was used for all analyses.
Responses were received from 75% of authors (704/943). The response
rate from BMJ authors was higher than from Annals authors (585/741 [79%] vs 119/202 [59%]). The 943 eligible
papers comprised 103 RCTs (11%), 52 systematic reviews (6%), 730 observational
studies (78%), 27 economic studies (3%), and 31 other (3%). The distribution
was similar for both journals apart from more economic evaluations at Annals (8% vs 1.5%). The response rate was broadly similar
across all categories.
Input from a methodologist was reported for 514 of 704 papers (73%):
273 were biostatisticians (53%), 162 were epidemiologists (32%), and 79 were
neither (15%). Reported qualifications varied greatly but the majority of
biostatisticians and epidemiologists had a PhD or equivalent degree and only
a few did not have a master's-level qualification. Biostatisticians were most
likely to be involved in RCTs, epidemiologists in systematic reviews, and
other methodologists in economic studies and other designs (Table 1).
The input from the methodologist was most often classified as moderate
or significant for analyzing the data (95%), developing the study design (67%),
and writing the paper (68%) (Table 2).
The first moderate or significant contribution from the methodologist occurred
at the analysis stage for 30% of biostatisticians (82/270), 16% of epidemiologists
(26/162), and 20% of others (16/79; χ²₂ = 12.1; P = .002). Biostatisticians were more likely to make their
first important contribution before the analysis stage for RCTs (35/43 [81%])
than for other study designs (153/227 [68%]; χ² = 3.35; P = .07).
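For readers who wish to check the arithmetic, comparisons such as the one above can be reproduced from the reported counts with a short Pearson χ² routine. The sketch below is illustrative only (the published analyses were run in Stata 6.0, not Python); it uses the counts reported in this paragraph (82/270, 26/162, 16/79).

```python
# Pearson chi-square for an r x c contingency table (no continuity
# correction), as used for the proportion comparisons in this study.
# Illustrative sketch only; the published analyses used Stata 6.0.
def chi2_test(table):
    """Return (chi-square statistic, degrees of freedom)."""
    n_rows, n_cols = len(table), len(table[0])
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[i][j] for i in range(n_rows)) for j in range(n_cols)]
    n = sum(row_totals)
    chi2 = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            expected = row_totals[i] * col_totals[j] / n  # count under independence
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2, (n_rows - 1) * (n_cols - 1)

# First moderate/significant contribution at the analysis stage vs earlier:
# biostatisticians 82/270, epidemiologists 26/162, other methodologists 16/79.
table = [[82, 270 - 82], [26, 162 - 26], [16, 79 - 16]]
stat, df = chi2_test(table)
print(round(stat, 1), df)  # reproduces the reported chi-square of 12.1 on 2 df
```

With 2 degrees of freedom, a χ² of 12.1 corresponds to the reported P = .002.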
Table 3 shows that biostatisticians
were much less likely to be authors and more likely not even to be acknowledged
than epidemiologists or other methodologists (2 authors reported that a biostatistician
had declined authorship). Among those who were reported to have made a significant
contribution to at least 1 of the 5 study stages, 22% of biostatisticians
were not authors compared with 4% of epidemiologists and 10% of other methodologists
(χ²₂ = 21.7; P<.001).
One in 7 of these biostatisticians was neither an author nor acknowledged.
For those who had made only a minimal contribution prior to analysis, authorship
was less common, and 27% of these methodologists were not even acknowledged.
Specific payment for the project described in the paper was reported
for 37% of biostatisticians, 20% of epidemiologists, and 29% of others. Authorship
was reported for 78% of methodologists who received payment and 74% who did
not. Payment was more common for RCTs (30/65 [46%]) than for other types of
study (122/438 [28%]).
For 190 papers (27%), there had not been any expert methodological input.
The most common reason given (61%) was "we felt we had the necessary skills
within the research team." More seriously, 13% of these authors noted that
no statistical consultant was available. Many of these authors (64/180 [36%])
would have liked a statistical consultant involved in their project, regardless
of whether they felt they had the necessary skills (37%) or not (31%). Most
(147/183 [80%]) had previously worked with a statistician.
Table 4 shows the editorial
outcome of the papers. A total of 60% of the papers were rejected without
going to external peer review (BMJ, 65%; Annals, 42%). Papers with no methodologist were more likely to be rejected
without going to peer review compared with papers with methodological input
(71% vs 57%; χ² = 10.6; P = .001).
Rejection without peer review was less common for RCTs and systematic reviews
(48/118 [41%]) than for other study designs (379/586 [65%]; χ²
= 23.7; P<.001). Papers with a methodologist were
more likely to be accepted for publication (55/514 [11%]) than those without
(13/190 [7%]; χ² = 2.37; P = .12).
We believe this study represents the first attempt to survey authors
to ascertain the use and nature of expert methodological assistance in the
development and analysis of clinical research submitted for publication. We
found that such assistance was used for about 75% of the papers, more than
80% of such input was provided by biostatisticians or epidemiologists, research
not using this assistance was more likely to be rejected, and if such input
was deemed significant, it was typically associated with authorship. However,
27% of methodologists whose contribution was moderate or significant but started
at the analysis phase received neither acknowledgment nor authorship.
Because the survey was conducted on submitted rather than published papers,
it represents a more generalizable snapshot of the use of statistical expertise
in clinical research at present than any study of published research. Although
the study was conducted at 2 major medical journals, more than 85% of the
papers were rejected and many will be published elsewhere.
The importance of having persons with quantitative expertise as part
of the research team has been stressed for decades, but this survey provides
evidence that their potential contributions may still not be fully appreciated.
Biostatisticians and epidemiologists have expertise not just in analysis but
also in the design of studies, and it is at that phase that their input is likely
to be most valuable. Expert analysis cannot salvage poorly designed research,
yet it is clear that many authors do not use methodological assistance for
study design. In papers with methodologists involved, 24% of those individuals
(and 30% of statisticians) had little input before data analysis. It is perhaps
not coincidental that in such papers nearly a third of the biostatisticians
received no acknowledgment of any involvement. In papers without methodologists,
18% of authors gave the lack of quantitative methods in the study as a reason
for not involving a statistician.
Empirical data on the value of expert statistical help in the conduct
of medical research have been indirect, mainly confined to analyses of statistical
errors in published papers.4-6
Two recent studies found that the quality of published controlled clinical
trials8 and the reporting of statistical adjustment
procedures9 were better in papers with a methodologist
(biostatistician or epidemiologist) among the authors. The present study shows
that authorship is an imperfect indicator of a methodologist's involvement.
Indeed, in both of these studies,8,9
a methodologist was apparently involved in 38% of papers compared with 73%
in the present study. In addition, the contribution of methodologists may
not always be apparent just from the presence or absence of gross errors.
With the use of increasingly complex statistical techniques, such collaborators
are often necessary for an accurate assessment of whether the assumptions
of such methods have been met and an accurate representation of the attendant
uncertainty in the conclusions. A statistical collaborator can be essential
in crafting appropriately nuanced language in the discussion section, subtleties
that are difficult to capture in a standardized instrument of report quality.
Our study must be regarded as mainly descriptive, as there are many
aspects that make inferences difficult. First, the response rate of 75%, while
reasonable, probably selected for authors who were more likely to use statistical
help. Second, authors self-select when they choose a journal and this group
of manuscripts may represent a level of research more likely to use both quantitative
methods and collaborators with related expertise. Both Annals and BMJ may be known by many authors
as having stronger statistical reviewing policies than most other journals.10 Third, we did not assess the quality of the submitted
papers nor did we independently assess the extent to which specialized statistical
expertise was necessary in these papers; thus, it is not possible to posit
the right proportion of papers that should have reported having statistical
assistance. Finally, the studies most likely to be accepted at either of these
journals, RCTs and large cohort studies, are also those most likely to use
expert quantitative help. Thus, the relationship of acceptance rates to statistical
assistance is undoubtedly strongly confounded by study type. Stratification
by design may have reduced but not eliminated the problem. Nevertheless, this
study provides a picture of the norms and practices of this aspect of the
medical research enterprise in 2001 and identifies several areas for possible
exploration and improvement in the future.
Altman DG, Goodman SN, Schroter S. How Statistical Expertise Is Used in Medical Research. JAMA. 2002;287(21):2817–2820. doi:10.1001/jama.287.21.2817