Context Reviews performed almost a decade ago showed considerable gaps in the
quality of reporting and methods applied to economic evaluations of health
care interventions. Measures taken by the research community to address the
issue included the promulgation of guidelines and the publicizing of good
practice in economic evaluation.
Methods To assess the quality of methods used in systematic reviews of economic
evaluations in health care and the quality of conducting and reporting the
evaluations themselves, we conducted full-text searches of private and public
databases for the period 1990 through March 2001 and corresponded with researchers
active in the field. A total of 102 reports were identified, of which 39 were
included. Quality of the systematic reviews was assessed with a 6-item checklist.
Results Quality of review methods was reasonable, but more attention needs to
be paid to search methods and standardization of evaluation instruments. The
reviews found consistent evidence of serious methodological flaws in a significant
number of economic evaluations. Lack of clear descriptions of methods, lack
of explanation and justification for the framework and approach used, and
low-quality estimates of effectiveness for the interventions evaluated were
the most frequent flaws. Modest improvements in quality of conducting and
reporting economic evaluations appear to have taken place in the last decade.
Conclusions Whether resources are properly allocated on the basis of economic
evaluations remains uncertain. Editorial teams and regulatory bodies should perform
quality assurance based on a single, widely accepted, and validated standard instrument.
Economic evaluations (analytical studies comparing costs and outcomes
of investing resources in ≥1 alternatives) have increased in availability
and acceptance as a tool for decision making in health care in the last 2
decades.1,2 However, the costs
of decisions based on methodologically weak evidence are widely recognized.3 A number of reviews published in the period 1990-1994
illustrated the variability of the methods used in conducting and reporting
economic evaluations.3 Although the findings
could be partly explained by possible variations in review methods and by
the known absence of editorial policies to assess economic evaluations prior
to publication,4,5 initiatives
aimed at increasing the uniformity and quality of conducting and reporting economic
evaluations were undertaken. These initiatives (production of guidelines for
submissions to regulatory bodies, editorial management in medical journals, and
further research into the quality of economic evaluation methods) should have led
to an increase in the quality of economic evaluations during the last years
of the previous decade. We examined systematic reviews of economic evaluations
in health care to assess the quality of methods used in the reviews and the
quality of conducting and reporting economic evaluations in the last decade.6
We searched for studies from the period 1990 to March 2001 in all languages
on a variety of databases, corresponded with members of the International
Health Economics Association, and handsearched issues of Health Economics from 1992 to March 2001. A detailed description of
search strategy, sources, and terms used is available in the online Appendix
(available in PDF format).
Two reviewers examined each citation for relevance. Those deemed relevant
were retrieved in full. Two reviewers independently compared each study against
the selection criteria, resolving disagreements by discussion; when necessary,
a third reviewer adjudicated. We included systematic reviews of economic
evaluations of health care interventions, defined as studies assessing methodological
quality using explicit criteria. We identified and retrieved 102 reports of
reviews possibly satisfying our inclusion criteria. Fifty-four were excluded
from further analysis, 9 are awaiting assessment, and the remaining 39 were
included. References to the 54 excluded reviews and 9 awaiting assessment
are available in the online Appendix (available in PDF format).
For each included review, we extracted author(s) and year of study,
topic and study question, type (ie, cost-benefit analysis) and number of included
economic evaluations, year of publication or preparation of included economic
evaluations, instrument used to assess quality of included economic evaluations,
and main study conclusions. Quality of systematic review methods was assessed
using the following criteria that were adapted from different sources7-9: (1) Is it unlikely
that important relevant studies were missed? (2) Were the inclusion criteria
used to select articles appropriate? (3) Was the assessment of studies reproducible?
(4) Were the design and/or methods and/or topic of included studies broadly
comparable? (5) How reproducible are the overall results? (6) Will the results
help resource allocation in health care? Each question was answered with "impossible
to judge," "no," "partly," or "yes."
We calculated a Spearman rank-order correlation coefficient for inter-reviewer
agreement on an initial sample of 20 studies assessed by 2 independent reviewers.
Because the correlation was high (0.98), the remaining studies were assessed
by a single reviewer.
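For readers unfamiliar with this agreement check, the sketch below shows how a Spearman rank-order coefficient can be computed on two reviewers' scores; the scores are invented for illustration and are not the authors' data.

```python
# Illustrative only: Spearman rank-order correlation between two reviewers'
# quality scores on a pilot sample of 20 studies (scores are invented).
from scipy.stats import spearmanr

reviewer_a = [5, 3, 4, 6, 2, 5, 4, 3, 6, 5, 2, 4, 5, 3, 6, 4, 5, 2, 3, 6]
reviewer_b = [5, 3, 4, 6, 2, 5, 4, 3, 5, 5, 2, 4, 5, 3, 6, 4, 5, 2, 3, 6]

rho, p_value = spearmanr(reviewer_a, reviewer_b)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3g})")
# A rho close to 1 (the authors report 0.98) supports assessing the remaining
# studies with a single reviewer.
```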
Two reviewers extracted data on methods of assessing the quality of
economic evaluations included in each of the reviews in our study. As a wide
variety of assessment instruments were used, the criteria used in each instrument
were grouped and analyzed by variables listed in the BMJ checklist for editors and authors of economic evaluations.3,10 We hoped that this would enable us
to identify common methodological quality items used for assessment across the
reviews and so draw some general conclusions. We further subdivided grouped
items into methodological quality and reporting quality items.
We grouped reviews according to whether they assessed general methodological
quality or by intervention, by study design, or by specific methods used in
economic evaluations. A summary of the 39 included reviews is in the online Table A
(available in PDF format).
Four of the 6 quality criteria (inclusion criteria, reproducibility
of assessment, comparability of included economic evaluations, and impact
on resource allocation in health care) were fulfilled in at least 75% of reviews.
The remaining 2 criteria (thoroughness of searches and reproducibility of
overall results) were completely fulfilled in 12% and 73.5% of reviews, respectively,
and partly fulfilled in 52.9% and 23.5%. A detailed methodological assessment
of each review is available from the corresponding author.
Common search weaknesses were restricted use of databases and lack of
efforts to identify unpublished material. Reproducibility of overall review
results was hampered by the disparate nature of quality assessment instruments
used in the reviews. Twenty-six reviews used ad hoc instruments with a variable
number of items (3-25), 5 used the Drummond et al11
10-item checklist, 5 used the BMJ 35-item checklist,3 and 2 used US panel recommendations.12-14
One review used a checklist of unclear structure and origin.15
Funding sources were available for 28 (71%) reviews. Twenty-one (53%)
were publicly funded, 2 (5%) were privately funded, and 5 (13%) had mixed
funding.
Quality assessment criteria used in each review were compared with those
in the BMJ checklist. Many reviews did not use quality
assessment instruments that covered all the criteria in the BMJ checklist. In some cases this was because a review focused on a
narrow methodological issue.16-18
Overall, the instruments used appeared to be appropriate to the scope of the
reviews.
We included 6 reviews assessing the quality of 644 economic evaluations
in health care across a wide range of general and specialty medical journals,
different countries and settings, including industry submissions to a reimbursement
authority (unpublished data, 2000).19-24
All identified major flaws in a substantial number of evaluations. The prevalence
of major methodological flaws appeared higher in the population assessed by
Hill et al,23 probably because of a higher
degree of scrutiny by the Australian reimbursement authority.
We included 19 reviews assessing the quality of 776 economic evaluations
(not allowing for the overlap between Demicheli25
and Jefferson26) focusing on vaccines, preventive
interventions for human immunodeficiency virus, adjuvant therapy for breast
cancer, vascular and orthopedic surgery, and antenatal screening (unpublished
data, 2000).21,25-42
The evaluated interventions were mainly preventive. All included reviews reached
the same conclusions, albeit with different emphases: uncertainty arising from
variable epidemiological assumptions and estimates of effect of the evaluated
interventions, and poor reporting, writing, or use of methods.
Of the 6 reviews assessing the quality of specific economic study designs,
5 assessed 362 cost-utility analyses over a time span of 20 years.43-47
Overall results show a small and slow improvement over the years, but the
authors raise concerns about the standard of peer review in some of the smaller
specialty journals. We were unable to identify similar depth of scrutiny for
other economic study types.
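For readers less familiar with this study design, a cost-utility analysis expresses its result as an incremental cost per quality-adjusted life-year (QALY) gained; in generic textbook notation (not a formulation taken from the reviews cited here):

\[
\text{incremental cost-utility ratio} = \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}}
\]

where costs \(C\) are expressed in monetary units and effects \(E\) in QALYs.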
Nine reviews assessing the quality of a broad range of specific methods
(statistical analysis of costs, health status measurement, contingent valuation,
and cost estimation) in 1407 economic evaluations reported poor methods.15-18,48-52
All reviews cast serious doubts on the validity of the conclusions reached
by the economic evaluations assessed and all propose stricter criteria for
quality control.
Eleven reviews assessed and commented on changes in quality of economic
evaluations over time. Six reported improvements mostly up to the late 1990s,15,23,34,44,47
one reported quality improvement over the 1980s,19
one the opposite,24 and 4 reported no improvement.27,28,50,51
The major methodological findings of the reviews are: lack of clarity
on study questions, viewpoint, and epidemiological assumptions; unclear conceptual
and decision-making context; lack of clear descriptions of methods used to
define effectiveness, utilities, benefits, and resource and cost estimates;
basic calculation errors in a significant minority of studies; variability
in the assumptions underlying the choice of estimates of effect; choice of
discount rate and perspective often not explained; and sensitivity analysis
more likely to be performed in more recent evaluations.
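As a point of reference for the discounting issue listed above, the standard present-value calculation that evaluations are expected to report and justify is, in generic textbook notation (not a formulation prescribed by the reviews):

\[
PV = \sum_{t=0}^{T} \frac{C_t}{(1 + r)^{t}}
\]

where \(C_t\) is the cost (or benefit) incurred in year \(t\), \(r\) is the annual discount rate, and \(T\) is the time horizon; the choice of \(r\) and the perspective adopted are among the items the reviews found to be poorly explained.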
Although the provision of some descriptive information (study viewpoint,
cost basis) may be improving over time, a sizeable proportion of economic
evaluations could not justify their conclusions on the basis of methods used.
There appeared to be no difference in the methodological quality of conducting
and reporting economic evaluations, although evaluation of the former was
difficult as few reviews had raw data from the evaluations at their disposal.
Although overall quality of reviews is satisfactory, more attention
needs to be paid to search strategies and the use of comparable instruments
to assess quality of included studies.
The findings of the reviews indicate the presence of serious methodological
flaws in a significant number of economic evaluations of health care interventions,
regardless of publication status, period of preparation or publication, topic,
or type of evaluation. Overall, there appear to have been some modest, but
slow, improvements in quality in the last decade, but the evidence for this
observation is thin. There is evidence of lower quality in evaluations published
in specialty journals. There is no evidence of language bias, but there is
evidence of low quality of unpublished evaluations submitted by the pharmaceutical
industry within a reimbursement scheme.
There is evidence of considerable confusion in the design, reporting,
and description of economic evaluations. Reviews found a proportion of evaluations
of unclassifiable study design, studies that ignored basic research and economic
methodological principles, and ones that reported results lacking clarity.
There could be many explanations for our findings, ranging from lack
of appreciation by researchers and editorial teams of the complexities of
economic evaluation methods23,47
to resistance to accepting that "any method" will no longer suffice,35 or lack of direction in the quality control of economic
submissions to journals,4 with the exception
of the BMJ.3,46
There are 2 possible major limitations to our descriptive synthesis
of results. First, it is possible that a number of primary studies were included
more than once in the research synthesis studies included in our review. For
example, a cost-utility analysis included in Gerard et al46
also could have been included in the analysis by Demicheli and Jefferson.25 If this kind of double counting were extensive,
repeated inclusion of the same poor-quality evaluations could bias the
results of our review.
Second, few methodological studies used the same instrument to assess
quality, possibly leading to lack of overall comparability of their results.
We do not believe these problems had a major impact on our findings.
All included systematic reviews unequivocally point to the variable nature
of methods for conducting and reporting economic evaluations and to the slow
and modest progress in overall quality over the last decade. This finding
appears to be independent of review focus or assessment methods. There appears
to be little difference between the conclusions of reviews that used disparate
instruments and those that used the same instrument.
We believe that urgent action should be taken to address the problem
of poor methods in economic evaluations. First, absolute transparency of reporting
is needed, with maximum use of journal Web sites to obviate space constraints.53 Economic models used in evaluations should be readily
accessible to reviewers and readers. Second, basic formal training in economic
evaluation should be given to all those involved in conducting or assessing
economic evaluations. Third, the use of a validated and accepted instrument for
quality assessment is a priority for any future monitoring of economic evaluations.
In our view, the BMJ checklist could be adopted by
general and specialty journals and regulatory and grant-giving institutions
as a quality assessment instrument. Modifications of the BMJ checklist for in-depth scrutiny of particular methodological aspects,
such as the ones described by Gerard et al,46
should be performed on the basis of the research results. Lastly, we propose
continuous monitoring of the quality of economic evaluation methods and more
research into specific study designs, often-used interventions, and comparisons
of economic evaluations in decision-making and editorial settings.
Caution should be taken when deciding or justifying allocation of resources
on the basis of economic evaluations, especially if based on unpublished studies
or studies published in specialty journals. Editorial teams, regulatory institutions,
and researchers should implement and assess quality assurance based on a single
widely accepted and validated standard instrument.
1. Elixhauser A. Health care cost-benefit analysis and cost-effectiveness analysis from 1979 to 1990: a bibliography. Med Care. 1993;31:JS1-JS150.
2. Elixhauser A, Halpern M, Schmier J, Luce BR. Health care cost-benefit analysis and cost-effectiveness analysis from 1991 to 1996: an update. Med Care. 1998;36:MS1-MS145.
3. Drummond MF, Jefferson TO, for the BMJ Working Party. Guidelines for authors and peer-reviewers of economic submissions to the British Medical Journal. BMJ. 1996;313:275-283.
4. Jefferson TO, Demicheli V. Are guidelines for peer-reviewing economic evaluations necessary? a survey of current editorial practice. Health Econ. 1995;4:383-388.
5. Demicheli V, Hutton J. Peer review of economic submissions. In: Godlee F, Jefferson T, eds. Peer Review in Health Science. London, England: BMJ Books; 1999:172-180.
6. Boynton J, Glanville J, McDaid D, Lefebvre C. Identifying systematic reviews in MEDLINE: developing an objective approach to search strategy design. J Inform Sci. 1998;24:137-154.
7. Oxman AD, Cook DJ, Guyatt GH. Users' guides to the medical literature, VI: how to use an overview. JAMA. 1994;272:1367-1371.
8. Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271-1278.
9. Mulrow CD, Cook DJ. Systematic Reviews: Synthesis of Best Evidence for Healthcare. Philadelphia, Pa: American College of Physicians; 1998:17-20.
10. Gerard K, Seymour J, Smoker I. A tool to improve quality of reporting published economic analysis. Int J Technol Assess Health Care. 2000;16:100-110.
11. Drummond M, O'Brien B, Stoddart G, et al. Methods of Economic Evaluation of Health Care Programmes. 2nd ed. New York, NY: Oxford University Press Inc; 1997.
12. Weinstein MC, Siegel JE, Gold MR, et al. Recommendations of the Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276:1253-1258.
13. Siegel JE, Weinstein MC, Russell LB, et al. Recommendations for reporting cost-effectiveness analyses. JAMA. 1996;276:1339-1341.
14. Siegel J, Weinstein M, Torrance G. Reporting cost-effectiveness studies and results. In: Gold M, Siegel J, Russell L, Weinstein M, eds. Cost-effectiveness in Health and Medicine. New York, NY: Oxford University Press Inc; 1996.
15. Briggs AH, Gray AM. Handling uncertainty when performing economic evaluations of healthcare interventions. Health Technol Assess. 1999;3:1-134.
16. Barber JA, Thompson SG. Analysis and interpretation of cost data in randomised controlled trials: review of published studies. BMJ. 1998;317:1195-1200.
17. Brazier J, Deverill M, Green C, Harper R, Booth A. A review of the use of health status measures in economic evaluation. Health Technol Assess. 1999;3:1-164.
18. Diener A, O'Brien B, Gafni A. Health care contingent valuation studies: a review and classification of the literature. Health Econ. 1998;7:313-326.
19. Adams ME, McCall NT, Gray DT, Orza MJ, Chalmers TC. Economic analysis in randomised controlled trials. Med Care. 1992;30:231-238.
20. Garcia-Altes A. Twenty years of health care economic evaluations in Spain: are we doing well? Health Econ. 2001;10:715-729.
21. Badia M, Rovira J, Segu JL, Porta M. Economic assessment of drugs in Spain. Pharmacoeconomics. 1994;5:123-129.
22. Chang WY, Henry BM. Methodological principles of cost analyses in the nursing, medical, and health services literature, 1990-1996. Nurs Res. 1999;48:94-104.
23. Hill SH, Mitchell A, Henry D. Problems with the interpretation of pharmacoeconomic analyses: a review of submissions to the Australian Pharmaceutical Benefits Scheme. JAMA. 2000;283:2116-2121.
24. Udvarhelyi S, Colditz GA, Rai A, Epstein AM. Cost-effectiveness and cost-benefit analyses in the medical literature: are methods being used correctly? Ann Intern Med. 1992;116:238-244.
25. Demicheli V, Jefferson T. An exploratory review of the economics of recombinant vaccines against hepatitis B. In: Ronchi E, ed. The Economic Aspect of Biotechnologies Related to Human Health: Biotechnology and Medical Innovation: Socioeconomic Assessment of the Technology: the Potential and the Products. Paris, France: Organisation for Economic Co-operation and Development; 1997.
26. Jefferson T, Demicheli V. Is vaccination against hepatitis B efficient? a review of world literature. Health Econ. 1994;3:25-37.
27. Evers SM, Van Wijk AS, Ament AJ. Economic evaluation of mental health care interventions: a review. Health Econ. 1997;6:161-177.
28. Evers SM, Ament AJ, Blaauw G. Economic evaluation in stroke research: a systematic review. Stroke. 2000;31:1046-1053.
29. Fergusson D, van Walraven C, Coyle D, Laupacis A, for the International Study of Peri-Operative Transfusion (ISPOT) Investigators. Economic evaluations of technologies to minimize perioperative transfusion: a systematic review of published studies. Transfus Med Rev. 1999;13:106-117.
30. Gambhir SS, Schwimmer J. Economic evaluation studies in nuclear medicine: a methodological review of the literature. Q J Nucl Med. 2000;44:121-137.
31. Holloway RG, Benesch CG, Rahilly CR, Courtright CE. A systematic review of cost-effectiveness research of stroke evaluation and treatment. Stroke. 1999;30:1340-1349.
32. Hutton J, Iglesias C, Jefferson TO. Assessing the potential cost-effectiveness of pneumococcal vaccines: methodological issues and current evidence. Drugs Aging. 1999;15(suppl 1):31-36.
33. Jefferson T, Demicheli V. Economic evaluation of influenza vaccination and economic modelling: can results be pooled? Pharmacoeconomics. 1996;9(suppl 3):67-72.
34. Lord J, Thomason MJ, Littlejohns P, et al. Secondary analysis of economic data: a review of cost-benefit studies of neonatal screening for phenylketonuria. J Epidemiol Community Health. 1999;53:179-186.
35. Petrou S, Henderson J, Roberts T, Martin M-A. Recent economic evaluations of antenatal screening: a systematic review and critique. J Med Screen. 2000;7:59-73.
36. Saleh KJ, Gafni A, Saleh L, et al. Economic evaluations in the hip arthroplasty literature: lessons to be learned. J Arthroplasty. 1999;14:527-532.
37. Shackley P, Slack R, Michaels J. Costing vascular surgery: a review of current reporting practice. J Vasc Surg. 1999;30:668-678.
38. Schrappe M, Lauterbach K. Systematic review on the cost-effectiveness of public health interventions for HIV prevention in industrialised countries. AIDS. 1998;12(suppl A):S231-S239.
39. Späth H-M, Carrère M-O, Fevers BPT. Analysis of the eligibility of published economic evaluations for transfer to a given health care system: methodological approach and application to the French health care system. Health Policy. 1999;49:161-177.
40. van der Weijden T, Knotterus JA, Ament AJ, et al. Economic evaluation of cholesterol-related interventions in general practice: an appraisal of the evidence. J Epidemiol Community Health. 1998;52:586-594.
41. Walker D, Fox-Rushby JA. Economic evaluation of parasitic diseases: a critique of the internal and external validity of published studies. Trop Med Int Health. 2000;5:237-249.
42. Walker D, Fox-Rushby JA. Economic evaluation of communicable disease interventions in developing countries: a critical review of the published literature. Health Econ. 2000;9:699.
43. Deverill M, Brazier J, Green C, Booth A. The use of QALY and non-QALY measures of health-related quality of life: assessing the state of the art. Pharmacoeconomics. 1998;13:411-420.
44. Earle CC, Chapman RH, Baker CS, et al. Systematic overview of cost-utility assessments in oncology. J Clin Oncol. 2000;18:3302-3317.
45. Gerard K. Cost-utility in practice: a policy maker's guide to the state of the art. Health Policy. 1992;21:249-279.
46. Gerard K, Seymour J, Smoker I. A tool to improve quality of reporting published economic analysis. Int J Technol Assess Health Care. 2000;16:100-110.
47. Neumann PJ, Stone PW, Chapman RH, Sandberg EA, Bell CM. The quality of reporting in published cost-utility analyses, 1976-1997. Ann Intern Med. 2000;132:964-972.
48. Brown J, Sculpher M. Benefit valuation in economic evaluation of cancer therapies: a systematic review of the published literature. Pharmacoeconomics. 1999;16:17-31.
49. Jacobs P, Fassbender K. The measurement of indirect costs in the health economics evaluation literature. Int J Technol Assess Health Care. 1998;14:799-808.
50. Sassi F. The Outcomes of Medical Diagnosis: An Economic Perspective [dissertation]. London, England: University of London; 2000.
51. Sassi F, Archard L, Le Grand J. Equity and the economic evaluation of healthcare. Health Technol Assess. 2001;5:1-138.
52. Stone PW, Chapman RH, Sandberg EA, et al. Measuring cost-utility analyses. Int J Technol Assess Health Care. 2000;16:111-124.
53. Rennie D, Luft HS. Pharmacoeconomic analyses: making them transparent, making them credible. JAMA. 2000;283:2158-2160.