Rief W, Avorn J, Barsky AJ. Medication-Attributed Adverse Effects in Placebo Groups: Implications for Assessment of Adverse Effects. Arch Intern Med. 2006;166(2):155–160. doi:10.1001/archinte.166.2.155
Medication-attributed adverse effects are a frequent reason for poor compliance in practice and in clinical studies and are also common in patients receiving placebo. The occurrence of adverse effects in placebo groups can help clarify the assessment of adverse event reporting. We analyzed data from randomized, placebo-controlled trials of statin drugs published since 1992 with sample sizes larger than 100 subjects. Reports of adverse effects and discontinuation rates in placebo groups were evaluated. We compared adverse effect profiles in placebo groups between trials and with expected rates from population-based studies. We also sought to determine the range of adverse effect ascertainment methods used in different studies. Methods of ascertainment of adverse events varied widely across studies. Overall, 4% to 26% of patients in the control groups of large trials of statin drugs discontinued placebo use because of perceived adverse effects. The symptom rates in placebo groups varied substantially across trials (up to a ratio of 13:1 for possibly drug-related symptoms, eg, headache, 0.2%-2.7%, or abdominal pain, 0.9%-3.9%) and were often markedly lower than those found in the general population (eg, fatigue, 1.9%-3.4% in trials of statin drugs vs 17.7% in the general population). In conclusion, the widely varying rates of adverse effects reported by patients taking placebo and the high prevalence of such symptoms in the general population should be considered by both trialists and clinicians. In addition, the considerable variability of adverse effect ascertainment suggests the need for better standardization in research.
Adverse events are a central part of the risk-benefit profile of a drug, and decisions about which drug to prescribe or take depend heavily on tolerability and the adverse effect profile. Medication adherence in clinical trials, and even more in clinical practice, depends on the subjective experience of adverse effects attributed to the drug. These symptoms may indicate harmful events, but they are also a cause of subjective distress and impaired quality of life.
The placebo groups in randomized clinical trials provide an excellent means to examine the phenomenon of drug adverse effects. Adverse effects reported in placebo groups are typically considered the generic, nonspecific baseline with which the adverse effects of the active drug are to be compared. This nocebo phenomenon,1 the experience of negative effects attributed to placebo, is substantial. The knowledge of taking medication, or anxiety about medication effects and illness course, can cause patients to monitor symptoms in more detail, resulting in an amplified perception of benign sensations and physical symptoms.2
Although many randomized, controlled trials are of good quality and large enough to detect a therapeutic benefit, many do not provide reliable or detailed information about adverse effects.3,4 However, an adequate risk-benefit assessment depends on the quality of measurement of both efficacy and adverse effects. The better the quality of assessment (eg, reliability and validity of instruments), the more likely effects can be found. If the evaluation of adverse effects is done with less reliable instruments than the evaluation of benefits, the risk-benefit calculation may be flawed.
Adverse effect assessment methods in clinical trials vary substantially. In many studies, patients are simply asked whether they have experienced any symptoms since the last visit. A second source of variability is related to the person recording the symptoms, who must decide whether the reported symptom is relevant, whether its intensity is sufficient to be recorded, and whether it is drug related. In light of the variability and instability of this data collection process, adverse effect assessment that is not structured or standardized may be unreliable.
The purposes of this study were to assess dropout rates because of supposed adverse effects in placebo groups in different trials, to analyze the stability of adverse effect profiles in placebo groups across similar studies, and to compare symptom profiles in placebo groups with reference data from the general population.
We conducted a systematic review of trials of statin drugs to study placebo adverse effect reporting. The statin drug trials were chosen because, although many of the participants had coronary heart disease, hypercholesterolemia is asymptomatic and the drugs are often given for prophylaxis rather than treatment. Therefore, in the placebo arms of these trials, noncardiovascular symptoms are likely to be neither drug induced nor illness associated. Our goal was to identify all statin drug trials published between 1994 and 2003 that comprised more than 100 subjects in the placebo arm, using PubMed and additional literature searches. Our search identified nearly the same set of publications as 2 recently published meta-analyses of statin drug trials.5,6 The following trials were considered: LIPS (Lescol Intervention Prevention Study),7,8 PROSPER (Prospective Study of Pravastatin in the Elderly at Risk),9 PPP (Prospective Pravastatin Pooling project; pooled data from CARE [Cholesterol and Recurrent Events study], LIPID [Long-term Intervention With Pravastatin in Ischemic Disease trial], and WOSCOP [West of Scotland Coronary Prevention study]),10,11 MRC/BHF (Medical Research Council/British Heart Foundation Heart Protection Study),12 MIRACL (myocardial ischemia reduction with aggressive cholesterol lowering),13 SCAT (Simvastatin and Enalapril Coronary Atherosclerosis Trial),14,15 German-Czech fluvastatin trial,16 LCAS (Lipoprotein and Coronary Atherosclerosis Study),17,18 LIPID,19 CARE,20 WOSCOP,21 REGRESS (Regression Growth Evaluation Statin Study),22 PLAC (Pravastatin Limitation in Atherosclerosis in the Coronary Arteries study),23 PLAC-2 (Pravastatin, Lipids, and Atherosclerosis in the Carotid Arteries study),24 KAPS (Kuopio Atherosclerosis Prevention Study),25 ACAPS (Asymptomatic Carotid Artery Progression Study),26 4S (Scandinavian Simvastatin Survival Study Group),27 and EXCEL (Expanded Clinical Evaluation of Lovastatin trial28) (the EXCEL trial, although published in 1991, was included because it was a restudy of a larger lovastatin trial). Reported adverse effects for the placebo groups, as well as additional sample information from these studies, were tabulated according to whether only possibly drug-related symptoms were mentioned or whether symptoms in general were reported. Confidence intervals were computed to compare the base rates that differed most. We sought to distinguish whether variations were limited to nonspecific symptoms unlikely to be related to the pharmacologic action of the active drug or whether these variations could also be found for more clinically important adverse effects. One such adverse effect of statin drugs is muscle pain or myopathy, which can indicate rhabdomyolysis.29 Therefore, variations in base rates of muscle weakness were also analyzed.
The reported frequency of adverse effects in the placebo group was also compared with symptom prevalence rates in the general population. Because we could not use lifetime rates (eg, from the ECA [Epidemiologic Catchment Area] study30), we report data from a representative sample of 2552 adults that assessed 33 symptoms for the last 7 days. This survey also enabled consideration of symptom intensity ratings.
Table 1 gives the discontinuation rates for subjects in the placebo arms of these trials and includes only trials in which the causes for discontinuation were reported. At least 5% to 10% of patients in these trials discontinued use of placebo because of perceived adverse effects.
Table 2 summarizes the incidence of symptoms in the placebo groups that were attributed to the study medication (ie, placebo). These figures were mainly compiled from merged data sets (eg, Physicians' Desk Reference32,33) or other publications because many statin drug trials did not report adverse effect symptom patterns in detail. The rates of specific adverse effects in the placebo arms varied widely among the trials. Depending on the specific symptom, the incidence was as much as 3 to 13 times higher in some trials than in others (Table 2). Typical examples included headache, with a range of 0.2% (confidence interval [CI], 0.2%-0.7%) to 2.7% (CI, 1.9%-3.5%), and flatulence, with a range of 0.7% (CI, 0.3%-1.0%) to 4.2% (CI, 3.2%-5.2%) across trials; the nonoverlapping CIs confirm significant differences.
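As an illustration (not part of the original analysis), the logic of the nonoverlapping-CI comparison can be sketched with normal-approximation (Wald) intervals. The placebo-arm denominators are not reported in this summary, so the sample size of 2000 below is a hypothetical assumption:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Wald (normal-approximation) 95% CI for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def overlap(ci_a, ci_b):
    """True if two intervals share any common region."""
    return ci_a[0] <= ci_b[1] and ci_b[0] <= ci_a[1]

# Headache rates at the extremes reported across placebo arms;
# n=2000 is a hypothetical placebo-arm size, assumed for illustration.
low = wald_ci(0.002, 2000)   # lowest reported rate, 0.2%
high = wald_ci(0.027, 2000)  # highest reported rate, 2.7%

print(overlap(low, high))  # False: nonoverlapping CIs
```

Nonoverlapping CIs are a conservative criterion; two rates can still differ significantly even when their individual CIs overlap slightly.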
Table 3 summarizes data for symptoms in the placebo group irrespective of whether they were attributed to the drug. These symptom frequencies varied substantially. These variations are also found for clinically important adverse effects, for example, muscle pain. In the studies that provided these data, the symptom frequency of this critical symptom varied nearly 8-fold: EXCEL,28 7.5% of subjects in the placebo group reported muscle weakness; PROSPER,9 1% of subjects in the placebo group reported myalgia; MRC/BHF,12,15 at preanalysis, 6% of subjects in the placebo group reported muscle pain or weakness; MRC/BHF, at main analysis, at each assessment about 6% of subjects reported muscle pain or weakness, and during 5 years the total was 33% in the placebo group; WOSCOP, 19 (0.6%) of 3293 subjects in the placebo group reported myalgia and 97 (3%) reported muscle aches; and PPP,10 2% of subjects in the placebo group reported musculoskeletal pain.
In relation to symptom base rates in the general population samples (columns 7 and 8 in Table 3), the rates in the statin placebo groups were substantially lower. Even if only symptoms of moderate intensity were counted in the general population, 20 of 21 comparisons revealed symptom frequencies in the statin placebo group to be outside the CI for the same symptom in the general population. In 17 of these 20 comparisons, the symptom rates in the clinical trials were lower than in the general population sample.
Symptoms such as headache and back pain are common in industrial societies, with base rates greater than 20%,30 while in the statin placebo samples, these rates were reported as less than 10%. For chronic fatigue, stable rates between 17.5% and 19% were found in samples of the US population, in California workers, in the English population, and in health maintenance organization enrollees.36,37 However, the base rates in the statin placebo groups were reported to be in the range of 1.9% to 3.4%.
We found great variability in reported adverse effect profiles in patients taking placebo in different clinical trials. Thus, these nocebo symptoms cannot be considered as stable, constant “white noise” but are subject to various methodologic and other influences among studies. This calls into question the generalizability of adverse effect reports and the comparability of adverse effect patterns across trials. It also suggests that it is hazardous to merge adverse effect data from different trials, although this is frequently done for treatment groups as well as for placebo groups. If the great variability of adverse effect reports in placebo groups is due to unreliable and invalid assessment methods, the same problem applies for adverse effect reports in active treatment groups.
A number of factors could contribute to variability of adverse effect rates among trials, including sex differences, age, comorbidity, and sample selection. However, in the studies reviewed, none of these seems strong enough to explain such differences. For sex, odds ratios in symptom reporting for women rarely exceed 1.5 (eg, see references 38-40). The mean age of subjects in the statin drug trials generally ranged between 58 and 65 years; this small variance cannot account for the differences. Therefore, we believe that differences in data collection and interpretation are likely responsible for these unstable findings. While unreliable assessment methods are only one of several possible explanations for these variations, the relevance of other influences (eg, population differences and the expectations of researchers and patients) can be properly evaluated only if the quality of adverse effect ascertainment is high.
Clinical trials tend to report lower rates of physical symptoms than would be expected from epidemiologic survey data. This difference is all the more striking because the higher prevalence of elderly and ill patients in the statin drug trials should be associated with even higher symptom reports. It is likely that the open-ended questions used in clinical trials led to lower adverse effect rates than would be obtained with structured scales asking about specific symptoms. Open-ended assessment strategies may be helpful in detecting rare adverse effects that might not be included in structured symptom lists. However, structured symptom lists produce more reliable and comparable results among trials because they are less prone to interviewer influences.41 The reliability and clinical validity of provider-reported symptoms have been examined in 808 patients positive for human immunodeficiency virus.42 Justice et al42 found that agreement between patients and providers was poor: providers underreported symptoms, showed greater variability by study site, and had poorer test-retest reliability.
Drug use discontinuation and lack of adherence are common problems in clinical trials and practice. This has been well studied for the statin drugs.43,45 Adverse effects are a potent predictor of treatment noncompliance with lipid-lowering medications, with an odds ratio of 3.9.46 Clinical trials tend to include more patients with positive drug expectations and higher adverse effect tolerance than typically seen in primary care.47 The relevance of subjective adverse effects for drug use discontinuation is, therefore, likely even more pronounced in clinical practice than in clinical trials.
Many adverse effects represent symptoms that are prevalent in persons not taking medication and are also prevalent in study participants before entering a trial. Therefore, adequate baseline assessments of possible adverse effects are necessary.
Assessing only trial completers underestimates the incidence of adverse effects because adverse effects are themselves a reason for dropping out. If adverse effects are assessed only at the end of a trial, data for patients who dropped out are excluded. Therefore, multiple assessments and "last observation carried forward" methods are needed.
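As a minimal sketch of the last-observation-carried-forward idea (the visit data below are made up for illustration, not taken from any of the trials reviewed):

```python
def locf(observations):
    """Carry the last non-missing value forward; None marks a missed visit."""
    filled, last = [], None
    for obs in observations:
        if obs is not None:
            last = obs
        filled.append(last)
    return filled

# Hypothetical symptom scores at scheduled visits; the patient
# dropped out after visit 3, so the last two visits are missing.
visits = [0, 1, 2, None, None]
print(locf(visits))  # [0, 1, 2, 2, 2]
```

With LOCF, the dropout's last recorded symptom status still contributes to the end-of-trial tally instead of being silently excluded.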
In many trials, adverse effect symptoms are not measured as rigorously as target symptoms. If unreliable methods are used to assess adverse effects, the risk of not detecting them is dramatically increased. The less reliable the assessment instrument, the greater the opportunity for conflicts of interest to influence the results.48,50
Causal attributions of symptoms by patients or physicians are often erroneous. The symptoms described in Table 2 were all attributed to drugs, yet the patients were taking placebo. Therefore, labeling adverse effects as drug related should rest on systematic symptom comparisons between placebo and drug groups at baseline as well as during and at the end of the trial.
Most clinical trials are powered to demonstrate drug efficacy. However, this sample size is generally not sufficient to demonstrate differential rates of adverse effects. For example, if 100 patients are given a drug associated with clinical efficacy in 60% of patients (CI, 50.5%-69.9%) and 3% of them experience drug-induced adverse effects (corresponding CI, −0.3% to 6.3%), this trial can easily demonstrate the efficacy of the drug, but it is unlikely to reveal the adverse effect significantly, because the CI for the adverse effect rate includes zero.
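The arithmetic of this worked example can be reproduced with a normal-approximation (Wald) CI, which matches the figures quoted above (the unbounded lower limit below zero is an artifact of the Wald formula):

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% CI for a proportion (may extend below 0)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Efficacy: 60 of 100 patients respond; the CI excludes 50%.
lo, hi = wald_ci(0.60, 100)
print(round(lo, 3), round(hi, 3))  # 0.504 0.696

# Adverse effects: 3 of 100 patients; the CI includes zero,
# so the same trial cannot demonstrate the adverse effect.
lo, hi = wald_ci(0.03, 100)
print(round(lo, 3), round(hi, 3))  # -0.003 0.063
```

Detecting a 3% adverse effect rate reliably would require a substantially larger sample than one powered only for efficacy.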
Adverse effect data from clinical trials of drug efficacy may not be generalizable to clinical practice because of sample selection.51 In some trials, patients are excluded during the run-in period, for example, if they report subjective complaints while taking placebo before the trials begin (eg, see references 12 and 34). Compared with clinical practice, clinical trials might include patients with higher motivation to tolerate adverse effects and with more optimistic drug attitudes; these patients are likely to report fewer adverse effects and discontinue use of medication less frequently than patients in clinical practice.52
Current methods of assessing drug-induced adverse effects in clinical trials are problematic. For drugs approved by the US Food and Drug Administration, Lasser et al53 estimated a 20% probability of acquiring a new black box warning or being withdrawn from the market during the following 25 years (see also Psaty et al47). In many cases, the detection of adverse effects is delayed compared with the detection of efficacy because of the methodologic problems described.
Policy decisions concerning resource allocation often depend on calculations of quality-adjusted life-years.54 Because quality of life is influenced by the adverse effects of treatments, these decisions also depend on adequate adverse effect assessment. These findings also have implications for drug information leaflets for patients to help prevent both underreporting and overreporting.
The assessment of non–life-threatening adverse effects is done with great variability in different clinical trials. Given this variability, merging adverse effect data across trials or conducting meta-analyses of such patterns is not warranted. Adverse effects play a major role in drug use discontinuation in research and in clinical practice. Improved assessment of adverse effects should help to better estimate risk-benefit ratios of drugs and to advise patients more appropriately. Adverse effect assessment should be done with the same accuracy as efficacy assessment.
Correspondence: Winfried Rief, PhD, Department of Clinical Psychology, Philipps University, FB04, Gutenbergstrasse 18, 35032 Marburg, Germany (firstname.lastname@example.org).
Accepted for Publication: August 30, 2005.
Financial Disclosure: None.