Figure 1. Flowchart of study inclusion and exclusion. Only primary reasons for exclusion or dropout are reported. DOT indicates directly observed therapy; HAART, highly active antiretroviral therapy; and RCT, randomized controlled trial.
Figure 2. Percentage of patients in included studies with an undetectable viral load (A) and adherence rates of at least 95% (B) in the control (open circles) and intervention (filled circles) groups as a function of capacity. The solid line indicates the predicted average success rate (dashed lines indicate 95% confidence intervals). Point sizes are relative to sample sizes, and points have been adjusted for the other confounders in the analysis (circles without numbers are the observed percentages; circles with numbers, the adjusted percentages).
Figure 3. Differences in the proportion of intervention and control patients (success rate difference) with an undetectable viral load (A) and adherence rates of at least 95% (B) as a function of the unique intervention capacity. The solid line indicates the predicted average rate difference (with 95% confidence intervals indicated by the dashed lines). Point sizes are relative to sample sizes.
Figure 4. Expected viral load rate differences compared with low, medium, or high standard care capacity (SCC), as illustrated for the study by Remien et al.24 The lines represent the expected effect sizes; the gray shading, the ranges. The estimated effects under the observed SCC conditions are indicated by the open circles. Low SCC: instructions for taking medication, feedback on clinical results, and encouragement to adhere; medium SCC adds verbal and written information (on human immunodeficiency virus, highly active antiretroviral therapy, adherence, and dealing with adverse effects) and tailored medication planning, including cues, week boxes, and social support; and high SCC adds continued attention to adherence problems and solutions during follow-up, provision of materials (eg, alarm devices), and a telephone number in case of problems. The assumption is that all interventions are delivered for at least 12 weeks and on top of standard care.
de Bruin M, Viechtbauer W, Schaalma HP, Kok G, Abraham C, Hospers HJ. Standard Care Impact on Effects of Highly Active Antiretroviral Therapy Adherence Interventions: A Meta-analysis of Randomized Controlled Trials. Arch Intern Med. 2010;170(3):240-250. doi:10.1001/archinternmed.2009.536
Copyright 2010 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Poor adherence to medication limits the effectiveness of treatment for human immunodeficiency virus. Systematic reviews can identify practical and effective interventions. Meta-analyses that control for variability in standard care provided to control groups may produce more accurate estimates of intervention effects.
To examine whether viral load and adherence success rates could be accurately explained by the active content of highly active antiretroviral therapy (HAART) adherence interventions when controlling for variability in care delivered to controls, databases were searched for randomized controlled trials of HAART adherence interventions published from 1996 to January 2009. A total of 1342 records were retrieved, and 52 articles were examined in detail. Directly observed therapy and interventions targeting specific patient groups (ie, psychiatric or addicted patients, patients <18 years) were excluded, yielding a final sample of 31 trials. Two coders independently extracted study details. Authors were contacted to complete missing data.
Twenty studies were included in the analyses. The content of adherence care provided to control and intervention groups predicted viral load and adherence success rates in both conditions (P < .001 for all comparisons), with an estimated impact of optimal adherence care of 55 percentage points. After controlling for variability in care provided to controls, the capacity of the interventions accurately predicted viral load and adherence effect sizes (R2 = 0.78, P = .02; R2 = 0.28, P < .01). Although interventions were generally beneficial, their effectiveness decreased noticeably with increasing levels of standard care.
Intervention and control patients were exposed to effective adherence care. Future meta-analyses of (behavior change) interventions should control for variability in care delivered to active controls. Clinical practice may be best served by implementing current best practice.
Human immunodeficiency virus (HIV) can be effectively suppressed with highly active antiretroviral therapy (HAART), but approximately 50% of patients do not achieve and sustain the high levels of medication adherence required for optimal viral suppression (ie, 90%-95%).1-4 Numerous adherence-supporting interventions have been developed and evaluated, and systematic reviews and meta-analyses have synthesized these studies up to 2006.5-8 While several interventions have been found to be effective, reviews have not clarified why some interventions are more effective than others. Hence, guidance for improvement of care is limited.
Systematic reviews and meta-analyses of behavior change intervention trials usually examine the active content of interventions (or treatments) in some detail but rarely explore the active ingredients of “usual” or “standard” care provided to control groups. This may be problematic because the content and effectiveness of standard care vary considerably between studies,9,10 and intervention effectiveness is judged in relation to the outcomes in control groups. In these circumstances, intervention effectiveness may only be properly understood when controlling for the standard care provided to controls. Only then will it be possible to accurately characterize the added value of interventions in particular settings.
The active content of behavior change interventions consists of theory- and evidence-based behavior change techniques (BCTs) directed at important determinants of the target behavior.9,11,12 For example, if research shows that patients lack important skills, guided practice (practicing skills and receiving expert feedback) or modeling (observing others perform the skills) may be effective BCTs.13 Unhealthy behaviors are shaped by various factors (eg, lack of knowledge, motivation, or skills), so behavior change interventions often include multiple carefully selected BCTs. The application of these BCTs should change the corresponding determinants, and consequently behavior, and ultimately improve health.12 Because the total number of relevant BCTs included represents the degree to which the intervention adequately targets behavioral determinants, we refer to this as "intervention capacity" (note that following reviewer comments, we changed this term from "quality" [see de Bruin et al9] to "capacity").
Standard care provided to control participants in intervention trials may also contain effective BCTs to support the health behavior under study. We recently examined adherence care provided to controls in HAART adherence intervention trials and found that it contained numerous BCTs targeting important determinants of adherence (eg, written information, action plans, and reminders). These techniques are similar to those often described in adherence intervention reports. Moreover, the “standard care capacity” (SCC) (the number of standard care BCTs applied) varied considerably among studies and was strongly related to the proportion of patients who achieved an undetectable viral load.9 Hence, these control groups—reportedly receiving “usual care”—were actually exposed to widely varying forms of effective adherence care.
When both intervention and control groups receive effective care, interventions can only improve behavior and clinical outcomes when they target important behavioral determinants not yet addressed during standard care (ie, by adding relevant BCTs not yet provided during standard care). As the capacity of standard care increases, fewer behavioral problems remain, making it more challenging for interventions to introduce additional BCTs. Hence, when standard care varies as substantially among studies as in the HAART-adherence domain, meta-analyses that control for this variability should produce more accurate estimates of intervention effects. Although several meta-analyses have shown that type of control group (eg, active vs passive; placebo vs “care as usual”) can account for variance in effect sizes,14,15 to our knowledge, a study controlling for the variability in care provided to active controls has not been conducted before.
This study examined whether the outcomes reported by RCTs of HAART adherence interventions could be accurately explained by their capacity (active content), after controlling for variability in SCC between studies. We first examined whether intervention capacity and SCC could accurately predict the treatment success rates in the respective groups and then whether the difference in capacity between conditions predicted effect sizes. We controlled for potential confounders and moderators, such as methodological or population differences.
EMBASE, PsycINFO, and MEDLINE were searched from January 1996 to January 2009 for evaluations of HAART adherence interventions. Search terms were (adherence or compliance) AND (HIV or AIDS) AND (random) in "All Text." In total, 1342 records were examined (454 were duplicates). References in obtained articles and relevant reviews were also searched. Interventions designed for specific subgroups of patients (ie, psychiatric or addicted patients, patients <18 years old, and those living in developing countries) were excluded. Moreover, "directly observed therapy" interventions were excluded because the effects were not a product of autonomous patient behavior. Finally, after coding of the intervention materials, 3 additional studies16-18 were deleted because the intervention did not focus specifically on adherence. Figure 1 depicts study inclusion and exclusion judgments.
Two coders (M.d.B. and W.V.) independently coded the potential for bias in each study (agreement on quality pass: 90%; disagreements were resolved through discussion),19 the details of study population, methods, outcome data, and the relative contact intensity with the intervention vs control group (ie, weighing the number and duration of the visits; 0 = same, 1 = more, and 2 = much more in the intervention group; κ = 0.82). We collected intent-to-treat outcome data (viral load undetectable vs detectable; adherence rate, ≥95% vs <95%) at the immediate postintervention assessment, including only those participants providing data (available cases). Authors of included studies confirmed these coding results and provided missing study details.
A priori, we assumed a time lag between exposure to an effective intervention and a nonadherent person achieving adherence levels of at least 95%,7,20 and between improvements in adherence and the effects of improved adherence on viral load (ie, shifting from detectable to undetectable). Because 12 weeks have been suggested as a relevant cutoff for effectiveness, we included postintervention viral load assessments of at least 12 weeks, and postintervention adherence assessments of at least 6 weeks.7 This excluded 2 brief studies from the adherence analyses21,22 and 3 additional studies from the viral load analyses.23-25
All authors were asked to provide intervention protocols and to complete a standard care checklist. Two coders (H.P.S. and G.K.) independently coded these materials using a 41-item taxonomy of BCTs targeting important determinants of adherence (adapted from Abraham and Michie11).9 Coders were blind to the author and journal details and to the results and discussion sections. Results were discussed under the supervision of the first author (M.d.B.), and disagreements were resolved through discussion.
Both standard care and intervention manuals contained a range of BCTs targeting important adherence determinants and intercoder reliabilities for BCTs were good (mean κ [SD] = 0.75 [0.17]). All standard care BCTs were adherence-promoting activities delivered by patients' health care providers as part of usual care at the study site.
Table 1 shows the coding results for 1 study. Note that the BCTs coded in the intervention manuals (“manual capacity”) do not accurately represent the additional value of the intervention over standard care (ie, the “unique intervention capacity”) in this particular setting: 38% of the intervention BCTs (6 of 16) overlapped with those delivered during standard care. Examining all the studies for which intervention and SCC were known, this overlap ranged from 0% to 50% (mean [SD], 27% [15%]), and it increased with increasing levels of standard care (r = 0.68; P < .001).
The example in Table 1 also illustrates that standard care included BCTs that were not described in intervention manuals. Because all intervention groups received the intervention in addition to standard care, the full set of adherence-promoting BCTs provided to intervention participants comprised those applied in standard care plus the unique BCTs added by the intervention (ie, the total intervention capacity).
For the purpose of our analyses, all coding results were collapsed to compute the intervention and SCC scores depicted in Table 1 based on 3 rules. First, BCTs applied once or only at the beginning of the intervention (or standard care) were given 1 point. Second, techniques that were tailored to individual patients or required their active participation (instead of top-down instructions) were given 2 points.26-30 Third, scores for techniques applied repeatedly during follow-up sessions were multiplied by 2. Thus, a tailored technique applied repeatedly during follow-up was allocated 4 points. This weighting provided an index of the strength of a technique in terms of personal relevance or tailoring, and of predominance or repetition. The intervention and SCC scores were computed by adding the points allocated to the BCTs provided in each condition.
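As a concrete illustration, the 3 scoring rules can be expressed as a short function. The BCT names and flags below are invented for illustration only; they are not drawn from the taxonomy.

```python
# Hypothetical sketch of the 3-rule capacity scoring described above.
# Each BCT is described by a name and two flags: whether it was
# tailored/interactive, and whether it was repeated during follow-up.

def bct_score(tailored: bool, repeated: bool) -> int:
    """Rule 1: a one-off, top-down BCT scores 1 point.
    Rule 2: a tailored or interactive BCT scores 2 points.
    Rule 3: repetition during follow-up doubles the score."""
    score = 2 if tailored else 1
    if repeated:
        score *= 2
    return score

def capacity(bcts) -> int:
    """Capacity score of a condition = sum of its BCT scores."""
    return sum(bct_score(tailored, repeated) for _, tailored, repeated in bcts)

# Invented example condition with three BCTs:
standard_care = [
    ("written information", False, False),       # 1 point
    ("tailored action plan", True, False),       # 2 points
    ("repeated tailored feedback", True, True),  # 4 points
]
print(capacity(standard_care))  # 7
```

A tailored, repeated technique thus contributes 4 times as much as a one-off instruction, matching the weighting described above.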
For each trial, the proportions of intervention and control patients with an undetectable viral load (viral load success rate) or adherence rate of at least 95% (adherence success rate) were computed. Success rate differences (the success rate in the intervention group minus the success rate in the control group) were used as effect sizes.
We first examined whether the success rates in each group could be adequately predicted from the capacity of adherence care provided to that group (ie, SCC for the control and total intervention capacity for the intervention group). Several potential confounders were included simultaneously to control for between-study differences: (1) inclusion of patients continuing vs patients starting (a new) treatment, (2) dominant ethnicity (white vs nonwhite), (3) no selection vs selection of patients with treatment problems at baseline (detectable viral loads or adherence problems), (4) mean study year, (5) dropout percentage, and either (6) viral load detection threshold (<400 vs 400 copies/mL) or adherence measurement using Medication Event Monitoring System electronic pill-bottle caps (MEMS caps) vs self-reports. We expected the adherence and viral load success rates to be higher on average for white samples,9,31,32 without selection on baseline treatment problems, in more recent studies (improved regimens), when using a detection threshold of 400 copies/mL, and for self-reports.33-35 We expected higher viral load success rates for treatment-experienced patients but had no hypothesis regarding the direction of effect for dropout rates.
In a second step we examined whether the success rate differences between intervention and control groups (ie, the effect sizes) could be adequately explained by the difference in capacity (ie, unique intervention capacity) and whether this adjusted score was a more accurate predictor than the unadjusted manual capacity score. Owing to randomization, the influence of between-study differences (eg, the confounders mentioned in the previous paragraph) should be cancelled out. We therefore controlled these analyses for postrandomization within-study differences only; namely, the relative contact intensity and the differential dropout between conditions.36
All analyses were based on mixed-effects meta-regression models, using restricted maximum likelihood to estimate residual heterogeneity. In the success rate analyses, we used a bivariate mixed-effects model, allowing the true rates of the intervention and control group from the same study to be correlated.37 The intercept and slope of control and intervention groups were allowed to differ by including a dummy variable indicating the group and its interaction with the capacity score in the model. The predictive power of all models is illustrated by calculating a (pseudo) R2 statistic.38
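As a rough illustration of this modeling approach, the sketch below fits a random-effects meta-regression with a single moderator using the simpler method-of-moments estimator of between-study variance rather than REML, and a univariate rather than bivariate model; the effect sizes and capacity scores are synthetic, not data from this study.

```python
import numpy as np

def re_metareg(y, v, X):
    """Method-of-moments random-effects meta-regression (a simplified
    stand-in for the REML mixed-effects models used in the paper).
    y: effect sizes; v: within-study variances; X: design matrix."""
    k, p = X.shape
    W = np.diag(1.0 / v)
    # Projection matrix of the weighted regression; its quadratic form
    # gives the residual heterogeneity statistic Q_E.
    P = W - W @ X @ np.linalg.inv(X.T @ W @ X) @ X.T @ W
    Q = float(y @ P @ y)
    tau2 = max(0.0, (Q - (k - p)) / np.trace(P))  # between-study variance
    Wr = np.diag(1.0 / (v + tau2))                # random-effects weights
    beta = np.linalg.solve(X.T @ Wr @ X, X.T @ Wr @ y)
    return beta, tau2

# Synthetic effect sizes (success rate differences) rising with capacity:
y = np.array([0.05, 0.10, 0.18, 0.22, 0.30, 0.35])
v = np.full(6, 0.01)                              # within-study variances
x = np.array([2.0, 6.0, 10.0, 14.0, 18.0, 22.0])  # unique capacity scores

beta, tau2 = re_metareg(y, v, np.column_stack([np.ones(6), x]))
_, tau2_null = re_metareg(y, v, np.ones((6, 1)))
# Pseudo-R2: proportion of between-study variance explained by the moderator.
r2 = 1.0 - tau2 / tau2_null if tau2_null > 0 else 0.0
print(round(beta[1], 4), round(r2, 2))
```

With these synthetic data the moderator captures essentially all heterogeneity, so the pseudo-R2 is 1; in real data residual heterogeneity typically remains.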
Thirty-one RCTs were included in the review.21-25,39-64 We were able to contact 30 (co)authors, and they provided additional study details. Most studies (25) were conducted in the United States, 18 focused on treatment-experienced patients, and 11 studies selected patients with treatment problems. Most (24) studied African American and Latino (Hispanic) participants. Half of the studies reported MEMS-cap data and half self-reported adherence. Adherence data could be obtained from 23 studies with a postintervention assessment of at least 6 weeks, and viral load data from 20 studies with a postintervention assessment of at least 12 weeks (see the flowchart in Figure 1 for study dropout).
Twenty-one authors (70%) completed the standard care checklist. The SCC scores ranged from 2 to 28 points (mean [SD], 13.82 [7.60]). Intervention content could be coded for 28 studies. The manual capacity scores ranged from 4 to 36 points (18.07 [8.62]). Using these scores, the unique (12.52 [6.14]; range, 2-24) and total intervention capacity scores (26.10 [9.82]; range, 8-46) were computed for each study. (A table with study characteristics and capacity scores is available at http://www.MarijndeBruin.eu/Meta/HIVadherence/TableS1.) Table 2 shows an overview of the number of times BCTs were coded in the standard care and intervention materials, and how often the intervention BCTs were actually unique.
We first examined whether total intervention and SCC could accurately predict the proportion of patients with treatment success in each condition. Preliminary analyses showed that the baseline success rates (the intercepts) and the effects of increases in capacity (the regression slopes) were similar for both conditions. Results for the models assuming an identical intercept and regression slope for the capacity of adherence care provided to both conditions are provided herein.
The capacity score predicted viral load (P < .01) and adherence (P < .001) success rates. In addition, a number of confounders explained between-study heterogeneity in the success rates. In the viral load analysis, nonwhite samples had a success rate that was on average 0.274 points lower than that of white samples (P < .001). However, this difference was not found in the adherence analyses (0.048 difference; P = .48). Moreover, a viral load detection threshold of 400 copies/mL yielded success rates that were on average 0.163 points higher than those based on stricter thresholds (P = .04). Other potential confounders did not reach significance in the viral load analyses. For adherence, however, patients continuing treatment had on average a 0.168-point lower success rate in comparison with patients starting (a new) treatment (P < .01). The selection of patients with a detectable viral load or adherence problems yielded an average success rate that was 0.163 points lower (P = .03). The average success rate for adherence was also estimated to decrease by 0.004 points (P = .03) for each percentage point increase in the dropout rate, suggesting that adherent patients were more likely to drop out than nonadherent patients. Finally, success rates based on self-reported adherence were on average 0.478 points higher than those based on MEMS-cap data (P < .001), and the mean study year was not significant (P = .09). Results are shown in Table 3.
Overall, the models for viral load and adherence were highly predictive of the success rates (R2 = 0.72 and R2 = 0.82, respectively). Removing nonsignificant predictors had almost no effect on these results (R2 = 0.64 and R2 = 0.79).
Figure 2 displays the percentage of patients with an undetectable viral load and adherence rates of at least 95% as a function of capacity before (circles without numbers) and after adjustment (circles with numbers) for confounders. The absence of ceiling effects suggests that the unique BCTs added by the interventions were effective regardless of the level of standard care. Moreover, the plots show that approximately half of the control groups received adherence care superior in capacity and effectiveness to the total adherence care (ie, SCC plus the unique intervention techniques) provided to half of the intervention groups.
The second step was to determine whether the success rate differences (ie, effect sizes) could be accurately explained by the unique intervention capacity, and whether this was a more accurate predictor than the unadjusted intervention manual capacity score that is usually considered in meta-analyses.
When we regressed the viral load success rate differences on the unique intervention capacity, differential dropout, and relative contact intensity, 1 strong outlier emerged (studentized residual z = −3.57): an intervention with a high-capacity score but a negative effect on viral load. This was the only intervention delivered by trained peers instead of health care professionals, which may have had a “boomerang effect.”65 After excluding this study, unique intervention capacity was a strong predictor of the rate differences (slope coefficient = 0.015; P = .02), whereas the other moderators were not (P = .83 and P = .88, respectively). Although manual capacity was also a significant predictor (P = .03), unique intervention capacity had superior predictive power compared with manual capacity (R2 = 0.78 vs R2 = 0.58).
When the adherence rate differences were regressed onto the same predictors, again 1 outlier emerged (studentized residual of z = 2.35): a medium-capacity study with an extremely high effect size. The outlier came from a small study (N = 17) and was excluded from the analysis. Although the predictive power of the model was lower than for viral load (R2 = 0.29), again unique intervention capacity was predictive of the rate differences (slope coefficient, 0.015; P < .01), whereas contact intensity was not (P = .94). Differential dropout also helped to account for some of the heterogeneity (slope coefficient, −0.010; P = .04). However, manual capacity was not significant (P = .12) and again had lower predictive power (R2 = 0.17). Results are shown in Table 4. Neither removing the studies with a higher likelihood of bias, nor checking for publication bias via Egger regression test changed these conclusions.66
Figure 3 shows the rate differences as a function of the unique intervention capacity for both outcome measures. The 2 outliers are clearly identifiable in these plots. Excluding these from the success rate analyses reported in the previous subsection did not change the pattern of results, except that the capacity predictor explained considerably more heterogeneity in the viral loads (estimate, 0.010 [P < .001] instead of 0.006 [P < .01]).
The results indicate that the success rate (difference) for undetectable viral loads and adherence rates of at least 95% increases on average by 1.25 percentage points for each additional point in (unique) capacity of adherence care (the mean of the 1% and 1.5% found in the analyses). Implementing the strongest available standard care in HIV clinics that currently have the lowest SCC could, therefore, result in a 32.5 percentage point increase in the treatment success rate (range in SCC, 2-28). Examining the total capacity range (ie, including the interventions), the impact of adherence care on treatment success rates is estimated at 55 percentage points (complete range in capacity, 2-46).
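The arithmetic behind these two estimates can be reproduced directly from the slope and the capacity ranges reported above:

```python
# Average gain per capacity point: mean of the two slope estimates
# (1.0 and 1.5 percentage points, from the viral load and adherence analyses).
gain_per_point = (1.0 + 1.5) / 2        # 1.25 percentage points per point

scc_gain = gain_per_point * (28 - 2)    # observed SCC range: 2-28 points
total_gain = gain_per_point * (46 - 2)  # full capacity range: 2-46 points

print(scc_gain, total_gain)  # 32.5 55.0
```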
The success rate difference analyses also showed that the best predictor of the effect sizes was the intervention capacity score adjusted for the overlap with standard care (ie, the unique intervention capacity). The overlap in techniques between the interventions and standard care ranged from 0% to 50%, and this overlap increased with increasing levels of SCC (r = 0.68, P < .001). These findings suggest that intervention effects will be systematically reduced in settings with higher levels of standard care. In the present sample, the effects of some interventions could have been up to twice as large if they had been tested in the setting with the weakest observed standard care.
The final question that needs to be answered is what the estimated impact of each intervention is under equal SCC conditions. To examine this, we identified the typical BCTs applied in low, medium, and high SCC and computed the unique capacity of each intervention under these 3 conditions. Using 1.25 percentage points as the estimated increase in viral suppression rates for each point increase in unique intervention capacity, we computed 3 success rate differences for each study included in the final analyses. As Figure 4 shows, most interventions are expected to improve clinical outcomes in settings with low levels of standard care. However, with increasing SCC the effectiveness of interventions decreases considerably. In settings with the highest SCC, just a few labor-intensive interventions are expected to yield improvements greater than 10 percentage points in viral suppression rates. Hence, most adherence-promoting BCTs included in interventions are also being provided by health care providers in settings with high-capacity standard care.
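This projection logic can be sketched as follows. The BCT sets and point values below are invented; only the 1.25-percentage-point slope comes from the analyses above.

```python
# Sketch of the counterfactual projection: the unique capacity of an
# intervention under a given SCC level is the score of its BCTs minus the
# overlap with that standard care, and the expected rate difference is
# 1.25 percentage points per unique capacity point. BCT names and point
# values are hypothetical.

GAIN_PER_POINT = 1.25  # percentage points per unique capacity point

def expected_rate_difference(intervention, scc):
    """intervention / scc: dicts mapping BCT name -> capacity points.
    Only BCTs not already delivered in standard care count as unique."""
    unique_points = sum(pts for bct, pts in intervention.items()
                        if bct not in scc)
    return GAIN_PER_POINT * unique_points

intervention = {"action plan": 4, "reminders": 2,
                "skills training": 4, "social support": 2}
low_scc = {"instructions": 1}
high_scc = {"instructions": 1, "action plan": 4, "reminders": 2,
            "follow-up problem solving": 4}

print(expected_rate_difference(intervention, low_scc))   # 15.0
print(expected_rate_difference(intervention, high_scc))  # 7.5
```

The same hypothetical intervention thus projects to half the effect in a high-SCC setting, mirroring the pattern shown in Figure 4.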
This meta-analysis showed that both the intervention and control groups in HAART adherence intervention trials were exposed to effective adherence care: the capacity of adherence care explained up to 55 percentage points in treatment success rates. Although most interventions are likely to increase treatment success rates in clinics with weaker standard care, these effects decrease considerably with increasing SCC. With the highest observed standard care level being twice that of the mean SCC (28 vs 14 points), the widespread adoption of already-implemented current best practice only (for details, see de Bruin et al9) would be expected to increase HAART treatment effectiveness in developed countries by 17.5 percentage points, an improvement that justifies considerable investment in additional personnel, training, and equipment.67,68 Although interventions do contain adherence-promoting BCTs that could further enhance the capacity and effectiveness of adherence care, there is currently little evidence that any of these would be (cost) effective additions to high-capacity standard care.
The results have 3 other general implications. First, systematic analysis of the content of care provided in active control groups is a prerequisite to understanding the added value of behavior change interventions. When designing new behavior change interventions, researchers should therefore first assess deficits in existing standard care and then address these by selecting carefully chosen theory- and evidence-based BCTs.11,12,69 The taxonomy of behavior change techniques may be a useful tool for that purpose (see the “Coding Manual for Behavior Change Techniques” available at http://www.MarijndeBruin.eu/Meta/HIVadherence/Taxonomy). Second, since variability in active controls is also present in other domains,10,70 comparing intervention effects in meta-analyses of behavior change interventions is meaningful only after controlling for variability in standard care provided to control groups. Third, as Wagner and Kanouse10 also argued, outcomes from (behavioral) interventions in domains with variable standard care cannot be adequately interpreted or generalized to other settings without accurate reports of the level of standard care against which these effects were obtained.
The analyses yielded other noteworthy results for the domain of HAART adherence specifically. The success rate analyses showed that nonwhite patients had a considerably lower chance of achieving an undetectable viral load. Surprisingly, this was not found for adherence of at least 95%. We can think of 2 possible explanations. First, something other than nonadherence caused the difference in treatment effectiveness, such as patients' clinical status at the start of the treatment.32 Second, the adherence measurement instruments lacked cross-cultural validity, so that differences in adherence between white and nonwhite samples were not detected. Our findings clarify that this difference is not the result of poorer standard care or other confounders examined herein.
Mean study year was not significant for adherence or for viral load. We used this variable as an indicator for the increasing effectiveness and reduced complexity of HAART regimens during the past decade. This null finding suggests that the range of mean study year (ie, 1996-2004) may have been too restricted to detect an effect of time for viral load or that something other than medical improvements (eg, advances in standard adherence care) has caused the increasing effectiveness of HIV treatments observed in large HIV cohorts.71,72
Adherence success rates based on self-reports were 48 percentage points higher, on average, than those based on MEMS-cap data. It is well known that self-reports may lack sensitivity and MEMS-cap data may lack specificity.33-35 However, the size of this difference is worrying and indicates the need for the development of better, standardized measurement strategies.
The intensity of contact between intervener and patient, weighing the number of visits and duration of the contacts, was not significant in the rate difference analyses (P > .80). This null finding suggests that the observed effects are not attributable to the amount of time patients spend with the health care professional, which is an assumption in study designs with attention-control groups, but rather to how health care professionals use that time (ie, by applying effective BCTs targeting important determinants of the health behavior).
Our study has certain limitations. Foremost among these is the modest number of RCTs included in the final analysis. Although 31 eligible RCTs were retrieved and 30 authors readily responded to our request for additional study information, for various reasons only 20 could be included in the analyses (ie, the immediate postintervention period was too brief, there was no assessment of viral load or adherence, or data were missing). The data loss due to missing intervention and standard care descriptions (eg, standard care could be coded directly from only 1 article), which occurred despite extensive efforts to minimize it, highlights an urgent need to improve the intervention and standard care descriptions available for published trials, an issue that has recently received considerable attention.9,11,73-75 Such descriptions should allow readers to accurately identify which BCTs were applied in each condition and with what purpose.9,11,75 Only then can readers determine the level of standard care against which any intervention was tested, and so estimate its likely effectiveness in other settings.10 Moreover, meta-analyses could then control for variability in standard care to accurately assess the added value of behavior change interventions.
In conclusion, this meta-analysis showed that adherence care has a large impact on patients' adherence and the effectiveness of HIV treatments. Moreover, the care provided to active control groups had an impact on effect sizes reported by intervention trials, which suggests that future meta-analyses should control for variability in care provided to active control groups. Finally, the findings suggest that substantial increases in treatment effectiveness could follow from the widespread adoption of current best standard care practice.
Correspondence: Marijn de Bruin, PhD, PO Box 8130, 6700 EW Wageningen, the Netherlands (Marijn.deBruin@wur.nl).
Accepted for Publication: September 10, 2009.
Author Contributions: Dr de Bruin had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: de Bruin, Kok, Abraham, and Hospers. Acquisition of data: de Bruin, Viechtbauer, Kok, and Hospers. Analysis and interpretation of data: de Bruin, Viechtbauer, Schaalma, and Hospers. Drafting of the manuscript: de Bruin, Viechtbauer, Schaalma, Kok, Abraham, and Hospers. Critical revision of the manuscript for important intellectual content: de Bruin, Viechtbauer, Schaalma, Kok, Abraham, and Hospers. Statistical analysis: de Bruin, Viechtbauer, Schaalma, and Hospers. Obtained funding: de Bruin, Kok, and Hospers. Administrative, technical, and material support: de Bruin. Study supervision: Schaalma, Kok, and Hospers. Performed meta-analyses: de Bruin and Viechtbauer.
Financial Disclosure: None reported.
Additional Information: Table with study characteristics and capacity scores is available at http://www.MarijndeBruin.eu/Meta/HIVadherence/TableS1. The “Coding Manual for Behavior Change Techniques” is available at http://www.MarijndeBruin.eu/Meta/HIVadherence/Taxonomy.
Additional Contributions: We thank all of the authors of the included studies who so readily responded to our request for additional study information. Their contributions have been extensive and essential. We also thank Gjalt-Jorn Peters, PhD, of Maastricht University for fruitful discussions and feedback on previous drafts.