Taylor KL, Williams RM, Davis K, et al. Decision Making in Prostate Cancer Screening Using Decision Aids vs Usual Care: A Randomized Clinical Trial. JAMA Intern Med. 2013;173(18):1704–1712. doi:10.1001/jamainternmed.2013.9253
The conflicting recommendations for prostate cancer (PCa) screening and the mixed messages communicated to the public about screening effectiveness make it critical to assist men in making informed decisions.
To assess the effectiveness of 2 decision aids in helping men make informed PCa screening decisions.
Design, Setting, and Participants
A racially diverse group of male outpatients aged 45 to 70 years from 3 sites were interviewed by telephone at baseline, 1 month, and 13 months, from 2007 through 2011. We conducted intention-to-treat univariate analyses and multivariable linear and logistic regression analyses, adjusting for baseline outcome measures.
Random assignment to print-based decision aid (n = 628), web-based interactive decision aid (n = 625), or usual care (UC) (n = 626).
Main Outcomes and Measures
Prostate cancer knowledge, decisional conflict, decisional satisfaction, and whether participants underwent PCa screening.
Of 4794 eligible men approached, 1893 were randomized. At each follow-up assessment, univariate and multivariable analyses indicated that both decision aids resulted in significantly improved PCa knowledge and reduced decisional conflict compared with UC (all P <.001). At 1 month, the standardized mean difference (Cohen’s d) in knowledge for the web group vs UC was 0.74, and in the print group vs UC, 0.73. Decisional conflict was significantly lower for web vs UC (d = 0.33) and print vs UC (d = 0.36). At 13 months, these differences were smaller but remained significant. At 1 month, high satisfaction was reported by significantly more print (60.4%) than web participants (52.2%; P = .009) and significantly more web (P = .001) and print (P = .03) than UC participants (45.5%). At 13 months, differences in the proportion reporting high satisfaction among print (55.7%) compared with UC (49.8%; P = .06) and web participants (50.4%; P = .10) were not significant. Screening rates at 13 months did not differ significantly among groups.
Conclusions and Relevance
Both decision aids improved participants’ informed decision making about PCa screening up to 13 months later but did not affect actual screening rates. Dissemination of these decision aids may be a valuable public health tool.
clinicaltrials.gov Identifier: NCT00196807
Prostate cancer (PCa) is the most common cancer diagnosis among men and the second leading cause of male cancer deaths.1 However, mixed evidence about the benefits of screening2-5 and growing concerns about harms have led the US Preventive Services Task Force to recommend against routinely screening all men for PCa.6 Most professional groups recommend that men understand the limitations of screening before being tested.7 Given the balance of benefits and harms, patients and clinicians will continue to face the difficult decision about whether to screen, making the promotion of informed decisions critical.
One way to deliver this information is by offering decision aids (DAs), tools that help patients learn about a condition and review the possible benefits, harms, and scientific uncertainties about potential options.8-11 Decision aids are particularly useful when efficacy and outcomes are unclear, as well as when the outcomes are clear but the trade-off between benefits and risks requires subjective judgment. Most men overestimate the benefits of PCa screening and are unaware of the limitations.12,13 These issues, as well as difficult concepts such as overdiagnosis and overtreatment,14,15 make DAs especially useful in augmenting the physician-patient discussion.16
Several randomized clinical trials have evaluated DAs for PCa screening, largely among primary care patients. These studies have included comparisons of information communicated by means of print, video, computer, web, and in-person conversation.17 Almost all trials have reported that DAs improved knowledge compared with usual care (UC).18-34 There have been mixed results concerning decisional conflict, a measure of one’s uncertainty regarding a decision, with some studies showing a reduction17,18,20-22,25,28,31,35 and others showing no difference.23,24,26,36 There have also been mixed findings regarding the effect of DAs on screening rates, including reduced screening,20,21,24,26,32,37 no change,25,31,34,38,39 or increased screening.30,40 The DAs tested in several studies have included a values clarification tool,17,18,20-23,38 which assists individuals in systematically considering the risks and benefits of competing choices.22,41 Although these were well-conducted studies, several were limited by small samples, few nonwhite participants, lack of long-term follow-up, and absence of a no treatment control group.
To address these limitations, we conducted a randomized clinical trial with, to our knowledge, the largest study population to date, comparing 2 DAs against UC and measuring long-term effects on informed decision making in a racially diverse population. Given that web-based and print-based DAs each have their own strengths (eg, interactive capability and potential for broader uptake for web-based DAs and ease of use for print-based DAs), we also compared their effectiveness on informed decision making outcomes. The tools were intended neither to encourage nor discourage screening but instead to present the benefits and limitations of screening to help men make choices consistent with their preferences. We hypothesized that (1) men randomized to either DA would have greater knowledge and satisfaction, less decisional conflict, and lower screening rates than men randomized to UC; and (2) because of its interactive capability, the web-based DA would have a greater effect than the print-based DA on these outcomes.
Participants were 1893 male primary care outpatients at 3 Washington, DC–based health systems: Georgetown University Hospital, Washington Hospital Center, and MedStar Physician Partners (a large outpatient group practice). Eligibility criteria were (1) age 45-70 years, (2) no history of PCa, (3) English speaking, (4) ability to provide informed consent, (5) independent living (eg, nursing home residents were excluded), and (6) having had an outpatient appointment in the 24 months before enrollment. Eligibility was not based on having an upcoming office visit or on having Internet access. Following randomization, 14 men received a diagnosis of PCa during the study and were removed from analyses, resulting in a final sample of 1879 (Figure).
During the 27-month accrual period (November 2007 through January 2010), we mailed invitation letters to all eligible patients (Figure) at the Georgetown University Hospital and Washington Hospital Center sites and a randomly selected sample of more than 60 000 eligible patients at MedStar Physician Partners. Men were called 5 days after the letter was mailed and were considered unreachable after 10 unsuccessful attempts. Among men interested in participation, interviewers confirmed eligibility, obtained verbal consent, and completed the 20-minute baseline telephone interview. At the conclusion of the interview, the interviewer used a computer-generated random allocation sequence to assign participants in a 1:1:1 ratio to the web DA, print DA, or UC. Randomization was stratified by site and self-reported race (white, African American, or other), with a block size of 6.
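The allocation scheme described above (1:1:1, stratified, block size of 6) can be sketched as follows. This is an illustrative reconstruction, not the trial's actual program; the arm labels and function name are placeholders, and in the trial one such sequence would be generated per site-by-race stratum.

```python
import random

def block_randomize(n_blocks, arms=("web", "print", "usual_care"),
                    block_size=6, seed=None):
    """Generate a blocked 1:1:1 allocation sequence for one stratum.

    Each block of 6 contains every arm exactly twice, shuffled, so
    arm sizes within a stratum can never drift apart by more than a
    few participants at any point in accrual.
    """
    rng = random.Random(seed)
    reps = block_size // len(arms)  # 2 slots per arm in each block of 6
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * reps
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# Allocation for the first 12 participants in one hypothetical stratum:
seq = block_randomize(n_blocks=2, seed=42)
print(seq)
```

Blocking is what keeps the three arms near-equal in size (628/625/626 here) despite rolling telephone enrollment.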
We mailed participants a written consent form with a stamped return envelope. Print participants also received the print-based DA. Web participants received the study URL, secure login information, a troubleshooting guide, and a list of free Internet access locations. Interviewers conducted the first follow-up assessment at 1 month after randomization and the final assessment at 13 months after randomization. Participants received a $10 gift card after the first follow-up assessment and a lottery entry for a $100 or a $200 gift card drawn for every 50 participants after completion of the final assessment. This study was approved by the Georgetown/MedStar Oncology institutional review board.
The DAs are described in detail elsewhere.42 Briefly, both DAs share identical content, meet International Patient Decision Aid Standards criteria,43 have an eighth grade reading level,44 and offer a table of contents that allows nonlinear navigation. The 6 informational sections include introductory material about the prostate gland; a description of screening tests and possible results; information about treatment options, risks, and adverse effects; a review of PCa risk factors and encouragement to discuss screening with a physician (but without instructions to make an immediate appointment); a 10-item values clarification tool; and resources for more information (references, links to cancer-related organizations, and glossary). In addition, the web DA includes (1) a voice-over that presents most of the text, (2) pop-up definitions of 77 terms, (3) 8 video testimonials, (4) an interactive values clarification tool, and (5) figures, animation, and graphics. In a separate article, we described the website utilization data, which provided a detailed assessment of men’s patterns of use,45 including that 50% of men in the web arm used the website (median [range] time on site, 34 [1.2-112.6] minutes) and that users were more likely to be white, to have previously been screened, and to report greater Internet use.
At baseline, we collected self-reported demographic data (age, marital status, education level, employment status, health insurance status, ethnicity and/or race, and income), clinical information (personal history of cancer, family history of PCa, urinary symptoms, and comorbid illnesses), and information about prior screening (ever screened, screened in the previous 12 months, and whether participants had discussed PCa screening with a health care professional). Two health-related numeracy questions46 assessed understanding of fractions and percentages used to evaluate disease risk.
At baseline, we assessed several process variables, including availability of Internet access at any location, frequency of Internet use, and willingness to seek Internet access if it was not readily available. The website was not optimized for smartphones, and thus we did not assess smartphone availability. We also assessed participants’ preference for receiving web-based vs print-based health information and their use and evaluation of the web and print DA.45,47 These and other process variables will be presented in a separate article.47
PCa knowledge. An 18-item true/false scale31,48 assessed knowledge of PCa testing, the screening controversy, risk factors, the benefits and limitations of PCa treatment, and PCa natural history. Correct items were summed, with “don’t know” coded as incorrect. The α reliability was 0.66 (baseline), 0.79 (1 month), and 0.74 (13 months).
Decisional conflict scale. We included the 10-item scale,49 which comprises 4 subscales. The total score ranges from 0 to 100, with higher scores indicating greater decisional conflict. The α reliability was 0.83, 0.83, and 0.81 at baseline, 1 month, and 13 months, respectively.
Satisfaction with decision scale. The 6-item scale50 assessed decisional satisfaction with participants’ most recent PCa screening decision. Each item is rated on a 5-point Likert scale (strongly disagree to strongly agree), with a higher score indicating greater satisfaction. The α reliability was 0.87 at 1 month and 0.89 at 13 months. The total score was highly positively skewed and was dichotomized (median [interquartile range], 4.67 [1.0]).
Prostate cancer screening outcomes. At 13 months, participants reported whether they had received a prostate-specific antigen (PSA) test and/or a digital rectal examination (DRE) during the 1-year study period.
We first examined group differences for the continuous outcome variables at each follow-up assessment by conducting analyses of variance and calculating standardized mean differences (Cohen’s d). We conducted χ2 analyses for the binary outcomes. We then assessed longitudinal effects on 1-month and 13-month outcomes using intention-to-treat analyses with generalized estimating equations for both linear (continuous outcomes) and logistic (binary outcomes) regression models. Generalized estimating equations are an extension of the generalized linear model and account for the dependence between outcomes, such as repeated measurements. For the linear models, estimated beta-coefficients (B) are presented, which represent the adjusted mean difference between trial arms. For the logistic models, odds ratios (ORs) are presented, which represent the association of trial arm to the outcome, or the ratio of the odds that an outcome will occur in 1 trial arm vs another. We examined the main effects of study arm and time, and the study arm by time interaction. In the analyses of knowledge, decisional conflict, and self-reported screening, we controlled for the baseline measure of each outcome. Decisional satisfaction was not assessed at baseline and thus was not included as a covariate in the logistic regression model. For the screening outcome, we excluded men who reported that they were tested because of prostate-related symptoms (n = 110). We present results from the outcome models described above, but the results were concordant with models that included additional covariates (data not shown). Missing data were minimal (<1%) for all variables assessed at baseline. We used SPSS, version 20.0 (SPSS), to conduct the analyses.
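The standardized mean differences reported throughout (eg, d = 0.74 for knowledge, web vs UC at 1 month) follow the usual pooled-SD formulation of Cohen's d. A minimal sketch, using toy numbers rather than trial data:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(x, y):
    """Standardized mean difference between two groups, using the
    pooled standard deviation (Cohen's d)."""
    nx, ny = len(x), len(y)
    pooled_sd = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                     / (nx + ny - 2))
    return (mean(x) - mean(y)) / pooled_sd

# Toy example: group means differ by 1 point with a pooled SD of
# about 1.29, giving d of about 0.77 (a "large" effect by convention).
print(round(cohens_d([3, 4, 5, 6], [2, 3, 4, 5]), 2))  # 0.77
```

Because d is expressed in SD units, it lets the knowledge effect (an 18-point scale) and the decisional-conflict effect (a 0-100 scale) be compared on a common footing.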
Assuming 500 participants per arm and a significance level of .05, the 3 pairwise comparisons of interest (web vs UC, print vs UC, and web vs print) had 80% power to detect effect sizes as small as 0.17 standard deviations for the continuous outcomes. For the screening outcome, on the basis of the assumption that the web, print, and UC participants would have 35%, 40%, and 55% screening rates, respectively,26 the study had 80% power to compare the web and print arms with UC.
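The 0.17-SD figure can be checked against the standard normal-approximation formula for a two-sample comparison. This sketch assumes a two-sided test at α = .05, which may differ in small details from the authors' actual software:

```python
from statistics import NormalDist

def min_detectable_d(n_per_arm, alpha=0.05, power=0.80):
    """Smallest standardized mean difference detectable with the given
    power in a two-arm comparison, by normal approximation:
    d = (z_{1-alpha/2} + z_{power}) * sqrt(2 / n)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z * (2 / n_per_arm) ** 0.5

print(round(min_detectable_d(500), 3))  # 0.177, close to the stated 0.17 SD
```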
The participation rate was 39.5%, and the retention rates were 89% and 84% at 1 month and 13 months, respectively (Figure). Compared with those who declined or could not be reached at baseline, participants were older and more likely to be white and from Georgetown University Hospital (all P values <.001). Compared with those who did not complete the assessments, 1-month and 13-month participants were more likely to be white, married, more highly educated, higher income, ever screened, screened in the past year, from Georgetown University Hospital, and in the UC arm (all P values <.01).
Baseline demographic, clinical, and PCa screening variables are presented in Table 1. Approximately 40% of participants were African American, 23.8% had a high school education or less, and 59.3% were screened in the year before enrollment. With regard to Internet access45 (data not shown), 90% had access and 67% used the Internet daily. Of the 10% without access, 36% said they would be willing to use a computer at another location.
Table 2 presents the unadjusted results for each of the outcome variables, stratified by intervention arm. For knowledge, the print and web arms reported significantly higher knowledge compared with the UC arm at both assessments (all P values <.001). At 1 month, the standardized mean difference (Cohen’s d) between the UC arm and the print arm was 0.73 and between the UC arm and the web arm was 0.74, both large effect sizes and clinically significant differences.51 At 13 months, these standardized mean differences were 0.54 and 0.50, respectively.
For decisional conflict, the print and web arms reported significantly lower conflict compared with the UC arm at both assessments (P < .001; Table 2). At 1 month, the standardized mean difference between the print and UC arms was d = 0.36 and between the web and UC arms was d = 0.33. At 13 months, these standardized mean differences were 0.23 and 0.18, respectively.
For decisional satisfaction at the 1-month assessment, print participants (60.4%) were significantly more likely to report high satisfaction compared with web participants (52.2%), and both print and web participants were more likely to report high satisfaction compared with those in the UC arm (45.5%; all P values <.05). At 13 months, the print (55.7%) vs UC (49.8%) and vs web (50.4%) differences were not significant (both P > .05; Table 2). There were no significant group differences on the screening outcomes (Table 2).
The intention-to-treat, linear regression analysis revealed that both intervention arms led to greater knowledge than UC at each assessment: At 1 month, the adjusted mean differences between trial arms (estimated beta-coefficients [B]) were web vs UC B, 2.26 (95% CI, 1.88-2.64; P < .001), and print vs UC B, 2.40 (95% CI, 2.02-2.78; P < .001). At 13 months, the effect was significant but smaller than at 1 month: web vs UC B, 1.46 (95% CI, 1.07-1.84; P < .001), and print vs UC B, 1.54 (95% CI, 1.17-1.91; P < .001). Finally, there was no evidence for the hypothesized web vs print difference because these 2 groups did not differ at 1 month (B, 0.14 [95% CI, −0.27 to 0.55]; P = .51) or at 13 months (B, 0.08 [95% CI, −0.32 to 0.49]; P = .68).
The intention-to-treat, linear regression analysis demonstrated that both the web and print DAs led to reduced decisional conflict compared with UC at each assessment: At 1 month, web vs UC B, −6.7 (95% CI, −9.35 to −4.14; P < .001), and print vs UC B, −7.50 (95% CI, −9.99 to −4.99; P < .001). At 13 months, the effect was significant but smaller than at 1 month: web vs UC B, −3.57 (95% CI, −5.99 to −1.14; P = .004), and print vs UC B, −4.08 (95% CI, −6.37 to −1.80; P < .001). Finally, there was no evidence for the hypothesized web vs print difference because these 2 groups did not differ at 1 month (B, −0.75 [95% CI, −3.12 to 1.66]; P = .54) or at 13 months (B, −0.51 [95% CI, −2.75 to 1.72]; P = .65).
The intention-to-treat, logistic regression analysis demonstrated that participants in the print arm were more likely to report high satisfaction compared with participants in the UC arm at both 1 month (OR, 1.79 [95% CI, 1.41-2.29]; P < .001) and 13 months (OR, 1.29 [95% CI, 1.01-1.66]; P = .046). Participants in the web arm reported greater satisfaction than those in the UC arm at 1 month (OR, 1.29 [95% CI, 1.02-1.66]; P = .04) but not at 13 months (OR, 1.04 [95% CI, 0.81-1.34]; P = .75). Finally, print participants reported significantly greater satisfaction than web participants at 1 month (OR, 1.38 [95% CI, 1.07-1.77]; P = .01) but not at 13 months (OR, 1.24 [95% CI, 0.96-1.60]; P = .10).
At the 13-month assessment, 58.3% self-reported having been screened (defined as the PSA test and/or DRE) since the baseline assessment, virtually unchanged from the 59.3% baseline rate. Logistic regression analysis revealed no significant differences between participants in the web vs UC arms (OR, 1.13 [95% CI, 0.94-1.35]), print vs UC arms (OR, 1.15 [95% CI, 0.96-1.38]), or print vs web arms (OR, 1.02 [95% CI, 0.85-1.23]). Comparable results were obtained in separate analyses for PSA and DRE, in per-protocol analyses (limited to participants who reported using the DAs), and in analyses based on electronic medical record screening rates (data not shown). We also found no evidence that prior PSA testing or changes in knowledge or decisional conflict moderated the screening outcome (data not shown).
As a result of the conflicting recommendations for PCa screening and mixed messages about screening effectiveness, it is critical to assist men in making informed screening decisions. In the present study, we found that the print-based and web-based DAs were more effective than UC in increasing knowledge and reducing decisional conflict up to 13 months following randomization. These results are consistent with several prior studies reporting increased knowledge18,20-34,39 and reduced decisional conflict17,18,20-22,25,28,31,35,39 among men exposed to a PCa screening DA. These findings make an important contribution because, to our knowledge, there have been only 2 studies that have reported the long-term maintenance of increased knowledge30,39—an important issue because men are asked to make this decision every year but are unable to make an informed decision when knowledge is limited. Furthermore, to our knowledge, no studies have reported the long-term maintenance of reduced decisional conflict, suggesting that these DAs helped men to remain certain about their decision. Regarding decisional satisfaction, the print-based DA arm resulted in significant improvements compared with the UC arm at both follow-up assessments and in significant improvements over the web-based DA arm at 1 month. These are novel findings, particularly because decisional satisfaction was highly positively skewed, which has resulted in prior studies finding no improvement in decisional satisfaction.17,31,39 Furthermore, these findings suggest the possibility of greater ease with the print-based DA compared with the web-based DA.
There were no group differences in screening rates at 13 months, consistent with several studies that have measured longer-term screening outcomes.25,30-32,38,39 Among studies that have found decreased rates of screening, most have had immediate or very short follow-up periods,21,24,26,37 suggesting that the reduction in screening outcomes may not endure over time. It is important to understand the long-term impact of DAs on PCa screening, given that this is a decision that men will make annually for up to 30 years.
Our hypothesis that the web arm would be more effective than the print arm was not substantiated. There were no significant group differences on knowledge, decisional conflict, or screening outcomes, and in fact, participants in the print arm reported significantly greater satisfaction than those in the web arm at the 1-month assessment. Post hoc consideration of these results suggests that, regarding knowledge, decisional conflict, and screening, either DA may be used depending on an individual’s preferred medium. These results were not explained by baseline group differences on Internet access or on preference for 1 medium over the other, although there was an overall preference at baseline for print-based over web-based health information (data not shown). Furthermore, at the 1-month assessment, print participants reported being more likely to have read the materials prior to screening than web participants (data not shown). For future DA development, it is important to note that at least in this age cohort, a greater ease with print compared with electronic materials may exist. These results call into question the widespread assumption that interactive, web-based delivery will necessarily lead to better outcomes.
There were several study limitations. First, only 39.5% of eligible men participated and they differed from nonparticipants. However, lower participation rates are typical of prior DA studies that also accrued outpatients without in-person contact17,21,29,37,52 or a connection to an upcoming appointment.29,33 The inclusion of participants regardless of their appointment status, Internet access, or a particular screening history suggests that the results are potentially more generalizable than if the eligibility criteria had been more restrictive and that this was more of an effectiveness trial than an efficacy trial. Second, 2 events that occurred during the study, the release of the results from 2 major screening trials2,4 and the modified American Cancer Society guidelines,7 could have affected results. However, we assessed participants’ awareness of these events immediately and for several weeks following their release and found minimal awareness and no difference between study arms (data not shown). Third, we used self-reports of whether participants had undergone screening at both baseline and follow-up, which can create a bias.53 Although we had also collected screening outcomes from medical records, these data were incomplete as a result of unreturned medical record consent forms, as well as missing records. Importantly, self-reported screening outcomes were concordant with findings from the medical record data. Furthermore, on the basis of our assessment of the literature, it does not seem that measuring PCa screening via self-report vs medical record has resulted in systematic differences because both methods have been used in studies reporting increased screening, decreased screening, and no group differences.
In 1 of the largest and most representative randomized trials conducted on this topic, this study demonstrated the benefits of 2 DAs that meet standard criteria43 compared with UC in 3 practice sites, and with a racially diverse population that included many participants of low socioeconomic status. Furthermore, the sample was made potentially more representative because of broad inclusion criteria that did not limit eligibility. Finally, we observed men for 1 year, demonstrating the long-term impact of both DAs. Only 4 other studies have followed men for at least 1 year,30-32,39 which is necessary to understand the long-term impact of DAs on decision-making and screening outcomes.
The clinical implications of this study include the potential for these 2 DAs to be easily adopted in real-world practice settings. Furthermore, the DAs offer neutrality, shown by the fact that they did not influence the screening decision in either direction compared with UC, which allows patients and providers to individualize the decision. Moreover, these tools offer flexibility for patients and providers, given the availability of both print-based and web-based tools. Given the demonstrated beneficial effects of these DAs, work is now needed to understand how to deliver them to patients in a systematic manner. Possible avenues include personal health records,54,55 distribution in health care provider offices,56 or via the websites of large health care organizations.57 The ongoing questions concerning the impact of PCa screening on disease-related mortality and on men’s long-term quality of life58,59 highlight the need for promoting widespread informed decision making among patients and their physicians.
Corresponding Author: Kathryn L. Taylor, PhD, Georgetown University, Cancer Prevention and Control Program, Department of Oncology, Lombardi Comprehensive Cancer Center, 3300 Whitehaven St, NW, Ste 4100, Washington, DC 20007 (firstname.lastname@example.org).
Accepted for Publication: June 4, 2013.
Published Online: July 29, 2013. doi:10.1001/jamainternmed.2013.9253.
Author Contributions: Dr Taylor had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Taylor, Davis, Luta, Schwartz, Krist, Cole.
Acquisition of data: Taylor, Williams, Penek, Barry, Kelly, Krist, Fishman, Cole, Miller.
Analysis and interpretation of data: Taylor, Williams, Luta, Penek, Barry, Kelly, Tomko, Schwartz, Krist, Woolf.
Drafting of the manuscript: Taylor, Williams, Luta, Penek, Kelly, Tomko, Schwartz, Krist.
Critical revision of the manuscript for important intellectual content: Taylor, Williams, Davis, Luta, Barry, Kelly, Schwartz, Krist, Woolf, Fishman, Cole, Miller.
Statistical analysis: Taylor, Williams, Luta, Penek, Barry, Kelly, Tomko, Schwartz.
Obtained funding: Taylor, Davis, Schwartz.
Administrative, technical, and material support: Taylor, Williams, Penek, Barry, Kelly, Tomko, Krist, Fishman, Miller.
Study supervision: Taylor, Williams.
Conflict of Interest Disclosures: None reported.
Funding/Support: This work was supported by grants from the National Cancer Institute (R01 CA119168-01) and Department of Defense (PC051100) to Dr Taylor. In addition, the project was supported by the Lombardi Comprehensive Cancer Center (LCCC) Biostatistics and Bioinformatics Shared Resource and an LCCC Cancer Center Support Grant.
Previous Presentations: Earlier versions of the results were presented at the annual meeting of the Society of Behavioral Medicine; April 28, 2011; Washington, DC; and at the annual meeting of the American Public Health Association; November 9, 2009; Philadelphia, Pennsylvania.
Additional Contributions: We are grateful to the participants for contributing their time; to Janet Ohene-Frempong, MA, our plain language consultant, who contributed to the editing of the intervention materials; to the interviewers who conducted the telephone assessments: Sara Edmond, BA; Caroline Dorfman, BA; Elisabeth Kassan, MA; David Dawson, BA; William Tuong, BS; Elizabeth Parker, BA; and Lisa Haisfield, PhD; and to Susan Marx, BA, for administrative support.