Importance
The conflicting recommendations for prostate cancer (PCa) screening and the mixed messages communicated to the public about screening effectiveness make it critical to assist men in making informed decisions.
Objective
To assess the effectiveness of 2 decision aids in helping men make informed PCa screening decisions.
Design, Setting, and Participants
A racially diverse group of male outpatients aged 45 to 70 years from 3 sites were interviewed by telephone at baseline, 1 month, and 13 months, from 2007 through 2011. We conducted intention-to-treat univariate analyses and multivariable linear and logistic regression analyses, adjusting for baseline outcome measures.
Intervention
Random assignment to print-based decision aid (n = 628), web-based interactive decision aid (n = 625), or usual care (UC) (n = 626).
Main Outcomes and Measures
Prostate cancer knowledge, decisional conflict, decisional satisfaction, and whether participants underwent PCa screening.
Results
Of 4794 eligible men approached, 1893 were randomized. At each follow-up assessment, univariate and multivariable analyses indicated that both decision aids resulted in significantly improved PCa knowledge and reduced decisional conflict compared with UC (all P <.001). At 1 month, the standardized mean difference (Cohen’s d) in knowledge for the web group vs UC was 0.74, and in the print group vs UC, 0.73. Decisional conflict was significantly lower for web vs UC (d = 0.33) and print vs UC (d = 0.36). At 13 months, these differences were smaller but remained significant. At 1 month, high satisfaction was reported by significantly more print (60.4%) than web participants (52.2%; P = .009) and significantly more web (P = .001) and print (P = .03) than UC participants (45.5%). At 13 months, differences in the proportion reporting high satisfaction among print (55.7%) compared with UC (49.8%; P = .06) and web participants (50.4%; P = .10) were not significant. Screening rates at 13 months did not differ significantly among groups.
Conclusions and Relevance
Both decision aids improved participants’ informed decision making about PCa screening up to 13 months later but did not affect actual screening rates. Dissemination of these decision aids may be a valuable public health tool.
Trial Registration
clinicaltrials.gov Identifier: NCT00196807
Prostate cancer (PCa) is the most common cancer diagnosis among men and the second leading cause of male cancer deaths.1 However, mixed evidence about the benefits of screening2-5 and growing concerns about harms have led the US Preventive Services Task Force to recommend against routinely screening all men for PCa.6 Most professional groups recommend that men understand the limitations of screening before being tested.7 Given the balance of benefits and harms, patients and clinicians will continue to face the difficult decision about whether to screen, making the promotion of informed decisions critical.
One way to deliver this information is by offering decision aids (DAs), tools that help patients learn about a condition and review the possible benefits, harms, and scientific uncertainties about potential options.8-11 Decision aids are particularly useful when efficacy and outcomes are unclear, as well as when the outcomes are clear but the trade-off between benefits and risks requires subjective judgment. Most men overestimate the benefits of PCa screening and are unaware of the limitations.12,13 These issues, as well as difficult concepts such as overdiagnosis and overtreatment,14,15 make DAs especially useful in augmenting the physician-patient discussion.16
Several randomized clinical trials have evaluated DAs for PCa screening, largely among primary care patients. These studies have included comparisons of information communicated by means of print, video, computer, web, and in-person conversation.17 Almost all trials have reported that DAs improved knowledge compared with usual care (UC).18-34 There have been mixed results concerning decisional conflict, a measure of one’s uncertainty regarding a decision, with some studies showing a reduction17,18,20-22,25,28,31,35 and others showing no difference.23,24,26,36 There have also been mixed findings regarding the effect of DAs on screening rates, including reduced screening,20,21,24,26,32,37 no change,25,31,34,38,39 or increased screening.30,40 The DAs tested in several studies have included a values clarification tool,17,18,20-23,38 which assists individuals in systematically considering the risks and benefits of competing choices.22,41 Although these were well-conducted studies, several were limited by small samples, few nonwhite participants, lack of long-term follow-up, and the absence of a no-treatment control group.
To address these limitations, we conducted a randomized clinical trial with, to our knowledge, the largest study population to date, comparing 2 DAs against UC and measuring long-term effects on informed decision making in a racially diverse population. Given that web-based and print-based DAs each have their own strengths (eg, interactive capability and potential for broader uptake for web-based DAs and ease of use for print-based DAs), we also compared their effectiveness on informed decision making outcomes. The tools were intended neither to encourage nor discourage screening but instead to present the benefits and limitations of screening to help men make choices consistent with their preferences. We hypothesized that (1) men randomized to either DA would have greater knowledge and satisfaction, less decisional conflict, and lower screening rates than men randomized to UC; and (2) because of its interactive capability, the web-based DA would have a greater effect than the print-based DA on these outcomes.
Methods
Participants were 1893 male primary care outpatients at 3 Washington, DC–based health systems: Georgetown University Hospital, Washington Hospital Center, and MedStar Physician Partners (a large outpatient group practice). Eligibility criteria were (1) age 45-70 years, (2) no history of PCa, (3) English speaking, (4) ability to provide informed consent, (5) independent living (eg, nursing home residents were excluded), and (6) having had an outpatient appointment in the 24 months before enrollment. Eligibility was not based on having an upcoming office visit or on having Internet access. Following randomization, 14 men received a diagnosis of PCa during the study and were removed from analyses, resulting in a final sample of 1879 (Figure).
During the 27-month accrual period (November 2007 through January 2010), we mailed invitation letters to all eligible patients (Figure) at the Georgetown University Hospital and Washington Hospital Center sites and to a sample randomly selected from the more than 60 000 eligible patients at MedStar Physician Partners. Men were called 5 days after the letter was mailed and were considered unreachable after 10 unsuccessful attempts. Among men interested in participation, interviewers confirmed eligibility, obtained verbal consent, and completed the 20-minute baseline telephone interview. At the conclusion of the interview, the interviewer used a computer-generated random allocation sequence to assign participants in a 1:1:1 ratio to the web DA, print DA, or UC. Randomization was stratified by site and self-reported race (white, African American, or other), with a block size of 6.
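For illustration, the allocation scheme described above (1:1:1 assignment within permuted blocks of 6, stratified by site and self-reported race) can be sketched as follows. This is a minimal Python sketch with hypothetical identifiers and a fixed seed, not the software actually used to generate the trial’s allocation sequence.

```python
# Minimal sketch of stratified permuted-block randomization (block size 6, 1:1:1),
# as described above; hypothetical names, not the trial's actual allocation software.
import random
from collections import defaultdict
from itertools import product

ARMS = ["web DA", "print DA", "UC"]

def permuted_block(rng):
    """One block of 6 assignments containing each arm exactly twice, in random order."""
    block = ARMS * 2
    rng.shuffle(block)
    return block

def build_sequences(sites, races, n_blocks, seed=2007):
    """Pre-generate one allocation sequence per stratum (site x self-reported race)."""
    rng = random.Random(seed)
    return {
        stratum: [arm for _ in range(n_blocks) for arm in permuted_block(rng)]
        for stratum in product(sites, races)
    }

sequences = build_sequences(
    sites=["GUH", "WHC", "MPP"],                    # the 3 study sites
    races=["white", "African American", "other"],   # self-reported race strata
    n_blocks=120,                                   # more than enough per stratum
)
position = defaultdict(int)

def assign(site, race):
    """Return the next assignment for a newly enrolled participant in this stratum."""
    stratum = (site, race)
    arm = sequences[stratum][position[stratum]]
    position[stratum] += 1
    return arm

# Example: the interviewer requests an assignment at the end of the baseline interview.
print(assign("GUH", "African American"))
```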
We mailed participants a written consent form with a stamped return envelope. Print participants also received the print-based DA. Web participants received the study URL, secure login information, a troubleshooting guide, and a list of free Internet access locations. Interviewers conducted the first follow-up assessment at 1 month after randomization and the final assessment at 13 months after randomization. Participants received a $10 gift card after the first follow-up assessment and, after completing the final assessment, entry into a lottery in which a $100 or a $200 gift card was drawn for every 50 participants. This study was approved by the Georgetown/MedStar Oncology institutional review board.
Description of the Decision Aid Interventions
The DAs are described in detail elsewhere.42 Briefly, both DAs share identical content, meet International Patient Decision Aid Standards criteria,43 have an eighth grade reading level,44 and offer a table of contents that allows nonlinear navigation. The 6 informational sections include introductory material about the prostate gland; a description of screening tests and possible results; information about treatment options, risks, and adverse effects; a review of PCa risk factors and encouragement to discuss screening with a physician (but without instructions to make an immediate appointment); a 10-item values clarification tool; and resources for more information (references, links to cancer-related organizations, and glossary). In addition, the web DA includes (1) a voice-over that presents most of the text, (2) pop-up definitions of 77 terms, (3) 8 video testimonials, (4) an interactive values clarification tool, and (5) figures, animation, and graphics. In a separate article, we described the website utilization data, which provided a detailed assessment of men’s patterns of use,45 including that 50% of men in the web arm used the website (median [range] time on site, 34 [1.2-112.6] minutes) and that users were more likely to be white, to have previously been screened, and to report greater Internet use.
Control Variables (Baseline Assessment)
At baseline, we collected self-reported demographic data (age, marital status, education level, employment status, health insurance status, ethnicity and/or race, and income), clinical information (personal history of cancer, family history of PCa, urinary symptoms, and comorbid illnesses), and information about prior screening (ever screened, screened in the previous 12 months, and whether participants had discussed PCa screening with a health care professional). Two health-related numeracy questions46 assessed understanding of fractions and percentages used to evaluate disease risk.
Process Variables (Baseline and 1-Month Assessments)
At baseline, we assessed several process variables, including availability of Internet access at any location, frequency of Internet use, and willingness to seek Internet access if it was not readily available. The website was not optimized for smartphones, and thus we did not assess smartphone availability. We also assessed participants’ preference for receiving web-based vs print-based health information and their use and evaluation of the web and print DA.45,47 These and other process variables will be presented in a separate article.47
Outcome Variables (Baseline, 1-Month, and 13-Month Assessments)
PCa knowledge. An 18-item true/false scale31,48 assessed knowledge of PCa testing, the screening controversy, risk factors, the benefits and limitations of PCa treatment, and PCa natural history. Correct items were summed, with “don’t know” coded as incorrect. The α reliability was 0.66 (baseline), 0.79 (1 month), and 0.74 (13 months).
Decisional conflict scale. We administered the 10-item scale,49 which comprises 4 subscales. The total score ranges from 0 to 100, with higher scores indicating greater decisional conflict. The α reliability was 0.83, 0.83, and 0.81 at baseline, 1 month, and 13 months, respectively.
Satisfaction with decision scale. The 6-item scale50 assessed decisional satisfaction with participants’ most recent PCa screening decision. Each item is rated on a 5-point Likert scale (strongly disagree to strongly agree), with a higher score indicating greater satisfaction. The α reliability was 0.87 at 1 month and 0.89 at 13 months. The total score was highly positively skewed and was dichotomized at the median (median [interquartile range], 4.67 [1.0]).
Prostate cancer screening outcomes. At 13 months, participants reported whether they had received a prostate-specific antigen (PSA) test and/or a digital rectal examination (DRE) during the 1-year study period.
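The α (Cronbach α) reliability coefficients reported for the scales above are internal-consistency estimates computed from item-level responses. The sketch below shows that calculation with NumPy on synthetic data; it is illustrative only and is not the study’s analysis code.

```python
# Minimal sketch of the Cronbach alpha calculation behind the reported reliability
# coefficients; synthetic data, so the printed value is not meaningful in itself.
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array with rows = respondents and columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Example: 18 binary knowledge items (1 = correct; 0 = incorrect or "don't know").
rng = np.random.default_rng(0)
knowledge_items = rng.integers(0, 2, size=(500, 18))
print(round(cronbach_alpha(knowledge_items), 2))
```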
Data Analyses and Statistical Power
We first examined group differences for the continuous outcome variables at each follow-up assessment by conducting analyses of variance and calculating standardized mean differences (Cohen’s d). We conducted χ2 analyses for the binary outcomes. We then assessed longitudinal effects on 1-month and 13-month outcomes using intention-to-treat analyses with generalized estimating equations for both linear (continuous outcomes) and logistic (binary outcomes) regression models. Generalized estimating equations are an extension of the generalized linear model and account for the dependence between outcomes, such as repeated measurements. For the linear models, estimated beta-coefficients (B) are presented, which represent the adjusted mean difference between trial arms. For the logistic models, odds ratios (ORs) are presented, which represent the association of trial arm to the outcome, or the ratio of the odds that an outcome will occur in 1 trial arm vs another. We examined the main effects of study arm and time, and the study arm by time interaction. In the analyses of knowledge, decisional conflict, and self-reported screening, we controlled for the baseline measure of each outcome. Decisional satisfaction was not assessed at baseline and thus was not included as a covariate in the logistic regression model. For the screening outcome, we excluded men who reported that they were tested because of prostate-related symptoms (n = 110). We present results from the outcome models described above, but the results were concordant with models that included additional covariates (data not shown). Missing data were minimal (<1%) for all variables assessed at baseline. We used SPSS, version 20.0 (SPSS), to conduct the analyses.
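As one illustration of the longitudinal models described above, the sketch below fits linear and logistic GEE models in Python with statsmodels, assuming long-format data (one row per participant per follow-up) and hypothetical column names. The trial’s analyses were conducted in SPSS, and the exchangeable working correlation used here is an assumption rather than a detail reported in the text.

```python
# Minimal GEE sketch with hypothetical data and column names; not the study's SPSS code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes_long.csv")  # hypothetical file: id, arm, time, outcomes

# Linear GEE for knowledge: arm, time, arm-by-time interaction, plus baseline knowledge.
knowledge_model = smf.gee(
    "knowledge ~ C(arm, Treatment('UC')) * C(time) + baseline_knowledge",
    groups="id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
).fit()
print(knowledge_model.summary())

# Logistic GEE for high decisional satisfaction (no baseline covariate, as in the text).
satisfaction_model = smf.gee(
    "high_satisfaction ~ C(arm, Treatment('UC')) * C(time)",
    groups="id",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Binomial(),
).fit()
print(satisfaction_model.summary())
```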
Assuming 500 participants per arm and a significance level of .05, the 3 pairwise comparisons of interest (web vs UC, print vs UC, and web vs print) had 80% power to detect effect sizes as small as 0.17 standard deviations for the continuous outcomes. For the screening outcome, on the basis of the assumption that the web, print, and UC participants would have 35%, 40%, and 55% screening rates, respectively,26 the study had 80% power to compare the web and print arms with UC.
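The power assumptions above can be checked with standard closed-form calculations; the sketch below uses statsmodels and assumes two-sided, two-sample comparisons while ignoring attrition and multiplicity adjustments, so the numbers are approximate rather than a reproduction of the authors’ calculation.

```python
# Minimal power-calculation sketch; assumptions (two-sided, two-sample) are ours.
from statsmodels.stats.power import TTestIndPower, NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Smallest detectable standardized mean difference with 500 men per arm.
d = TTestIndPower().solve_power(effect_size=None, nobs1=500, alpha=0.05, power=0.80)
print(f"detectable effect size: {d:.2f}")  # ~0.18 SD, close to the 0.17 SD cited above

# Per-arm sample size needed to detect the assumed 35% (web) vs 55% (UC) screening rates.
h = abs(proportion_effectsize(0.35, 0.55))
n = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80)
print(f"required n per arm: {n:.0f}")      # well below the ~500 enrolled per arm
```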
Results
The participation rate was 39.5%, and the retention rates were 89% at 1 month and 84% at 13 months (Figure). Compared with those who declined or could not be reached at baseline, participants were older and more likely to be white and to be from Georgetown University Hospital (all P values <.001). Compared with those who did not complete the follow-up assessments, 1-month and 13-month completers were more likely to be white, married, more highly educated, and of higher income; to have ever been screened and to have been screened in the past year; and to be from Georgetown University Hospital and in the UC arm (all P values <.01).
Baseline demographic, clinical, and PCa screening variables are presented in Table 1. Approximately 40% of participants were African American, 23.8% had a high school education or less, and 59.3% were screened in the year before enrollment. With regard to Internet access45 (data not shown), 90% had access and 67% used the Internet daily. Of the 10% without access, 36% said they would be willing to use a computer at another location.
Table 2 presents the unadjusted results for each of the outcome variables, stratified by intervention arm. For knowledge, the print and web arms reported significantly higher knowledge compared with the UC arm at both assessments (all P values <.001). At 1 month, the standardized mean difference (Cohen’s d) between the UC arm and the print arm was 0.73 and between the UC arm and the web arm was 0.74, both large effect sizes and clinically significant differences.51 At 13 months, these standardized mean differences were 0.54 and 0.50, respectively.
For decisional conflict, the print and web arms reported significantly lower conflict compared with the UC arm at both assessments (P < .001; Table 2). At 1 month, the standardized mean difference between the print and UC arms was d = 0.36 and between the web and UC arms was d = 0.33. At 13 months, these standardized mean differences were 0.23 and 0.18, respectively.
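For reference, the standardized mean difference reported here and in Table 2 is Cohen’s d; it is conventionally computed as the between-arm difference in means divided by the pooled standard deviation (a standard definition, shown below for the DA vs UC comparisons; the article does not spell out the formula).

```latex
d = \frac{\bar{x}_{\mathrm{DA}} - \bar{x}_{\mathrm{UC}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} =
\sqrt{\frac{(n_{\mathrm{DA}}-1)\,s_{\mathrm{DA}}^{2} + (n_{\mathrm{UC}}-1)\,s_{\mathrm{UC}}^{2}}
           {n_{\mathrm{DA}} + n_{\mathrm{UC}} - 2}}
```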
For decisional satisfaction at the 1-month assessment, print participants (60.4%) were significantly more likely to report high satisfaction compared with web participants (52.2%), and both print and web participants were more likely to report high satisfaction compared with those in the UC arm (45.5%; all P values <.05). At 13 months, the print (55.7%) vs UC (49.8%) and vs web (50.4%) differences were not significant (P = .06 and P = .10, respectively; Table 2). There were no significant group differences on the screening outcomes (Table 2).
Multivariable Outcome Models
The intention-to-treat, linear regression analysis revealed that both intervention arms led to greater knowledge than UC at each assessment: At 1 month, the adjusted mean differences between trial arms (estimated beta-coefficients [B]) were web vs UC B, 2.26 (95% CI, 1.88-2.64; P < .001), and print vs UC B, 2.40 (95% CI, 2.02-2.78; P < .001). At 13 months, the effect was significant but smaller than at 1 month: web vs UC B, 1.46 (95% CI, 1.07-1.84; P < .001), and print vs UC B, 1.54 (95% CI, 1.17-1.91; P < .001). Finally, there was no evidence for the hypothesized web vs print difference because these 2 groups did not differ at 1 month (B, 0.14 [95% CI, −0.27 to 0.55]; P = .51) or at 13 months (B, 0.08 [95% CI, −0.32 to 0.49]; P = .68).
The intention-to-treat, linear regression analysis demonstrated that both the web and print DAs led to reduced decisional conflict compared with UC at each assessment: At 1 month, web vs UC B, −6.7 (95% CI, −9.35 to −4.14; P < .001), and print vs UC B, −7.50 (95% CI, −9.99 to −4.99; P < .001). At 13 months, the effect was significant but smaller than at 1 month: web vs UC B, −3.57 (95% CI, −5.99 to −1.14; P = .004), and print vs UC B, −4.08 (95% CI, −6.37 to −1.80; P < .001). Finally, there was no evidence for the hypothesized web vs print difference because these 2 groups did not differ at 1 month (B, −0.75 [95% CI, −3.12 to 1.66]; P = .54) or at 13 months (B, −0.51 [95% CI, −2.75 to 1.72]; P = .65).
Satisfaction With Decision
The intention-to-treat, logistic regression analysis demonstrated that participants in the print arm were more likely to report high satisfaction compared with participants in the UC arm at both 1 month (OR, 1.79 [95% CI, 1.41-2.29]; P < .001) and 13 months (OR, 1.29 [95% CI, 1.01-1.66]; P = .046). Participants in the web arm reported greater satisfaction than those in the UC arm at 1 month (OR, 1.29 [95% CI, 1.02-1.66]; P = .04) but not at 13 months (OR, 1.04 [95% CI, 0.81-1.34]; P = .75). Finally, print participants reported significantly greater satisfaction than web participants at 1 month (OR, 1.38 [95% CI, 1.07-1.77]; P = .01) but not at 13 months (OR, 1.24 [95% CI, 0.96-1.60]; P = .10).
PCa Screening
At the 13-month assessment, 58.3% self-reported having been screened (defined as the PSA test and/or DRE) since the baseline assessment, virtually unchanged from the 59.3% baseline rate. Logistic regression analysis revealed no significant differences between participants in the web vs UC arms (OR, 1.13 [95% CI, 0.94-1.35]), print vs UC arms (OR, 1.15 [95% CI, 0.96-1.38]), or print vs web arms (OR, 1.02 [95% CI, 0.85-1.23]). Comparable results were obtained in separate analyses for PSA and DRE, in per-protocol analyses (limited to participants who reported using the DAs), and in analyses based on electronic medical record screening rates (data not shown). We also found no evidence that prior PSA testing or changes in knowledge or decisional conflict moderated the screening outcome (data not shown).
Discussion
As a result of the conflicting recommendations for PCa screening and mixed messages about screening effectiveness, it is critical to assist men in making informed screening decisions. In the present study, we found that the print-based and web-based DAs were more effective than UC in increasing knowledge and reducing decisional conflict up to 13 months following randomization. These results are consistent with several prior studies reporting increased knowledge18,20-34,39 and reduced decisional conflict17,18,20-22,25,28,31,35,39 among men exposed to a PCa screening DA. These findings make an important contribution because, to our knowledge, only 2 studies have reported long-term maintenance of increased knowledge,30,39 an important issue because men are asked to make this decision every year and cannot make it in an informed way when knowledge is limited. Furthermore, to our knowledge, no studies have reported the long-term maintenance of reduced decisional conflict, suggesting that these DAs helped men to remain certain about their decision. Regarding decisional satisfaction, the print-based DA resulted in significant improvements compared with UC at both follow-up assessments and compared with the web-based DA at 1 month. These are novel findings, particularly because decisional satisfaction was highly positively skewed, a distribution that has made improvement difficult to detect in prior studies.17,31,39 These findings also suggest that participants may have found the print-based DA easier to use than the web-based DA.
There were no group differences in screening rates at 13 months, consistent with several studies that have measured longer-term screening outcomes.25,30-32,38,39 Among studies that have found decreased rates of screening, most have had immediate or very short follow-up periods,21,24,26,37 suggesting that the reduction in screening outcomes may not endure over time. It is important to understand the long-term impact of DAs on PCa screening, given that this is a decision that men will make annually for up to 30 years.
Our hypothesis that the web arm would be more effective than the print arm was not supported. There were no significant group differences on knowledge, decisional conflict, or screening outcomes, and in fact, participants in the print arm reported significantly greater satisfaction than those in the web arm at the 1-month assessment. Post hoc consideration of these results suggests that, for knowledge, decisional conflict, and screening, either DA may be used depending on an individual’s preferred medium. These results were not explained by baseline group differences in Internet access or in preference for 1 medium over the other, although there was an overall baseline preference for print-based over web-based health information (data not shown). Furthermore, at the 1-month assessment, print participants were more likely than web participants to report having read the materials before being screened (data not shown). For future DA development, it is important to note that, at least in this age cohort, men may be more at ease with print than with electronic materials. These results call into question the widespread assumption that interactive, web-based delivery will necessarily lead to better outcomes.
There were several study limitations. First, only 39.5% of eligible men participated, and they differed from nonparticipants. However, lower participation rates are typical of prior DA studies that also accrued outpatients without in-person contact17,21,29,37,52 or a connection to an upcoming appointment.29,33 The inclusion of participants regardless of their appointment status, Internet access, or a particular screening history suggests that the results are potentially more generalizable than if the eligibility criteria had been more restrictive and that this was more of an effectiveness trial than an efficacy trial. Second, 2 events that occurred during the study, the release of the results from 2 major screening trials2,4 and the modified American Cancer Society guidelines,7 could have affected results. However, we assessed participants’ awareness of these events immediately and for several weeks following their release and found minimal awareness and no difference between study arms (data not shown). Third, we used self-reports of whether participants had undergone screening at both baseline and follow-up, which can introduce bias.53 Although we also collected screening outcomes from medical records, these data were incomplete as a result of unreturned medical record consent forms, as well as missing records. Importantly, self-reported screening outcomes were concordant with findings from the medical record data. Furthermore, on the basis of our assessment of the literature, measuring PCa screening via self-report vs medical record does not appear to have produced systematic differences, because both methods have been used in studies reporting increased screening, decreased screening, and no group differences.
This study, 1 of the largest and most representative randomized trials conducted on this topic, demonstrated the benefits of 2 DAs that meet standard criteria43 compared with UC across 3 practice sites and in a racially diverse population that included many participants of low socioeconomic status. Furthermore, the broad inclusion criteria made the sample potentially more representative. Finally, we observed men for 1 year, demonstrating the long-term impact of both DAs. Only 4 other studies have followed men for at least 1 year,30-32,39 which is necessary to understand the long-term impact of DAs on decision-making and screening outcomes.
The clinical implications of this study include the potential for these 2 DAs to be easily adopted in real-world practice settings. Furthermore, the DAs are neutral, as shown by the fact that they did not influence the screening decision in either direction compared with UC, which allows patients and providers to individualize the decision. Moreover, these tools offer flexibility for patients and providers, given the availability of both print-based and web-based versions. Given the demonstrated beneficial effects of these DAs, work is now needed to understand how to deliver them to patients in a systematic manner. Possible avenues include personal health records,54,55 distribution in health care provider offices,56 and the websites of large health care organizations.57 The ongoing questions concerning the impact of PCa screening on disease-related mortality and on men’s long-term quality of life58,59 highlight the need to promote widespread informed decision making among patients and their physicians.
Corresponding Author: Kathryn L. Taylor, PhD, Georgetown University, Cancer Prevention and Control Program, Department of Oncology, Lombardi Comprehensive Cancer Center, 3300 Whitehaven St, NW, Ste 4100, Washington, DC 20007 (taylorkl@georgetown.edu).
Accepted for Publication: June 4, 2013.
Published Online: July 29, 2013. doi:10.1001/jamainternmed.2013.9253.
Author Contributions: Dr Taylor had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Taylor, Davis, Luta, Schwartz, Krist, Cole.
Acquisition of data: Taylor, Williams, Penek, Barry, Kelly, Krist, Fishman, Cole, Miller.
Analysis and interpretation of data: Taylor, Williams, Luta, Penek, Barry, Kelly, Tomko, Schwartz, Krist, Woolf.
Drafting of the manuscript: Taylor, Williams, Luta, Penek, Kelly, Tomko, Schwartz, Krist.
Critical revision of the manuscript for important intellectual content: Taylor, Williams, Davis, Luta, Barry, Kelly, Schwartz, Krist, Woolf, Fishman, Cole, Miller.
Statistical analysis: Taylor, Williams, Luta, Penek, Barry, Kelly, Tomko, Schwartz.
Obtained funding: Taylor, Davis, Schwartz.
Administrative, technical, and material support: Taylor, Williams, Penek, Barry, Kelly, Tomko, Krist, Fishman, Miller.
Study supervision: Taylor, Williams.
Conflict of Interest Disclosures: None reported.
Funding/Support: This work was supported by grants from the National Cancer Institute (R01 CA119168-01) and Department of Defense (PC051100) to Dr Taylor. In addition, the project was supported by the Lombardi Comprehensive Cancer Center (LCCC) Biostatistics and Bioinformatics Shared Resource and an LCCC Cancer Center Support Grant.
Previous Presentations: Earlier versions of the results were presented at the annual meeting of the Society of Behavioral Medicine; April 28, 2011; Washington, DC; and at the annual meeting of the American Public Health Association; November 9, 2009; Philadelphia, Pennsylvania.
Additional Contributions: We are grateful to the participants for contributing their time; to Janet Ohene-Frempong, MA, our plain language consultant, who contributed to the editing of the intervention materials; to the interviewers who conducted the telephone assessments: Sara Edmond, BA; Caroline Dorfman, BA; Elisabeth Kassan, MA; David Dawson, BA; William Tuong, BS; Elizabeth Parker, BA; and Lisa Haisfield, PhD; and to Susan Marx, BA, for administrative support.
References
2. Andriole GL, Crawford ED, Grubb RL III, et al; PLCO Project Team. Mortality results from a randomized prostate-cancer screening trial. N Engl J Med. 2009;360(13):1310-1319.
3. Andriole GL, Crawford ED, Grubb RL III, et al; PLCO Project Team. Prostate cancer screening in the randomized Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial: mortality results after 13 years of follow-up. J Natl Cancer Inst. 2012;104(2):125-132.
4. Schröder FH, Hugosson J, Roobol MJ, et al; ERSPC Investigators. Screening and prostate-cancer mortality in a randomized European study. N Engl J Med. 2009;360(13):1320-1328.
5. Schröder FH, Hugosson J, Roobol MJ, et al; ERSPC Investigators. Prostate-cancer mortality at 11 years of follow-up. N Engl J Med. 2012;366(11):981-990.
8. O’Connor A. Using patient decision aids to promote evidence-based decision making. ACP J Club. 2001;135(1):A11-A12.
9. O’Connor AM, Llewellyn-Thomas HA, Flood AB. Modifying unwarranted variations in health care: shared decision making using patient decision aids. Health Aff (Millwood). 2004;suppl var:VAR63-VAR72.
10. Stacey D, Bennett CL, Barry MJ, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011;10(10):CD001431.
11. Volk RJ, Hawley ST, Kneuper S, et al. Trials of decision aids for prostate cancer screening: a systematic review. Am J Prev Med. 2007;33(5):428-434.
12. Gigerenzer G, Mata J, Frank R. Public knowledge of benefits of breast and prostate cancer screening in Europe. J Natl Cancer Inst. 2009;101(17):1216-1220.
13. Hoffman RM, Couper MP, Zikmund-Fisher BJ, et al. Prostate cancer screening decisions: results from the National Survey of Medical Decisions (DECISIONS study). Arch Intern Med. 2009;169(17):1611-1618.
14. Woolf SH. The price of false beliefs: unrealistic expectations as a contributor to the health care crisis. Ann Fam Med. 2012;10(6):491-494.
16. Pignone M. Weighing the benefits and downsides of prostate-specific antigen screening. Arch Intern Med. 2009;169(17):1554-1556.
17. Volk RJ, Jibaja-Weiss ML, Hawley ST, et al. Entertainment education for prostate cancer screening: a randomized trial among primary care patients with low health literacy. Patient Educ Couns. 2008;73(3):482-489.
18. Allen JD, Othus MK, Hart A Jr, et al. A randomized trial of a computer-tailored decision aid to improve prostate cancer screening decisions: results from the Take the Wheel trial. Cancer Epidemiol Biomarkers Prev. 2010;19(9):2172-2186.
19. Chan EC, McFall SL, Byrd TL, et al. A community-based intervention to promote informed decision making for prostate cancer screening among Hispanic American men changed knowledge and role preferences: a cluster RCT. Patient Educ Couns. 2011;84(2):e44-e51.
20. Evans R, Joseph-Williams N, Edwards A, et al. Supporting informed decision making for prostate specific antigen (PSA) testing on the web: an online randomized controlled trial. J Med Internet Res. 2010;12(3):e27.
21. Frosch DL, Bhatnagar V, Tally S, Hamori CJ, Kaplan RM. Internet patient decision support: a randomized controlled trial comparing alternative approaches for men considering prostate cancer screening. Arch Intern Med. 2008;168(4):363-369.
22. Gattellari M, Ward JE. Does evidence-based information about screening for prostate cancer enhance consumer decision-making? a randomised controlled trial. J Med Screen. 2003;10(1):27-39.
23. Gattellari M, Ward JE. A community-based randomised controlled trial of three different educational resources for men about prostate cancer screening. Patient Educ Couns. 2005;57(2):168-182.
24. Krist AH, Woolf SH, Johnson RE, Kerns JW. Patient education on prostate cancer screening and involvement in decision making. Ann Fam Med. 2007;5(2):112-119.
25. Lepore SJ, Wolf RL, Basch CE, et al. Informed decision making about prostate cancer testing in predominantly immigrant black men: a randomized controlled trial. Ann Behav Med. 2012;44(3):320-330.
26. Myers RE, Daskalakis C, Kunkel EJ, et al. Mediated decision support in prostate cancer screening: a randomized controlled trial of decision counseling. Patient Educ Couns. 2011;83(2):240-246.
27. Partin MR, Nelson D, Radosevich D, et al. Randomized trial examining the effect of two prostate cancer screening educational interventions on patient knowledge, preferences, and behaviors. J Gen Intern Med. 2004;19(8):835-842.
28. Rubel SK, Miller JW, Stephens RL, et al. Testing the effects of a decision aid for prostate cancer screening. J Health Commun. 2010;15(3):307-321.
29. Schapira MM, VanRuiswyk J. The effect of an illustrated pamphlet decision-aid on the use of prostate cancer screening tests. J Fam Pract. 2000;49(5):418-424.
30. Stamatiou K, Skolarikos A, Heretis I, et al. Does educational printed material manage to change compliance with prostate cancer screening? World J Urol. 2008;26(4):365-373.
31. Taylor KL, Davis JL III, Turner RO, et al. Educating African American men about the prostate cancer screening dilemma: a randomized intervention. Cancer Epidemiol Biomarkers Prev. 2006;15(11):2179-2188.
32. Volk RJ, Spann SJ, Cass AR, Hawley ST. Patient education for informed decision making about prostate cancer screening: a randomized controlled trial with 1-year follow-up. Ann Fam Med. 2003;1(1):22-28.
33. Watson E, Hewitson P, Brett J, et al. Informed decision making and prostate specific antigen (PSA) testing for prostate cancer: a randomised controlled trial exploring the impact of a brief patient decision aid on men’s knowledge, attitudes and intention to be tested. Patient Educ Couns. 2006;63(3):367-379.
34. Wilt TJ, Paul J, Murdoch M, Nelson D, Nugent S, Rubins HB. Educating men about prostate cancer screening: a randomized trial of a mailed pamphlet. Eff Clin Pract. 2001;4(3):112-120.
35. Davison BJ, Kirk P, Degner LF, Hassard TH. Information and patient participation in screening for prostate cancer. Patient Educ Couns. 1999;37(3):255-263.
36. Ilic D, Egberts K, McKenzie JE, Risbridger G, Green S. Informing men about prostate cancer screening: a randomized controlled trial of patient education materials. J Gen Intern Med. 2008;23(4):466-471.
37. Frosch DL, Kaplan RM, Felitti VJ. A randomized controlled trial comparing internet and video to facilitate patient education for men considering the prostate specific antigen test. J Gen Intern Med. 2003;18(10):781-787.
38. Myers RE, Daskalakis C, Cocroft J, et al. Preparing African-American men in community primary care practices to decide whether or not to have prostate cancer screening. J Natl Med Assoc. 2005;97(8):1143-1154.
39. Williams RM, Davis KM, Luta G, et al. Fostering informed decisions: a randomized controlled trial assessing the impact of a decision aid among men registered to undergo mass screening for prostate cancer. Patient Educ Couns. 2013;91(3):329-336.
40. Kripalani S, Sharma J, Justice E, et al. Low-literacy interventions to promote discussion of prostate cancer: a randomized controlled trial. Am J Prev Med. 2007;33(2):83-90.
41. Llewellyn-Thomas HA. Patients’ health-care decision making: a framework for descriptive and experimental investigations. Med Decis Making. 1995;15(2):101-106.
42. Dorfman CS, Williams RM, Kassan EC, et al. The development of a web- and a print-based decision aid for prostate cancer screening. BMC Med Inform Decis Mak. 2010;10:12.
43. Elwyn G, O’Connor A, Stacey D, et al; International Patient Decision Aids Standards (IPDAS) Collaboration. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006;333(7565):417.
44. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav. 2006;33(3):352-373.
45. Kassan EC, Williams RM, Kelly SP, et al. Men’s use of an Internet-based decision aid for prostate cancer screening. J Health Commun. 2012;17(6):677-697.
46. Lipkus IM, Samsa G, Rimer BK. General performance on a numeracy scale among highly educated samples. Med Decis Making. 2001;21(1):37-44.
47. Tomko C, Ludin S, Stern A, et al. Impact of a web-based decision tool for prostate cancer screening on decisional and behavioral outcomes. Poster presented at: Annual Meeting of the American Society of Preventive Oncology; March 6, 2012; Bethesda, MD.
48. Taylor KL, Shelby R, Kerner J, Redd W, Lynch J. Impact of undergoing prostate carcinoma screening on prostate carcinoma-related knowledge and distress. Cancer. 2002;95(5):1037-1044.
50. Holmes-Rovner M, Kroll J, Schmitt N, et al. Patient satisfaction with health care decisions: the satisfaction with decision scale. Med Decis Making. 1996;16(1):58-64.
51. Norman GR, Sloan JA, Wyrwich KW. Interpretation of changes in health-related quality of life: the remarkable universality of half a standard deviation. Med Care. 2003;41(5):582-592.
52. Pignone M, Winquist A, Schild LA, et al. Effectiveness of a patient and practice-level colorectal cancer screening intervention in health plan members: the CHOICE trial. Cancer. 2011;117(15):3352-3362.
53. Chan EC, Vernon SW, Ahn C, Greisinger A. Do men know that they have had a prostate-specific antigen test? accuracy of self-reports of testing at 2 sites. Am J Public Health. 2004;94(8):1336-1338.
55. Krist AH, Peele E, Woolf SH, et al. Designing a patient-centered personal health record to promote preventive care. BMC Med Inform Decis Mak. 2011;11:73.
56. Brackett C, Kearing S, Cochran N, Tosteson AN, Blair Brooks W. Strategies for distributing cancer screening decision aids in primary care. Patient Educ Couns. 2010;78(2):166-168.
57. O’Connor AM, Wennberg JE, Legare F, et al. Toward the ‘tipping point’: decision aids and informed patient choice. Health Aff (Millwood). 2007;26(3):716-725.
58. Heijnsdijk EA, Wever EM, Auvinen A, et al. Quality-of-life effects of prostate-specific antigen screening. N Engl J Med. 2012;367(7):595-605.
59. Taylor KL, Luta G, Miller AB, et al. Long-term disease-specific functioning among prostate cancer survivors and noncancer controls in the prostate, lung, colorectal, and ovarian cancer screening trial. J Clin Oncol. 2012;30(22):2768-2775.