Figure 1. The balance sheet was based on modeling studies of outcomes related to prostate-specific antigen (PSA) screening.
Figure 2. The rating and ranking task asked participants to rate and then rank the 3 most important screening test attributes.
Figure 3. A sample choice task. Each task included an active screening option and a fixed “no screening” option.
Figure 4. Unlabeled test preference question.
Figure 5. Study flowchart. DCE indicates discrete choice experiment.
Table 1. Attributes and Levels for Prostate Cancer Screening
Table 2. Characteristics of Participants Overall and by Group
Table 3. Proportion of Respondents Designating Specific Attributes as Most Important (Single Question) by Values Clarification Task

Original Investigation
Mar 11, 2013

Comparing 3 Techniques for Eliciting Patient Values for Decision Making About Prostate-Specific Antigen Screening: A Randomized Controlled Trial

Author Affiliations

Author Affiliations: Cecil G. Sheps Center for Health Services Research and Lineberger Comprehensive Cancer Center (Drs Pignone, Lewis, and Sheridan and Ms Crutchfield) and Department of Medicine (Drs Pignone, Lewis, and Sheridan), University of North Carolina, Chapel Hill; School of Public Health, University of Sydney, New South Wales, Australia (Dr Howard); School of Public Health, University of Washington, Seattle (Ms Brenner); and Department of Medicine, University of Michigan, Ann Arbor (Dr Hawley).

JAMA Intern Med. 2013;173(5):362-368. doi:10.1001/jamainternmed.2013.2651
Abstract

Importance To make good decisions about prostate-specific antigen (PSA) screening, men must consider how they value the different potential outcomes.

Objective To determine the effects of different methods of helping men consider such values.

Design and Setting Randomized trial from October 12 to 27, 2011, in the general community.

Participants A total of 911 men aged 50 to 70 years at average risk for prostate cancer from the United States and Australia. Participants were drawn from online panels maintained by a survey research firm in each country and were randomized by the survey firm to 1 of 3 values clarification methods: a balance sheet (n = 302), a rating and ranking task (n = 307), or a discrete choice experiment (n = 302).

Intervention Participants underwent a values clarification task and then chose the most important attribute.

Main Outcome Measures The main outcome was the difference among groups in the attribute participants reported as most important. Secondary outcomes were differences in unlabeled test preference and intent to undergo screening with PSA.

Results The mean age was 59.8 years; most participants were white and more than one-third had graduated from college. More than 40% reported a PSA test within 12 months. The participants who received the rating and ranking task were more likely to report reducing the chance of death from prostate cancer as being most important (54.4%) compared with those who received the balance sheet (35.1%) or the discrete choice experiment (32.5%) (P < .001). Those receiving the balance sheet were more likely (43.7%) to prefer the unlabeled PSA-like option (as opposed to the “no screening”–like option) compared with those who received rating and ranking (34.2%) or the discrete choice experiment (20.2%). However, the proportion who intended to undergo PSA testing was high and did not differ between groups (balance sheet, 77.1%; rating and ranking, 76.8%; and discrete choice experiment, 73.5%; P = .73).

Conclusions and Relevance Different values clarification methods produce different patterns of attribute importance and different preferences for screening when presented with an unlabeled choice. Further studies with more distal outcome measures are needed to determine the best method of values clarification, if any, for decisions such as whether to undergo screening with PSA.

Trial Registration clinicaltrials.gov Identifier: NCT01558583

Whether to undergo prostate-specific antigen (PSA) screening is a difficult decision for middle-aged men. Prostate cancer is common and causes more than 28 000 deaths per year in the United States.1 However, PSA screening at best seems to produce only a small reduction in prostate cancer mortality and has considerable downsides.2 These downsides include an increase in the number of prostate biopsies (which can be painful and have a risk of causing infection), overdiagnosis (ie, the detection of cancers that would never become clinically apparent or problematic), and increased treatment and treatment-related adverse effects (impotence and incontinence).3

High-quality decision processes, including whether or not to be screened for prostate cancer, should inform patients and incorporate patient values.4,5 Decision aids are tools to help inform patients of their options related to preference-sensitive decisions, promote understanding of the benefits and downsides of these options, prompt consideration of one's personal values, and encourage shared decision making.5 Decision aids have been shown to improve patient knowledge, reduce uncertainty and decisional conflict, and promote a shared decision-making process for a range of conditions, including PSA screening.6,7

Consensus recommendations for high-quality decision aid design include incorporating some method for eliciting and clarifying patient values and preferences.5 However, the best method for eliciting and incorporating patient values and preferences is not clear.8,9 Potential options for values elicitation include implicit techniques, in which patients receive information about different domains and are able to consider their potential value on their own (or with a prompt to “consider which factors are most important to you”), and several explicit techniques (eg, rating, ranking, and discrete choice methods), in which patients are asked specifically to compare the relative importance of several potentially relevant characteristics of a decision. Among decision psychologists, there remains considerable theoretical debate about the potential benefits and downsides of explicit techniques.10

Few previous studies have examined the effect of a decision aid with explicit values clarification compared with the same decision aid without explicit values clarification or compared different values clarification techniques against one another. A recent review11 identified 13 comparative trials and could not reach a conclusion about the effects of values clarification because outcome measurement was inconsistent and results were mixed. One of these studies12 examined PSA screening and found no effect of adding a time trade-off task on knowledge, decisional conflict, or testing preference. In a small, single-site trial, our group recently compared 2 different explicit techniques (discrete choice experiment [DCE] vs rating and ranking) for decision making about colorectal cancer screening and found some differences in participant-reported most important attribute but few other effects.13

To help better understand the effect of different values clarification methods (VCMs), we conducted a randomized trial comparing 1 implicit method (provision of a balance sheet) and 2 explicit methods (a rating and ranking task and a DCE) to determine whether they produce different effects on decision making about PSA screening.

Methods
Overview

We performed a randomized trial among male members of an online survey panel in the United States and Australia who had indicated a willingness to complete surveys. Participants were asked to complete a baseline questionnaire, review basic information about the PSA decision, work through their assigned values clarification task, and then complete a posttask questionnaire.

Selection of attributes and levels

We described PSA screening decision options in terms of 4 key attributes: effect on prostate cancer mortality, risk of biopsy, risk of being diagnosed with prostate cancer, and risk of becoming impotent or incontinent as a result of treatment. The attributes and the range of levels included were based on the existing literature, including a review of recently reported randomized trials2 and the previous work of our group14-16 (Table 1).

Balance sheet task

The balance sheet was based on modeling studies14 of PSA-related outcomes and was informed by randomized trials and observational evidence as well as by previous research in this area16 (Figure 1).

Rating and ranking task

The rating and ranking task asked participants to rate (on a scale of 0, meaning not important at all, to 5, very important) and then rank the 3 most important screening test attributes from sets of the key attributes (Figure 2).

Discrete choice experiment task

In DCEs (also known as choice-based conjoint analysis), respondents are asked to choose between hypothetical alternatives defined by a set of attributes.17-20 The method is based on the idea that goods and services, including health care services, can be described in terms of a number of separate attributes or factors. The levels of attributes are varied systematically in a series of questions. Respondents choose the option that they prefer for each question. People are assumed to choose the option that is most preferred or that has the highest value or utility. From these choices, a mathematical function is estimated that describes numerically the value that respondents attach to different choice options. Our study followed the International Society for Pharmacoeconomics and Outcomes Research Guidelines for Good Research Practices for conjoint analysis in health.20
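
In standard random utility terms (a sketch of the usual formulation rather than the exact specification used here), respondent $i$ derives utility from screening option $j$ in choice task $t$ of the form

$$U_{ijt} = \beta' x_{ijt} + \varepsilon_{ijt},$$

where $x_{ijt}$ is the vector of attribute levels (here, the effect on prostate cancer mortality and the risks of biopsy, diagnosis, and impotence or incontinence), $\beta$ is the vector of attribute weights to be estimated, and $\varepsilon_{ijt}$ is a random error term. With independent type I extreme value errors, the probability of choosing option $j$ from choice set $C_t$ is the conditional logit expression

$$P(j \mid C_t) = \frac{\exp(\beta' x_{ijt})}{\sum_{k \in C_t} \exp(\beta' x_{ikt})}.$$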

For the DCE, we used Ngene (http://www.choice-metrics.com) to generate a statistically efficient choice design that minimized sample size.21,22 Our design required all participants in the DCE group to evaluate a set of 16 choice scenarios. A sample choice task is shown in Figure 3. Each task included an active screening option and a fixed “no screening” option.
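
The usual criterion for such statistically efficient designs (a sketch of the general approach, not a statement of the exact settings used) is the D-error,

$$D\text{-error} = \left[\det \Omega(X, \tilde{\beta})\right]^{1/K},$$

where $\Omega(X, \tilde{\beta})$ is the asymptotic variance-covariance matrix of the model parameters implied by design $X$ and prior parameter values $\tilde{\beta}$ (eg, pilot estimates), and $K$ is the number of parameters. Minimizing the D-error tightens the expected confidence intervals around the attribute weights, which in turn reduces the sample size required.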

Pretesting

We used online panels maintained by an international research firm, Survey Sampling International, to recruit 60 men (30 in the United States and 30 in Australia) to pretest the surveys. Pilot data indicated that respondents were able to complete the values clarification tasks without difficulty. On the basis of information garnered from the pretest, we modified the survey language slightly and removed the “I prefer neither option” response. Parameter estimates from analysis of the discrete choice pilot data were used to inform the final efficient design of the discrete choice task for the main study.

Participant eligibility and recruitment

We used the Survey Sampling International online panels to recruit a target of 900 men (450 in the United States and 450 in Australia). Participants had average risk (no personal or family history of prostate cancer) and were originally targeted to be 50 to 75 years old; however, because Survey Sampling International has few potential participants older than 70 years, the sample was instead drawn from men 50 to 70 years old. Testing history was assessed but not used to determine eligibility. Those with visual limitations or inability to understand English were excluded.

Study flow

The entire study was performed online. After eligibility was determined and consent obtained, participants received basic information about prostate cancer and PSA screening (eFigure), completed basic demographic questions, and were then randomized by Survey Sampling International on a 1:1:1 basis, stratified by country, to an implicit VCM (a balance sheet of key test attributes), a rating and ranking task, or a DCE. Upon task completion, participants then answered the posttask questionnaire.

Study outcomes

Our main outcome of interest was the participant-reported most important attribute (“Which ONE feature of prostate cancer screening is most important to you?” with responses chosen from the chance of being diagnosed, the chance of dying in the next 10 years, the chance of requiring a biopsy from screening, and the chance of becoming impotent or incontinent from treatment). We chose this outcome to determine whether the VCM itself influenced how participants valued key features of the decision. Key secondary outcomes included testing preference, assessed with a question that presented 2 unlabeled options described in terms of the key decision attributes and designed to mimic the screening and no-screening options (Figure 4), which we call “unlabeled test preference”; the values clarity subscale of the Decisional Conflict Scale, which ranges from 0 to 100, with lower scores suggesting better clarity; and a single question about intent to be screened with PSA, answered on a Likert scale from strongly disagree to strongly agree, with agree and strongly agree considered positive intent to be screened and taken as the labeled preference. In addition, we report certain VCM task-specific outcomes.

Statistical analysis

We performed initial descriptive analyses with means and proportions. We used χ2 tests and analysis of variance for bivariate analyses across the 3 groups. Because of baseline demographic differences among the groups, we then performed multivariate analyses using logistic regression and adjusting for potential confounders, including age, race, educational level, income, and prior PSA testing. We also examined whether there was effect modification based on prior PSA screening or country. Because we identified no important effect modification, we present nonstratified results here. A separate article will examine differences in study outcomes for US vs Australian participants. We used a mixed multinomial logit (also known as a random parameters logit) model with a panel specification23,24 to assess differences in preference structure between respondents from the United States and Australia within the DCE arm (see the eAppendix for details).
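
A minimal sketch of the panel mixed logit specification, assuming the standard formulation in references 23 and 24 with individual-specific coefficients drawn from a distribution $f(\beta \mid \theta)$: the probability of respondent $i$'s observed sequence of choices $y_i = (y_{i1}, \ldots, y_{i16})$ is

$$P_i(y_i) = \int \left[ \prod_{t=1}^{16} \frac{\exp(\beta' x_{i y_{it} t})}{\sum_{k \in C_t} \exp(\beta' x_{ikt})} \right] f(\beta \mid \theta) \, d\beta,$$

where the product over the 16 choice tasks gives the panel structure: all of a respondent's choices share a single draw of $\beta$, so the model accommodates preference heterogeneity across respondents and correlation within them.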

Ethical considerations

This study was approved by the University of North Carolina at Chapel Hill Institutional Review Board.

Results

We screened 2336 individuals from October 12 to 27, 2011. Of these, 1300 were ineligible or declined participation and 1036 were randomized; 911 of the 1036 (87.9%) completed the full survey (Figure 5). The mean time to complete the survey was 8 minutes 46 seconds (range, 1:16-39:32 [minutes:seconds]) and differed between groups (balance sheet, 6:58; rating/ranking, 7:57; DCE, 11:24; P < .001). Participant characteristics are shown in Table 2. We noted potentially important differences across randomization groups in the proportion of white participants and the proportion reporting PSA testing within the past 12 months.

Main outcomes

The 3 different VCMs produced differences in the participant-reported most important attribute from the single posttask questionnaire: those who received the rating and ranking task were more likely (54.4%) to report reducing the chance of death from prostate cancer as being most important compared with either the balance sheet (35.1%) or DCE (32.5%) groups (Table 3). Adjustment for potential confounders, including age, race, educational level, income, and prior PSA testing, did not affect the findings.

In terms of unlabeled test preference, those receiving the balance sheet were more likely (43.7%) to prefer the PSA-like option (as opposed to the no screening option) compared with those who received rating and ranking (34.2%) or the DCE (20.2%) (P < .001). Again, findings were similar after adjustment for potential confounders. Those choosing the screening option were somewhat more likely (47.0%) to select mortality reduction as most important than those choosing the no screening option (37.7%); conversely, those choosing no screening were more likely (22.3%) to select the chance of developing impotence or incontinence as most important compared with those choosing screening (10.1%).

However, the proportion of participants who agreed or strongly agreed that they intended to have PSA testing when labeled as such was high and did not differ between groups (balance sheet, 77.1%; rating and ranking, 76.8%; and DCE, 73.5%; P = .73). The mean values clarity score was low (suggesting a high degree of clarity) and did not differ importantly between groups despite the difference being statistically significant (balance sheet, 22.5; rating and ranking, 19.0; and DCE, 20.3; P = .03).

Method-specific outcomes

Among the 302 participants randomized to the balance sheet, 65.6% chose the labeled PSA testing option. Agreement between test preference from the balance sheet and the unlabeled test preference question was low (eTable 1).

For the rating and ranking group (n = 307), the chance of being diagnosed with prostate cancer and the chance of dying of prostate cancer were each rated slightly higher in importance (3.8 for each) compared with the chance of needing a biopsy (3.5) or the chance of developing impotence or incontinence from treatment (3.6). Just over half (52.8%) of the participants in this group ranked the chance of dying of prostate cancer as most important. Agreement between the ranking task and the single question about the most important attribute from the posttask questionnaire was modest: 71 of 307 participants (23%) chose different attributes as being most important on the single question compared with their ranking (eTable 2).

For the 302 participants randomized to the DCE, the mean part-worth utilities are shown in eTable 3. All attributes performed in the direction expected: more value was attached to a lower chance of dying of prostate cancer, a lower risk of being diagnosed, and a lower chance of impotence or incontinence compared with higher levels. There were no significant differences in utility based on the chance of needing a biopsy. The attribute most likely to be chosen as the most important one on the basis of the DCE-derived importance scores was the chance of dying of prostate cancer (chosen by 53.5% of participants).
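
One common way to convert part-worth utilities into importance scores (a convention from conjoint analysis, not necessarily the exact calculation used here) is to score each attribute by the normalized range of its part-worths,

$$\text{Importance}_k = \frac{\max_{\ell} u_{k\ell} - \min_{\ell} u_{k\ell}}{\sum_{m} \left( \max_{\ell} u_{m\ell} - \min_{\ell} u_{m\ell} \right)},$$

where $u_{k\ell}$ is the part-worth utility of level $\ell$ of attribute $k$; the attribute with the largest normalized range is then treated as a respondent's most important attribute.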

Comment

Different VCMs produced different patterns of attribute importance and different preferences for PSA testing when participants were presented with unlabeled testing options designed to correspond to the PSA test and the option of not being screened. Those receiving the rating and ranking task were more likely to select a reduction in prostate cancer mortality as being most important compared with those assigned to the balance sheet or DCE; those receiving the balance sheet were more likely to select the unlabeled option that corresponded with the PSA test than were those assigned to rating and ranking or the DCE. Those assigned to the DCE were somewhat less likely to select a reduction of mortality as being most important and were least likely to select the PSA-like option on the unlabeled preference question. These findings are consistent with the theory that DCE encourages people to more fully consider all attributes and not to rely on simple heuristics.25

However, there was high intent to be screened and no difference between groups when asked directly with a labeled question. Among those in the balance sheet group, there were moderately large differences in the proportion selecting PSA screening when it was presented as a labeled vs an unlabeled choice. The mean values clarity subscale was low across all 3 groups, suggesting that most users were clear about their values after completing their task.

These findings have several implications. First, they suggest that the VCM chosen affects how participants report their values (in terms of the most important attribute) in this sample of online panel members making hypothetical choices. The DCE may have led to more deliberation and hence less monolithic results for attribute importance; conversely, the rating and ranking may have focused respondents more on the most “accessible” attribute: mortality reduction. The present study cannot determine which technique is “better” or a more accurate reflection of each man's true values; additional larger studies should be performed that examine men making actual screening decisions, account for other factors affecting decisions, and include longitudinal follow-up to allow measurement of more distal outcomes, such as appropriate test use (test was received by those who prefer it and not received by those who choose against it), decision satisfaction, and decisional regret.

Second, this study shows the potentially large effect of labeling on decision making. The proportion of men in the balance sheet arm who chose labeled PSA testing on the balance sheet (65.6%) was higher than the proportion choosing the unlabeled but otherwise identical PSA-like option immediately afterward (43.7%). Labeling may affect preferences by different mechanisms: it can allow decision makers to value important aspects of the decision that are not reflected in the attributes we used to describe the tests—this could be a desirable effect. However, labeling may also allow decision makers to simply choose the familiar option, which can help resolve cognitive dissonance but may not reflect one's underlying values.26 More studies, including qualitative work, are needed to discern the causes of this labeling effect.

Our study adds to the limited body of research examining the effects of explicit VCMs vs no values clarification, implicit methods, or other explicit methods.11 Other studies have found inconsistent effects of values clarification, including 1 study in PSA screening.12

The US Preventive Services Task Force recently revised its PSA screening recommendation for middle-aged men, changing to a “D” recommendation (against routine screening).27 Although it is clear that PSA screening has important downsides, the possibility of a rare benefit in terms of prostate cancer mortality reduction and the preference patterns noted in our survey may warrant a shared decision-making process.

More studies, conducted in a range of different conditions and using a range of different methods, are required to better determine the best approach (if any) to values clarification. Decision aids with explicit clarification techniques should be compared with decision aids with no or only implicit values clarification to best understand their effects on decision-making processes and actual decisions. Because decision aids can be difficult to implement, especially for common decisions such as PSA screening, decision aid developers should include VCMs only if they improve the decision-making process in some meaningful way.

Our study, although helpful in building the evidence base in this area, has a few key limitations that must be considered. First, we used a hypothetical scenario; whether the effects we observed would differ in men making the screening decision is unknown. To mitigate this concern, we enrolled only men of screening age and asked them to answer as if they were deciding. Second, we cannot directly determine, on an individual level, whether a good decision-making process was followed or whether a good decision was made; larger studies with more distal outcomes are required. Third, we did not conduct our trial within a full PSA screening decision aid; however, the information that we provided to participants contained the key elements for a decision aid (definition of the decision, pros and cons of the options, and encouragement to consider one's values).5 Future studies should compare different VCMs embedded within a full decision aid. Fourth, participants in this study were drawn from an online panel and may not be completely representative of the population of US and Australian men in this age group. Fifth, randomization produced some differences in baseline characteristics between VCM groups. We adjusted for these differences and did not see effects on our results. Finally, we did not measure whether our participants were informed and engaged when they provided their answers.

In conclusion, different VCMs affected how men valued different aspects of the decision to undergo PSA screening and also influenced unlabeled test choice. The intent to be screened with PSA was higher than the preference for PSA when assessed via an unlabeled question, suggesting a strong effect of the label itself. Further studies with more distal outcome measures are needed to determine the best method of values clarification, if any, for decisions such as whether to undergo screening with PSA.

Article Information

Correspondence: Michael Patrick Pignone, MD, MPH, Cecil G. Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, 725 Martin Luther King Jr Blvd, CB 7590, Chapel Hill, NC 27599 (pignone@med.unc.edu).

Accepted for Publication: October 8, 2012.

Published Online: February 11, 2013. doi:10.1001/jamainternmed.2013.2651

Author Contributions: All authors were responsible for the conceptual design of the study, participated in revisions to the study design, and approved the final study. Study concept and design: Pignone, Howard, Hawley, Lewis, and Sheridan. Acquisition of data: Pignone, Howard, and Crutchfield. Analysis and interpretation of data: Pignone, Howard, Brenner, Crutchfield, Lewis, and Sheridan. Drafting of the manuscript: Pignone, Howard, Brenner, and Hawley. Critical revision of the manuscript for important intellectual content: Pignone, Howard, Crutchfield, Hawley, Lewis, and Sheridan. Statistical analysis: Howard and Brenner. Obtained funding: Pignone, Howard, and Lewis. Administrative, technical, and material support: Pignone and Crutchfield. Study supervision: Pignone.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was funded by the University of North Carolina Cancer Research Fund and by an Established Investigator Award K05 CA129166 from the National Cancer Institute (Dr Pignone and Ms Crutchfield).

References
1. American Cancer Society. Cancer Facts & Figures 2012. http://www.cancer.org/acs/groups/content/@epidemiologysurveilance/documents/document/acspc-031941.pdf. Accessed August 8, 2012.
2. Schroder FH, Hugosson J, Roobol MJ, et al. Prostate-cancer mortality at 11 years of follow-up. N Engl J Med. 2012;366(11):981-990.
3. Chou R, Croswell JM, Dana T, et al. Screening for prostate cancer: a review of the evidence for the US Preventive Services Task Force. Ann Intern Med. 2011;155(11):762-771.
4. Braddock CH III, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: time to get back to basics. JAMA. 1999;282(24):2313-2320.
5. Elwyn G, O’Connor A, Stacey D, et al; International Patient Decision Aids Standards (IPDAS) Collaboration. Developing a quality criteria framework for patient decision aids: online international Delphi consensus process. BMJ. 2006;333(7565):417.
6. Stacey D, Bennett CL, Barry MJ, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011;(10):CD001431.
7. Sheridan SL, Felix K, Pignone MP, Lewis CL. Information needs of men regarding prostate cancer screening and the effect of a brief decision aid. Patient Educ Couns. 2004;54(3):345-351.
8. O’Connor AM, Bennett C, Stacey D, et al. Do patient decision aids meet effectiveness criteria of the International Patient Decision Aid Standards Collaboration? a systematic review and meta-analysis. Med Decis Making. 2007;27(5):554-574.
9. Elwyn G, O’Connor AM, Bennett C, et al. Assessing the quality of decision support technologies using the International Patient Decision Aid Standards instrument (IPDASi). PLoS One. 2009;4(3):e4705.
10. Wilson TD, Schooler JW. Thinking too much: introspection can reduce the quality of preferences and decisions. J Pers Soc Psychol. 1991;60(2):181-192.
11. Fagerlin A, Pignone M, Abhyankar P, et al. Clarifying and expressing values. In: Volk R, Llewellyn-Thomas H, eds. 2012 Update of the International Patient Decision Aids Standards (IPDAS) Collaboration's Background Document. International Patient Decision Aid Standards (IPDAS) Collaboration website. 2012. http://ipdas.ohri.ca/IPDAS-Chapter-D.pdf. Accessed December 11, 2012.
12. Frosch DL, Bhatnagar V, Tally S, Hamori CJ, Kaplan RM. Internet patient decision support: a randomized controlled trial comparing alternative approaches for men considering prostate cancer screening. Arch Intern Med. 2008;168(4):363-369.
13. Pignone MP, Brenner AT, Hawley S, et al. Conjoint analysis versus rating and ranking for values elicitation and clarification in colorectal cancer screening. J Gen Intern Med. 2012;27(1):45-50.
14. Howard K, Barratt A, Mann GJ, Patel MI. A model of prostate-specific antigen screening outcomes for low- to high-risk men: information to support informed choices. Arch Intern Med. 2009;169(17):1603-1610.
15. Howard K, Salkeld G. Does attribute framing in discrete choice experiments influence willingness to pay? results from a discrete choice experiment in screening for colorectal cancer. Value Health. 2009;12(2):354-363.
16. Howard K, Salkeld GP, Mann GJ, Patel MI, Cunich M, Pignone MP. The COMPASS Study: Community Preferences for Prostate Cancer Screening: protocol for a quantitative preference study. BMJ Open. 2012;2(1):e000587.
17. Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making: a user's guide. Pharmacoeconomics. 2008;26(8):661-677.
18. Bridges JF, Kinter ET, Kidane L, Heinzen RR, McCormick C. Things are looking up since we started listening to patients: trends in the application of conjoint analysis in health 1982-2007. Patient. 2008;1(4):273-282.
19. Marshall D, Bridges JF, Hauber B, et al. Conjoint analysis applications in health—how are studies being designed and reported? an update on current practice in the published literature between 2005 and 2008. Patient. 2010;3(4):249-256.
20. Bridges JF, Hauber AB, Marshall D, et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health. 2011;14(4):403-413.
21. Huber J, Zwerina K. The importance of utility balance in efficient choice designs. J Mark Res. 1996;33(3):307-317.
22. Sándor Z, Wedel M. Profile construction in experimental choice designs for mixed logit models. Marketing Sci. 2002;21(4):455-475.
23. Hensher DA, Rose JM, Greene WH. Applied Choice Analysis: A Primer. Cambridge, NY: Cambridge University Press; 2005.
24. Hensher DA, Greene WH. The mixed logit model: the state of practice. Transportation. 2003;30(2):133-176.
25. Ryan M, Watson V, Entwistle V. Rationalising the “irrational”: a think aloud study of discrete choice experiment responses. Health Econ. 2009;18(3):321-336.
26. de Bekker-Grob EW, Hol L, Donkers B, et al. Labeled versus unlabeled discrete choice experiments in health economics: an application to colorectal cancer screening. Value Health. 2010;13(2):315-323.
27. Moyer VA; US Preventive Services Task Force. Screening for prostate cancer: US Preventive Services Task Force recommendation statement. Ann Intern Med. 2012;157(2):120-134.