Figure 1. Urologist-Level Variation in Observation

Caterpillar plots of urologist-level variation in the use of observation for localized prostate cancer are shown across all 3 risk strata (low, intermediate, and high). Physicians are ranked by their predicted probability of observation. The black CIs identify urologists whose 95% CIs exclude the mean (orange line). The blue line indicates the point estimates for predicted probabilities of observation for individual physicians; gray area, the 95% CIs for these point estimates.

Figure 2. Test of the Urologist-Level Correlation in the Use of Observation for Low- and High-Risk Disease

Scatterplot shows individual urologist differences from the mean estimated probability of observation for low- and high-risk prostate cancer. The black line demonstrates the urologist-level correlation between the estimated probabilities of observation for low- and high-risk prostate cancer. The blue line represents the sensitivity analysis of prostate cancer experts (urologists who treated ≥10 low-risk and ≥10 high-risk patients during the study period). For the color spectrum, green indicates ideal; red, less ideal; and yellow and orange, intermediate.

Table 1. Baseline Characteristics of the Analytic Cohort

Table 2. Estimators of Observation in the Mixed-Effects Model
Original Investigation
January 2017

Urologist-Level Correlation in the Use of Observation for Low- and High-Risk Prostate Cancer

Author Affiliations
  • 1Department of Urologic Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
  • 2Department of Health Policy, Vanderbilt University Medical Center, Nashville, Tennessee
  • 3Geriatric Research and Educational Center, Veterans Affairs Tennessee Valley Health Care System, Nashville
 


JAMA Surg. 2017;152(1):27-34. doi:10.1001/jamasurg.2016.2907
Key Points

Question  Is the probability of observation for low- and high-risk prostate cancer correlated at the urologist level?

Findings  In this population-based study using Surveillance, Epidemiology, and End Results (SEER)–Medicare data from 57 669 patients, substantial urologist-level variation was found in the use of observation for men with low-risk disease. The patterns of observation for low- and high-risk disease were correlated at the urologist level.

Meaning  This analysis exposes the highly variable use of observation for localized prostate cancer among US urologists and provides a framework for the inclusion of physician-level rates of observation as a performance measure for the recently legislated merit-based reimbursement formulas.

Abstract

Importance  The reporting of individual urologist rates of observation for localized prostate cancer may be a valuable performance measure with important downstream implications for patient and payer stakeholder groups. However, few studies have examined the urologist-level variation in the use of observation across all risk strata of prostate cancer.

Objectives  To measure variation in the use of observation at the urologist level by disease risk strata and to evaluate the association between the urologist-level rates of observation for men with low-risk and high-risk prostate cancer.

Design, Setting, and Participants  With the use of linked Surveillance, Epidemiology, and End Results (SEER)–Medicare data, a population-based study of men diagnosed with prostate cancer from January 1, 2004, to December 31, 2009, was performed in SEER catchment areas of the United States. A total of 57 669 men with prostate cancer, diagnosed by 1884 urologists, were identified. Data were analyzed from October 1 to December 31, 2015.

Main Outcomes and Measures  The main outcome was observation, which is defined as the absence of definitive treatment within 1 year of diagnosis. In each risk stratum, a multivariable mixed-effects model was fit to characterize associations between observation and selected patient characteristics. From these models, the estimated probability of observation was calculated for each urologist within each risk stratum, and the association between the physician-level estimated rates of observation for low-risk and high-risk disease was assessed.

Results  Among the 57 669 men included in the study, the estimated probability of observation for low-risk disease varied widely at the individual urologist level (mean, 27.8%; range, 5.1%-71.2%). Considerably less urologist-level variation was seen in the use of observation for intermediate-risk disease (mean, 11.1%; range, 4.8%-31.5%) and high-risk disease (mean, 5.8%; range, 3.2%-16.5%). Furthermore, the estimated rates of observation for low- and high-risk disease were correlated at the urologist level (Spearman ρ = 0.17; P < .001). A comparable correlation was observed among urologists with high-volume prostate cancer practices (Spearman ρ = 0.24; P < .001).

Conclusions and Relevance  Considerable urologist-level variation is seen in the use of observation for men with low-risk prostate cancer. More important, the use of observation for low-risk and high-risk patients with prostate cancer is correlated at the urologist level. This study reveals the strikingly variable use of observation among US urologists and establishes a framework for the use of urologist-level treatment signatures as a quality measure in the emerging value-based health care environment.

Introduction

Variation in clinical practice has long been recognized at the regional level for various disease processes.1-4 Patient and disease characteristics often provide little explanation for these observations, especially for clinical conditions such as prostate cancer, in which treatments may be discretionary and involve significant trade-offs.5-13 Observation for clinically localized prostate cancer is one such discretionary management strategy with well-documented regional variation in practice.14,15 Since the publication of data from 2 large randomized trials,16,17 observation for low-risk prostate cancer has become widely accepted as a reasonable management strategy among appropriately selected men in whom death or morbidity due to untreated prostate cancer is unlikely.18-20 Nonetheless, persistent unexplained variation remains in the use of observation for localized prostate cancer.21 In light of the clinical equipoise between treatment and observation among men with low-risk disease, these wide variations in prostate cancer care prompt concern regarding the underlying rationale for clinical decision making and underscore the need to understand the patient-, physician-, and system-level factors contributing to these variations.

Recognizing that the degree of variation in prostate cancer management cannot be entirely explained by patient factors, the Centers for Medicare & Medicaid Services (CMS) have begun to develop payment strategies to escape the volume incentives of the current fee-for-service environment. One large-scale payment reform is the Merit-Based Incentive Payment System (MIPS).22 MIPS combines existing incentive programs and creates a composite performance score that will inform a physician's reimbursement rate based on his or her performance in 4 categories (quality, resource use, meaningful use, and clinical practice improvement activities). Before the implementation of MIPS, CMS is inviting public feedback on physician-level quality measures and performance thresholds in an effort to ensure that the selection of quality and resource-use measures will be meaningful to multiple stakeholder groups. In this regard, developing a benchmark for observation of low-risk prostate cancer may be an attractive quality and resource-use measure for CMS. However, a benchmark aimed at limiting inappropriate resource use in low-risk populations may have the unintended consequence of lowering appropriate resource use for high-risk populations.23

Ascertaining an individual urologist's treatment patterns by risk is instrumental for the development of urologist-level performance measures relevant for multiple stakeholder groups. To this end, we explored the extent of variation in the use of observation at the urologist level across all risk strata of clinically localized prostate cancer. Furthermore, to test the hypothesis that the use of observation is linked at the urologist level for low- and high-risk disease, we plotted the urologist-specific estimated rates of observation for low- and high-risk disease. We hypothesized that variation in the use of observation across all risk strata would be considerable and that the estimated rates of observation for low- and high-risk disease would be correlated at the individual urologist level.

Methods
Data

We used data from the Surveillance, Epidemiology, and End Results (SEER) program linked to Medicare claims. The SEER registry, organized by the National Cancer Institute, is a large population-based data source that collects information about cancer site, stage, histologic characteristics, and grade for participants in 18 geographic regions encompassing approximately 26% of the US population.24,25 This study was deemed exempt from the need for approval by the institutional review board of Vanderbilt University, Nashville, Tennessee, which waived the need for informed consent.

Study Cohort and Variables

Our study cohort included men 66 years or older diagnosed with prostate cancer from January 1, 2004, through December 31, 2009 (eMethods and eFigure in the Supplement). Our outcome of interest was observation among men with newly diagnosed prostate cancer, defined as the absence of definitive treatment within 1 year of the diagnosis. Treatment was identified in Medicare claims using codes from the International Classification of Diseases, Ninth Revision (ICD-9), and Current Procedural Terminology, Fourth Edition, for radical prostatectomy, radiotherapy, thermal ablative therapy, or androgen deprivation therapy within 1 year of the diagnosis (eTable in the Supplement). Androgen deprivation therapy included luteinizing hormone–releasing hormone agonists and antagonists, antiandrogens, or surgical castration. Observation was assigned for patients who did not undergo any of the listed treatments within 1 year of diagnosis.
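As a concrete illustration of this claims-based definition, the sketch below (written in R, which the authors report using, but not their actual code) classifies a patient as receiving observation when no definitive-treatment claim appears within 1 year of diagnosis. The column names and CPT codes are illustrative assumptions; the study's full treatment code list is in the eTable in the Supplement.

```r
# A minimal sketch, not the authors' code: assigning observation status from a
# toy claims table. Column names and CPT codes are illustrative only.
library(dplyr)

claims <- data.frame(
  patient_id     = c(1, 1, 2),
  diagnosis_date = as.Date(c("2005-03-01", "2005-03-01", "2006-07-15")),
  claim_date     = as.Date(c("2005-06-10", "2005-09-01", "2007-09-20")),
  code           = c("55866", "99213", "55840")
)

treatment_codes <- c("55840", "55866")  # illustrative radical prostatectomy codes

observation_status <- claims %>%
  group_by(patient_id) %>%
  summarise(
    treated_within_1y = any(code %in% treatment_codes &
                              claim_date <= diagnosis_date + 365),
    .groups = "drop"
  ) %>%
  mutate(observation = !treated_within_1y)
# Patient 1 has a treatment claim within 1 year (treated); patient 2's only
# treatment claim falls outside the 1-year window, so he is coded as observation.
```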

Patients were assigned a disease risk category according to the D’Amico classification system26 using clinically relevant prostate-specific antigen (PSA) level ranges as recommended by SEER (eMethods in the Supplement).27 We then assigned patients to a specific diagnosing urologist using Medicare physician specialty codes as previously described.28 Briefly, we considered the diagnosing urologist to be the first chronological urologist who submitted a claim for a prostate biopsy within 6 months of the prostate cancer diagnosis.
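The urologist-assignment rule can be sketched the same way. The example below uses a hypothetical claims table and treats CPT 55700 only implicitly (any biopsy claim by a urologist qualifies); it is a schematic reading of the rule, not the authors' code.

```r
# A minimal sketch of the diagnosing-urologist assignment (hypothetical data).
library(dplyr)

biopsy_claims <- data.frame(
  patient_id     = c(1, 1, 2),
  provider_id    = c("uro_A", "uro_B", "uro_C"),
  specialty      = "urology",
  claim_date     = as.Date(c("2005-02-20", "2005-02-28", "2006-07-01")),
  diagnosis_date = as.Date(c("2005-03-01", "2005-03-01", "2006-07-15"))
)

diagnosing_urologist <- biopsy_claims %>%
  filter(specialty == "urology",
         abs(as.numeric(claim_date - diagnosis_date)) <= 182) %>%  # ~6 months
  arrange(patient_id, claim_date) %>%
  group_by(patient_id) %>%
  slice(1) %>%                       # first chronological qualifying claim
  ungroup() %>%
  select(patient_id, urologist_id = provider_id)
# Patient 1 is assigned to uro_A (earlier biopsy claim); patient 2 to uro_C.
```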

Additional patient-level variables included age at diagnosis, race/ethnicity, marital status, diagnosis year, Gleason score (range, 2-10, with higher scores indicating poorly differentiated tumor),29 pretreatment PSA level, SEER geographic regions (North Central, Northeast, South, and West), and median income within the zip code of residence. The ICD-9 codes were used to identify existing patient comorbidities in the 1-year period before the diagnosis of prostate cancer.

Statistical Analysis

We analyzed data from October 1 to December 31, 2015. We fit multivariable mixed-effects models to identify the association between observation and select patient characteristics, including a urologist-level random intercept to account for the clustering of patients within urologists and to allow calculation of urologist-level estimated probabilities of observation. Separate models were fit for low-, intermediate-, and high-risk patients (3 models). Each model adjusted for patient age (66-69, 70-79, and ≥80 years), Charlson comorbidity index (0, 1-2, and ≥3 [with higher scores indicating more comorbidities]),30 tertiles of median income within the zip code of residence, marital status (married or unmarried), race (nonwhite or white), geographic region, and year of diagnosis (2004-2009). Missing data, including risk and zip code–level median household income, were singly imputed using predictive mean matching and the following variables: stage, PSA level, Gleason score, primary treatment, age, Charlson comorbidity index, race, marital status, region, year, and census tract variables, including the median household income, population density, and percentages of the population who did not speak English well, with a high school education, with 4 years of college education, and living below the poverty level. After imputation, urologists with at least 1 low-risk, 1 intermediate-risk, and 1 high-risk patient were included in the models.
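A schematic version of the imputation and modeling steps is sketched below in R. This is not the authors' code: the data frame `cohort_raw` and all variable names are assumptions, the imputation is shown with the mice package (whose default method for numeric variables is the predictive mean matching the text describes), and the auxiliary census-tract variables are omitted for brevity.

```r
# A minimal sketch of single imputation followed by a logistic mixed-effects
# model with a urologist-level random intercept; variable names are assumed.
library(mice)
library(lme4)

# Single imputation (m = 1); the study used predictive mean matching, which is
# mice's default method for numeric variables.
imp    <- mice(cohort_raw, m = 1, seed = 1)
cohort <- complete(imp)

# Stratum-specific model (low-risk stratum shown); the random intercept
# (1 | urologist_id) captures urologist-level variation in observation.
fit_low <- glmer(
  observation ~ age_group + charlson + income_tertile + married +
    nonwhite + region + dx_year + (1 | urologist_id),
  data   = subset(cohort, risk == "low"),
  family = binomial
)
summary(fit_low)
```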

We calculated estimated probabilities of observation and 95% CIs for each urologist for each level of risk using the risk-specific multivariable mixed-effects models, holding covariates at their frequency in the overall population. Urologists were ranked based on their estimated probability of observation, and each point estimate was plotted relative to the mean estimated probability in each risk group. To evaluate the association between the estimated probability of observation for the low- and high-risk strata, we plotted low- and high-risk estimates, centered at their means, for each urologist. We then evaluated Spearman rank correlations and tested whether correlations differed from zero. To test whether the urologist-level use of observation was also correlated for urologists who were experts in treating prostate cancer, we performed a sensitivity analysis for a subset of urologists who treated at least 10 low-risk and 10 high-risk patients with prostate cancer during the study period using previously published thresholds.28 All analyses were conducted using SAS software (version 9.4; SAS Institute Inc) and R software (version 3.2.1; http://www.R-project.org). Two-sided P < .05 was considered statistically significant.
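The remaining steps, urologist-level predicted probabilities and their low-/high-risk correlation, can be sketched as follows. This sketch is schematic rather than the authors' code: it evaluates each urologist at the model intercept instead of at the population covariate frequencies the study used, and it assumes `fit_low` and `fit_high` are the stratum-specific models from the previous step.

```r
# A minimal sketch of per-urologist predicted probabilities and the Spearman
# correlation between low- and high-risk estimates; not the authors' code.
library(lme4)

pred_by_urologist <- function(fit) {
  re <- ranef(fit)$urologist_id                 # urologist random intercepts
  data.frame(
    urologist_id = rownames(re),
    # Schematic covariate profile: fixed intercept only. The study instead held
    # covariates at their frequencies in the overall population.
    p_obs = plogis(fixef(fit)["(Intercept)"] + re[, "(Intercept)"])
  )
}

low  <- pred_by_urologist(fit_low)
high <- pred_by_urologist(fit_high)

# Ranking for the caterpillar plot (Figure 1) and the low-/high-risk correlation.
low_ranked <- low[order(low$p_obs), ]
both <- merge(low, high, by = "urologist_id", suffixes = c("_low", "_high"))
cor.test(both$p_obs_low, both$p_obs_high, method = "spearman")
```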

Results
Study Population

Of the 170 869 patients diagnosed with prostate cancer from January 1, 2004, to December 31, 2009, 57 669 men met the inclusion criteria, representing 1884 diagnosing urologists who had at least 1 low-, 1 intermediate-, and 1 high-risk patient during the study period (eFigure in the Supplement). Of these patients, 20 526 (35.6%) harbored low-risk disease, whereas 22 320 (38.7%) and 14 823 (25.7%) had intermediate- and high-risk disease, respectively. Table 1 summarizes the clinical, demographic, and socioeconomic features of the cohort stratified by risk.

Patient-Level Factors Associated With Observation

In multivariable mixed-effects models, older unmarried men were more likely to receive observation across all risk strata (Table 2). For low- and intermediate-risk disease, increasing comorbidity was associated with greater use of observation (ORs, 1.62 [95% CI, 1.46-1.81] and 1.48 [95% CI, 1.30-1.68], respectively; P < .001 for both), whereas residence in the Northeastern region of the United States was associated with lower use (ORs, 0.71 [95% CI, 0.58-0.86] and 0.77 [95% CI, 0.63-0.93], respectively; P < .001 for both). Whereas nonwhite patients with low-risk disease were less likely to undergo observation (OR, 0.78; 95% CI, 0.70-0.87; P < .001), nonwhite patients with high-risk disease were more likely to receive observation (OR, 1.29; 95% CI, 1.10-1.52; P < .001). Patients with low-risk disease in the top income tertile were more likely to receive observation (OR, 1.12; 95% CI, 1.01-1.25; P = .05), whereas, among patients with high-risk disease, the middle income tertile was associated with a lower likelihood of observation (OR, 0.78; 95% CI, 0.65-0.93; P = .03). The likelihood of undergoing observation increased throughout the study period for all risk groups (Table 2).

Urologist-Level Variation in the Use of Observation

From risk stratum–specific mixed-effects models, we calculated estimated probabilities of observation and 95% CIs for each diagnosing urologist. The mean estimated probability of observation among patients with low-risk disease was 27.8% (range, 5.1%-71.2%) (Figure 1). We identified individual urologists with high or low estimated rates of observation as those with 95% CIs that excluded the mean estimated probability in each risk stratum (darkened 95% CIs in Figure 1). With the use of this framework, 63 urologists (3.3%) were found to have estimated probabilities of observation in excess of those expected for patients with low-risk prostate cancer, and 44 (2.3%) had estimated probabilities of observation lower than those expected for patients with low-risk prostate cancer. Among patients with intermediate- and high-risk disease, we observed considerably less variation in the estimated rates of observation among individual urologists (Figure 1). The mean estimated probability of observation for intermediate-risk disease was 11.1% (range, 4.8%-31.5%) and for high-risk disease, 5.8% (range, 3.2%-16.5%).
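The flagging rule described above can be expressed compactly. The sketch below uses invented estimates for three hypothetical urologists; only the 27.8% stratum mean comes from the study.

```r
# A minimal sketch of the outlier-flagging rule for one risk stratum; the
# urologist estimates here are invented for illustration.
est <- data.frame(
  urologist_id = c("A", "B", "C"),
  p_obs        = c(0.10, 0.28, 0.55),
  ci_low       = c(0.06, 0.20, 0.45),
  ci_high      = c(0.15, 0.37, 0.65)
)
mean_p <- 0.278  # mean estimated probability of observation, low-risk stratum

est$flag <- ifelse(est$ci_low > mean_p, "higher than expected",
            ifelse(est$ci_high < mean_p, "lower than expected",
                   "consistent with the mean"))
est
# Urologist A's CI lies entirely below the mean (lower than expected), C's lies
# entirely above it (higher than expected), and B's CI includes the mean.
```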

Testing Urologist-Level Correlation in the Use of Observation for Low- and High-Risk Disease

To test whether the use of observation for low- and high-risk disease is linked at the urologist level, we plotted the estimated probabilities for each physician who treated at least 1 low- and 1 high-risk patient during the study period (Figure 2). We identified a positive correlation between the estimated rates of observation for low- and high-risk prostate cancer at the individual urologist level (Spearman ρ = 0.17; P < .001). In a sensitivity analysis among 480 urologists who diagnosed at least 10 low-risk and 10 high-risk patients during the study period, we observed an even stronger correlation among urologist-level estimated rates of observation for low- and high-risk disease (Spearman ρ = 0.24; P < .001). In a second sensitivity analysis whereby only the Gleason score and clinical stage were used in the classification of risk, we found no qualitative differences compared with the main analysis.

Discussion

In this study, we characterize urologist-level variation in the use of observation to manage clinically localized prostate cancer. We found considerable variation in the use of observation at the urologist level for men with low-risk disease and, more important, that the patterns of observation for low- and high-risk disease are correlated at the urologist level. This analysis raises important questions surrounding the key drivers behind decision making for prostate cancer treatment. Furthermore, this study identifies a strong relationship between appropriate and inappropriate use of observation in the management of prostate cancer at the urologist level. We believe the public availability of these urologist-specific data on the use of observation for localized prostate cancer may lay the foundation for a valuable performance measure with clear relevance to patient and payer stakeholder groups. Such public reporting may ultimately facilitate informed decisions by the health care consumer while providing incentives for physicians to reflect more effectively on their own performance and how it relates to their peers at the local, regional, and national levels. Together, these tools may prove useful for private- and public-sector strategies to improve the quality and efficiency of prostate cancer care in the United States.

Our results may also be of interest to CMS before the implementation of value-based payment reforms, such as MIPS, that measure and report physician-level performance using existing claims-based data. MIPS is a novel payment strategy that marks a fundamental shift away from centering physician reimbursement on macroeconomic indicators (eg, the overall increase in Medicare expenditures relative to the sustainable growth rate) toward mapping reimbursement to physician-level measures of cost and quality.22 Beginning in 2019, MIPS will provide adjusted payments to physicians based on individual performance in 4 categories: quality of care, meaningful use of electronic medical record systems, resource use, and clinical practice improvement activities. The selection of thoughtful and meaningful physician-level measures of quality and resource use is required if this policy intervention is to improve the value of health care delivered to Americans. In this study, we characterize the degree of unexplained variation in the use of observation for patients with newly diagnosed prostate cancer. Physician-level treatment signatures may ultimately be leveraged as a meaningful performance measure, and increasing reimbursement to physicians more likely to observe men with low-risk disease may more effectively align financial incentives to engage in observation as opposed to immediate treatment. Furthermore, in addition to reducing prostate cancer treatment-related morbidity at the individual patient level, such a measure may translate into considerable cost savings to the Medicare program through avoidance of costly prostate cancer treatment.

In practice, however, numerous challenges are associated with prostate cancer treatment signatures to measure physician performance. First, no clear consensus surrounds the optimal rate of observation for men with low-risk prostate cancer. Indeed, some may argue that most of these men should be observed on account of the exceedingly low risk for prostate cancer progression and mortality; however, we must continue to balance the levers of guideline recommendations, patient centeredness, and patient preference. Ample data suggest that individual patient factors and preferences dominate decision making and that variations in rates of observation may reflect variations in patient preferences and characteristics.31-33 Nonetheless, the magnitude of urologist-level variation (5.1%-71.2%) observed in the present study is unlikely to be explained by patient factors alone. Certainly, explanations for the excessively high or low estimated rates of observation at the physician level may exist; however, these findings may serve as a platform to identify the most significant deviations from central tendency to facilitate physician-level quality improvement.

Last, this study demonstrates that the use of observation for low-risk and high-risk disease is linked at the urologist level, which represents an important opportunity for improving the overall quality of care for patients with localized prostate cancer. That observation for low- and high-risk disease is correlated among urologists suggests that efforts to limit inappropriate resource use in low-risk populations may have the inadvertent consequence of lowering appropriate resource use for high-risk populations.23 Therefore, we must do more than simply mandate a standard resource-use pattern of observation for localized prostate cancer because such changes may impede access to care for high-risk patients who stand to benefit most from these services. This problem, however, may be circumvented through the use of a bidirectional measure of quality that ensures that men with intermediate- and high-risk disease are not exposed to undue risk secondary to excess rates of nontreatment.

Although we believe our findings are uniquely informative, they must be understood within the context of several limitations to the study. First, because this study is limited to the SEER registry population enrolled in Medicare, these observations may not be applicable to men younger than 66 years. Second, this analysis was restricted to the Medicare fee-for-service population for whom claims are available; therefore, we cannot speculate whether these outcomes would be similar among managed care populations. Third, SEER-Medicare does not explicitly classify patients who are undergoing observation. Accordingly, we inferred this treatment group using the definition of no treatment within 1 year of diagnosis. Fourth, although we identified a number of important patient-level factors that are associated with observation, additional unmeasured patient- and physician-level factors may lead to biased estimates. Fifth, the classification of risk was, in part, based on PSA values that may be inaccurate in as many as 17% of SEER registrants owing to coding issues from an implied decimal.27 However, we used clinically relevant PSA ranges, as recommended by SEER, which results in an error rate of less than 5%. Furthermore, a sensitivity analysis excluding PSA from the risk criteria results in no major qualitative changes compared with the main analysis. Last, the data in this analysis are from 2004 to 2009, which represents the early adoption period for active surveillance. Although the focus of the study is observation, which includes watchful waiting, the extent to which these results would differ in a more contemporary era or if a more stringent definition of active surveillance had been used is uncertain.34

These limitations notwithstanding, we believe these data have noteworthy implications for health care policy in the United States. Reducing variation in preference-sensitive care must begin with informing demand for relevant discretionary treatments, namely by providing consumers with the information needed to make informed choices about their care. By identifying and publicly reporting physician-specific treatment signatures for prostate cancer, patients will be empowered to set the appropriate demand for discretionary services by seeking care from physicians with high-quality, balanced practice patterns. Furthermore, because low income and nonwhite race were risk factors for inappropriate care in this study, incentivizing appropriate care may not only elevate the quality of care for all patients with prostate cancer but also help to alleviate socioeconomic disparity. Although defining the appropriate physician-level rate of observation may not be possible, public reporting of these rates will be an important initial step in providing a rational strategy for promoting effective disease management at the physician level.

Conclusions

We found striking variation in the use of observation for men with low-risk prostate cancer and, more important, that the patterns of observation for low- and high-risk patients are correlated at the individual urologist level. This study reveals the markedly variable use of observation for prostate cancer among US urologists and establishes a conceptual framework for the use of urologist-level rates of observation as a quality measure for the recently legislated merit-based reimbursement formulas.

Article Information

Corresponding Author: Mark D. Tyson, MD, Department of Urologic Surgery, Vanderbilt University Medical Center, Room A1302, Medical Center N, Nashville, TN 37203 (mark.tyson@vanderbilt.edu).

Accepted for Publication: June 1, 2016.

Published Online: September 21, 2016. doi:10.1001/jamasurg.2016.2907

Author Contributions: Dr Tyson had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Tyson, O’Neil, Barocas, Resnick.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Tyson, Graves, Resnick.

Critical revision of the manuscript for important intellectual content: Graves, O’Neil, Barocas, Chang, Penson, Resnick.

Statistical analysis: Graves, O’Neil, Resnick.

Obtaining funding: Tyson.

Administrative, technical, or material support: Graves, O’Neil, Chang, Penson, Resnick.

Study supervision: O’Neil, Barocas, Chang, Penson, Resnick.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was supported in part by grant 5T32CA106183 from the National Cancer Institute, National Institutes of Health (Dr Tyson), and by Mentored Research Scholar Grant in Applied and Clinical Research, MSRG-15-103-01-CHPHS from the American Cancer Society (Dr Resnick).

Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

References
1. Wennberg J, Gittelsohn A. Small area variations in health care delivery. Science. 1973;182(4117):1102-1108.
2. Birkmeyer JD, Sharp SM, Finlayson SR, Fisher ES, Wennberg JE. Variation profiles of common surgical procedures. Surgery. 1998;124(5):917-923.
3. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending, part 2: health outcomes and satisfaction with care. Ann Intern Med. 2003;138(4):288-298.
4. Song Y, Skinner J, Bynum J, Sutherland J, Wennberg JE, Fisher ES. Regional variations in diagnostic practices. N Engl J Med. 2010;363(1):45-53.
5. Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002;325(7370):961-964.
6. Lu-Yao GL, McLerran D, Wasson J, Wennberg JE; Prostate Patient Outcomes Research Team. An assessment of radical prostatectomy: time trends, geographic variation, and outcomes. JAMA. 1993;269(20):2633-2636.
7. Wilt TJ, Cowper DC, Gammack JK, Going DR, Nugent S, Borowsky SJ. An evaluation of radical prostatectomy at Veterans Affairs Medical Centers: time trends and geographic variation in utilization and outcomes. Med Care. 1999;37(10):1046-1056.
8. Lai S, Lai H, Krongrad A, Lamm S, Schwade J, Roos BA. Radical prostatectomy: geographic and demographic variation. Urology. 2000;56(1):108-115.
9. Krupski TL, Kwan L, Afifi AA, Litwin MS. Geographic and socioeconomic variation in the treatment of prostate cancer. J Clin Oncol. 2005;23(31):7881-7888.
10. Hamilton AS, Wu X-C, Lipscomb J, et al. Regional, provider, and economic factors associated with the choice of active surveillance in the treatment of men with localized prostate cancer. J Natl Cancer Inst Monogr. 2012;2012(45):213-220.
11. Harlan L, Brawley O, Pommerenke F, Wali P, Kramer B. Geographic, age, and racial variation in the treatment of local/regional carcinoma of the prostate. J Clin Oncol. 1995;13(1):93-100.
12. Lai S, Lai H, Lamm S, Obek C, Krongrad A, Roos B. Radiation therapy in non–surgically-treated nonmetastatic prostate cancer: geographic and demographic variation. Urology. 2001;57(3):510-517.
13. Spencer BA, Fung CH, Wang M, Rubenstein LV, Litwin MS. Geographic variation across Veterans Affairs medical centers in the treatment of early stage prostate cancer. J Urol. 2004;172(6, pt 1):2362-2365.
14. Shahinian VB, Kuo YF, Freeman JL, Goodwin JS. Determinants of androgen deprivation therapy use for prostate cancer: role of the urologist. J Natl Cancer Inst. 2006;98(12):839-845.
15. Cooperberg MR, Broering JM, Carroll PR. Time trends and local variation in primary treatment of localized prostate cancer. J Clin Oncol. 2010;28(7):1117-1123.
16. Wilt TJ, Brawer MK, Jones KM, et al; Prostate Cancer Intervention versus Observation Trial (PIVOT) Study Group. Radical prostatectomy versus observation for localized prostate cancer. N Engl J Med. 2012;367(3):203-213.
17. Bill-Axelson A, Holmberg L, Ruutu M, et al; SPCG-4 Investigators. Radical prostatectomy versus watchful waiting in early prostate cancer. N Engl J Med. 2011;364(18):1708-1717.
18. Ritch CR, Graves AJ, Keegan KA, et al. Increasing use of observation among men at low risk for prostate cancer mortality. J Urol. 2015;193(3):801-806.
19. Choo R, Klotz L, Danjoux C, et al. Feasibility study: watchful waiting for localized low to intermediate grade prostate carcinoma with selective delayed intervention based on prostate specific antigen, histological and/or clinical progression. J Urol. 2002;167(4):1664-1669.
20. Lu-Yao GL, Albertsen PC, Moore DF, et al. Outcomes of localized prostate cancer following conservative management. JAMA. 2009;302(11):1202-1209.
21. Cooperberg MR, Carroll PR. Trends in management for patients with localized prostate cancer, 1990-2013. JAMA. 2015;314(1):80-82.
22. Medicare Access and CHIP Reauthorization Act of 2015, HR 2, 114th Congress, 2nd Sess (2015).
23. Makarov DV, Desai R, Yu JB, et al. Appropriate and inappropriate imaging rates for prostate cancer go hand in hand by region, as if set by thermostat. Health Aff (Millwood). 2012;31(4):730-740.
24. Howlader N, Noone A, Krapcho M, et al. SEER Cancer Statistics Review, 1975-2012. Bethesda, MD: National Cancer Institute. http://seer.cancer.gov/archive/csr/1975_2012/. Updated November 18, 2015. Accessed December 1, 2015.
25. Warren JL, Klabunde CN, Schrag D, Bach PB, Riley GF. Overview of the SEER-Medicare data: content, research applications, and generalizability to the United States elderly population. Med Care. 2002;40(8)(suppl):IV-3-IV-18.
26. D'Amico AV, Whittington R, Malkowicz SB, et al. Biochemical outcome after radical prostatectomy, external beam radiation therapy, or interstitial radiation therapy for clinically localized prostate cancer. JAMA. 1998;280(11):969-974.
27. PSA values and SEER data, 1973-2012. http://seer.cancer.gov/data/psa-values.html. Accessed December 23, 2015.
28. Hoffman KE, Niu J, Shen Y, et al. Physician variation in management of low-risk prostate cancer: a population-based cohort study. JAMA Intern Med. 2014;174(9):1450-1459.
29. Gleason DF, Mellinger GT. Prediction of prognosis for prostatic adenocarcinoma by combined histological grading and clinical staging. J Urol. 1974;111(1):58-64.
30. Charlson ME, Charlson RE, Peterson JC, Marinopoulos SS, Briggs WM, Hollenberg JP. The Charlson comorbidity index is adapted to predict costs of chronic disease in primary care patients. J Clin Epidemiol. 2008;61(12):1234-1240.
31. Holmboe ES, Concato J. Treatment decisions for localized prostate cancer: asking men what's important. J Gen Intern Med. 2000;15(10):694-701.
32. Sommers BD, Beard CJ, D'Amico AV, et al. Decision analysis using individual patient preferences to determine optimal treatment for localized prostate cancer. Cancer. 2007;110(10):2210-2217.
33. Sommers BD, Beard CJ, D'Amico AV, Kaplan I, Richie JP, Zeckhauser RJ. Predictors of patient preferences and treatment choices for localized prostate cancer. Cancer. 2008;113(8):2058-2067.
34. Filson CP, Schroeck FR, Ye Z, Wei JT, Hollenbeck BK, Miller DC. Variation in use of active surveillance among men undergoing expectant treatment for early stage prostate cancer. J Urol. 2014;192(1):75-80.