Importance
Recent governmental and private initiatives have sought to reduce health care costs by making health care prices more transparent.
Objective
To determine whether the use of an employer-sponsored private price transparency platform was associated with lower claims payments for 3 common medical services.
Design
Payments for clinical services were compared between patients who had searched a pricing website before receiving the service and patients who had not searched before receiving the service. Multivariable generalized linear model regressions with propensity score adjustment controlled for demographic, geographic, and procedure differences. To test for selection bias, payments for individuals who used the platform to search for services (searchers) were compared with payments for those who did not (nonsearchers) in the period before the platform was available. The exposure was use of the price transparency platform to search for laboratory tests, advanced imaging services, or clinician office visits before receiving care for that service.
Setting and Participants
Medical claims from 2010-2013 of 502 949 patients in the United States insured by 18 employers that provided a price transparency platform to their employees.
Main Outcomes and Measures
The primary outcome was total claims payments (the sum of employer and employee spending for each claim) for laboratory tests, advanced imaging services, and clinician office visits.
Results
Following access to the platform, 5.9% of 2 988 663 laboratory test claims, 6.9% of 76 768 advanced imaging claims, and 26.8% of 2 653 227 clinician office visit claims were associated with a prior search on the price transparency platform. Before having access to the price transparency platform, searchers had higher claims payments than nonsearchers for laboratory tests (4.11%; 95% CI, 1.87%-6.41%) and advanced imaging services (5.57%; 95% CI, 1.83%-9.44%) and no difference in payments for clinician office visits (−0.26%; 95% CI, −0.53% to 0.005%). Following access to the price transparency platform, relative claim payments were lower for searchers than for nonsearchers by 13.93% (95% CI, 10.28%-17.43%) for laboratory tests, 13.15% (95% CI, 9.49%-16.66%) for advanced imaging, and 1.02% (95% CI, 0.57%-1.47%) for clinician office visits. The absolute payment differences were $3.45 (95% CI, $1.78-$5.12) for laboratory tests, $124.74 (95% CI, $83.06-$166.42) for advanced imaging services, and $1.18 (95% CI, $0.66-$1.70) for clinician office visits.
Conclusions and Relevance
Use of price transparency information was associated with lower total claims payments for common medical services. The magnitude of the difference was largest for advanced imaging services and smallest for clinical office visits. Patient access to pricing information before obtaining clinical services may result in lower overall payments made for clinical care.
Prices of medical services for commercially insured patients vary widely,1-3 yet there is little correlation between price and the quality of care.4 From an economic perspective, health care pricing is unique because many insurance contracts preclude disclosing negotiated rates,7 resulting in patients making health care choices on factors other than cost.5,6 Recent changes in the health care insurance market have resulted in commercially insured patients bearing a greater proportion of their health care costs.8 In 2013, 48% of US residents had employer-sponsored health insurance. Of those, 20% were enrolled in a high-deductible health plan, up from 4% in 2006.9 As patients have an increasing responsibility to pay for their care, they will likely demand access to prices charged for that care.
Previous studies of high-deductible health plans found that deductibles did not result in lower payments for care.10 These studies were conducted when patients did not have access to health care pricing information and did not have the ability to select clinical services based on cost. To address this limitation, in addition to initiating cost sharing, employers seeking to reduce health care spending should provide their employees with accurate cost and quality information to help the patients make well-informed decisions about their care.11-14
Several state-administered initiatives have increased price transparency by reporting hospital charges or average reimbursement rates. In recent years, price transparency initiatives have emerged in the private sector and enhanced state efforts by providing personalized price information to patients.15 Pricing information made available to patients reflects actual out-of-pocket costs for each individual patient by accounting for billed charge discounts, health benefit design, and deductibles.
Although it is widely perceived that greater transparency of pricing information should reduce health care costs, to our knowledge, no prior studies have shown this using private price transparency platforms. We examined the association between the availability of health service prices to patients and the total claims payments (the total amount paid by patient and insurer) for these services. We hypothesized that providing personalized price information would allow patients to identify and choose less expensive providers, resulting in lower payments for medical services.
The study population consisted of employees, their spouses, and their dependents from 18 large, self-insured employers who had access to a price transparency platform, Castlight Health, for varying amounts of time between 2010 and 2013. These employers, which represented such industries as retail, biotechnology, and manufacturing, offered a variety of insurance plans, including high-deductible, limited network, and preferred provider organization plans. Patients not residing in a metropolitan statistical area were excluded.
The University of California, Berkeley, Committee for Protection of Human Subjects did not require informed consent.
Eligible employees and their adult family members could access the platform online (both Internet and mobile) or by calling on the telephone. When registered individuals searched for a procedure, they were shown personalized out-of-pocket costs, which were based on the particular individual’s insurance design, network, and deductible status. For the services examined in this study, prices shown on the website were based on individual CPT (Current Procedural Terminology) codes; for more complex services (eg, joint replacement surgery), prices were shown at the episode level. For clinician office visits, patients could also see satisfaction ratings and other nonprice information (eg, where the physician went to medical school), but these dimensions were not examined in this study because such information was not consistently available for all services or for all clinicians. Example results of pricing information available to patients are shown in eAppendix Figures A1 through A3 in the Supplement.
We examined searches for laboratory tests, advanced imaging services (magnetic resonance imaging [MRI] and computed tomographic [CT] scans), and clinician office visits. These services were selected for 2 reasons: First, they are among the most frequently obtained outpatient services. Second, because these are usually elective services, patients may choose health care service facilities and locations based on price in advance of obtaining these services. We excluded all inpatient and emergency department claims because patients have limited ability to shop for providers of these services.
For each of these 3 services, we examined the relationship between searching the price transparency website and medical claims. Each of the 18 employers provided claims data for up to 2 years before they provided the platform (preperiod), and then for all subsequent periods. The data from the preperiod were used to examine potential baseline differences between searchers and nonsearchers.
To link searches to claims, we first identified searches for laboratory tests, advanced imaging services, or clinician office visits conducted before obtaining that service. We defined laboratory test searches as searches for a laboratory procedure (eg, lipid panel or obstetric panel) or as containing the word string lab. We defined imaging searches as searches containing MRI, CT, magnetic resonance imaging, or computed tomography. We defined clinician office visit searches as searches for any type of clinician office visit (eg, primary care clinician or endocrinologist). Because family members may use a common account or search for another family member, we attributed any search to all household members.
For each service category, we defined searchers as those with at least 1 service-specific claim following a search. Those who did not search for a given service before receiving a claim for that service were defined as nonsearchers. Our primary treatment group consisted of patients with a service-specific claim made within 14 days after a search. We chose the 14-day period to approximate the time between searching for a service and realistically receiving an appointment. We also examined alternative search periods as robustness tests (eAppendix Section G Table G1 in the Supplement).
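The linkage rule described above — a claim counts as "searched" when any member of the same household searched for that service category within the 14 days before the claim date — can be sketched as follows. This is an illustrative simplification, not the study's actual analysis code; the record layout and field names are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical records: (household_id, service_category, date).
# Searches are attributed to the whole household, mirroring the
# study's handling of shared family accounts.
searches = [
    ("hh1", "lab", date(2013, 3, 1)),
    ("hh2", "imaging", date(2013, 3, 10)),
]
claims = [
    ("hh1", "lab", date(2013, 3, 9)),       # 8 days after a lab search
    ("hh1", "imaging", date(2013, 3, 9)),   # no prior imaging search
    ("hh2", "imaging", date(2013, 4, 20)),  # 41 days after -> outside window
]

def classify_claim(claim, searches, window_days=14):
    """Label a claim 'searched' if the same household searched for the
    same service category within `window_days` before the claim date."""
    household, category, claim_date = claim
    for s_household, s_category, s_date in searches:
        if (s_household == household and s_category == category
                and timedelta(0) <= claim_date - s_date
                <= timedelta(days=window_days)):
            return "searched"
    return "nonsearched"

labels = [classify_claim(c, searches) for c in claims]
print(labels)  # ['searched', 'nonsearched', 'nonsearched']
```

Passing a different `window_days` (eg, 30) reproduces the alternative search windows used in the robustness tests.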
Medical claims at the procedure-code level were our primary unit of analysis. We classified the search status of each claim by the relation of the claim’s date to the patient’s search history. Because our outcome of interest was total health care spending, we used each claim’s total payment amount (ie, the sum of the patient and employer payments) as the dependent variable. We first examined unadjusted payment differences between searchers and nonsearchers. Next, we used multivariable generalized linear model regressions with a log link and gamma distribution to isolate the association of searching vs other observed differences between searchers and nonsearchers.16 The generalized linear model regressions were weighted using inverse-probability weights obtained from a propensity score model. Weights were determined using probit regression to predict the probability of being a searcher based on demographic variables: age, sex, year, and employer. The generalized linear model regression coefficients were converted into dollar values by computing the average marginal effect of the predicted values.
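The mechanics of the approach above can be sketched in simplified form: an inverse-probability weight derived from a propensity score, and the conversion of a log-link coefficient on the searcher indicator into the relative and absolute payment differences reported in the Results. This is a hedged illustration under common conventions, not the study's estimation code, which additionally used a gamma distribution, many fixed effects, and average marginal effects over all predicted values.

```python
import math

def ipw_weight(is_searcher, p_searcher):
    """One common form of inverse-probability weight from a
    propensity score p: searchers get 1/p, nonsearchers 1/(1-p).
    (The paper does not specify the exact weighting scheme.)"""
    return 1.0 / p_searcher if is_searcher else 1.0 / (1.0 - p_searcher)

def relative_difference(beta):
    """Convert a log-link regression coefficient on the searcher
    indicator into a percent difference in expected payment."""
    return (math.exp(beta) - 1.0) * 100.0

def absolute_difference(beta, mean_payment):
    """Dollar-valued effect evaluated at a reference mean payment —
    a simplification of the paper's average marginal effect."""
    return mean_payment * (math.exp(beta) - 1.0)

# For example, a coefficient of -0.15 implies payments about 13.9% lower.
print(round(relative_difference(-0.15), 1))  # -13.9
```

In practice such a model would be fit with weighted GLM software (the authors used Stata; `statsmodels.GLM` with a Gamma family and log link is a rough Python analogue).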
We controlled for demographic characteristics, time, geography, and employer-insurance carrier combinations. Specifically, year and month, metropolitan statistical area, and employer interacted with insurance carrier were included as fixed effects. To control for differences in procedure type, we included CPT codes as fixed effects in the regression models. We also included each claim’s patient cost-sharing amount. For office visits, we controlled for clinician specialty.
Although the statistical model controls for a variety of confounding factors, searchers might differ from nonsearchers in various ways so that unobservable differences between them might still bias the results. For example, those who chose to search for a service may have already known price information through other channels and may have searched simply to confirm existing knowledge. After having access to the platform, such individuals would likely have had lower payments than nonsearchers. New price information would not have been the cause of any payment differences.
To test for unobservable differences, we used the data from before the platform was available and conducted multivariable placebo regressions that examined differences in payments between those who would later become searchers and those who never searched. We reasoned that selection bias could explain our main results if these regressions showed that searchers had lower payments than nonsearchers before the transparency platform was available.
We also assessed for selection bias by performing a falsification test, using multivariable regressions to compare payments made on behalf of searchers for services unrelated to their search with payments made on behalf of nonsearchers. For example, we compared advanced imaging payments for individuals who had searched only for laboratory test prices with advanced imaging payments for nonsearchers. This test included only laboratory tests and advanced imaging services. We hypothesized that any difference in payments for unrelated services between searchers and nonsearchers would reflect selection bias.
We conducted 3 additional tests for robustness. First, we categorized claims into 2 cost-sharing categories: no cost sharing (<5% of the cost paid by the patient) and full or partial cost sharing. We then used multivariable regressions to compare payments between the searchers and nonsearchers. We hypothesized that payment differences between searchers and nonsearchers would be higher for claims with full cost sharing due to a higher financial incentive to shop for care.
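The cost-sharing split described above — a claim counts as having no cost sharing when the patient paid less than 5% of the total claim payment — reduces to a simple classification rule. The function and field names below are hypothetical, for illustration only.

```python
def cost_sharing_category(patient_paid, total_payment):
    """Classify a claim: 'no cost sharing' if the patient paid <5% of
    the total claim payment, else 'full or partial' cost sharing."""
    if total_payment <= 0:
        return "no cost sharing"
    share = patient_paid / total_payment
    return "no cost sharing" if share < 0.05 else "full or partial"

print(cost_sharing_category(0.0, 14.0))    # a fully covered lab test
print(cost_sharing_category(25.0, 112.0))  # an office visit copay (~22%)
```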
Next, we used multivariable regressions to examine differences in claims payments for 2 treatment groups: those with a relevant medical claim from 15 to 30 days after a search and those with a claim from more than 30 days after a search. For this test, we expected to find smaller differences between searchers and nonsearchers because the amount of time between the search and the claim had increased.
The heterogeneous effects of clinician office visits for new vs established patients were assessed using multivariable regression. We expected to find smaller payment differences between searchers and nonsearchers for established patient visits because patients might be less likely to change their established clinicians. Continuity of care, patient experience, and other nonprice attributes might play a larger role in how patients choose a clinician with whom they plan to establish a continuing relationship.
Additional robustness and sensitivity tests are described in the eAppendix in the Supplement. These tests include testing sensitivity of results to alternate search window definitions, restricting the analysis sample to patients who used the price transparency platform in both the before and after periods, excluding high users from the analysis, using alternate controls for plan networks, and alternate controls for clinician satisfaction ratings.
All analyses were conducted using Stata version 13.0 (StataCorp). Robust standard errors were clustered at the zip code level. All significance testing was 2-sided with a significance threshold of P < .05.
Descriptive Characteristics
A total of 502 949 individuals representing 253 757 households were included in this study. After each employer provided access to the platform, 304 247 individuals from 195 401 households received laboratory services, 37 384 individuals from 34 245 households underwent advanced imaging, and 446 290 individuals from 236 942 households visited a clinician (eAppendix Section B Figure B1 in the Supplement).
A total of 7485 households searched for a laboratory test, 2184 for an advanced imaging service, and 51 481 for a clinician office visit. After access to the platform, 5.9% of laboratory claims matched a laboratory search, 6.9% of advanced imaging claims matched an advanced imaging search, and 26.8% of clinician office visit claims matched a clinician office visit search. Table C2 in eAppendix Section C of the Supplement presents the proportion of searchers who did not have a claim for the searched service or who had a claim for that service within 14 days, from 15 through 30 days, or 30 or more days after a search.
Searchers and nonsearchers had similar age, sex, zip code median household income, and medical condition status. The 2 groups also had qualitatively similar medical spending levels and use of laboratory, advanced imaging, and clinician office visit services in the year before the program started (Table 1).
The study population covered much of the United States, including a mixture of urban and rural geographic areas from every state and 75% of metropolitan statistical areas (Figure 5 in the Supplement). Median claim payments in the period after access were $14 for laboratory tests (interquartile range [IQR], $8-$26), $728 for advanced imaging services (IQR, $473-$1164), and $112 for clinician office visits (IQR, $75-$138; eAppendix Figure D1 in the Supplement).
The Figure compares total claims payments for searchers and nonsearchers following access to the price transparency platform. Those who searched within 14 days before receiving care had lower claim payments than those who did not. Adjusted payments were 13.93% (95% CI, 10.28%-17.43%) lower for laboratory tests, 13.15% (95% CI, 9.49%-16.66%) lower for advanced imaging, and 1.02% (95% CI, 0.57%-1.47%) lower for clinician office visits. These relative differences translate into lower absolute payments of $3.45 (95% CI, $1.78-$5.12) for laboratory tests, $124.74 (95% CI, $83.06-$166.42) for advanced imaging, and $1.18 (95% CI, $0.66-$1.70) for clinician office visits (Table 2).
Claims for searchers had lower relative payments than those for nonsearchers: 0.78% (95% CI, 0.25%-1.23%) lower for established patient office visits and 2.40% (95% CI, 1.66%-3.13%) lower for new patient office visits. The absolute payment differences were $0.86 (95% CI, $0.28-$1.43) for established patient visits and $3.29 (95% CI, $2.31-$4.27) for new patient visits (eAppendix Table F1 in the Supplement).
In the period before either group had access to the price transparency platform, payments for searchers were 4.11% higher (95% CI, 1.87%-6.41%) for laboratory tests and 5.57% higher (95% CI, 1.83%-9.44%) for advanced imaging, and 0.26% lower (95% CI, −0.53% to 0.005%) for clinician office visits, than payments for nonsearchers.
Similarly, in the falsification test, in which we compared payments obtained by searchers for services unrelated to the search with payments received by nonsearchers, we found that searchers had 3.33% (95% CI, −4.62% to 11.96%) higher relative payments for unrelated services than did nonsearchers. However, this result was not statistically significant (P = .42).
For all 3 services, payments were lower for searchers than for nonsearchers, regardless of cost sharing. For laboratory tests and advanced imaging services, the difference was larger for claims that required patient cost sharing than for claims that did not. For searchers with cost-sharing laboratory claims, relative payments were 16.36% (95% CI, 12.46%-20.08%) lower than for nonsearchers; for claims without cost sharing, the difference was 14.13% (95% CI, 9.11%-18.88%). For advanced imaging, searchers had 14.97% lower relative payments (95% CI, 11.47%-18.34%) for cost-sharing claims and 13.63% lower relative payments (95% CI, 2.77%-23.27%) for claims without cost sharing than did nonsearchers. For clinician office visits, searchers’ payments were 0.76% lower (95% CI, 0.27%-1.24%) for cost-sharing claims and 2.26% lower (95% CI, 1.41%-3.10%) for claims without cost sharing than nonsearchers’ payments (Table 3).
For all 3 service periods (within 14 days, between 15 and 30 days, and >30 days), the difference in payments was largest for claims received within 14 days of a search. In addition, the payment differences for services obtained within 14 days were statistically different from services obtained more than 30 days after a search (Table 2).
Additional analyses presenting the robustness of the main results, including alternative search periods, alternative sample populations, additional insurance design controls, and clinician quality are presented in the Supplement. The results from these tests were similar to the main results.
When eligible patients searched using the price transparency platform prior to getting a service, searching was associated with lower payments for clinical services—namely, advanced imaging and laboratory tests—and for claims with cost sharing. Savings for imaging services were in the hundred-dollar range and average savings for laboratory tests were a few dollars per test. This naturally raises the question: Why did patients change behavior for seemingly modest savings per service? It may be that less expensive physicians and health care services facilities were also higher quality or were more convenient; therefore, those who searched based on nonprice attributes visited lower-cost providers. Or it may be that forward-looking patients thought that savings per laboratory test would accumulate over time. Some patients, especially those with chronic conditions, need periodic laboratory tests or other medical services. It may be that the savings were sufficient to change behavior or that unexpectedly high prices induced behavior change, even in cases for which consumers were responsible for only a fraction of the total price as out-of-pocket costs. However, we do not have data on patient motivations to know the relative importance of each of the above or other explanations for our findings.
We also demonstrated that payments for claims, even without cost sharing, were lower for those who searched than for those who did not. This result may be due in part to inertia: clinician choices made while employees must pay a deductible might persist even after they have reached the deductible and have little or no cost sharing. In addition, if less expensive clinicians have higher-quality service or are more convenient, then those who search based on nonprice attributes may also happen to seek care from lower-cost clinicians. We found a smaller reduction in payments as the time between searching for a service and receiving that service increased, suggesting that price information is most effective when obtained near the date of medical service.
This study has several limitations. First, given that searching was not randomly assigned, unobserved factors distinguishing people who searched before obtaining a service from people who did not may explain some of the difference in payments. However, in the preperiod, those who would later become searchers had higher laboratory test and advanced imaging payments than those who would not, suggesting that searchers were not necessarily more frugal before they had access to the price transparency platform. Similarly, confidence in our conclusion that searching on the transparency platform was associated with pursuing lower-cost services is reinforced by the falsification test showing that costs for services unrelated to searches did not differ between groups. Nevertheless, nonsearchers may differ from searchers in unobservable ways, such as experience navigating the health care system. In addition, potential bias from contemporaneous events related to both reduced payments and increased use of the price transparency platform cannot be ruled out. Such biases would cause overestimation of the payment reductions found in this study.
Second, whether the results found in this study would generalize to those who chose not to search is unclear. Evaluating why patients did not use the price transparency platform and what types of interventions might increase use is important. It is also important to investigate why patients who searched did not go on to receive the services for which they searched.
Third, the study was not designed to determine whether patients are making better decisions; rather, we only examined whether patients who actively search are choosing lower-cost clinicians. The study did not examine quality of care, convenience, or other nonprice attributes. For example, factors such as physician characteristics, including patient satisfaction, medical education, years of experience, and board certification may influence decision making but were not examined in this study. (Appendix K in the Supplement discusses patient satisfaction with their physicians.)
It is possible that these tools might also affect use of care. For example, knowing that some prices are very high, some patients may forego care. Conversely, cost savings from price shopping might enable patients to increase use, which may lead to improved adherence to recommended treatments but also to overuse of services. For this reason, our study cannot determine whether the price transparency technology reduces overall health care spending. Future research should extend this analysis to services beyond the 3 used in this study. It should also examine how use is affected to better understand the broader effect of price transparency on health care spending and population health.
The Affordable Care Act (ACA) recognizes price transparency’s potential and requires hospitals to publish charges for common services. Insurance plans offered through the exchanges are required to communicate price information to enrollees.17 Although the ACA focuses on increasing price transparency, tailoring price information to the privately insured market remains a challenge.
Use of price transparency information was associated with lower total claims payments for common medical services. The magnitude of the difference was largest for advanced imaging services and smallest for clinician office visits.
Corresponding Author: Neeraj Sood, PhD, Schaeffer Center for Health Policy and Economics, 3335 S Figueroa St, Unit A, Los Angeles, CA 90089-7273 (nsood@healthpolicy.usc.edu).
Author Contributions: Mr Whaley had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: Whaley, Schneider Chafen, Pinkard, Bravata, Sood.
Drafting of the manuscript: Whaley, Schneider Chafen, Pinkard, Kellerman, Bravata, Sood.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Whaley, Schneider Chafen, Pinkard, Sood.
Obtained funding: Sood.
Administrative, technical, or material support: All authors.
Study supervision: Sood.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Mr Whaley, Drs Schneider Chafen, Kellerman, and Bravata, and Ms Pinkard reported that they are employees of Castlight Health. Dr Sood reported that he is a paid advisor for Castlight Health. Dr Kocher reported that he is on the board of directors of Castlight Health.
Funding/Support: Dr Sood was supported by grant 1R01AG043850-01 from the National Institutes of Health.
Role of Funders/Sponsors: The National Institutes of Health had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Previous Presentation: The results of this study were presented at AcademyHealth, June 2014, in San Diego, California, and at the American Society of Health Economists, June 22-25, 2014, in Los Angeles, California.
Additional Information: Deidentified data will be made available to researchers in a HIPAA-compliant manner upon request for the purposes of replication.
Additional Contributions: We thank Timothy Brown, PhD (University of California, Berkeley), Karen Eggleston, PhD (Stanford University), Vicki Fung, PhD (Mongan Institute for Health Policy), Alan Garber, MD, PhD (Harvard University), Robert Huckman, PhD (Harvard University), Anupam Jena, MD, PhD (Harvard University), Mark McClellan, MD, PhD (Brookings Institution), Ateev Mehrotra, MD (Harvard University), Mary Reed, DrPh (Kaiser Permanente Division of Research), and James Robinson, PhD (University of California, Berkeley) for thoughtful reviews of an earlier draft of the manuscript; Eugenia Bisignani (Castlight Health) for data assistance; and Allison Pitt (Stanford University) and Zachary Wagner-Rubin (University of California, Berkeley) for excellent research assistance. Huckman and McClellan are paid members of the Castlight Health Inc board. No others were compensated for their contributions.
1. Robinson JC. Variation in hospital costs, payments, and profitability for cardiac valve replacement surgery. Health Serv Res. 2011;46(6 pt 1):1928-1945. doi:10.1111/j.1475-6773.2011.01288.x
2. Baker L, Bundorf MK, Royalty A. Private insurers’ payments for routine physician office visits vary substantially across the United States. Health Aff (Millwood). 2013;32(9):1583-1590. doi:10.1377/hlthaff.2013.0309
3. Hsia RY, Akosa Antwi Y, Weber E. Analysis of variation in charges and prices paid for vaginal and caesarean section births: a cross-sectional study. BMJ Open. 2014;4(1):e004017. doi:10.1136/bmjopen-2013-004017
4. Massachusetts Division of Health Care Finance and Policy. Massachusetts Health Care Cost Trends: Price Variation in Health Care Services. Boston: Massachusetts Division of Health Care Finance and Policy; 2011.
5. Reinhardt UE. The pricing of US hospital services: chaos behind a veil of secrecy. Health Aff (Millwood). 2006;25(1):57-69. doi:10.1377/hlthaff.25.1.57
6. Rosenthal JA, Lu X, Cram P. Availability of consumer prices from US hospitals for a common surgical procedure. JAMA Intern Med. 2013;173(6):427-432. doi:10.1001/jamainternmed.2013.460
7. Government Accountability Office. Health care price transparency: meaningful price information is difficult for consumers to obtain prior to receiving care. Washington, DC: US Government Accountability Office; 2011. http://www.gao.gov/products/GAO-11-791. Accessed January 17, 2014.
8. Claxton G, Rae M, Panchal N, et al. Health benefits in 2013: moderate premium increases in employer-sponsored plans. Health Aff (Millwood). 2013;32(9):1667-1676. doi:10.1377/hlthaff.2013.0644
11. Reed M, Fung V, Price M, et al. High-deductible health insurance plans: efforts to sharpen a blunt instrument. Health Aff (Millwood). 2009;28(4):1145-1154. doi:10.1377/hlthaff.28.4.1145
13. Mehrotra A, Hussey PS, Milstein A, Hibbard JH. Consumers’ and providers’ responses to public cost reports, and how to raise the likelihood of achieving desired results. Health Aff (Millwood). 2012;31(4):843-851. doi:10.1377/hlthaff.2011.1181
14. Emanuel E, Tanden N, Altman S, et al. A systemic approach to containing health care spending. N Engl J Med. 2012;367(10):949-954. doi:10.1056/NEJMsb1205901
16. Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20(4):461-494. doi:10.1016/S0167-6296(01)00086-8
17. Hostetter M, Klein S. Health Care Price Transparency: Can It Promote High-Value Care? Washington, DC: Commonwealth Fund; 2012.