eAppendix 1. Supplemental Methods
eAppendix 2. Pre-Intervention Trends
eAppendix 3. Two-Year Utilization Rate
eAppendix 4. Employer A and B Analyses
eAppendix 5. Main Analysis With Only Controls From Intervention Markets
eAppendix 6. Two-Part Model Analysis
eAppendix 7. Two-Year Follow-up Analysis
eAppendix 8. Analysis on Price Transparency Tool Users
Desai S, Hatfield LA, Hicks AL, Chernew ME, Mehrotra A. Association Between Availability of a Price Transparency Tool and Outpatient Spending. JAMA. 2016;315(17):1874–1881. doi:10.1001/jama.2016.4288
There is increasing interest in using price transparency tools to decrease health care spending.
To measure the association between offering a health care price transparency tool and outpatient spending.
Design, Setting, and Participants
Two large employers represented in multiple market areas across the United States offered an online health care price transparency tool to their employees. One introduced it on April 1, 2011, and the other on January 1, 2012. The tool provided users information about what they would pay out of pocket for services from different physicians, hospitals, or other clinical sites. Using a matched difference-in-differences design, outpatient spending among employees offered the tool (n=148 655) was compared with that among employees from other companies not offered the tool (n=295 983) in the year before and after it was introduced.
Availability of a price transparency tool.
Main Outcomes and Measures
Annual outpatient spending, outpatient out-of-pocket spending, and use rates of the tool.
Mean outpatient spending among employees offered the tool was $2021 in the year before the tool was introduced and $2233 in the year after. In comparison, among controls, mean outpatient spending changed from $1985 to $2138. After adjusting for demographic and health characteristics, being offered the tool was associated with a mean $59 (95% CI, $25-$93) increase in outpatient spending. Mean outpatient out-of-pocket spending among those offered the tool was $507 in the year before introduction of the tool and $555 in the year after. Among the comparison group, mean outpatient out-of-pocket spending changed from $490 to $520. Being offered the price transparency tool was associated with a mean $18 (95% CI, $12-$25) increase in out-of-pocket spending after adjusting for relevant factors. In the first 12 months, 10% of employees who were offered the tool used it at least once.
Conclusions and Relevance
Among employees at 2 large companies, offering a price transparency tool was not associated with lower health care spending. The tool was used by only a small percentage of eligible employees.
Price transparency tools have increased in popularity in response to observed price variation across physicians, hospitals, and other clinical sites1-3 and because patients bear a larger fraction of spending through increased deductibles, co-payments, and co-insurance. More than half of US states have passed legislation establishing price transparency websites or mandating that health plans, hospitals, or physicians make price information available to patients.4 Websites have emerged to “crowd source” price information, and health plans have introduced price transparency tools for enrollees.5,6 Employers have also contracted with companies such as Truven Health Analytics and Castlight to provide their employees with price transparency tools.7
These tools can help patients identify and seek less expensive care from providers such as hospitals, physicians, laboratories, imaging centers, and other clinicians. Given the weak relationship between price and quality,8 it is assumed that patients can price shop for less expensive health care services without sacrificing quality.
Despite the enthusiasm for price transparency efforts, little is known about their association with health care spending. One study found that users of a price transparency tool received less expensive laboratory tests, advanced imaging, and office visits.7 However, these results were limited to the narrow population that used the tool and a limited set of health care services. From the perspective of an employer or health plan that is deciding whether to offer such a tool, the most relevant question is whether the tool will reduce aggregate spending across all their employees or enrollees. To understand the association between price transparency tool availability and outpatient spending, this study compared the health care spending patterns of employees of 2 companies that offered a price transparency tool with patterns among employees of other companies that did not offer the tool.
The objective was to study whether having access to more price information was associated with reduction in annual outpatient spending in the first 12 months after introduction. Outpatient spending was defined as payments to a physician, hospital, laboratory, imaging center, or other clinician such as a nurse practitioner or physician assistant (henceforth, physicians and other clinicians will be referred to as clinicians), summing patient out-of-pocket spending and health plan reimbursement for all outpatient care. Employees (and their dependents) of 2 large companies that offered the Truven Treatment Cost Calculator were compared with a control population who was not offered a price transparency tool. The Harvard Medical School Human Studies Committee deemed the study exempt from review.
The tool provides users with estimated total and out-of-pocket spending for approximately 330 services including imaging, outpatient surgeries, and office visits. The website facilitates price comparison by providing the average price for the service within the community and clinician-specific prices. Two features of this price transparency tool address shortcomings of older tools.3,9 First, to provide accurate real-time estimates of a user’s out-of-pocket amount, the tool’s results incorporate benefit structure and the user’s remaining deductible. Second, search results reflect episode-level prices rather than single-procedure prices. For example, outpatient colonoscopy prices include estimates for the clinician performing the procedure, anesthesia, and related laboratory services. Moreover, price estimates for 8 chronic conditions (eg, diabetes) reflect recommended, routine care and potential spending associated with complications over a year. Overall, the capabilities and implementation strategies appear similar to other tools currently offered.10 More than 21 million people across the United States have access to this price transparency tool.11
The Truven Health MarketScan Commercial Claims and Encounters database comprises deidentified health insurance claims for inpatient care, outpatient care, and prescription drugs for more than 50 million people from self-insured employers and health plans. Within these data, markets were defined by metropolitan statistical areas (MSAs); for those who lived outside an MSA, states served as markets.
The tool’s web log file contains data on searches for specific types of care (but does not capture other uses of the website). It identifies time and date of search as well as procedures or conditions searched. For 1 employer that offered the tool, price results from the search were also available.
The intervention population consisted of employees of 2 companies that offered the price transparency tool and provided their claims data to MarketScan. Employer A introduced the tool on January 1, 2012. It is located in the western United States and had 58 271 employees in 6 markets enrolled in preferred provider organization plans with an individual deductible up to $500. Employer B introduced the tool on April 1, 2011. It is a national employer with 90 973 employees in 361 markets across all regions enrolled in higher cost-sharing plans with individual deductibles between $500 and $2500 (details on how individuals’ deductibles were determined are available in eAppendix 1 in the Supplement).
These employers were chosen because they did not make changes to their deductibles or co-payments in the year before or after the tool was offered. Additionally, they had the highest rates of tool utilization. This higher uptake was attributable to greater promotional efforts, which included promotion during the open enrollment period and ongoing promotion such as senior management endorsements, prominent display on the employee web portal, posters, and mailings.
The intervention population comprised employees offered the tool because spending across all employees is arguably the most important outcome from an employer’s perspective, and such an intention-to-treat analysis addresses confounding from unobservable differences between tool users and nonusers.
To ensure that any observed changes were not due to changes in employee composition, the intervention population was restricted to those continuously enrolled in a single health plan in the 12 months before and 12 months after introduction of the tool. Employees aged 65 years or older were excluded to avoid confounding from Medicare coverage.
The control population included employees in MarketScan who were also continuously enrolled over the study period, matched to employees in the intervention population using both exact matching and propensity score matching. Exact matches on health plan characteristics that may affect the strength of the association between availability of price transparency and spending were required: plan type (preferred provider organization, high-deductible health plan/consumer-directed health plan) and deductible category (individual annual deductible of $1-$500, $501-$1250, or $1251-$2500). In addition, matching was done on propensity scores. The propensity score model included age categories, sex, and comorbidities in the preintervention year and produced the probability of each person being offered the price transparency tool. Greedy nearest neighbor matching with replacement was done to find potential controls with a similar propensity score. Matched propensity scores were required to be within a caliper of 0.8 SDs.12 Each individual in the intervention group was matched to up to 2 control individuals.
Matching was done in 2 stages. In the first stage, controls in the same market were identified (79% of matches). Intervention cohort members who were not matched in this first stage were matched to those in an “expanded” market. The expanded market consisted of MSAs with per capita outpatient spending levels in the preintervention year and spending trends for up to 4 years prior to the intervention similar to the treatment group MSAs (details in eAppendix 1 in the Supplement). Trends were calculated using all enrollees, not just continuously enrolled individuals.
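The matching step can be sketched as follows. This is a minimal illustration with made-up propensity scores: it assumes the exact-match strata (plan type, deductible band) and the market stage have already been applied, and matches on the score alone with the paper's 0.8-SD caliper and up-to-2 controls with replacement.

```python
import numpy as np

def greedy_match(treated_ps, control_ps, caliper_sd=0.8, n_controls=2):
    # Greedy nearest-neighbor matching with replacement: each treated
    # unit takes up to 2 controls whose propensity scores fall within
    # a caliper of 0.8 SDs of the pooled scores. Controls may be
    # reused across treated units (matching "with replacement").
    pooled = np.concatenate([treated_ps, control_ps])
    caliper = caliper_sd * pooled.std()
    matches = {}
    for i, score in enumerate(treated_ps):
        dist = np.abs(control_ps - score)
        nearest = np.argsort(dist)[:n_controls]
        matches[i] = [int(j) for j in nearest if dist[j] <= caliper]
    return matches

# Hypothetical scores for 2 treated and 5 control employees.
treated = np.array([0.30, 0.55])
controls = np.array([0.10, 0.32, 0.50, 0.56, 0.90])
print(greedy_match(treated, controls))
```

A real implementation would repeat this within each exact-match stratum and carry the matched study-period alignment described below.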
Because the intervention was introduced at 2 different time points, the study period for each control was aligned with that of their matched intervention group counterpart.
The association between the availability of a price transparency tool and health spending using a difference-in-differences framework was assessed.13 The approach used the change in intervention group outcomes from the 12 months before to the 12 months after the introduction of the tool, minus the corresponding change in the matched controls during the same period. The model for the mean outcome Qit (such as spending) for employee i at time t is
E(Q_it) = β_0 + β_1(PT_i × post_t) + β_2PT_i + β_3(post_t × A_i) + β_4(post_t × B_i) + γX_it
where PT_i indicates whether employee i was offered a price transparency tool, post_t indicates a postintervention time period, A_i indicates being an employee of employer A or a matched control, B_i indicates being an employee of employer B or a matched control, and X_it (age category, sex, comorbidity indicators) are observable characteristics of the person. Interaction terms between the employer A or B cohort indicators and post_t account for the difference in timing of the intervention. The parameter of interest was β_1, which estimated the differential pre-post change in the intervention vs control cohorts. Statistical tests were 2-sided and were considered statistically significant at P < .05. All statistical analyses were completed using SAS software, version 9.3 (SAS Institute Inc).
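In the unadjusted 2 × 2 case, β_1 reduces to a simple contrast of cohort means: the pre-post change among employees offered the tool minus the corresponding change among matched controls. A minimal sketch using the outpatient spending means reported in the Results:

```python
# Unadjusted difference-in-differences on cohort means. The four
# figures are the mean outpatient spending values reported in the
# Results section (intervention and control, before and after).
pre_t, post_t = 2021.0, 2233.0   # intervention cohort, before/after
pre_c, post_c = 1985.0, 2138.0   # control cohort, before/after

did = (post_t - pre_t) - (post_c - pre_c)
print(did)  # 59.0
```

Here the unadjusted contrast happens to coincide with the reported adjusted estimate of $59; in general, the regression additionally adjusts for age, sex, comorbidities, and the employer-specific time effects.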
A key assumption of difference-in-differences analyses is that the trends in outcomes are similar before the intervention. Quarterly outpatient spending trends in the preintervention year are presented in eAppendix 2 in the Supplement. Spending levels were similar between the intervention and control cohorts, but in the last quarter of the preintervention year, the control cohort’s spending increased at a faster rate.
The primary outcome was total annual spending for outpatient care. Outpatient care is the focus of the analysis because it is more suitable to price shopping than inpatient care, which is often emergent and the price of which almost invariably exceeds the deductible. A secondary spending outcome was the patient’s annual out-of-pocket spending for outpatient care. Out-of-pocket spending was the sum of all deductible, co-payment, and co-insurance payments. To lessen the effect of outliers, outpatient spending and out-of-pocket outpatient spending were winsorized at the 1% and 99% levels.14
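Winsorizing at the 1% and 99% levels can be sketched as below. The percentile convention used here (nearest rank) is an assumption, since the paper does not specify one:

```python
def winsorize(values, lower_pct=0.01, upper_pct=0.99):
    # Clamp extreme observations to the values at the 1st and 99th
    # percentiles (nearest-rank convention) to lessen the influence
    # of outliers without dropping the observations.
    ordered = sorted(values)
    n = len(ordered)
    lo = ordered[int(lower_pct * n)]
    hi = ordered[min(n - 1, int(upper_pct * n))]
    return [min(max(v, lo), hi) for v in values]

# 101 annual spending values with one extreme outlier.
spend = list(range(100)) + [250_000]
clamped = winsorize(spend)
print(min(clamped), max(clamped))  # 1 99
```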
The analyses on these 2 spending outcomes were estimated with a linear regression model. Heteroscedasticity-consistent standard errors were used to calculate 95% confidence intervals. Because the control cohort could not be matched to employers, standard errors could not be clustered by employer. In a sensitivity check, the analyses were repeated using a 2-part model with a first-stage probit for the probability of nonzero spending and a second-stage generalized linear model of log-transformed spending with a normal distribution and identity link.
In the data, facility fees for services such as surgery, radiology, and laboratory tests performed in a hospital-based outpatient department (HOPD) were often aggregated for an entire set of services provided on a given day rather than broken down by individual service, which limited the ability to calculate service-level prices. Therefore, as an intuitive proxy for switching to less expensive care, the number of services received in HOPD settings was examined. Prices at HOPDs are typically much higher than at freestanding facilities for equivalent services.15 Numerous interventions encourage patients to save money by switching from clinicians in HOPDs to clinicians in freestanding facilities.16,17
The number of HOPD visits was modeled using a negative binomial regression on the subsample of employees who had at least 1 outpatient service in the preintervention and postintervention years. To account for differences across employees in overall outpatient utilization in a given year, the log-transformed count of total outpatient visits for that employee was included in the model.
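Including the log of total outpatient visits as a covariate makes the count model behave like an exposure-offset model when that coefficient is near 1, so the regression effectively targets the HOPD share of an employee's visits. A sketch of the implied mean function, with entirely hypothetical coefficients:

```python
import math

# Negative-binomial mean with log(total visits) entered as a covariate:
#   mu = exp(b0 + b1 * (offered tool x post)) * total_visits ** g
# When g is close to 1 this is a classic exposure offset, and b1
# shifts the expected HOPD count proportionally. All coefficient
# values below are made up for illustration.
b0, b1, g = -2.6, 0.04, 1.0

def expected_hopd(total_visits, offered_post):
    return math.exp(b0 + b1 * offered_post) * total_visits ** g

baseline = expected_hopd(20, 0)
offered = expected_hopd(20, 1)
print(round(offered / baseline, 3))  # exp(0.04) ≈ 1.041
```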
Given that employees with higher cost sharing may be more responsive to the price transparency tool, a subgroup analysis of employees with high annual deductibles (>$1250) was conducted. Also, a subgroup analysis of those with chronic conditions (Charlson Comorbidity Index score >1) was conducted18 because employees with more health issues may have more opportunities to shop for health care using a price transparency tool.
To test the robustness of the results, several additional tests were conducted. First, the analysis was done separately for employers A and B to test whether results are consistent across both. Second, to address potential confounding from selecting matches from other markets, an analysis using only controls from within the same markets as the intervention cohort was done. Third, for employer B—for which data were available—a 2-year follow-up analysis was done to assess the longer-term effects of offering price transparency. Fourth, as noted above, for the analyses on outpatient spending and out-of-pocket spending, a 2-part model for the probability of nonzero spending and the magnitude of spending conditional on nonzero spending was conducted. Fifth, the analyses were conducted on the subpopulation of employees who had used the tool at least once and their matched controls.
Use rates for the price transparency tool and search behavior among the intervention cohort were examined. Employees were coded as users if they or anyone else in their family searched the website. All searches for a given service on a single day were counted as one use. Three measures of price transparency tool usage were defined: users who searched at least once, users who searched at least 3 times on different days, and users who searched at least twice with a gap of at least 30 days between searches. The latter 2 measures identify users who were more engaged with the tool and used it repeatedly. The most commonly searched services and price estimates were described for employer A, for which such information was available.
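The three usage measures can be computed from a family's web log as follows; `classify_user` and the sample dates are illustrative, and the input is assumed to already have same-day searches for a service collapsed to one:

```python
from datetime import date

def classify_user(search_dates):
    # Classify one family's counted searches per the three measures:
    # searched at least once, searched on at least 3 distinct days,
    # and searched at least twice spanning a gap of 30+ days.
    days = sorted(set(search_dates))
    searched_once = len(days) >= 1
    three_distinct_days = len(days) >= 3
    thirty_day_gap = len(days) >= 2 and (days[-1] - days[0]).days >= 30
    return searched_once, three_distinct_days, thirty_day_gap

print(classify_user([date(2012, 1, 5), date(2012, 3, 1)]))
# (True, False, True)
```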
The intervention and control populations included 148 655 and 295 983 individuals, respectively. In the intervention population, 50% were female, 67% were aged 18 years or older, 11% had 1 or more chronic illnesses, and 21% had an annual deductible greater than $1250 (Table 1). Standardized mean differences were 1.5% or less, indicating that the intervention and control populations were well balanced on measured characteristics.
Among the intervention cohort, mean total outpatient spending increased from $2021 to $2233, while among the control cohort, mean spending increased from $1985 to $2138. In the adjusted difference-in-differences analysis, offering a price transparency tool was associated with higher mean spending ($59; 95% CI, $25-$93). This constitutes a 2.9% increase in spending from preintervention spending (Table 2).
Mean outpatient out-of-pocket spending in the intervention cohort increased from $507 to $555 compared with $490 to $520 in the control cohort. In the adjusted difference-in-differences regression, offering a price transparency tool was associated with higher mean annual out-of-pocket spending ($18; 95% CI, $12-$25; a 3.6% increase from preintervention spending).
Hospital-based outpatient department services among those in the intervention group with at least 1 outpatient visit increased from 1.45 to 1.54 compared with an increase from 1.98 to 2.00 in the control cohort. In the adjusted model, offering a price transparency tool was associated with an increase in the use of HOPD services (0.06 [4.1%]; 95% CI, 0.04-0.08).
In subgroup analyses among employees with a higher annual deductible (>$1250) and those with more chronic conditions, there were no statistically significant decreases in overall spending or out-of-pocket spending and no shift in visits away from HOPDs (Table 3).
As in the overall analysis, there was no evidence of a statistically significant decrease in outpatient spending, out-of-pocket outpatient spending, or switching to non-HOPD settings in either employer separately (eAppendix 4 in the Supplement). For employer A, there was a non–statistically significant decrease (−$17; 95% CI, −$71 to $38) in outpatient spending, although results on other outcomes were consistent with the main results (out-of-pocket spending increased by $20 [95% CI, $12-$28] and HOPD visits increased by 0.12 [95% CI, 0.09-0.16]). There was no statistically significant decrease when only controls from within the same markets as the intervention cohort were used (eAppendix 5 in the Supplement), when a 2-part model was specified (eAppendix 6 in the Supplement), or 2 years after tool introduction for employer B (eAppendix 7 in the Supplement). Finally, users of the tool had increased spending compared with their matched controls ($407; 95% CI, $301-$541), although, as discussed in eAppendix 8 in the Supplement, the increase observed among users was likely driven at least partially by selection bias.
In the first 12 months, 10% of employees in the treatment group searched the website for a price estimate at least once, 8% searched at least 3 times, and 3% searched at least twice with a 30-day gap in between (Figure). Among employees who searched at least once, 86% had at least some spending in the postintervention period. Top searches on the price transparency website were for obstetric deliveries, colonoscopy, office visits, and gastric bypass surgery. The majority of searches (68%) were for services with total price estimates higher than $500, and 53% of searches had total price estimates higher than $1250. For employer B, data were available for up to 24 months after offering the tool. By 24 months, 18% of employees had logged on at least once and 15% had logged on at least 3 times (eAppendix 3 in the Supplement).
In this analysis, offering a health care services price transparency tool to employees was not associated with lower outpatient spending. This was also true in subanalyses focused on employees with higher health plan deductibles and those with comorbidities at baseline. Furthermore, those offered the price transparency tool did not shift their care from higher-priced HOPD settings to lower-priced ambulatory settings.
A series of factors may underlie the lack of a negative association between offering the price transparency tool and outpatient spending. First, despite selecting 2 employers with the highest uptake and substantial marketing from the employers, use of the tool was relatively low, with only 10% of employees logging on in the first year of its introduction. Such low use rates have been reported for other price transparency tools.7,19-22 Moreover, low utilization is the most commonly reported challenge to price transparency initiatives by insurers who offer tools.10 Patients may not find the information compelling or may simply forget about the tool if they seek health care infrequently.
Second, there may be limited opportunities for patients to save money via the tool. Price shopping is most useful for care that is nonemergent and of lower cost, and there may be a limited set of services that meet those criteria. A recent report found that only 40% of spending is attributable to shoppable services.23 In this study, a substantial fraction of searches were for services whose prices exceeded the employee’s deductible, so that out-of-pocket amounts would be the same regardless of which clinician or hospital was chosen. Also, approximately half of employees met their deductible within the year. After reaching their deductible, patients may have little incentive to price shop.20 Third, a common service through which patients could benefit from price shopping is clinician office visits. However, many patients have established relationships with their clinicians that they may wish to maintain regardless of price.
Price transparency could be effective if combined with health plan benefit designs that create a larger incentive to receive care from less expensive clinicians. For example, under reference-based pricing, a patient pays the difference between the negotiated and reference price (for example, $30 000 for hip surgery in one program).24 Because patients are responsible for the “last dollar,” they may be more cost conscious, even for higher-priced services such as surgery. Bonus programs in which patients receive incentives if they receive care from less expensive clinicians or facilities may also increase patient interest in price data.25 Proactively contacting patients and providing information about less expensive care may be more effective than passively waiting for them to seek this information on their own via a website.17
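The "last dollar" mechanics of reference-based pricing can be illustrated as follows; the negotiated prices are hypothetical, and the $30 000 reference follows the cited hip-surgery example. (For simplicity, routine cost sharing below the cap is ignored.)

```python
def patient_owes(negotiated_price, reference_price=30_000):
    # Reference-based pricing sketch: the plan pays up to the
    # reference price; the patient is responsible for every dollar
    # above it, which is why patients face the "last dollar."
    return max(0, negotiated_price - reference_price)

print(patient_owes(42_000))  # 12000: the amount above the reference cap
print(patient_owes(28_000))  # 0: below the cap, no excess liability
```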
Among those offered the tool, a modest but statistically significant increase ($59 [2.9%]) in outpatient spending was observed. Offering price transparency could increase spending if patients equate higher prices with higher quality and therefore use the tool to selectively choose higher-priced clinicians. The tool reports both total price and out-of-pocket amounts, and patients may use total price to identify higher-priced clinicians when their out-of-pocket prices are the same. However, given that the statistically significant increase in spending was not observed in all subanalyses and given the findings of prior work on price transparency,7 this is speculative and would need to be confirmed in future studies. A more conservative interpretation is that the study failed to find evidence of meaningful savings associated with availability of a price transparency tool.
This analysis had several limitations. First, it was limited to 2 employers and a single price transparency tool. Future work should evaluate other price transparency initiatives. Second, the analysis focused on the first year after price transparency was introduced. A 2-year follow-up analysis for 1 employer with available data was done, and results were qualitatively similar. Third, the outcomes do not capture other beneficial aspects of a price transparency tool such as helping patients better estimate the out-of-pocket spending they will face, track their deductible, or identify which clinicians or sites are in the health plan’s network. Fourth, the analysis focused on aggregate spending. As seen in prior work, users of a price transparency tool might save money for individual services. Aggregate spending, however, is the outcome we believe to be most relevant to employers or health plans evaluating whether to offer such a tool, and reported savings in small subgroups for a few services would not drive meaningful aggregate savings (consistent with the main conclusion). Finally, as demonstrated in eAppendix 2 in the Supplement, before the intervention, the control group’s spending grew faster than the intervention group’s spending. This difference, had it persisted, would bias the results toward erroneously finding that the tool reduced spending. Since the results suggest the opposite, this likely does not affect the conclusion that the price transparency tool was not associated with a decrease in spending. However, it could indicate other unobserved differences between the intervention and control groups.
Among employees at 2 large companies, offering a price transparency tool was not associated with lower health care spending. The tool was used by only a small percentage of eligible employees.
Corresponding Author: Ateev Mehrotra, MD, MPH, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave, Boston, MA 02115 (firstname.lastname@example.org).
Author Contributions: Dr Desai had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Desai, Hatfield, Chernew, Mehrotra.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Desai, Hatfield.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Desai, Hatfield, Hicks.
Obtained funding: Chernew, Mehrotra.
Administrative, technical, or material support: Chernew, Mehrotra.
Study supervision: Mehrotra.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Funding/Support: This work was supported by a grant from the Laura and John Arnold Foundation and the Marshall J. Seidman Center for Studies in Health Economics and Health Care Policy at Harvard Medical School.
Role of the Funders/Sponsors: The funding sources did not play a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.