Monthly results from quality improvement, using 2 different but overlapping definitions of perfect care, are highlighted. Thick line segments indicate the 4-month baseline and measurement periods used to assess statistical significance of the process redesign. Perfect care index is reported as the percentage of perfect care encounters per period of measurement. For perfect care definition 1, the perfect care index comprised 6 nationally and locally defined quality indicators: (1) 30-day readmission; (2) Surgical Care Improvement Project composite16; (3) 35 Hospital Acquired Condition/Patient Safety Indicator measures17,18; (4) admission to the orthopedic acute care unit during hospitalization; (5) early mobility (out of bed on day of surgery); and (6) emergency department visit within 90 days of discharge. Early mobility was identified as a key exposure for improving outcomes and decreasing length of stay and facility costs.19,20 For perfect care definition 2, the perfect care index was defined as the following: (1) 35 Hospital Acquired Condition/Patient Safety Indicator measures17,18; (2) admission to the orthopedic acute care unit during hospitalization; (3) emergency department visit within 90 days of discharge; and (4) discharge to home with home health services. The first evaluation year corresponds to the implementation year, and the second evaluation year to the postimplementation year.
eTable 1. Examples of Perfect Care Indexes
eTable 2. Population Characteristics for Value Improvement Projects
eFigure 1. Sample Encounter-Level Report for Perfect Care Index and Costs
eFigure 2. Sample Patient-Level Report for System Utilization Costs Over Time
eFigure 3. Total Joint Replacement Care Pathway
Lee VS, Kawamoto K, Hess R, Park C, Young J, Hunter C, Johnson S, Gulbransen S, Pelt CE, Horton DJ, Graves KK, Greene TH, Anzai Y, Pendleton RC. Implementation of a Value-Driven Outcomes Program to Identify High Variability in Clinical Costs and Outcomes and Association With Reduced Cost and Improved Quality. JAMA. 2016;316(10):1061-1072. doi:10.1001/jama.2016.12226
Importance
Transformation of US health care from volume to value requires meaningful quantification of costs and outcomes at the level of individual patients.
Objective
To measure the association of a value-driven outcomes tool that allocates costs of care and quality measures to individual patient encounters with cost reduction and health outcome optimization.
Design, Setting, and Participants
Uncontrolled, pre-post, longitudinal, observational study measuring quality and outcomes relative to cost from 2012 to 2016 at University of Utah Health Care. Clinical improvement projects included total hip and knee joint replacement, hospitalist laboratory utilization, and management of sepsis.
Exposures
Physicians were given access to a tool with information about outcomes, costs (not charges), and variation, and were partnered with process improvement experts.
Main Outcomes and Measures
Total and component inpatient and outpatient direct costs across departments; cost variability for Medicare severity diagnosis related groups measured as coefficient of variation (CV); and care costs and composite quality indexes.
Results
From July 1, 2014, to June 30, 2015, there were 1.7 million total patient visits, including 34 000 inpatient discharges. Professional costs accounted for 24.3% of total costs for inpatient episodes ($114.4 million of $470.4 million) and 41.9% of total costs for outpatient visits ($231.7 million of $553.1 million). For Medicare severity diagnosis related groups with the highest total direct costs, cost variability was highest for postoperative infection (CV = 1.71) and sepsis (CV = 1.37) and among the lowest for organ transplantation (CV ≤ 0.43). For total joint replacement, a composite quality index was 54% at baseline (n = 233 encounters) and 80% 1 year into the implementation (n = 188 encounters) (absolute change, 26%; 95% CI, 18%-35%; P < .001). Compared with the baseline year, mean direct costs were 7% lower in the implementation year (95% CI, 3%-11%; P < .001) and 11% lower in the postimplementation year (95% CI, 7%-14%; P < .001). The hospitalist laboratory testing mean cost per day was $138 (median [IQR], $113 [$79-$160]; n = 2034 encounters) at baseline and $123 (median [IQR], $99 [$66-$147]; n = 4276 encounters) in the evaluation period (mean difference, −$15; 95% CI, −$19 to −$11; P < .001), with no significant change in mean length of stay. For a pilot sepsis intervention, the mean time to anti-infective administration following fulfillment of systemic inflammatory response syndrome criteria in patients with infection was 7.8 hours (median [IQR], 3.4 [0.8-7.8] hours; n = 29 encounters) at baseline and 3.6 hours (median [IQR], 2.2 [1.0-4.5] hours; n = 76 encounters) in the evaluation period (mean difference, −4.1 hours; 95% CI, −9.9 to −1.0 hours; P = .02).
Conclusions and Relevance
Implementation of a multifaceted value-driven outcomes tool to identify high variability in costs and outcomes in a large single health care system was associated with reduced costs and improved quality for 3 selected clinical projects. There may be benefit for individual physicians to understand actual care costs (not charges) and outcomes achieved for individual patients with defined clinical conditions.
Fee-for-service payment models reward care volume over value.1,2 Under fee-for-service models, health care costs are increasing at a rate of 5.3% annually, accounted for 17.7% of the US gross domestic product in 2014, and are projected to increase to 19.6% of the gross domestic product by 2024.3 Value-based payment models and alternative payment models incentivize the provision of efficient, high-quality, patient-centered care through financial penalties and rewards.4 Under alternative payment models, clinicians will theoretically deliver higher-quality care that results in better outcomes, fewer complications, and reduced health care spending. To implement alternative payment models effectively, physicians must understand actual care costs (not charges) and outcomes achieved for individual patients with defined clinical conditions—the level at which they can most directly influence change.
Few large health care organizations have accurately measured total care costs at the individual patient level and have related costs to quality.5,6 In 2012, University of Utah Health Care initiated an enterprise-wide effort to improve clinical outcomes and reduce costs and built a management and reporting tool, called value-driven outcomes, that allows clinicians and managers to analyze actual system costs and outcomes at the level of individual encounters and by department, physician, diagnosis, and procedure.7
This report describes how the value-driven outcomes tool was used to (1) identify overall care costs across the health care system, (2) measure cost variability across Medicare severity diagnosis related groups (MS-DRGs) to identify the greatest opportunities for cost reduction and outcome optimization, and (3) support value improvement initiatives for selected conditions.
Key Points
Question Is use of an analytic tool that allocates clinical care costs and quality measures to individual patient encounters in a health care system associated with reduced costs and improved patient outcomes?
Findings In this observational study in a health care system with 1.7 million patient visits per year, costs of care varied considerably. In pre-post comparisons, implementation of the analytic tool was associated with a significant decrease in costs (7%-11% for total joint replacement and 11% for laboratory testing) and improvement in quality.
Meaning Implementation of a tool that provides physicians with information about the costs of clinical care and quality for individual patients with defined conditions was associated with a reduction in costs and improvement in quality.
Methods
The project was reviewed by the University of Utah Institutional Review Board and was deemed not to meet the definition of human subjects research. It was therefore exempt from institutional review board oversight, and informed consent was not required.
The value-driven outcomes tool uses the definition of value by Porter and Teisberg8: health outcomes achieved per dollar spent, in which outcomes are measured in terms of quality metrics, such as the patient’s overall health status and the avoidance of hospital-acquired morbidities. The value-driven outcomes tool is a modular, extensible framework that allocates care costs to individual patient encounters. It draws information from the health care system’s enterprise data warehouse, which includes data on patient encounters; national quality metrics and clinician-defined metrics; supply, pharmacy, imaging, and laboratory utilization; human resource utilization; and the general ledger (ie, the organization’s complete record of financial transactions). The value-driven outcomes tool uses these data to calculate and integrate cost information with relevant quality and outcome measures.7
Accurately assigning costs is complex and can take multiple perspectives, including those of the health care system, payer, patient, or society. To understand the role of prospective payment in the health care system context, the value-driven outcomes cost accounting approach takes the health care system perspective and identifies costs attributable to direct patient care. Certain large groups of costs, such as space, equipment, labor, and professional time, are allocated based on a patient’s estimated use of those resources, whereas costs for supplies, medications, and contracted services are based on the health care system’s actual acquisition costs. Physician costs are allocated according to work relative value units (wRVUs) as follows: physician salary and benefits are multiplied by the percentage of effort devoted to clinical care; education, research, and service (eg, committee and administrative work) are not included in the percentage of effort. Annual clinical compensation divided by annual wRVUs produces a measure of cost (in dollars) per wRVU for each physician7 (Table 1). For the analysis of overall health system costs, total direct care costs for inpatient admissions and outpatient visits were determined overall and by major departments from July 1, 2014, through June 30, 2015.
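The cost-per-wRVU allocation described above can be sketched as follows. This is a minimal illustration, not the health system's actual accounting; the dollar figures, effort fraction, and function names are hypothetical.

```python
# Sketch of wRVU-based professional cost allocation. All numbers are
# hypothetical; the real system draws these inputs from payroll and
# billing data in the enterprise data warehouse.

def cost_per_wrvu(salary_and_benefits, clinical_effort_fraction, annual_wrvus):
    """Dollars of clinical compensation attributed to each wRVU:
    (salary x clinical effort fraction) / annual wRVUs."""
    clinical_compensation = salary_and_benefits * clinical_effort_fraction
    return clinical_compensation / annual_wrvus

def encounter_professional_cost(encounter_wrvus, rate):
    """Professional cost allocated to a single patient encounter."""
    return encounter_wrvus * rate

# Hypothetical physician: $300,000 compensation, 60% clinical effort,
# 5,000 wRVUs per year -> $36 per wRVU; a 25-wRVU encounter costs $900.
rate = cost_per_wrvu(300_000, 0.60, 5_000)
print(rate)                                     # 36.0
print(encounter_professional_cost(25.0, rate))  # 900.0
```

Note that education, research, and service effort are excluded from the clinical effort fraction, consistent with the allocation described in the text.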
For every MS-DRG (such as MS-DRG 470, major joint replacement of the lower extremity without major complications or comorbidities, and MS-DRG 871, sepsis), the overall cost per unit or cost per case, the components of that cost (Table 1), and cost variability were identified. Variability was calculated using the coefficient of variation (CV; standard deviation divided by mean) to standardize the measure of dispersion across conditions. Highly variable, high-cost conditions were identified as potential areas for care standardization and value improvement.
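The CV calculation is simple enough to sketch directly; the cost data below are hypothetical:

```python
import statistics

def coefficient_of_variation(costs):
    """CV = standard deviation / mean, which standardizes the measure
    of dispersion across MS-DRGs with very different cost scales."""
    return statistics.stdev(costs) / statistics.mean(costs)

# Hypothetical per-encounter direct costs (dollars) for two MS-DRGs.
stable_drg = [9_000, 10_000, 11_000]    # tight spread around $10,000
variable_drg = [2_000, 10_000, 30_000]  # wide spread, similar mean volume

print(round(coefficient_of_variation(stable_drg), 2))    # 0.1
print(round(coefficient_of_variation(variable_drg), 2))  # 1.03
```

A high-cost MS-DRG with a high CV (such as sepsis, CV = 1.37, in Table 4) signals a large standardization opportunity, whereas a high-cost but low-CV MS-DRG suggests care is already relatively uniform.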
Clinical teams consisting of physicians, nurses, administrators, and quality improvement staff defined clinically relevant and patient-centered outcomes, which were then queried from the data warehouse. Outcomes included risk-adjusted mortality,9 patient safety measures (eg, hospital-acquired infections), clinical process measures, and unplanned hospital readmissions or emergency department visits. Patient satisfaction data10 and patient-reported outcomes (including physical and emotional functioning) were collected directly from patients using surveys such as the Patient-Reported Outcomes Measurement Information System.11 Elements of care provision, including key quality indexes, were collected on every case.
Additionally, the care team selected key quality and outcome variables that were combined into a single binary measure termed perfect care. If a continuous variable was chosen as a key variable, the team established an evidence-based threshold (for example, <24 hours of mechanical ventilation following coronary artery bypass grafting surgery would be considered perfect care). If a composite index was included (such as the Surgical Care Improvement Project [SCIP] composite), it was treated as an all-or-none measure; 1 SCIP failure would result in a perfect care score of 0. Perfect care was set to 1 for an encounter only if the care team accomplished all the key elements. The perfect care index is reported as the percentage of perfect care encounters per period of measurement (see eTable 1 in the Supplement for examples of perfect care indexes).
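The all-or-none scoring lends itself to a compact implementation; the indicator names below are illustrative, not the teams' actual definitions:

```python
# All-or-none "perfect care" scoring. Indicator names and the
# ventilation-time example are hypothetical.

def perfect_care(encounter):
    """1 only if every key element is met; any single failure yields 0.
    A composite such as SCIP is itself all-or-none, so it enters here
    as a single boolean."""
    return int(all(encounter.values()))

def perfect_care_index(encounters):
    """Percentage of perfect care encounters in a measurement period."""
    return 100.0 * sum(perfect_care(e) for e in encounters) / len(encounters)

period = [
    {"no_30d_readmission": True, "scip_composite_met": True,
     "vent_under_24h": True},   # all elements met -> perfect care = 1
    {"no_30d_readmission": True, "scip_composite_met": False,
     "vent_under_24h": True},   # one SCIP failure -> perfect care = 0
]
print(perfect_care_index(period))  # 50.0
```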
Multidisciplinary value improvement teams included clinicians, administrative leaders, and process engineers. After these teams defined the key metrics for quality and perfect care, they viewed and monitored care costs and quality metrics (Table 2) using institutional web-based value-driven outcomes visualization tools. The data were used to provide feedback to clinicians monthly on an individual patient basis or aggregated at the clinician or service-line level to facilitate broader understanding of variations in cost and quality. Examples of individual patient–specific reports are included in eFigure 1 and eFigure 2 in the Supplement. Cost and outcome variability among physicians were used to identify opportunities for clinical improvement.
Three of the initial 5 pilot improvement projects are reported herein: total joint replacement of the lower extremity (hip and knee), hospitalist laboratory utilization, and sepsis management. Total joint replacement and sepsis were identified as initial pilots based on an opportunity assessment of total volume, total cost, and high variation using the value-driven outcomes cost variation analyses. Laboratory utilization was selected as an initiative to use value-driven outcomes data to improve care across clinical conditions within a specific direct cost category and was based on the Choosing Wisely campaign12 and interinstitutional benchmarking through the University HealthSystem Consortium. The remaining 2 pilot projects, coronary artery bypass grafting surgery and hip fracture care, are not discussed herein owing to delays in project initiation.
All evaluations were based on direct comparisons of outcomes between designated time intervals preceding and following the exposures without adjustment for covariates (such as age, sex, race, and socioeconomic status).
Changes in mean costs and length of stay after exposures were assessed using a 12-month baseline period of April 1, 2012, to March 31, 2013, and successive 12-month evaluation periods of April 1, 2013, to March 31, 2014, and April 1, 2014, to March 31, 2015. Costs were normalized to the mean cost during the baseline period. The proportions of patients meeting initial and modified perfect care criteria were compared between designated 4-month intervals. Two patients whose costs exceeded the mean cost by more than 5 SDs on the log scale were excluded from cost analyses. Only attending physicians who practiced during the entire study period were included.
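The outlier-exclusion and normalization steps can be sketched as follows; the function names are hypothetical, and the 5-SD log-scale threshold is the one stated above:

```python
import math
import statistics

def exclude_log_outliers(costs, k=5):
    """Drop encounters whose log-scale cost lies more than k SDs from
    the mean log cost (the analysis above excluded 2 such patients)."""
    logs = [math.log(c) for c in costs]
    mu, sd = statistics.mean(logs), statistics.stdev(logs)
    return [c for c, lc in zip(costs, logs) if abs(lc - mu) <= k * sd]

def normalize_to_baseline(costs, baseline_mean):
    """Express each encounter cost as a fraction of the baseline-period
    mean, so changes read as relative differences."""
    return [c / baseline_mean for c in costs]
```

Normalizing to the baseline mean is what allows the implementation- and postimplementation-year results to be reported as 7% and 11% relative reductions.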
Changes in daily laboratory utilization, daily laboratory costs, length of stay, and risk of 30-day readmission were assessed between a baseline period of July 1, 2012, to January 31, 2013, and an evaluation period of February 1, 2013, to April 30, 2014, which followed exposure to education (a 30-minute baseline didactic lecture on laboratory overuse and associated cost implications and provision of a pocket card outlining cost differences between common laboratory tests).
The primary evaluation of the sepsis value improvement project was the time from systemic inflammatory response syndrome (SIRS) criteria13 being met to first anti-infective agent administration. Criteria for SIRS have historically been used to diagnose sepsis in the context of infection. All patients evaluated in this analysis were selected by International Classification of Diseases, Ninth Revision codes and International Statistical Classification of Diseases and Related Health Problems, Tenth Revision codes for sepsis; as such, all were presumed to have infection and should have received an anti-infective agent. Patients who did not have a diagnostic code for sepsis were not considered in this analysis. For patients who received anti-infective agents prior to meeting SIRS criteria, the time to anti-infective agent administration was considered to be 0 hours. Secondary evaluation measures included length of stay, mortality, and total direct cost normalized to the baseline mean cost. The proportions of patients with anti-infective agents administered within 24 hours of meeting SIRS criteria for nosocomial and multidrug-resistant infections as well as community-acquired infections were measured to assess whether the pattern of anti-infective agent use changed. The baseline period was July 1, 2014, to December 31, 2014, and the evaluation period was November 2, 2015, to February 29, 2016. Potential sepsis cases were identified through billing data and confirmed by physician medical record audit. Patients were excluded if they never received anti-infective agents, did not have documentation of infection or sepsis (based on diagnostic code), or were transferred from another hospital while receiving anti-infective agents.
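The zero-clamping rule for patients treated before meeting SIRS criteria can be sketched as below; the timestamps are hypothetical:

```python
from datetime import datetime, timedelta

def hours_to_anti_infective(sirs_met_at, first_dose_at):
    """Hours from meeting SIRS criteria to the first anti-infective
    dose. Doses given before SIRS criteria were met count as 0 hours,
    per the rule described above."""
    delta_hours = (first_dose_at - sirs_met_at).total_seconds() / 3600.0
    return max(delta_hours, 0.0)

# Hypothetical timestamps.
sirs = datetime(2016, 1, 5, 8, 0)
print(hours_to_anti_infective(sirs, sirs + timedelta(hours=3, minutes=30)))  # 3.5
print(hours_to_anti_infective(sirs, sirs - timedelta(hours=1)))              # 0.0
```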
Descriptive summaries are provided as counts and percentages for binary variables and as means and standard deviations for numeric variables, with medians and interquartile ranges (IQRs) also provided for highly skewed continuous variables. Proportions of deaths were compared between the evaluation and baseline periods of the sepsis project using Fisher exact tests; generalized linear models14 were used to analyze changes between the baseline and evaluation periods for all other outcomes. The generalized linear models used binary outcomes for comparisons of perfect care indexes, 30-day mortality, and the proportions of patients with anti-infective agents administered for nosocomial and multidrug-resistant infections and for community-acquired infections. Gamma outcomes were used for costs, length of stay, and time to administration of anti-infective agents.
For each of these outcomes, log and identity link functions were used to evaluate relative change and absolute change, respectively. Negative binomial outcome models with log link functions and offset equal to log length of stay were used to analyze relative changes in the number of tests ordered per day, including basic metabolic panels, complete metabolic panels, and complete blood counts. A Taylor series approximation was applied to the results of these analyses to evaluate absolute changes in numbers of laboratory tests per day. In the joint replacement and laboratory utilization projects, statistical inferences were performed using asymptotic likelihood ratio or Wald statistics. To account for positive skewness and smaller sample sizes, confidence intervals in the sepsis project were obtained using the bias-correction and accelerated bootstrap method15 with 1000 bootstrap samples, and P values were computed using permutation tests.
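The resampling machinery used for the sepsis comparisons can be sketched as below. This is an illustration only: it shows a two-sided permutation test and a simple percentile bootstrap, whereas the published analysis used the bias-corrected and accelerated (BCa) bootstrap in R.

```python
import random
import statistics

def mean_diff(baseline, evaluation):
    """Evaluation-period mean minus baseline-period mean."""
    return statistics.mean(evaluation) - statistics.mean(baseline)

def permutation_p(baseline, evaluation, n_perm=1000, seed=0):
    """Two-sided permutation test: pool both samples, reshuffle the
    group labels, and count how often the reshuffled difference is at
    least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(mean_diff(baseline, evaluation))
    pooled = list(baseline) + list(evaluation)
    n = len(baseline)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(mean_diff(pooled[:n], pooled[n:])) >= observed:
            hits += 1
    return hits / n_perm

def bootstrap_ci(baseline, evaluation, n_boot=1000, seed=0):
    """95% percentile bootstrap CI for the mean difference (the paper
    used the BCa variant, which also adjusts for bias and skewness)."""
    rng = random.Random(seed)
    diffs = sorted(
        mean_diff(rng.choices(baseline, k=len(baseline)),
                  rng.choices(evaluation, k=len(evaluation)))
        for _ in range(n_boot)
    )
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]
```

Resampling methods of this kind avoid the normality assumptions that the positively skewed, small sepsis samples would otherwise violate.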
The joint replacement and laboratory utilization analyses were conducted using SAS version 9.4 statistical software (SAS Institute Inc). The sepsis analysis was performed using R version 3.3.0 statistical software (R Foundation). All hypothesis tests were performed using 2-sided α = .05 without adjustment for multiple comparisons.
Results
During the fiscal year from July 1, 2014, to June 30, 2015 (Table 3), University of Utah Health Care had approximately 34 000 inpatient discharges, 52 000 emergency department visits, and 1.7 million total patient visits.
Inpatient total direct care costs ($470.4 million) accounted for 46.0% of total direct costs, and outpatient direct costs ($553.1 million) accounted for 54.0%. For inpatient care, facility utilization (37.7%) and professional services (24.3%) were the largest cost components. Table 3 also shows total annual direct costs by discharge department (inpatient). Resource use varied considerably by both department and care location. With $151.4 million in inpatient costs, the surgery department had the highest overall costs among departments. Together, the surgery and internal medicine departments constituted 51.8% of total inpatient costs.
Cost components (eg, laboratory tests, supplies, professional costs) for inpatient and outpatient care varied considerably across departments (Table 3). Among inpatient episodes, professional costs accounted for 24.3% of total costs and exceeded 30% for obstetrics and gynecology (47.5%) and neurosurgery (32.9%). Supply costs represented 32.0% of all orthopedic surgery inpatient costs and 16.8% of all neurosurgery inpatient costs. Laboratory costs also varied considerably. For example, 7.0% of inpatient costs in internal medicine were attributable to laboratory testing, compared with 2.8% for neurosurgery and 2.6% for orthopedic surgery.
Table 3 also shows total annual direct costs by physician department for outpatient visits. With $107.2 million in outpatient costs, the surgery department had the highest overall costs among departments. Together, the surgery and internal medicine departments constituted 49.3% of total outpatient costs. For outpatient care, professional services (41.9%), facility utilization (18.4%), and therapy services (eg, physical and respiratory therapy; 14.2%) were the largest components (Table 3). Among outpatient visits, professional costs accounted for 41.9% of total costs overall, with pediatrics (78.6%), dermatology (77.3%), and family and preventive medicine (62.4%) exceeding 60%; however, these 3 departments had among the lowest supply and medication costs.
Total professional and facility costs for the MS-DRG discharge diagnoses with the highest total direct costs over 1 year and their CVs across hospitalizations are shown in Table 4. The total cost and CV provided an assessment of the largest potential opportunities for value improvement through care standardization.
As shown in Table 4, patient conditions with the highest CV included postoperative infection (MS-DRG 853; CV = 1.71; magnitude of difference between lowest and highest total direct cost per patient = $225 927) and sepsis (MS-DRG 871; CV = 1.37; magnitude of difference between lowest and highest total direct cost per patient = $210 679), both areas of current clinical improvement work. For the highest-volume elective procedures, component cost variability was also examined. Total joint replacement of the lower extremity (MS-DRG 470) showed an overall CV of 0.33. However, the 2 largest component costs, supply costs (CV = 0.66; magnitude of difference between lowest and highest total supply direct cost per patient = $20 966) and facility utilization costs (CV = 0.44; magnitude of difference between lowest and highest total facility utilization direct cost per patient = $12 085), illustrated higher variability, suggesting an important focus area for improvement.
The clinical characteristics of the patients in the baseline period and in the evaluation period for each of the 3 clinical improvement projects are provided in eTable 2 in the Supplement. Results from the 3 projects are provided in Table 5.
Orthopedic surgeons identified lower extremity joint replacement (MS-DRG 470) as a high-volume elective procedure associated with variability in supply and facility utilization costs (Table 4). In November 2012, a team led by an orthopedic surgeon and facilitated by a process engineer developed a consensus clinical pathway for patients undergoing hip and knee joint replacement (eFigure 3 in the Supplement).
The multidisciplinary team defined a perfect care index for joint replacement comprising 6 nationally and locally defined quality indicators: (1) 30-day readmission; (2) SCIP composite16; (3) 35 Hospital Acquired Condition/Patient Safety Indicator measures17,18; (4) admission to the orthopedic acute care unit during hospitalization; (5) early mobility (out of bed on day of surgery); and (6) emergency department visit within 90 days of discharge.
Care process redesign began in April 2013 and included 1 component of the care pathway intervention, early mobility,19,20 as has been previously reported.21 After 1 year, the 4-month mean perfect care index increased from 54% to 80% (26% absolute increase; 95% CI, 18%-35%; P < .001). Because several components of the initial perfect care index were consistently being met (readmission, SCIP composite, early mobility), the team undertook a second stage of continuous improvement and in September 2013 defined a new perfect care index comprising measures 3, 4, and 6 from the original index plus a new measure, successful discharge of the patient to home, supported by home health services. The 4-month mean for the revised perfect care index increased from 50% during May through August 2013 to 65% during December 2014 through March 2015 (15% absolute increase; 95% CI, 6%-24%; P = .002) (Figure and Table 5).
Compared with the baseline year (n = 634 admissions), mean direct costs were reduced by 7% (95% CI, 3%-11%; P < .001) during the implementation year (first evaluation year, n = 637 admissions) and by 11% (95% CI, 7%-14%; P < .001) between the baseline year and the postimplementation year (second evaluation year, n = 658 admissions).
During the first improvement cycle, early mobility (out of bed on the day of surgery) showed the greatest improvement. After modifying the schedules of in-house physical therapists to ensure same-day mobility, the mean (SD) length of stay declined from 3.50 (1.53) days during the baseline year to 3.17 (1.21) days during the first evaluation year (reduction, 0.33 days; 95% CI, 0.20-0.47 days; P < .001) and to 2.88 (1.16) days during the second evaluation year (reduction, 0.63 days; 95% CI, 0.50-0.76 days; P < .001).
This decrease in facility utilization and length of stay accounted for 34% of the cost reduction between the baseline year and the postimplementation year (second evaluation year). Because costs varied widely across similar implants despite comparable outcomes, implant contracts were renegotiated; the resulting lower supply pricing accounted for 41% of the overall cost savings between the baseline and postimplementation years.
In February 2013, hospitalists launched a quality improvement project to reduce unnecessary inpatient laboratory testing. The project included (1) clinician education, (2) a rounding checklist including discussion of all laboratory testing plans, (3) monthly value-driven outcomes feedback via in-person group review of current and year-to-date comparative individual and peer laboratory utilization data, and (4) a financial incentive program that shared 50% of hospital cost savings with the department to support future quality improvement projects.22
The mean (SD) cost per day for laboratory testing on the hospitalist service was $138 ($233) (median, $113; IQR, $79-$160) during the baseline period and $123 ($213) (median, $99; IQR, $66-$147) during the multifaceted intervention (mean difference, −$15; 95% CI, −$19 to −$11; P < .001). The numbers of basic metabolic panels, complete metabolic panels, and complete blood count tests per day were reduced by 0.13 (95% CI, 0.10-0.16), 0.10 (95% CI, 0.07-0.13), and 0.28 (95% CI, 0.26-0.31) tests per day, respectively, from baseline means (SDs) of 0.75 (1.03), 0.32 (0.68), and 0.92 (0.79) tests per day (all P < .001). The change in mean length of stay was not statistically significant (mean [SD] length of stay, 4.48 [5.12] days [median, 3.17 days; IQR, 2.02-5.00 days] in the baseline period and 4.54 [4.67] days [median, 3.20 days; IQR, 2.10-5.14 days] in the intervention period; mean difference, 0.06 days; 95% CI, −0.11 to 0.23 days; P = .48). The risk of 30-day readmission was reduced from 14% (280 of 2034) at baseline to 11% (491 of 4276) during the intervention (difference, −2%; 95% CI, −4% to −1%; P = .01). In contrast, for nonhospitalist admissions that excluded obstetrics, rehabilitation, and psychiatry visits, the mean (SD) cost per day for laboratory testing was $130 ($432) in the baseline period and $132 ($420) in the evaluation period.22 Cost savings associated with this project exceeded $250 000 per year.
Sepsis was identified as one of the highest-volume MS-DRGs with highly variable costs (CV = 1.37; Table 4). Sepsis is also one of the top 3 causes of inpatient mortality at the University of Utah and nationwide.23 Early recognition and timely administration of anti-infective agents are important factors in sepsis management.23 A retrospective review of 157 patients with sepsis during a 6-month interval showed that anti-infective agents were administered a median of 3.7 hours (IQR, 1.2-7.8 hours) and a mean (SD) of 8.1 (14.4) hours after patients met SIRS criteria. For the 29 patients in the baseline period on the acute internal medicine service, the median time to anti-infective agent administration was 3.4 hours (IQR, 0.8-7.8 hours) and the mean (SD) was 7.8 (11.0) hours.
To reduce time to anti-infective agent administration in patients with sepsis, a multifaceted educational campaign targeting improved recognition and treatment of sepsis was developed and implemented for all clinical staff. A notification system based on Modified Early Warning System triggers24,25 was also embedded in the electronic health record, along with corresponding sepsis order sets and real-time Modified Early Warning System scores on patient lists. Progress was tracked using the value-driven outcomes tool.
After 4 months of implementation (November 2, 2015, to February 29, 2016) on the acute internal medicine service, the time from meeting SIRS criteria to administration of anti-infective agents for 76 patients was reduced to a median time of 2.2 hours (IQR, 1.0-4.5 hours) and a mean (SD) time of 3.6 (4.7) hours (mean difference from baseline to implementation, −4.1 hours; 95% CI, −9.9 to −1.0 hours; P = .02). Additional details on time to anti-infective agent administration and secondary measures are provided in Table 5, both in comparison with the whole-hospital baseline audit sample and a subset of this sample restricted to the acute internal medicine service. There was no significant change in the use of anti-infective agents for nosocomial and multidrug-resistant infections within 24 hours of SIRS criteria being met, and there was no significant difference in mortality.
Discussion
Implementing an analytic tool that allocates clinical care costs and quality measures to individual patient encounters was associated with significant improvements in value of care delivered across 3 clinical conditions that showed high cost variation at baseline. For total joint replacement, a composite quality index increased during the 2-year intervention period, and mean direct costs were 7% to 11% lower. The initiative to reduce hospitalist laboratory testing was associated with 11% lower costs, with no significant change in length of stay and a lower 30-day readmission rate. A sepsis intervention was associated with reduced mean times to anti-infective administration following fulfillment of SIRS criteria in patients with infection.
As identified by an Institute of Medicine report,2 variability in the delivery of health care is one of the greatest opportunities to improve quality and reduce costs through process improvement and standardization. With component cost analyses, the underlying drivers of cost variability can be identified and allow targeted interventions, such as supply negotiations or staff management in the case of the total joint replacement initiative. The capacity to measure the quality and cost implications of interventions in real time facilitates physician engagement and assurance that cost-reduction initiatives can lead to quality improvement and vice versa.
The value-driven outcomes tool can quantify quality in terms of blended indexes that incorporate both nationally endorsed and validated measures and local physician- and patient-defined outcome measures. Clinician-defined quality indexes have the advantage of securing greater physician engagement in quality improvement processes, leveraging local drivers of quality,26 and providing a simple framework that can be modified over time and across practice sites. This framework also allows efficient incorporation of new standardized measurement sets and tailored risk adjustment. For example, 8 patient satisfaction survey instruments and 39 patient-reported outcome survey instruments have recently been incorporated into the value-driven outcomes program. These measures play an increasing role in the assessment of value of clinical care from patients’ perspectives. Ongoing efforts include risk adjustment in patient quality and outcome measures and broadening the definition of outcome measures to include patient experience and patient-reported outcomes.
Health systems and individual physicians are increasingly held accountable for both the quality and cost of care they provide as is evident by the recent Medicare Access and CHIP Reauthorization Act of 2015 (MACRA),27 but recent research on cost reform primarily focuses on cost from the payer’s perspective. Charge transparency is embraced by payers as a strategy to drive cost reduction through informed consumer choice and price competition. For example, ProMedica28 posts detailed price sheets online for each of their hospitals and clinics for most services. Other efforts, such as those ongoing in Maryland, require public and private payers to pay the same rates, leading to reduced costs for private payers and successful cost containment.29 However, without quality and outcome transparency, such as that championed by the Centers for Medicare & Medicaid Services,18 patients are not able to make truly informed health care choices.
While patients as consumers can create market forces to reduce health care costs, clinicians and health care systems have the greatest opportunity, the most knowledge, and the responsibility to improve value.30-32 A study of 696 trauma admissions to the University of Michigan concluded that 35% of the total costs per patient were under the immediate control of physicians.33 Alternative payment models shift the focus from charges to the cost of care delivery. Within this context, there is an alternative, complementary strategy to managing costs while also attending to quality: transparency of cost and outcome data to physicians at the level of individual encounters and conditions.
This work to measure and improve care value builds on others’ work on the use of data and performance feedback to improve clinical practice.34 Regular feedback through the value-driven outcomes tool is critical to monitoring status; defining clear targets, action plans, and supportive tools; and providing peer comparison data (eFigure 1 and eFigure 2 in the Supplement).
This work also builds on others’ efforts to improve care value at the enterprise level.2,35 Best practices in value improvement that have been adopted include top-level leadership focus on value improvement; a culture of continuous improvement; leveraging information technology to identify opportunities, track progress, and support evidence-based care; and engaging multidisciplinary teams to redesign care processes, reduce unwarranted variation, and improve care value.
The tools needed to understand variation in costs and outcomes at a unit of analysis that is actionable (individual patients with specific clinical conditions) have not been generally available to date. The value-driven outcomes costing method relies on a combination of actual costs measured from sources such as the supply management system, time-based allocations (per minute or per hour in the intensive care unit, operating room, or emergency department, for example), and wRVU-based allocations (physician costs).7 By creating peer-to-peer physician comparisons and by targeting areas with the highest variability across the enterprise, we create opportunities to systematically deliver more affordable, higher-quality care.
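The hybrid allocation logic described above can be illustrated with a minimal sketch. This is not the authors' implementation; all function names, data fields, and dollar figures are hypothetical examples of combining actual supply costs, per-minute facility rates, and wRVU-scaled physician costs into one encounter-level total.

```python
# Illustrative sketch (hypothetical names and rates) of the hybrid
# cost-allocation approach: actual costs + time-based allocations
# + wRVU-based physician cost allocations.

def allocate_encounter_costs(encounter):
    """Combine three allocation methods into one encounter-level direct cost."""
    # 1. Actual costs: items with known unit costs from the supply system
    actual = sum(item["unit_cost"] * item["qty"] for item in encounter["supplies"])

    # 2. Time-based allocation: per-minute rates for ICU, OR, or ED use
    time_based = sum(
        use["minutes"] * use["rate_per_minute"] for use in encounter["facility_use"]
    )

    # 3. wRVU-based allocation: physician costs scaled by work relative value units
    wrvu_based = encounter["wrvus"] * encounter["cost_per_wrvu"]

    return actual + time_based + wrvu_based

# Hypothetical total joint replacement encounter
encounter = {
    "supplies": [{"unit_cost": 4500.0, "qty": 1}],                # implant
    "facility_use": [{"minutes": 120, "rate_per_minute": 30.0}],  # OR time
    "wrvus": 20.0,
    "cost_per_wrvu": 55.0,
}
print(allocate_encounter_costs(encounter))  # 4500 + 3600 + 1100 = 9200.0
```

Encounter-level totals computed this way can then be aggregated by condition or clinician to surface the cost variability that motivates targeted interventions.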
There are alternative, more precise, and also more labor-intensive costing methods. With time-driven activity-based costing,6 process mapping is undertaken for specific clinical conditions, and costing is determined by an average capacity cost rate (dollars per minute) for each clinical resource and personnel, multiplied by the time spent by each resource involved in the process. Unlike time-driven activity-based costing wherein costs are determined individually for each clinical condition, the value-driven outcomes approach provides a scalable measurement solution across diagnoses and the entire health care enterprise. Moreover, with the value-driven outcomes tool, all allocated costs for a given period reconcile to the actual hospital accounting expenses (general ledger) for that same period. Our costing method, like most cost accounting systems,36 also enables assessment of systemwide resource allocations and facilitates analysis of trends in service demand.
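For contrast, the TDABC calculation described above reduces to summing rate-times-time over each mapped resource. The sketch below uses hypothetical resource names and figures, assuming the capacity cost rate is a resource's cost per period divided by its available minutes.

```python
# Illustrative sketch of time-driven activity-based costing (TDABC);
# resource names and dollar figures are hypothetical.

def capacity_cost_rate(resource_cost_per_period, available_minutes):
    """Dollars per minute of practical capacity for one resource."""
    return resource_cost_per_period / available_minutes

def tdabc_cost(process_steps):
    """Sum (capacity cost rate x time spent) over every resource in the process."""
    return sum(step["rate_per_minute"] * step["minutes"] for step in process_steps)

# Hypothetical process map for a single clinic visit
nurse_rate = capacity_cost_rate(8000.0, 8000)       # $1.00 per minute
physician_rate = capacity_cost_rate(30000.0, 6000)  # $5.00 per minute
steps = [
    {"rate_per_minute": nurse_rate, "minutes": 15},      # intake
    {"rate_per_minute": physician_rate, "minutes": 20},  # examination
]
print(tdabc_cost(steps))  # 15 + 100 = 115.0
```

Because TDABC requires a process map and capacity cost rate for every clinical condition, it is more precise but less scalable than the ledger-reconciled allocation approach described above.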
However, this approach and this study have several limitations. First, the data and approach lack insight into care provided outside the health care organization, particularly for pharmacy, laboratory, and imaging services. Based on patients within the University of Utah health insurance plan, an estimated 40% of laboratory services and 30% of imaging services are provided outside the health care organization. Second, Utah has unique population characteristics—the population is younger and more physically active compared with the national average,37 so the findings may not be completely generalizable to health systems in other states. Third, the clinical improvement studies used pre-post designs generally without concurrent control groups or statistical adjustment for potential confounding factors, so causality cannot be established. Fourth, continuous quality improvement includes a package of changes that can be adapted over time. As such, the discrete component that contributes most to change cannot be isolated.
Fifth, physicians were not blinded to the interventions, but rather were aware of the outcomes being assessed as part of the process. For example, the orthopedic surgeons and physical therapists knew that early mobility was a key component of perfect care following total joint replacement. This component of the quality improvement process likely influenced the observed outcomes. Sixth, exposing outcomes and costs publicly could lead to unintended consequences, such as clinicians shifting away from providing care to higher-cost, higher-risk patients. Risk adjustment in quality and outcome measures may mitigate these effects. The analytic framework currently has limited risk adjustment; consequently, changes in costs and outcomes may be confounded by unaccounted changes in patient risk profiles. To date, the overall case mix index at the University of Utah has remained relatively stable, although ongoing evaluation will be needed.
Seventh, the value-driven outcomes tool includes only direct costs. Indirect overhead costs such as information technology, administrative staff, hospital operations, and maintenance are generally estimated to represent almost half of total hospital costs, increasing more rapidly than medical inflation during the past decade.38 Similar tools are needed to assess the relationship of indirect costs with quality and care. Furthermore, direct costs per admission or per outpatient visit are not reported owing to the sensitive business nature of these data. Finally, there is also a need to further demonstrate the generalizability and scalability of the value-driven outcomes approach across many more conditions and units, both at the University of Utah and at other health care systems.
These limitations reflect the complexity of transforming a health care system from one based on volume to one based on value. Health care systems and their related hospitals are complex organizations, particularly those in academic medical centers where clinical care encounters competing agendas such as research and education. The variable distribution of costs of clinical services (as shown in Table 3 and Table 4) reflects the University of Utah experience and most likely will vary according to institution. The goals of the value-driven outcomes program were to estimate and increase awareness of this variation across units, departments, and clinicians; attempt to reduce that variation; and thereby reduce costs and improve quality.
Implementation of a multifaceted value-driven outcomes tool to identify high variability in costs and outcomes in a large single health care system was associated with reduced costs and improved quality for 3 selected clinical projects. There may be benefit for physicians to understand actual care costs (not charges) and outcomes achieved for individual patients with defined clinical conditions.
Corresponding Author: Vivian S. Lee, MD, PhD, MBA, University of Utah Senior Vice President’s Office, 175 N Medical Dr E, Clinical Neurosciences Bldg, Salt Lake City, UT 84132 (firstname.lastname@example.org).
Author Contributions: Dr Lee and Mr Park had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Lee, Kawamoto, Johnson, Pelt, Pendleton.
Acquisition, analysis, or interpretation of data: Kawamoto, Hess, Park, Young, Hunter, Johnson, Gulbransen, Pelt, Horton, Graves, Greene, Anzai, Pendleton.
Drafting of the manuscript: Lee, Kawamoto, Hess, Hunter, Johnson, Gulbransen, Pelt, Horton, Greene, Pendleton.
Critical revision of the manuscript for important intellectual content: Lee, Kawamoto, Hess, Park, Young, Johnson, Gulbransen, Pelt, Horton, Graves, Anzai, Pendleton.
Statistical analysis: Kawamoto, Young, Greene.
Administrative, technical, or material support: Lee, Kawamoto, Park, Young, Hunter, Johnson, Gulbransen, Anzai, Pendleton.
Study supervision: Park, Pelt, Pendleton.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. The University of Utah Health Sciences Center is currently exploring potential options for maximizing the adoption and effects of the value-driven outcomes analytical tool, including potentially the provision of commercial products and services based on the tool; Drs Lee and Kawamoto, Messrs Park and Young, and Ms Hunter were codevelopers of the tool. Messrs Park and Young reported being nonequity coinventors of technologies for delivering health care data applications using multitenant clouds and software agents and having patents pending for “Multi-tenant Cloud for Healthcare Data Application Delivery” and “Agent for Healthcare Data Application Delivery.” Dr Pelt reported receiving personal fees and research support outside the submitted work related to consulting, speaker’s bureaus, and unrelated research from Zimmer Biomet. No other disclosures were reported.
Funding/Support: The statistical analyses were supported by the University of Utah Study Design and Biostatistics Center, with funding in part from the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant 5UL1TR001067-02.
Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: We thank the teams from the University of Utah Enterprise Data Warehouse, Decision Support, Department of Biomedical Informatics, administrative staff, physicians, and care teams who contributed to the development of the value-driven outcomes tool and its implementation. Angela Presson, PhD, University of Utah, Salt Lake City, assisted with statistical analyses for sepsis quality improvement; Polina Kukhareva, MPH, MS, University of Utah, Salt Lake City, assisted with statistical analyses for laboratory testing improvement; and Danielle Sample, MPH, and Joe Borgenicht, University of Utah, Salt Lake City, provided editorial assistance; they received no additional compensation for this work.