Auerbach AD, Hilton JF, Maselli J, Pekow PS, Rothberg MB, Lindenauer PK. Case Volume, Quality of Care, and Care Efficiency in Coronary Artery Bypass Surgery. Arch Intern Med. 2010;170(14):1202–1208. doi:10.1001/archinternmed.2010.237
How case volume and quality of care relate to hospital costs or length of stay (LOS) is an important question as we seek to improve the value of health care.
We conducted an observational study of patients 18 years or older who underwent coronary artery bypass grafting surgery in a network of US hospitals. Case volumes were estimated using our data set. Quality was assessed by whether recommended medications and services were not received in ideal patients, as well as the overall number of measures missed. We used multivariable hierarchical models to estimate the effects of case volume and quality on hospital cost and LOS.
The majority of hospitals (51%) and physicians (78%) were lowest-volume providers, and only 18% of patients received all quality of care measures. Median LOS was 7 days (interquartile range [IQR], 6-11 days), and median costs were $25 140 (IQR, $19 677-$33 121). In analyses adjusted for patient and site characteristics, lowest-volume hospitals had 19.8% higher costs (95% CI, 3.9%-38.0% higher); adjusting for care quality did not eliminate differences in costs. Low surgeon volume was also associated with higher costs, though less strongly (3.1% higher costs [95% CI, 0.6%-5.6% higher]). Individual quality measures had inconsistent associations with costs or LOS, but patients who had no quality measures missed had much shorter LOS and lower costs than those who missed even one.
Avoiding lowest-volume hospitals and maximizing quality are separate approaches to improving health care efficiency through reducing costs of coronary bypass surgery.
Improving quality and reducing costs of care are crucial goals for US health care. One approach to improving outcomes is to promote care at higher-volume sites,1-3 while other efforts have focused on improving adherence to quality of care measures. Few data describe the interplay among quality, case volume, and costs or length of stay (LOS), even as we seek to constrain costs and increase the efficiency and value of health care.4 We recently published findings suggesting that overall quality of care markedly influences patient outcomes after cardiac surgery,5 whereas higher volume has a weaker association with outcomes. These findings suggest that care quality may be a more important driver of value improvement, via better outcomes, regardless of case volume. However, care value also improves if outcomes are unchanged but use of resources falls.
Understanding whether higher case volume or better quality reduces costs or LOS has implications for health systems. If higher case volume were independently associated with lower costs or shorter LOS, this would provide a rationale for investing in the infrastructure required to maximize access to high-volume hospitals or surgeons.3 Conversely, a positive relationship between higher quality and efficiency might justify investments in the infrastructure needed to create high-reliability systems of care.6
To explore these issues, we analyzed data collected from adults undergoing coronary artery bypass surgery in a sample of US hospitals. Using these data, we first examined the relationship between surgeon and hospital volume, and costs and LOS. We then examined the relationships between case volume and costs and LOS after adjusting for individual measures of care quality, as well as overall care quality.
Our data were collected on 81 289 patients cared for by 1451 physicians at 164 hospitals participating in Perspective (Premier Inc, Charlotte, North Carolina), a voluntary, fee-supported database developed for measuring quality and health care utilization and which we have used in previous research.5,7-9
In addition to standard hospital discharge file data, Perspective contains a date-stamped log of all materials (eg, serial compression devices used to prevent venous thromboembolism), and medications (eg, β-blockers) charged for during hospitalization. Perspective charge data are collected electronically from participating sites and audited regularly to ensure data validity. Perspective sites are representative of the US hospital population and perform similarly on publicly reported quality measures.10
Patients in our analysis were admitted between October 1, 2003, and September 30, 2005, were 18 years or older, and had coronary artery bypass grafting (CABG) as their principal procedure as defined by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code. The institutional review board at University of California, San Francisco, approved our study.
In addition to patient age, sex, race or ethnicity, marital status, insurance information, and principal diagnosis, we classified comorbidities using the method of Elixhauser et al.11 Data regarding LOS and hospital costs were obtained from the Perspective discharge file. Three-quarters of hospitals that participate in Perspective report costs derived from their cost accounting systems, while others provide costs using Medicare cost to charge ratios. In addition, the database contains information about hospital size, teaching status, and location.
Because some hospitals did not contribute data for the entire study period, we estimated annual case volume by dividing each hospital's or physician's observed patient count by the total number of months that the hospital or physician contributed patients to the data set and then multiplying by 12. These “annualized” volumes were then divided into quartiles, as in previous work.1,12-14
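The annualization above is simple proration followed by a rank-based quartile split. A minimal sketch in Python, using hypothetical hospital labels and case counts (not data from the study), illustrates the calculation:

```python
def annualized_volume(case_count: int, months_contributed: int) -> float:
    """Scale an observed case count to a 12-month rate."""
    return case_count / months_contributed * 12

# Hypothetical hospitals: observed cases and months of participation.
volumes = {
    "A": annualized_volume(150, 20),  # contributed 20 months -> 90 cases/yr
    "B": annualized_volume(640, 12),  # full year -> 640 cases/yr
    "C": annualized_volume(300, 24),  # two years -> 150 cases/yr
    "D": annualized_volume(500, 15),  # 15 months -> 400 cases/yr
}

# Rank-based quartile assignment (1 = lowest-volume quartile).
ranked = sorted(volumes, key=volumes.get)
quartile = {h: i * 4 // len(ranked) + 1 for i, h in enumerate(ranked)}
```

With only 4 hypothetical hospitals each quartile holds one site; with 164 hospitals the same ranking logic assigns roughly 41 sites per quartile.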
Because diagnosis codes cannot reliably distinguish between complications and preexisting conditions, we measured the proportion of ideal candidates for each care process who failed to receive it (a "missed" quality measure). We developed these measures by translating recommendations from the Surgical Care Improvement Project15 and American Heart Association/American College of Cardiology guidelines16 into a series of dichotomous quality measures.5 These measures, many of which are also included in recently published recommendations,17 included whether antimicrobials were used to prevent surgical site infection on the operative day, whether that antimicrobial was discontinued within 48 hours, whether serial compression devices were used to prevent venous thromboembolism in the 2 days following surgery, and whether aspirin, β-blockers, or lipid-lowering statin drugs were administered in the 2 days following surgery. Other measures (such as those related to glucose control) cannot be detected in Perspective data and were not included.
To provide a more sensitive measure of system-level ability to provide reliable care,18 we also counted the total number of individual quality measures missed during hospitalization.
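As a sketch, using hypothetical flags for the six measures described above (True meaning the measure was missed in an ideal candidate), the overall count and an all-or-none summary could be computed as:

```python
# Hypothetical per-patient quality flags (True = measure missed).
patient_measures = {
    "operative_day_antimicrobial": False,
    "antimicrobial_stopped_48h": True,
    "serial_compression_devices": True,
    "aspirin": False,
    "beta_blocker": False,
    "statin": False,
}

# Overall quality: total number of measures missed during hospitalization.
n_missed = sum(patient_measures.values())

# All-or-none summary: the patient passes only if no measure was missed.
all_or_none_pass = n_missed == 0
```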
We first described study patients and hospitals using univariable methods. Mixed-effect models were used to account for clustering of patients within physicians and within hospitals. Length of stay and costs were log-transformed to account for skew and to stabilize variance of residuals in multivariable models. Beta estimates and 95% confidence intervals (CIs) were converted to percentage differences using the following formula: 100 × (exp[estimate] − 1).
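The conversion above can be checked directly; the snippet below is a sketch with an illustrative coefficient (not one taken from the study's models) showing how a log-scale estimate maps to a percentage difference:

```python
import math

def pct_difference(beta: float) -> float:
    """Convert a coefficient from a model of a log-transformed outcome
    into a percentage difference: 100 * (exp(beta) - 1)."""
    return 100 * (math.exp(beta) - 1)

# An illustrative log-cost coefficient of 0.181 corresponds to
# roughly 19.8% higher costs for the indexed group.
print(round(pct_difference(0.181), 1))
```

The same formula applies to the confidence limits, which is why the reported intervals are asymmetric on the percentage scale.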
Models were constructed using manual variable selection methods; volume and quality measures were entered manually, while additional covariates (confounding factors) were selected for inclusion if they were associated with the outcome at P < .01, if including them changed estimates for the primary predictors by more than 10%, or if they had face validity. Models of LOS were adjusted for age, sex, race, insurance type, diagnosis-related group severity of illness score, admission status, geographic area, comorbid illnesses (congestive heart failure, valvular disease, hypertension, paralysis, neurological disorders, chronic obstructive pulmonary disease, diabetes with complications, renal failure, obesity, weight loss, electrolyte disorder, blood loss, deficiency anemia, alcohol or drug abuse, psychoses, and depression), and whether an internal mammary graft was used during the procedure. Models of costs included age, sex, race, insurance type, admission status, number of beds, severity score, comorbid illnesses (congestive heart failure, valvular disease, hypertension, paralysis, neurological disorders, diabetes, diabetes with complications, renal failure, coagulopathy, weight loss, electrolyte disorder, blood loss, deficiency anemia, psychoses, and alcohol abuse), whether an internal mammary graft was used during the procedure, and source of costs (actual costs or cost to charge ratio).
Multivariable models first assessed the associations between hospital volume, physician volume, and individual (or overall) quality measures as single predictors in individual models for each predictor, after adjusting only for patient and hospital confounding factors. To determine the degree to which volume effects and missed quality effects were related, our next models included our volume and individual quality measures in one fully adjusted model; a separate fully adjusted model included volume and overall quality measures.
To assess potential collinearity between our key predictors (hospital volume, physician volume, and quality measures), we examined Pearson correlations between them. In view of the large number of observations, these analyses gave no evidence for collinearity (all correlations, <0.3). In addition, we examined models including only subsets of these variables, and found no evidence for instability. All analyses were carried out using SAS version 9.1 (SAS Institute Inc, Cary, North Carolina).
A total of 81 289 patients underwent CABG at one of our study sites between October 1, 2003, and September 30, 2005 (Table 1). Mean (SD) age of patients was 65.0 (10.9) years, and 72% were men. Most were white, married, and had Medicare insurance. The most common comorbidities in our cohort were hypertension (72%), diabetes without chronic complications (31%), and chronic obstructive pulmonary disease (23%). Most received care at nonteaching hospitals in the South. Median LOS was 7 days (interquartile range [IQR], 6-11 days), and median costs were $25 140 (IQR, $19 677-$33 121).
We have previously published details of our quality measures and their characteristics in our study population.5 Most patients (77%) did not have charges for serial compression devices, whereas fewer did not receive a β-blocker (22%) or had no antimicrobial charges on the operative day (6%) (eTable 1). Very few patients (12%) had no missed quality measures, and 44% missed 3 or more. The majority of hospitals and physicians in our cohort were lowest-volume health care providers (eTable 2). Hospital volume ranged from 112 (IQR, 80-154) cases per year in the lowest-volume quartile to 644 (IQR, 536-754) per year in the highest quartile; physician volume ranged from 12 (IQR, 11-18) to 155 (IQR, 141-173) cases per year. The proportion of patients with 1 or more missed quality measures rose slightly with volume.
Lowest-volume hospitals had substantially higher costs but similar LOS compared with other hospitals; these differences persisted whether volume measures were adjusted for patient factors alone or for individual care quality measures (Table 2). Physician volume was not associated with LOS in individual models adjusting for clinical factors alone or clinical factors and quality measures. However, lowest-volume physicians had higher unadjusted costs, and these differences were not eliminated after adjusting for clinical factors or clinical factors and individual quality measures.
A number of individual quality measures were associated with unadjusted differences in LOS, many of which were altered substantially by adjusting for clinical risk factors. The addition of volume as another adjuster in our models did not appreciably alter the adjusted associations between individual quality measures and LOS or costs, suggesting that the associations between volume and resource use and between quality and resource use were independent of each other. In both individual and fully adjusted models, receiving antimicrobial prophylaxis was associated with longer LOS but not costs, and receipt of an antimicrobial after the first 48 hours and nonuse of serial compression devices for prevention of venous thromboembolism were associated with substantially longer LOS in individual or fully adjusted models.
Associations between hospital and physician volume and costs or LOS after adjusting for overall care quality were essentially identical to those adjusting for individual quality measures, suggesting that the associations of overall quality and of volume with LOS or costs were independent of each other (Table 3). Moreover, missing any quality measure was strongly associated with higher adjusted costs and longer LOS, whether or not volume measures were included.
In preplanned analyses, we tested for statistical interactions between the case volume measures and overall quality. These analyses showed statistically significant interactions between case volume and overall quality for both LOS and costs, suggesting small incremental benefits of receiving higher-quality care at a higher-volume hospital or from a busier surgeon.
In this large cohort of patients undergoing CABG, hospitals with the lowest operative volumes tended to have higher costs but similar LOS compared with high-volume hospitals; a weak association between low-volume surgeons and higher costs was also observed. These findings persisted even after adjusting for observable patient characteristics and after adjustment for whether recommended care processes were missed. In contrast, missing 1 or more quality measure was strongly associated with higher costs and longer LOS, which was essentially independent of the volume of the surgeon or hospital. These findings suggest that efficiency can be improved in CABG by advising patients to avoid low-volume health care providers, while encouraging investment in improving the reliability of hospital care.
The relationship between higher volume of cardiac surgery and better outcomes is well established.13,19-21 Because cost savings attributable to volume-based referral have generally been modest (<5%),22 such referrals have been thought to improve value largely through improved clinical outcomes.22-24 Our data suggest that the bulk of savings would result from patients avoiding the lowest-volume hospitals (as much as 16% savings if quality is not taken into account) and that little additional savings would result from shifting patients from second highest- to highest-volume centers (or third highest to highest). In our study, patients living near a lowest-volume hospital (approximately one-half of our hospitals) could choose from any of the 79 higher-volume hospitals rather than just the 19 in the highest quartile, saving between $85 and $171 million per year.
Our results also suggest that promoting adherence to process measures is a separate approach to improving care efficiency in cardiac surgery, but that maximizing overall rather than individual measure performance is critical. Whereas worse performance on individual measures in our study was inconsistently associated with costs or LOS and had minimal impact on the association between volume and outcomes, the number of care processes missed was a strong and consistent predictor of longer LOS and higher costs. The differences between individual and overall quality measures in their associations with costs and LOS are important because overall, all-or-none measurement is thought to be a more valid gauge of a system's ability to deliver all aspects of care reliably to individual patients.18 Our data suggest that overall system performance on quality may directly affect the efficiency of patient care, providing another rationale for "all or none" quality measurement as a method to compel widespread improvements in care,25 or at the least efforts to standardize care.26 Importantly, our overall quality measure was strongly associated with cost reductions even though it included individual measures with weak (or reversed) associations with resource use. Refining this list to include only measures strongly associated with resource use, or reweighting them (another proposed method for maximizing the impact of quality reporting), is likely only to magnify the importance of overall quality in identifying optimal systems.
Our study has a number of limitations. First, because we used administrative data from the inpatient stay only, we cannot easily distinguish complications from preexisting disease. However, we constructed our quality measures to focus on patients who had no documented contraindications, and we did not use comorbidities to define outcomes. Second, our quality measures focus primarily on inpatient medications and cannot distinguish continuation of home medications from initiation of medications in the hospital. This factor may influence the associations between resource use and aspirin, β-blockers, and statins but is less likely to affect antimicrobial or serial compression device use. In addition, our quality measures were collected from electronic billing systems rather than medical chart abstraction and have not been validated in a scientific study. However, because the business model of Premier Inc focuses on providing accurate benchmarking data to its members, all charge and diagnosis data are regularly audited for accuracy. Our cost data include only costs incurred during hospitalization and may miss costs of posthospital care. Because these data were not available, our cost models did not adjust for differences in local wage index or share of low-income patients. As an observational study, our results are subject to biases related to nonrandom assignment of patients to receive medications or devices, as well as the documentation biases described herein. However, our results were robust even after adjusting for all available patient-level and hospital-level data associated with our measures of resource use. Although participating hospitals are similar to other US centers in terms of size, teaching status, and location, it is possible that they differ from non-Premier sites in subtle ways not captured in our data. That said, previous research in Premier sites has produced results useful to policy makers.
In addition, although we constructed our volume measures to be consistent with those used in previous work, they may not adequately represent expertise accrued if low-volume surgeons frequently performed other complex cardiovascular surgical procedures or operated outside our Premier hospitals. Finally, it is likely that some surgical procedures in our data set were at least partially performed by fellows or residents. To address this concern, we adjusted for whether the surgery was performed at a teaching hospital.
Our results add to the literature by suggesting that one strategy to enhance the value of CABG is to direct patients away from lower-volume surgeons and hospitals to institutions and health care providers who perform the procedure regularly. However, our findings also suggest that quality improvement efforts focused on improving adherence to process measures as an all-or-none metric will also have beneficial effects on the value of care through reductions in cost and LOS. Health care reform efforts aimed at improving the value of care in the United States should examine whether strategies that incentivize systems to provide maximal care quality would be useful in this effort.
Correspondence: Andrew D. Auerbach, MD, MPH, Department of Medicine Hospitalist Group, University of California, San Francisco, 505 Parnassus Ave, PO Box 0131, San Francisco, CA 94143-0131 (email@example.com).
Accepted for Publication: December 23, 2009.
Author Contributions: Dr Auerbach had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Auerbach and Lindenauer. Acquisition of data: Auerbach and Lindenauer. Analysis and interpretation of data: Auerbach, Hilton, Maselli, Pekow, Rothberg, and Lindenauer. Drafting of the manuscript: Auerbach, Maselli, and Lindenauer. Critical revision of the manuscript for important intellectual content: Auerbach, Hilton, Pekow, Rothberg, and Lindenauer. Statistical analysis: Auerbach, Hilton, Maselli, Pekow, and Lindenauer. Obtained funding: Auerbach. Administrative, technical, and material support: Auerbach and Lindenauer.
Financial Disclosure: None reported.
Funding/Support: This study was supported by grant 05-1755 from the California Healthcare Foundation. Dr Auerbach was also supported by a K08 Patient Safety Research and Training Grant (K08 HS11416-02) from the Agency for Healthcare Research and Quality during the execution of this project.
Additional Contributions: Denise Remus, RN, PhD, and Kathy Belk assisted in assembling the data set used for this analysis.