Figure legend: Circles indicate ZIP codes that had any conversions; numbers in panel keys report numbers of hospitals that converted from nonprofit to for-profit status.
eTable 1. Quality Metrics from Hospital Compare
eTable 2. Detailed Methodology for Mortality Calculations
eTable 3. Matched Control Hospitals
eTable 4. Change in Hospital Financial Performance for Converting Hospitals Versus Controls in Post-Conversion Years 3-4
eTable 5. Change in Hospital Quality and Outcomes for Converting Hospitals Versus Controls in Post-Conversion Years 3-4
eTable 6. Change in Patient Population for Converting Hospitals Versus Controls in Post-Conversion Years 3-4
eTable 7. Change in Hospital Financial Performance for Converting Hospitals Versus All Other Hospitals
eTable 8. Change in Hospital Quality and Outcomes for Converting Hospitals Versus All Other Hospitals
eTable 9. Change in Patient Population for Converting Hospitals Versus All Other Hospitals
eTable 10. Change in Hospital Financial Performance for Converting Hospitals Versus All Other Hospitals, Excluding For-Profit Hospitals
eTable 11. Change in Hospital Quality and Outcomes for Converting Hospitals Versus All Other Hospitals, Excluding For-Profit Hospitals
eTable 12. Change in Patient Population for Converting Hospitals Versus All Other Hospitals, Excluding For-Profit Hospitals
eTable 13. Change in Hospital Financial Performance for Converting Hospitals Versus Controls, Before Conversion
eTable 14. Change in Hospital Quality and Outcomes for Converting Hospitals Versus Controls, Before Conversion
eTable 15. Change in Patient Population for Converting Hospitals Versus Controls, Before Conversion
Joynt KE, Orav EJ, Jha AK. Association Between Hospital Conversions to For-Profit Status and Clinical and Economic Outcomes. JAMA. 2014;312(16):1644–1652. doi:10.1001/jama.2014.13336
Importance
An increasing number of hospitals have converted to for-profit status, prompting concerns that these hospitals will focus on payer mix and profits, avoiding disadvantaged patients and paying less attention to quality of care.
Objective
To examine characteristics of US acute care hospitals associated with conversion to for-profit status and changes following conversion.
Design, Setting, and Participants
Retrospective cohort study conducted among 237 converting hospitals and 631 matched control hospitals. Participants were 1 843 764 Medicare fee-for-service beneficiaries at converting hospitals and 4 828 138 at control hospitals.
Exposures
Conversion to for-profit status, 2003-2010.
Main Outcomes and Measures
Financial performance measures, quality process measures, mortality rates, Medicare volume, and patient population for the 2 years before and the 2 years after conversion, excluding the conversion year, assessed using difference-in-differences models.
Results
Hospitals that converted to for-profit status were more often small or medium in size, located in the South, in an urban or suburban location, and were less often teaching institutions. Converting hospitals improved their total margins (ratio of net income to net revenue plus other income) more than controls (2.2% vs 0.4% improvement; difference in differences, 1.8% [95% CI, 0.5% to 3.1%]; P = .007). Converting hospitals and controls both improved their process quality metrics (6.0% vs 5.6%; difference in differences, 0.4% [95% CI, −1.1% to 2.0%]; P = .59). Mortality rates did not change at converting hospitals relative to controls for Medicare patients overall (increase of 0.1% vs 0.2%; difference in differences, −0.2% [95% CI, −0.5% to 0.2%]; P = .42) or for dual-eligible or disabled patients. There was no change in converting hospitals relative to controls in annual Medicare volume (−111 vs −74 patients; difference in differences, −37 [95% CI, −224 to 150]; P = .70), Disproportionate Share Hospital Index (1.7% vs 0.4%; difference in differences, 1.3% [95% CI, −0.9% to 3.4%]; P = .26), the proportion of patients with Medicaid (−0.2% vs 0.4%; difference in differences, −0.6% [95% CI, −2.0% to 0.8%]; P = .38), or the proportion of patients who were black (−0.4% vs −0.1%; difference in differences, −0.3% [95% CI, −1.9% to 1.3%]; P = .72) or Hispanic (0.1% vs −0.1%; difference in differences, 0.2% [95% CI, −0.3% to 0.7%]; P = .50).
Conclusions and Relevance
Hospital conversion to for-profit status was associated with improvements in financial margins but not associated with differences in quality or mortality rates or with the proportion of poor or minority patients receiving care.
During the past decade, there has been increasing attention paid to the growing number of nonprofit or public hospitals that have become for-profit institutions. These conversions are controversial. Advocates argue that for-profit organizations bring needed resources and experienced management to struggling institutions, improving the quality and efficiency of the care that these hospitals provide. Critics are concerned that once hospitals become “for-profit” they will focus on financial metrics such as improving payer mix and increasing volume, shunning disadvantaged patients and paying less attention to the provision of high-quality care.1
Although these debates are taking place across the nation as hospitals convert to for-profit status, there is little contemporary empirical evidence on what happens to patient care or to patient mix when hospitals convert. Most of the data on conversions are from the 1990s, and those data generally suggest that conversions were associated with higher margins2,3 but also higher mortality rates.4,5 However, these transitions took place during an era in which national efforts designed to monitor hospital quality, such as the Hospital Compare program,6 were not yet in existence and prior to the emergence of powerful consumer advocate groups focused on quality and safety, such as the Leapfrog Group.7,8 Thus, whether prior findings on conversions would hold today is unclear. For policy makers and clinical leaders considering the potential effects of for-profit conversions, more contemporary data would be helpful.
Therefore, in this study, we set out to answer 3 key questions. First, which hospitals are likely to convert to for-profit status? Second, what is the relationship between hospital conversion and changes in both financial health and clinical care? Third, what is the relationship between hospital conversions to for-profit status and changes in hospitals’ patient populations, in terms of annual case volume as well as provision of care to low-income and racial and ethnic minority populations?
We used Medicare inpatient data from 2002-2010 to identify nonfederal hospitals providing acute care services to Medicare beneficiaries in the 50 US states or the District of Columbia. To identify hospitals that had changed from nonprofit or public ownership to for-profit status during the study period, we used the Medicare Cost Reports from 2002-2010 to assess hospitals’ ownership during each year and confirmed changes using American Hospital Association data. We linked these data with Rural Urban Commuting Area codes, which describe urbanization.9 We obtained data on hospital size, ownership, teaching status, clinical resources, and region from American Hospital Association surveys from 2002-2010. The study was approved by the Office of Human Research Administration at the Harvard School of Public Health; a waiver of informed consent was granted because of the deidentified nature of the data.
We considered 3 sets of factors in assessing performance before and after conversion: financial performance, quality of care and outcomes, and measures of patient population.
We obtained hospitals’ financial performance from the Medicare Cost Reports, which have previously been used by the Centers for Medicare & Medicaid Services (CMS), the Medicare Payment Advisory Committee, and others to calculate hospitals’ margins and other metrics of financial performance.10 We used established methods to calculate total margins (ratio of net income to net revenue plus other income; higher positive numbers are better, and negative numbers suggest that the hospital is losing money) and operating margins (net revenue from patient care and other operations [such as pharmacy, meal service, and parking lot receipts] minus total operating expenses, divided by net revenues from patient care and other operations, again with higher positive numbers representing greater financial health).11,12 We calculated liquidity (ratio of current assets to total liabilities; numbers greater than 1 are better, and numbers less than 1 suggest that the hospital may have significant difficulties with cash flow) and capitalization (ratio of fund balances to total assets; lower numbers less than 1 are better). As has been done previously,11 we Winsorized total and operating margins, setting all values below the fifth percentile and above the 95th percentile equal to the values at those percentiles. We did not Winsorize liquidity or capitalization, because these variables were not found to have significant outliers.
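The margin and Winsorization calculations above can be sketched in a few lines of Python. This is a minimal illustration only (the study's analyses were run in SAS and Stata), and it uses a simple nearest-rank percentile rule, which may differ slightly from the exact percentile definition the authors used:

```python
def total_margin(net_income, net_revenue, other_income):
    # Total margin = net income / (net revenue + other income)
    return net_income / (net_revenue + other_income)

def percentile(sorted_vals, pct):
    # Nearest-rank percentile on a pre-sorted list
    idx = round(pct / 100 * (len(sorted_vals) - 1))
    return sorted_vals[idx]

def winsorize(values, lower=5, upper=95):
    # Clamp values below the 5th / above the 95th percentile
    # to the values at those percentiles
    s = sorted(values)
    lo, hi = percentile(s, lower), percentile(s, upper)
    return [min(max(v, lo), hi) for v in values]
```

Winsorizing, unlike trimming, keeps every hospital in the sample while preventing a few extreme cost-report values from dominating the mean.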
We used the Hospital Compare database, made publicly available by CMS, to obtain information on each hospital’s performance on processes of care for common medical conditions.6 These metrics include performance on items such as reperfusion for myocardial infarction, measurement of ejection fraction for heart failure, and timely administration of antibiotics for pneumonia, are scored on a 100-point scale, and are currently used in multiple public reporting and pay-for-performance programs under CMS6,13,14 (see eTable 1 in the Supplement for a full list of metrics). We then created a single composite score across the 3 conditions.15 We used the American Hospital Association survey to calculate nurse staffing (full-time equivalent nurses per 1000 patient-days) as an additional measure of quality.
To assess clinical outcomes, we defined our study population as Medicare fee-for-service beneficiaries admitted to a US acute care hospital in our sample in 2002-2012. We used patient-level data to determine mortality within 30 days of admission, following the CMS approach to classify “index admissions,” assigning all patients to the admitting hospital, regardless of whether they were transferred, and excluding patients discharged to hospice.16 We created patient-level repeated-measures logistic regression models for each year, accounting for clustering of patients within hospitals. We adjusted for age, sex, race/ethnicity, and 29 comorbid medical conditions using the Medicare risk adjustment model developed by the Agency for Healthcare Research and Quality (eTable 2 in the Supplement).17 We used this method rather than publicly reported mortality data for 2 reasons. First, risk-standardization used for public reporting shrinks small hospitals to the mean and therefore is less well-suited to analyses meant to find associations between hospital characteristics and outcomes when there is a known volume-outcome relationship.18 Second, publicly reported mortality data are only available for acute myocardial infarction, congestive heart failure, and pneumonia and are limited to 2008 and beyond.
We were particularly interested in the outcomes associated with conversion to for-profit status among vulnerable populations, and we therefore conducted 2 additional mortality analyses. First, we limited our population to patients dually eligible for both Medicare and Medicaid, to capture a socioeconomically vulnerable group. Second, we limited our population to patients younger than 65 years who qualified for Medicare based on the presence of a disability, to capture a clinically vulnerable group.
We also examined change in the hospital’s patient population. First, we calculated annual Medicare volume from the Medicare data. We also examined the Disproportionate Share Hospital (DSH) Index, a measure of the share of low-income patients served (calculated as the sum of the percentage of Medicare inpatient days for patients receiving Supplemental Security Income and the percentage of total inpatient days for patients eligible for Medicaid but not for Medicare), obtained from the Medicare Impact files. The benefit of using the DSH Index is that it captures care for low-income patients whose primary payer is Medicaid in addition to those who qualify for Medicare. We obtained the proportion of Medicare patients who were Medicaid-eligible using Medicare data. The final 2 measures were the proportion of black and Hispanic Medicare patients, as self-identified in the Medicare beneficiary enrollment file.
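The DSH Index formula described above reduces to simple arithmetic on inpatient-day counts. The sketch below is illustrative; the argument names are hypothetical, not fields from the Medicare Impact files:

```python
def dsh_index(ssi_medicare_days, total_medicare_days,
              medicaid_not_medicare_days, total_inpatient_days):
    # DSH Index = (share of Medicare inpatient days for patients
    # receiving Supplemental Security Income)
    # + (share of total inpatient days for patients eligible
    # for Medicaid but not for Medicare)
    return (ssi_medicare_days / total_medicare_days
            + medicaid_not_medicare_days / total_inpatient_days)
```

Because the second term counts Medicaid-only days against all inpatient days, the index captures low-income patients regardless of whether Medicare is their primary payer.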
We began by comparing characteristics of hospitals that converted to for-profit institutions with characteristics of hospitals that did not. Then, from the group of nonconverting hospitals, we selected controls (up to 3 controls for each converting hospital), matching on size category (categorizing small hospitals as those with <100 beds, medium hospitals as those with 100-399 beds, and large hospitals as those with ≥400 beds), teaching status (teaching vs nonteaching), and Hospital Referral Region. We chose to use a matching approach to reduce the likelihood of unmeasured confounding. We set the reference year as the year of conversion for each hospital, dropping the conversion year from the analysis; this year also served as the “reference” year for each hospital’s matched controls. Using this strategy allowed us to effectively control for secular trends in any of our outcomes during the decade-long study period, since each converting hospital was being directly compared with contemporary controls over a similar time window.
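The matching rule above (up to 3 nonconverting controls sharing size category, teaching status, and Hospital Referral Region) can be sketched as follows; the dictionary fields are hypothetical stand-ins for the hospital-level variables:

```python
def size_category(beds):
    # Small: <100 beds; medium: 100-399; large: >=400
    if beds < 100:
        return "small"
    if beds < 400:
        return "medium"
    return "large"

def match_controls(converter, candidates, max_controls=3):
    # Keep up to 3 candidates with the same size category,
    # teaching status, and Hospital Referral Region (HRR)
    key = (size_category(converter["beds"]),
           converter["teaching"], converter["hrr"])
    matches = [h for h in candidates
               if (size_category(h["beds"]), h["teaching"], h["hrr"]) == key]
    return matches[:max_controls]
```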
Using this group as our controls, we conducted a set of hospital-level difference-in-differences analyses to compare changes in hospital performance on the financial, quality, and patient population metrics outlined above from the 2 years before to the 2 years after conversion. Difference-in-differences analyses compare the change in an outcome over time in the intervention group to the change in an outcome over time in the control group. Our models therefore included time period, conversion status, and the interaction between them as predictors; the interaction term was the difference-in-differences term and represents our primary predictor of interest. A random effect for the match group was included in the model to account for correlation between hospitals within the match group and across time. We controlled for additional characteristics not included in our matching algorithm, including ownership (prior to conversion) and region of the country.
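At its core, the difference-in-differences estimate is the pre-to-post change among converting hospitals minus the pre-to-post change among controls. The sketch below shows only that point estimate in plain Python; it is not the authors' model, which additionally included a match-group random effect and covariates for prior ownership and region:

```python
def mean(xs):
    return sum(xs) / len(xs)

def diff_in_diff(conv_pre, conv_post, ctrl_pre, ctrl_post):
    # (change at converting hospitals) - (change at controls)
    conv_change = mean(conv_post) - mean(conv_pre)
    ctrl_change = mean(ctrl_post) - mean(ctrl_pre)
    return conv_change - ctrl_change
```

Plugging in the total-margin means from the Results (converters moving from −1.2% to 1.0%, controls from 1.7% to 2.1%) recovers the reported 1.8% difference in differences.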
We also conducted a number of sensitivity analyses to ensure that our results were robust to our choice of time period and our choice of controls. First, we limited our analyses to the year prior to a conversion and the year after a conversion; these results were qualitatively very similar and are not shown. Next, to determine whether changes might take more time to accrue, we changed our postconversion period to be the third and fourth year after conversion; for example, hospitals converting in 2007 would have 2010-2011 as the postconversion years for this longer-term analysis (and thus this analysis only included hospitals converting in 2007 or earlier). We then repeated all of our analyses with a more permissive control group, using all other US hospitals as controls rather than only the matched hospitals, and then repeated our analyses with a more restrictive control group, in which we removed all for-profit hospitals from the overall hospital group. Last, to determine whether there was a preconversion “dip” in performance that might be influencing our results, we examined performance 3 years before conversion compared with 1 year before conversion for those hospitals with 3 years of preconversion data.
We considered P < .05 (2-sided) significant. Analyses were performed using SAS 9.2 (SAS Institute Inc) and Stata 12.1 (StataCorp).
Between 2003 and 2010, 237 hospitals converted from nonprofit to for-profit status (Figure). Hospitals that converted to for-profit status were more often small or medium in size, located in the South, and in an urban or suburban location than hospitals that did not convert and were less likely to be teaching institutions (Table 1).
We then selected 631 matched controls. These control hospitals were well matched to our converting hospitals on size, region, teaching status, and location, although they were more likely to be part of a hospital system and somewhat less likely to have a medical intensive care unit (eTable 3 in the Supplement).
The total number of patients included in analyses in the converting hospitals was 1 843 764 (median per hospital, 6683 [interquartile range, 11 038]); the total number of patients included in analyses in the matched controls was 4 828 138 (median per hospital, 6661 [interquartile range, 10 104]) (eTable 4 in the Supplement). Roughly 57% of patients were women, and the median age was 75 years; 82% of patients were white, and 30% were dual-eligible. Common medical comorbidities such as diabetes, chronic kidney disease, hypertension, and chronic obstructive pulmonary disease were similarly prevalent at converting hospitals and matched controls (Table 2).
Hospitals that converted to for-profit status had lower total margins than control hospitals prior to conversion (−1.2% [95% CI, −2.1% to −0.3%] vs 1.7% [95% CI, 1.0% to 2.3%]) but improved more postconversion (2.2% improvement vs 0.4% improvement; difference in differences, 1.8% [95% CI, 0.5% to 3.1%]; P = .007) (Table 2). Patterns were similar for operating margins (baseline, −6.6% [95% CI, −7.9% to −5.3%] for converting hospitals vs −3.1% [95% CI, −4.1% to −2.1%] for controls; 3.2% improvement for converting hospitals vs 0.2% worsening for controls; difference in differences, 3.3% [95% CI, 1.5% to 5.2%]; P < .001). Liquidity was lower in the converting hospitals than controls in both periods; there was no difference between converting hospitals and controls in the change in capitalization during the study period (Table 3). Average Medicare payments per hospitalization were similar at baseline ($5866 [95% CI, $5648 to $6084] vs $5859 [95% CI, $5681 to $6037]) and increased similarly over the study period ($605 vs $606; difference in differences, −$1 [95% CI, −$273 to $270]; P = .99).
Hospitals that converted had similar performance on process quality indicators for acute myocardial infarction, congestive heart failure, and pneumonia compared with controls at baseline (84.3% [95% CI, 82.9% to 85.8%] vs 85.5% [95% CI, 84.1% to 86.7%]). Both groups improved from the preconversion period to the postconversion period (6.0% for converting hospitals vs 5.6% for controls; difference in differences, 0.4% [95% CI, −1.1% to 2.0%]; P = .59) (Table 4). Hospitals that converted had similar nurse staffing at baseline (6.7 [95% CI, 6.3 to 7.0] vs 6.5 [95% CI, 6.2 to 6.8] full-time equivalent nurses per 1000 patient-days) and the change over time in each group was the same (0.2 vs 0.1 increase; difference in differences, −0.1 [95% CI, −0.4 to 0.5]; P = .85).
When we examined 30-day risk-adjusted all-cause, all-condition mortality rates at converting hospitals vs controls, we found no difference in mortality rates among converting hospitals at baseline (8.2% [95% CI, 7.9% to 8.5%] vs 8.1% [95% CI, 7.9% to 8.3%]); there was little change in the postconversion period in either group (0.2% vs 0.1% improvement; difference in differences, −0.2% [95% CI, −0.5% to 0.2%]; P = .42) (Table 4). Patterns were the same when we limited the sample to dual-eligible beneficiaries (baseline mortality, 7.4% [95% CI, 7.1% to 7.7%] vs 7.2% [95% CI, 7.0% to 7.4%]; 0.3% vs 0.3% improvement; difference in differences, 0.0% [95% CI, −0.5% to 0.4%]; P = .86) and when we limited the sample to the younger than 65 years disabled population (baseline mortality, 3.7% [95% CI, 3.3% to 4.0%] vs 3.8% [95% CI, 3.6% to 4.1%]; 0.0% vs 0.2% improvement; difference in differences, 0.3% [95% CI, −0.2% to 0.8%]; P = .30) (Table 4).
We had postulated that converting hospitals might improve their financial health by increasing patient volume. We found no evidence to support this hypothesis: prior to conversion, converting hospitals and controls had similar annual Medicare patient volume (1755 [95% CI, 1598 to 1913] vs 1921 [95% CI, 1789 to 2052] admissions, P = .08); in the postconversion period, both groups’ admissions decreased (111 vs 74 admissions per year decrease; difference in differences, −37 [95% CI, −224 to 150]; P = .70) (Table 5).
Prior to conversion, converting hospitals had a higher DSH Index than nonconverting hospitals, indicating a higher proportion of care provided to the poor (29.0% [95% CI, 27.1% to 30.9%] vs 26.3% [95% CI, 24.7% to 27.9%]), and both types of hospitals increased their DSH Index similarly in the postconversion period (1.7% increase vs 0.4% increase; difference in differences, 1.3% [95% CI, −0.9% to 3.4%]; P = .26). There were also no differences in the change over time in the proportion of patients who were Medicaid-eligible (−0.2% vs 0.4%; difference in differences, −0.6% [95% CI, −2.0% to 0.8%]; P = .38) or who were black (−0.4% vs −0.1%; difference in differences, −0.3% [95% CI, −1.9% to 1.3%]; P = .72) or Hispanic (0.1% vs −0.1%; difference in differences, 0.2% [95% CI, −0.3% to 0.7%]; P = .50).
When we repeated these analyses comparing changes during 3 to 4 years postconversion, we found nearly identical results (eTables 4-6 in the Supplement). Results remained similar when we used all other nonconverting hospitals (not just our matched hospitals) as controls (eTables 7-9 in the Supplement) and when we excluded the for-profit hospitals from our control group (eTables 10-12 in the Supplement): conversion was associated with improvements in the financial health of the hospitals but not with significant changes in quality of care or patient population. When we looked for evidence of a preconversion “dip” in performance at converting hospitals compared with controls, we found no significant differences between the 2 groups (eTables 13-15 in the Supplement).
We found that between 2003 and 2010, 237 US hospitals switched from nonprofit to for-profit status. This conversion was associated with better subsequent financial health but had no relationship to the quality of care delivered or to mortality rates at the converting hospitals. We also found no evidence that for-profit conversion was associated with any increase in Medicare payments or annual Medicare case volume or decrease in the provision of care to poor patients or to racial or ethnic minorities.
Prior to conversion, we found that hospitals that would eventually become for-profit institutions were struggling financially, with negative total margins; this is in keeping with prior research2 and is likely why these hospitals were targeted for conversion. We also found that after conversion, there was a significant improvement in total and operating margins. The mechanism for this improvement is unclear; we explored a number of the ways a hospital could improve its margins, including increasing average per-hospitalization Medicare payment and Medicare patient volume, but did not find meaningful differences. We also found no evidence that the improvements in financial health came through avoiding care for poor patients. Therefore, it is likely that the financial gains came through 1 of 2 mechanisms: cost-cutting or better payments through private payers. Although we cannot test this directly, it is possible that the corporations purchasing these hospitals brought experienced management to struggling institutions, which allowed them to improve their efficiency. For a hospital with persistently negative margins, for-profit status may also bring access to capital and other financial resources that can lead to changes in the hospital’s economic viability.
We found no evidence that conversion was associated with worsening care, as measured by processes of care, nurse staffing, or outcomes. On the other hand, for-profit hospitals have often argued that conversion will provide resources that will lead to better care, and our study failed to find any evidence to support this notion, either. In fact, our findings suggest that as regulators and policy makers consider for-profit conversions, the likely changes that could be anticipated will primarily be in the financial health of the institution, with little relationship, either positive or negative, to the quality of care provided or the institution’s mortality rates. Although there may be individual instances in which quality or outcomes improve or decline after a conversion, we did not find any consistent pattern during the past decade.
Our findings differ somewhat from work examining hospital conversions in the 1990s. Picone et al4 found that hospitals that converted to for-profit status between 1985 and 1995 had higher mortality rates, higher profitability, and lower staffing after conversion. Shen,5 examining the same time frame, demonstrated that mortality rates for acute myocardial infarction increased after nonprofit hospitals converted to for-profit status, although there was no change when public hospitals converted. Thorpe et al3 focused on conversions between 1991 and 1997 and found that hospitals converting to for-profit status provided less uncompensated care and had higher margins after conversion, although another study found no change in community benefit among converting hospitals in the 1990s in California, Florida, and Texas.19 It is possible that our results differ from these earlier studies because of policies that have gone into effect during the past decade that more closely monitor, report, and reward quality metrics, such as the Hospital Compare program for national public reporting of quality and outcomes.6 Additionally, an increasing focus on publicizing hospitals’ performance on quality and safety from private organizations such as the Leapfrog Group8 and Consumer Reports may have affected hospitals’ attention to performance. There also may have been increasing regulatory attention on both for-profit and nonprofit institutions during the study period,20 which could have affected hospitals’ patterns of care provision.
There are limitations to our study. First, administrative data are limited in their ability to provide clinical detail for risk adjustment, so we were not able to fully account for changes in risk profile. Our financial data come from the cost reports and are reliant on self-report by hospitals; hospitals could have manipulated their margins in anticipation of being converted. The quality metrics we examined, although used widely to judge hospital performance, only reflect quality for a limited number of conditions and are not a comprehensive measure of all care provided. We only examined outcomes for Medicare patients and therefore cannot be certain whether our results would generalize to privately insured or uninsured patients or to patients with Medicaid as their primary payer. We do not have a reliable measure of free care provided and therefore cannot rule out a decrease in this specific type of care provision. Despite having a full census of converting hospitals, our sample size was modest and we therefore could have missed small effects of conversion on quality or outcomes. For example, our confidence intervals for all-cause, all-condition mortality suggest that we cannot exclude a mortality decrement of less than 0.2%.
Hospital conversion to for-profit status in the 2000s was associated with improvements in financial margins but not associated with differences in the measured quality of care provided, mortality rates overall or for selected vulnerable populations, or the proportion of poor or minority patients receiving care.
Corresponding Author: Karen E. Joynt, MD, MPH, Brigham and Women’s Hospital, 75 Francis St, Boston, MA 02115 (email@example.com).
Author Contributions: Dr Joynt had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Joynt, Jha.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Joynt, Jha.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Orav.
Administrative, technical, or material support: Jha.
Study supervision: Jha.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest and none were reported.
Funding/Support: Dr Joynt was supported by grant 1K23HL109177-01 from the National Heart, Lung, and Blood Institute, National Institutes of Health.
Role of the Sponsors: The National Heart, Lung, and Blood Institute had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: Jie Zheng, PhD, and Sidney T. Le, BA, both of the Harvard School of Public Health, Boston, Massachusetts, contributed to the analysis of data for this project. Both received routine compensation for employment. Mr Le is now at the University of California, San Francisco, School of Medicine.