Key Points
Question
What was the change in discharge to institutional postacute care after lower extremity joint replacement episodes among Medicare beneficiaries following implementation of the Comprehensive Care for Joint Replacement (CJR) bundled payment model in 2016?
Findings
In this interim analysis of the first year of a 5-year randomized trial of 75 metropolitan statistical areas (MSAs) that were assigned to the bundled payment model and 121 control MSAs that were not, the mean percentage of patient discharges to institutional postacute care was 33.7% in the control group and was 2.9 percentage points lower in MSAs covered by the CJR model, a significant difference.
Meaning
These interim findings suggest that CJR may reduce institutional postacute care following lower extremity joint replacement episodes among Medicare beneficiaries, although further evaluation is needed as the program is fully implemented over time.
Importance
Bundled payments are an increasingly common alternative payment model for Medicare, yet there is limited evidence regarding their effectiveness.
Objective
To report interim outcomes from the first year of implementation of a bundled payment model for lower extremity joint replacement (LEJR).
Design, Setting, and Participants
As part of a 5-year, mandatory-participation randomized trial by the Centers for Medicare & Medicaid Services, eligible metropolitan statistical areas (MSAs) were randomized to the Comprehensive Care for Joint Replacement (CJR) bundled payment model for LEJR episodes or to a control group. In the first performance year, hospitals received bonus payments if Medicare spending for LEJR episodes was below the target price and hospitals met quality standards. This interim analysis reports first-year data on LEJR episodes starting April 1, 2016, with data collection through December 31, 2016.
Exposure
Randomization of MSAs to the CJR bundled payment model group (75 assigned; 67 included) or to the control group without the CJR model (121 assigned; 121 included). Instrumental variable analysis was used to evaluate the relationship between inclusion of MSAs in the CJR model and outcomes.
Main Outcomes and Measures
The primary outcome was share of LEJR admissions discharged to institutional postacute care. Secondary outcomes included the number of days in institutional postacute care, discharges to other locations, Medicare spending during the episode (overall and for institutional postacute care), net Medicare spending during the episode, LEJR patient volume and patient case mix, and quality-of-care measures.
Results
Among the 196 MSAs and 1633 hospitals, 131 285 eligible LEJR procedures were performed during the study period (mean volume, 110 LEJR episodes per hospital) among 130 343 patients (mean age, 72.5 [SD, 0.91] years; 65% women; 90% white). The mean percentage of LEJR admissions discharged to institutional postacute care was 33.7% (SD, 11.2%) in the control group and was 2.9 percentage points lower (95% CI, −4.95 to −0.90 percentage points) in the CJR group. Mean Medicare spending for institutional postacute care per LEJR episode was $3871 (SD, $1394) in the control group and was $307 lower (95% CI, −$587 to −$27) in the CJR group. Mean overall Medicare spending per LEJR episode was $22 872 (SD, $3619) in the control group and was $453 lower (95% CI, −$909 to $3) in the CJR group, a statistically nonsignificant difference. None of the other secondary outcomes differed significantly between groups.
Conclusions and Relevance
In this interim analysis of the first year of the CJR bundled payment model for LEJR among Medicare beneficiaries, MSAs covered by CJR, compared with those that were not, had a significantly lower percentage of discharges to institutional postacute care but no significant difference in total Medicare spending per LEJR episode. Further evaluation is needed as the program is more fully implemented.
Trial Registration
ClinicalTrials.gov Identifier: NCT03407885; American Economic Association Registry Identifier: AEARCTR-0002521
The shift toward alternative payment models in Medicare is an important trend in US health care. By 2016, 30% of traditional Medicare reimbursement had been shifted from fee-for-service (FFS) models to alternative payment models.1 Bundled payments are one of the leading alternative payment models. Under bundled payments, health care organizations (such as hospitals, physician groups, and postacute care providers) receive a single “bundled” payment for all services related to a specific treatment (eg, hip replacement). By holding multiple parties jointly accountable for quality and costs, bundled payments may encourage coordination of care and reduce unnecessary utilization. However, because these parties are paid a fixed amount irrespective of the volume of services, they may also respond by reducing necessary care or by trying to treat only healthier patients with lower expected costs.2
Bundled payments have been widely touted for their potential to have a substantial, positive effect on health care delivery,3-7 yet there is limited rigorous evidence on their effects. Most studies of bundled payments have been observational, focusing on the experience of a small number of hospitals that voluntarily participated. These studies have tended to find large savings,8-12 but voluntary participation makes separating treatment from selection effects difficult,13 and the small number of participating hospitals raises concerns about generalizability.
To address these gaps in scientific knowledge, this study took advantage of the Centers for Medicare & Medicaid Services’ (CMS’s) random assignment of some metropolitan statistical areas (MSAs) to a Medicare bundled payment model for lower extremity joint replacement (LEJR)—ie, hip and knee replacement. In 2014, there were 486 249 LEJR procedures, accounting for $6.2 billion of Medicare inpatient spending (4.52% of all inpatient Medicare spending).14,15 CMS designed the 5-year, nationwide bundled payment program for LEJR, known as Comprehensive Care for Joint Replacement (CJR), and randomly assigned MSAs to it; the program began on April 1, 2016. Unlike previous bundled payment programs, participation was mandatory for all covered hospitals. The purpose of this study was to report an interim analysis of the first performance year of this 5-year program.
The primary objective of this study was to evaluate interim outcomes during the first year of implementation of a bundled payment model for LEJR, focusing on discharge to institutional postacute care following joint replacement hospitalization episodes.
Institutional review board (IRB) exemption was obtained from the Massachusetts Institute of Technology’s Committee on the Use of Humans as Experimental Subjects (COUHES 1710117275) and IRB approval for Medicare data analysis was obtained from Dartmouth’s IRB (15475). CMS did not require informed consent for patients in CJR; both IRBs waived informed consent for the analysis of the Medicare claims data.
In July 2015, CMS publicly announced in the Federal Register its exclusion criteria and randomization procedure for selecting the 196 eligible MSAs16; MSAs were excluded primarily because of low LEJR discharge volume. Within eligible MSAs randomized to CJR, hospitals were required to participate in CJR if they were paid under prospective payment and not already participating in Model 1 or Phase 2 (Models 2 or 4) of the Bundled Payments for Care Improvement Initiative (BPCI), a preexisting Medicare voluntary bundled payments model for LEJR.
Within eligible hospitals, the patient inclusion criteria included Medicare Part A and Part B coverage, no readmission during the episode for LEJR, and no death during the episode. Applying the hospital and patient exclusion criteria to all Medicare FFS LEJR episodes in eligible MSAs in 2016, we estimated that 75% of episodes would be covered by CJR if the MSA was selected for treatment; section 1 of eAppendix 1 and eTable 1 in Supplement 1 provide more detail on the eligibility criteria and this estimate.
The 196 eligible MSAs were divided into 8 strata based on quartile of historical LEJR payments and above- vs below-median MSA population. CMS set different treatment probabilities (ie, probability of selection for CJR) for MSAs by strata (ranging from 30% to 45%); MSAs with higher historical LEJR spending had higher treatment probabilities. CMS performed the randomization in SAS Enterprise Guide version 7.1 (SAS Institute Inc) using the PROC SURVEYSELECT statement with METHOD=SRS.
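To illustrate the stratified design, the following is a minimal sketch in Python of simple random sampling within strata, in the spirit of the SAS procedure described above; the MSA identifiers, stratum labels, and per-stratum treatment probabilities used here are placeholders for illustration, not CMS's actual inputs.

```python
import random

# Minimal sketch of stratified simple random sampling, mirroring the design
# described above: 8 strata (historical-payment quartile x above/below-median
# population), each with its own treatment probability between 30% and 45%.
# All inputs below are illustrative placeholders, not CMS's actual values.

def assign_treatment(msas, treatment_prob_by_stratum, seed=0):
    """Select a fixed share of MSAs within each stratum for treatment."""
    rng = random.Random(seed)
    strata = {}
    for msa_id, stratum in msas:
        strata.setdefault(stratum, []).append(msa_id)
    treated = set()
    for stratum, members in strata.items():
        n_treated = round(len(members) * treatment_prob_by_stratum[stratum])
        treated.update(rng.sample(members, n_treated))  # SRS within the stratum
    return treated

# Hypothetical usage: 196 MSAs spread over 8 strata with probabilities of 30%-45%.
msas = [(f"MSA-{i:03d}", i % 8 + 1) for i in range(196)]
probs = {s: 0.30 + 0.05 * ((s - 1) % 4) for s in range(1, 9)}
print(len(assign_treatment(msas, probs)), "MSAs assigned to treatment")
```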
In July 2015, CMS publicly announced that based on this randomization, 75 MSAs were initially assigned to treatment and 121 to control (Figure). Following prior independent analyses of government-implemented randomization protocols,17,18 we verified via simulation that we could reproduce the randomization procedure to within statistical sampling error (section 1 of eAppendix 1 and eTable 2 in Supplement 1).
In November 2015, CMS publicly updated the MSA exclusion criteria in response to comments that the original criteria did not take into account hospitals and physician group practices that entered into Phase 2 BPCI by October 1, 2015; as a result, 8 MSAs were excluded from the treatment group without a corresponding set of MSAs excluded from the control group.19
The CJR is a Medicare bundled payment model for LEJR that holds acute care hospitals financially responsible for Medicare spending over the entire episode of care. An episode begins with a hospital stay with a discharge in 1 of 2 included diagnosis related groups (DRGs) (MS-DRGs 469 and 470) and ends 90 days after discharge. The CJR was introduced in April 2016 and designed to last for 5 years.
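As a rough sketch of how such episodes could be constructed from claims data, the Python snippet below flags index admissions discharged under MS-DRG 469 or 470 and sets the episode window to end 90 days after discharge; the claims table and its column names are hypothetical, and the actual CJR episode definition includes additional hospital- and patient-level rules not shown here.

```python
from datetime import timedelta
import pandas as pd

# Sketch of the episode definition above: an episode begins with an inpatient
# stay discharged under MS-DRG 469 or 470 and ends 90 days after discharge.
# The claims DataFrame and its column names are hypothetical; the actual CJR
# definition applies additional hospital- and patient-level rules.
CJR_DRGS = {"469", "470"}

def build_episodes(inpatient_claims: pd.DataFrame) -> pd.DataFrame:
    index_stays = inpatient_claims[inpatient_claims["ms_drg"].isin(CJR_DRGS)].copy()
    index_stays["episode_start"] = index_stays["admission_date"]
    index_stays["episode_end"] = index_stays["discharge_date"] + timedelta(days=90)
    return index_stays[["beneficiary_id", "hospital_id", "ms_drg",
                        "episode_start", "episode_end"]]
```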
Under CJR, hospitals face financial incentives to reduce Medicare FFS spending and to maintain or increase quality. At the end of each year, hospitals that (1) have per-episode Medicare FFS spending below the target price (set by CMS based on historical hospital and regional episode spending and the reason for admission) and (2) have met a minimum quality standard (5 of 20 on a composite quality score) receive “shared savings” from CMS for the difference between the target price and spending up to a stop-gain amount (ie, maximum bonus payment to the hospital), with higher scores making them eligible for greater savings. Hospitals that have FFS spending of more than the target price are responsible for paying the difference up to a stop-loss amount (ie, maximum financial penalty to the hospital). The upside and downside risks increase over time. In the first year, the stop gain was +5% and there was no downside risk; by the fifth year, the stop gain and stop loss were each scheduled to be 20% of the target price.19,20
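To make the reconciliation arithmetic concrete, the sketch below computes a hospital's year-end reconciliation payment from its episode spending, target price, and the stop-gain/stop-loss caps. It is a simplified illustration of the rules summarized above, with hypothetical dollar amounts, and it omits details such as the scaling of savings by the composite quality score.

```python
def reconciliation_payment(episode_spending, target_price, meets_quality,
                           stop_gain_pct, stop_loss_pct):
    """Simplified sketch of CJR year-end reconciliation for one hospital.

    Positive values are bonus ("shared savings") payments from CMS; negative
    values are repayments owed by the hospital. The quality-score scaling of
    savings and other program details are intentionally omitted.
    """
    if episode_spending <= target_price:
        if not meets_quality:
            return 0.0  # below the target price but quality standard not met
        savings = target_price - episode_spending
        return min(savings, stop_gain_pct * target_price)  # capped by stop-gain
    overrun = episode_spending - target_price
    return -min(overrun, stop_loss_pct * target_price)  # capped by stop-loss

# Year 1 parameters: stop gain of 5% of the target price, no downside risk.
print(reconciliation_payment(21_000, 23_000, True, stop_gain_pct=0.05, stop_loss_pct=0.0))
# -> 1150.0 (savings of $2,000 capped at 5% of the hypothetical $23,000 target price)
```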
The use of random assignment by a government agency such as CMS is rare but not unprecedented and can be valuable for scientific research. For example, the state of Oregon used a random lottery to expand Medicaid coverage, enabling academic research on the Oregon Health Insurance Experiment.17,18,21-23
Data and Outcome Measures
We studied the first performance year of CJR, which includes episodes that begin on or after April 1, 2016, and end no later than December 31, 2016. Specifically, we analyzed episodes starting between April 1, 2016, and September 15, 2016; the end date was chosen so that all episodes would fall within the performance year (given a mean length of stay for an LEJR admission of 3.1 days for DRG 470 and 7.0 days for DRG 469).24
We used Medicare FFS claims data for 100% of enrollees from 2012-2014 and 2016 in the 196 eligible MSAs. We limited the sample to episodes that would have been covered by CJR if the MSA were included in the treatment group. We omitted data from 2015 because treatment MSAs were announced midway through 2015 and behavior was potentially affected during that year. We also used Hospital Compare data25 to construct an estimate of the targeted quality measure, and data on hospital-specific end-of-year reconciliation (ie, bonus) payments.20 Section 2 of eAppendix 1 in Supplement 1 provides more detail on data and outcomes.
The primary outcome was the share of LEJR admissions discharged to institutional postacute care—these are skilled nursing facilities, long-term care hospitals, and inpatient rehabilitation facilities. Existing observational studies11,12 suggested that this would be the primary margin of adjustment, and this was a margin where power calculations suggested that reasonably sized effects could be detected (prespecified analysis plan available in eAppendix 2 in Supplement 1).
Secondary outcomes included the number of days in institutional postacute care during the episode, discharges to other locations, Medicare FFS spending during the episode (both overall and for institutional postacute care), and net Medicare spending during the episode, which adds to Medicare FFS spending any reconciliation payments made to a treatment hospital under CJR.
Other secondary outcomes included LEJR patient volume, patient comorbidity severity, and several quality measures. Patient comorbidity severity was measured by the Elixhauser Comorbidity Index, which is the sum of 31 different comorbidity indicators.26,27 The targeted quality measure was a modified composite quality score, derived from measures of total hip/knee arthroplasty complication rates and patient experience ratings (see section 2 of eAppendix 1 in Supplement 1 for more detail). The score ranges from 0 to 18, with higher numbers indicating better quality. Because the score in the first year was based almost entirely on data from prior to the introduction of CJR,20 we noted in the preanalysis plan that we did not expect it to be affected (eAppendix 2 in Supplement 1). We therefore also examined nontargeted quality measures: the 90-day emergency department visit rate, a quality measure used in prior analyses of voluntary bundled payments for LEJR11,12; the 90-day all-cause readmission rate, a standard quality measure previously used for LEJR11,28; and the 90-day complication rate for total hip and total knee arthroplasty, a part of the targeted quality measure that we could observe for admissions during the study period.
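As a simple illustration of the comorbidity measure, the sketch below sums per-patient binary comorbidity flags into an index; the flag column names are hypothetical, and a real implementation would first map each patient's ICD diagnosis codes to the 31 Elixhauser categories.

```python
import pandas as pd

# Sketch: the Elixhauser Comorbidity Index used here is the count of comorbidity
# categories present for each patient (0 to 31). A real implementation first maps
# each patient's ICD diagnosis codes to the 31 categories; the 0/1 flag columns
# below are assumed to already exist, and their names are hypothetical.
ELIXHAUSER_FLAGS = ["chf", "copd", "diabetes_uncomplicated", "renal_failure"]  # ...31 total

def elixhauser_index(comorbidity_flags: pd.DataFrame) -> pd.Series:
    return comorbidity_flags[ELIXHAUSER_FLAGS].sum(axis=1)
```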
The analyses were conducted at the MSA level. Not all randomly assigned MSAs were included in CJR (Figure). As prespecified (eAppendix 2 in Supplement 1), we therefore used a standard instrumental variable approach29 to evaluate the relationship between inclusion in CJR and the outcomes; in the first-stage regression, assignment to CJR was used as an instrument for inclusion in CJR, and in the second-stage regression, inclusion in CJR was related to outcomes. As prespecified, all regressions included strata fixed effects because treatment probabilities varied by strata, and controlled for 2 years of lags of the dependent variable (specifically, 2013 and 2014) to improve statistical power. Based on historical data, we estimated the analysis to have power to detect a 2-percentage-point reduction in the primary outcome (2-sided α = .05 at 80% power). All analyses were prespecified (prior to obtaining postintervention data) other than analysis of the Elixhauser Comorbidity Index, which was requested during the review process, and the specific breakdown of discharges to noninstitutional postacute care, which was done for ease of exposition. Because of the strata sampling design, comparison of raw means of the treatment and control MSAs was not appropriate; instead, values for the control MSAs and the estimated differences (using the instrumental variable approach) between bundled payment and control MSAs are reported. Raw comparisons of means between original bundled payment and control MSAs—separately by strata—are presented in section 2 of eAppendix 1 in Supplement 1. Because we did not adjust for multiple testing of secondary outcomes, secondary outcome analyses should be considered exploratory.
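The estimation approach can be sketched as two-stage least squares at the MSA level, with assignment to CJR instrumenting for inclusion in CJR, and with strata fixed effects and two lags of the dependent variable as controls. The sketch below is illustrative only: variable names are assumptions, the study's analysis was conducted in Stata, and the standard errors from a manually estimated second stage are not valid 2SLS standard errors (a dedicated IV routine would be used for inference).

```python
import pandas as pd
import statsmodels.api as sm

# Sketch of the two-stage least squares (instrumental variable) design described
# above, estimated at the MSA level. Variable names are assumptions:
#   outcome_2016  - e.g., share of LEJR admissions discharged to institutional PAC
#   included_cjr  - 1 if the MSA was actually covered by CJR (endogenous regressor)
#   assigned_cjr  - 1 if the MSA was randomized to CJR (the instrument)
#   stratum       - randomization stratum (1-8), entered as fixed effects
#   outcome_2013, outcome_2014 - two lags of the dependent variable

def iv_estimate(df: pd.DataFrame) -> float:
    controls = pd.get_dummies(df["stratum"], prefix="stratum", drop_first=True).astype(float)
    controls[["outcome_2013", "outcome_2014"]] = df[["outcome_2013", "outcome_2014"]]
    controls = sm.add_constant(controls)

    # First stage: regress actual inclusion in CJR on random assignment plus controls.
    first_stage = sm.OLS(df["included_cjr"], controls.join(df["assigned_cjr"])).fit()
    inclusion_hat = first_stage.fittedvalues

    # Second stage: regress the outcome on predicted inclusion plus the same controls.
    second_stage = sm.OLS(df["outcome_2016"], controls.assign(included_hat=inclusion_hat)).fit()
    return second_stage.params["included_hat"]  # IV estimate of the effect of CJR inclusion
```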
A number of sensitivity and additional analyses were performed. Results were analyzed without controlling for the lags of the dependent variable, and intention-to-treat analyses were conducted to compare outcomes for the 75 MSAs originally assigned to treatment with outcomes for the 121 control MSAs. Additional detailed analyses of the number and type of patients were also reported. Section 2 of eAppendix 1 in Supplement 1 provides more detail.
All statistical analyses were conducted using Stata version 15.1 (StataCorp), with 2-tailed tests and a statistical significance threshold of P < .05.
Of the 196 eligible MSAs, 75 were initially assigned to treatment and 121 to control; 8 of the 75 were subsequently excluded from CJR so that 67 MSAs were covered by CJR. The eFigure in Supplement 1 shows the geographic distribution. Assignment to CJR thus increased the chance of being included in CJR by 89.1 percentage points (95% CI, 81.8-96.4 percentage points; P < .001).
In 2016, 167 hospitals were excluded from treatment or control MSAs because of preexisting BPCI participation, leaving 1647 eligible hospitals; 1633 of these hospitals had an eligible episode. Table 1 describes the CJR-eligible study population at the MSA level. On average, among eligible MSAs, the patient study population was 90% white and 65% female, with a mean age of 72.5 (SD, 0.91) years and a mean Elixhauser Comorbidity Index of 2.4 (SD, 0.29). Eligible hospitals had a mean of 289.1 beds and performed a mean of 20.4 CJR-eligible LEJR procedures per month over the study period (April-September 2016); 25.3% were for-profit hospitals and 9.3% were teaching hospitals. Among eligible MSAs, the mean number of acute care hospitals was 14.8 and the mean number of institutional postacute care providers was 50.9.
Prior to randomization, characteristics were balanced across control and treatment MSAs. Table 2 shows balance at baseline (2014) for the outcome variables; an F test failed to reject equality of all of the outcomes (P = .54). eTable 3 in Supplement 1 shows balance on MSA demographics and eTables 4 and 5 in Supplement 1 show raw means for each stratum.
Health Care Use and Spending
Table 3 shows results of the instrumental variable analysis of the relationship between inclusion of MSAs in the CJR model and health care use and Medicare spending. For the primary outcome, the mean percentage of LEJR admissions discharged to institutional postacute care was 33.7% (SD, 11.2%) in the control group and was 2.9 percentage points lower (95% CI, −4.95 to −0.90 percentage points; P = .005) in the CJR group, a significant difference.
Table 3 also shows secondary outcomes. The mean percentage of LEJR patients discharged to home without home health care was 32.2% (SD, 23.3%) in the control group and was 2.6 percentage points higher (95% CI, −0.79 to 5.90 percentage points; P = .14) in the CJR group, a statistically nonsignificant difference. There was no statistically significant relationship between inclusion in CJR and discharges to other destinations.
Medicare spending on institutional postacute care was $3871 (SD, $1394) in the control group and was $307 lower (95% CI, −$587 to −$27; P = .04) in the CJR group, a statistically significant difference. There was no statistically significant relationship between inclusion in CJR and total Medicare FFS spending per episode, either gross or net of CJR reconciliation payments. Mean total Medicare spending per LEJR episode was $22 872 (SD, $3619) in the control group. Excluding reconciliation payments, total Medicare spending per LEJR episode was $453 lower (95% CI, −$909 to $3; P = .06) in the CJR group, and inclusive of reconciliation payments, total Medicare spending per LEJR episode was $234 higher (95% CI, −$214 to $683; P = .31) in the CJR group, a nonsignificant difference.
Health Care Quality and Volume
Table 4 shows no statistically significant or substantively meaningful relationship between inclusion in CJR and any of the targeted or nontargeted quality measures. For example, the 90-day emergency department visit rate was 20.1% (SD, 2.9%) in the control group and was 0.25 percentage points higher (95% CI, −0.44 to 0.93 percentage points; P = .48) in the CJR group, a statistically nonsignificant difference.
Table 4 also shows no statistically significant relationship between inclusion in CJR and patient volume or case mix. For example, the mean number of CJR-eligible admissions per 1000 enrollees was 7.2 (SD, 3.5) in the control group and was 0.05 higher (95% CI, −0.32 to 0.42; P = .80) in the CJR group; the mean Elixhauser Comorbidity Index was 2.3 (SD, 0.27) in the control group and was 0.01 lower (95% CI, −0.07 to 0.05; P = .73) in the CJR group. eTables 6 and 7 in Supplement 1 further show no statistically significant relationship between inclusion in CJR and patient admissions under the various exclusion criteria (such as readmission for LEJR or death) and other measures of case mix (such as age or number of Charlson comorbidities27,30).
eTable 8 in Supplement 1 shows that not controlling for lags of the dependent variable reduced precision but did not substantively change the results. eTables 9 and 10 in Supplement 1 show intention-to-treat analyses of the relationship between original assignment to CJR and the outcomes, and eTables 11 and 12 in Supplement 1 show raw means for the outcome data by strata.
In an instrumental variable analysis of the first year of the CJR bundled payment model, MSAs that were covered by the CJR model, compared with those that were not, had a significantly lower percentage of patient discharges to institutional postacute care and significantly lower spending on institutional postacute care but no significant change in overall Medicare expenditures, particularly after accounting for the reconciliation (ie, bonus) payments. These results are consistent with previous work suggesting that reductions in postacute care are the first-line response of health systems to the introduction of alternative payment mechanisms.31 There was no significant relationship between inclusion in CJR and targeted or nontargeted measures of quality of care, nor did hospitals appear to change their rates of admissions for covered patients or strategically admit patients with lower illness severity, as some observers have feared.2
The relationships between CJR and health care use and spending were smaller than those reported in prior observational studies of Medicare bundled payment programs. A difference-in-differences analysis of BPCI for LEJR found a 4% decline in Medicare spending12; a pre-post analysis of BPCI for LEJR in one health system found a 21% decline and received considerable attention in the media.11,32 In addition, a matched-control study of Medicare’s Heart Bypass Center Demonstration Project estimated net savings of 14% per episode8 and was cited as reason to expect substantial savings from subsequent bundled payment programs.3
There are several possible reasons why the findings from the current study of a randomized, mandatory bundled payment program contrast with those of prior studies of voluntary bundled payment models for LEJR. One reason is selection on expected costs. Estimates of the savings from voluntary programs may be biased upward if hospitals with lower expected spending are more likely to sign up; fully controlling for this can be challenging.13 A second reason is selection on treatment effects.33 The effect of bundled payment on the subsample of hospitals that select into the voluntary program may not be representative of the average effect across all hospitals.
A third reason is the size of the incentives. In the first year of CJR, financial incentives were about one-fourth the size of those in the voluntary BPCI model and had no downside risk; by years 4 and 5, CJR incentives will be the same as those in BPCI. An examination of the effects of CJR in subsequent years will be important, as the phase-in of downside risk and larger incentives may reduce Medicare spending (holding hospital behavior fixed) and may cause larger changes in hospital behavior. However, a comprehensive analysis of the CJR program as initially designed will not be possible. In December 2017, CMS modified CJR to be voluntary for 33 of the 67 included MSAs starting in performance year 3. This means that starting in performance year 3, the treatment vs control analysis performed herein can only be implemented for the remaining 34 MSAs where enrollment is mandatory.34
This study has several limitations. First, this study evaluated outcomes only during the first year following implementation of the bundled payment program, when the maximum stop-gain and stop-loss incentives had not yet been implemented. Second, the analysis was powered to detect differences in discharges to institutional postacute care but not the corresponding difference in total episode Medicare spending. Third, the analysis did not explore heterogeneity in effects of bundled payments across patients or hospitals, which might shed light on potential mechanisms; in this spirit, early work has compared characteristics of hospitals that did and did not achieve shared savings in the first year.35 Fourth, potentially important health outcomes such as pain or functional limitations were not analyzed. Fifth, analysis of the relationship between bundled payments and the targeted quality measure is not yet meaningful, given that in the first year it was almost entirely measured prior to the start of CJR. Sixth, the analysis pertained only to health care use in the bundle during the covered episode; as more data become available, it will be of interest to see what happens to health care use and spending over longer horizons and health care use outside of the bundle (such as prescription pain medication covered by Medicare Part D).
In this interim analysis of the first year of the CJR bundled payment model for LEJR among Medicare beneficiaries, MSAs covered by CJR, compared with those that were not, had a significantly lower percentage of discharges to institutional postacute care but no significant difference in total Medicare spending per LEJR episode. Further evaluation is needed as the program is more fully implemented.
Corresponding Author: Amy Finkelstein, PhD, Department of Economics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Bldg E52, Room 442, Cambridge, MA 02139 (afink@mit.edu).
Accepted for Publication: August 2, 2018.
Author Contributions: Ms Ji and Dr Skinner had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Finkelstein, Mahoney.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Finkelstein, Mahoney.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: All authors.
Obtained funding: Finkelstein, Mahoney, Skinner.
Administrative, technical, or material support: All authors.
Supervision: Finkelstein, Mahoney.
Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Skinner reports that he is an investor in Dorsata Inc, a clinical pathway software startup, and a consultant to Sutter Health Inc. No other disclosures were reported.
Funding/Support: J-PAL North America and the National Institute on Aging (grant P01AG019783-15) provided research support.
Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Data Sharing Statement: See Supplement 2.
References
7. Miller DC, Gust C, Dimick JB, Birkmeyer N, Skinner J, Birkmeyer JD. Large variations in Medicare payments for surgery highlight savings potential from bundled payment programs. Health Aff (Millwood). 2011;30(11):2107-2115. doi:10.1377/hlthaff.2011.0783
8. Cromwell J, Dayhoff DA, Thoumaian AH. Cost savings and physician responses to global bundled payments for Medicare heart bypass surgery. Health Care Financ Rev. 1997;19(1):41-57.
12. Dummit LA, Kahvecioglu D, Marrufo G, et al. Association between hospital participation in a Medicare bundled payment initiative and payments and quality outcomes for lower extremity joint replacement episodes. JAMA. 2016;316(12):1267-1278. doi:10.1001/jama.2016.12717
16. Centers for Medicare & Medicaid Services. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system policy changes and fiscal year 2016 rates; revisions of quality reporting requirements for specific providers, including changes related to the electronic health record incentive program; extensions of the Medicare-dependent, small rural hospital program and the low-volume payment adjustment for hospitals: final rule; interim final rule with comment period. Fed Regist. 2015;80(158):49325-49886.
19. Centers for Medicare & Medicaid Services. Medicare program; comprehensive care for joint replacement payment model for acute care hospitals furnishing lower extremity joint replacement services. Fed Regist. 2015;80(226):73273-73554.
21. Baicker K, Finkelstein A, Song J, Taubman S. The impact of Medicaid on labor market activity and program participation: evidence from the Oregon Health Insurance Experiment. Am Econ Rev. 2014;104(5):322-328. doi:10.1257/aer.104.5.322
24. Centers for Medicare & Medicaid Services. Medicare program; hospital inpatient prospective payment systems for acute care hospitals and the long-term care hospital prospective payment system and policy changes and fiscal year 2017 rates; quality reporting requirements for specific providers; graduate medical education; hospital notification procedures applicable to beneficiaries receiving observation services; technical changes relating to costs to organizations and Medicare cost reports; finalization of interim final rules with comment period on LTCH PPS payments for severe wounds, modifications of limitations on redesignation by the Medicare Geographic Classification Review Board, and extensions of payments to MDHs and low-volume hospitals: final rule. Fed Regist. 2016;81(162):56761-57345.
28. Colla C, Bynum J, Austin A, Skinner J. Hospital Competition, Quality, and Expenditures in the US Medicare Population. Cambridge, MA: National Bureau of Economic Research; November 2016. NBER working paper 22826.