Key Points
Question
Is there between-hospital variation in quality metrics for patients undergoing cancer surgery in California?
Findings
In this cohort study of 138 799 patients from 351 California hospitals, substantial between-hospital variation was found in in-hospital mortality, 90-day readmission, and 90-day mortality after cancer surgical procedures.
Meaning
Recognizing the multifaceted nature of hospital performance through consideration of mortality and readmission simultaneously may help to prioritize strategies for improving surgical outcomes.
Importance
Although current federal quality improvement programs do not include cancer surgery, the Centers for Medicare & Medicaid Services and other payers are considering extending readmission reduction initiatives to include these and other common high-cost episodes.
Objectives
To quantify between-hospital variation in quality-related outcomes and identify hospital characteristics associated with high and low performance.
Design, Setting, and Participants
This retrospective cohort study obtained data through linkage of the California Cancer Registry to hospital discharge claims databases maintained by the California Office of Statewide Health Planning and Development. Included were all 351 acute care hospitals in California at which 1 or more adults underwent curative intent surgery between January 1, 2007, and December 31, 2011; analyses were finalized July 15, 2018. A total of 138 799 adults undergoing surgery for colorectal, breast, lung, prostate, bladder, thyroid, kidney, endometrial, pancreatic, liver, or esophageal cancer within 6 months of diagnosis, with an American Joint Committee on Cancer stage of I to III at diagnosis, were included.
Main Outcomes and Measures
Measures included adjusted odds ratios and variance components from hierarchical mixed-effects logistic regression analyses of in-hospital mortality, 90-day readmission, and 90-day mortality, as well as hospital-specific risk-adjusted rates and risk-adjusted standardized rate ratios for hospitals with a mean annual surgical volume of 10 or more.
Results
Across 138 799 patients at the 351 included hospitals, 8.9% were aged 18 to 44 years, 45.9% were aged 65 years or older, 57.4% were women, and 18.2% were nonwhite. Among these, 1240 patients (0.9%) died during the index admission. Among 137 559 patients discharged alive, 19 670 (14.3%) were readmitted and 1754 (1.3%) died within 90 days. After adjusting for patient case-mix differences, evidence of statistically significant variation in risk across hospitals was identified, as characterized by the variance of the random effects in the mixed model, for all 3 metrics (P < .001). In addition, substantial variation was observed in hospital performance profiles: across 260 hospitals with a mean annual surgical volume of 10 or more, 59 (22.7%) had lower-than-expected rates for all 3 metrics, 105 (40.4%) had lower-than-expected rates for 2 of the 3 metrics, and 19 (7.3%) had higher-than-expected rates for all 3 metrics.
Conclusions and Relevance
Accounting for patient case-mix differences, there appears to be substantial between-hospital variation in in-hospital mortality, 90-day readmission, and 90-day mortality after cancer surgical procedures. Recognizing the multifaceted nature of hospital performance through consideration of mortality and readmission simultaneously may help to prioritize strategies for improving surgical outcomes.
Hospitals are under increasing scrutiny and financial pressure to publicly report quality-of-care measures.1 In 2012, the Hospital Readmissions Reduction Program, established by the Affordable Care Act, tied Medicare hospital reimbursement rates to excess hospital readmissions for patients undergoing select surgical procedures, including elective total hip and/or knee replacement surgery and coronary artery bypass graft surgery.2,3
Although current policies do not include cancer surgery, which is the mainstay treatment for most patients with solid tumor malignancies,4 the Centers for Medicare & Medicaid Services (CMS) and other payers are considering extending readmission reduction initiatives to include common, high-cost episodes. Previous studies have demonstrated high rates of readmission after common cancer operations and identified reduction as an opportunity to improve quality of care and efficiency.5-7 However, prior to developing effective quality improvement initiatives for cancer surgery, it is critical to understand between-hospital variation in quality-related outcomes and identify hospital characteristics that are associated with superior or inferior performance. This information could also inform patient decisions regarding where to undergo surgery.
Prior studies have described associations between hospital attributes, particularly case volume, and mortality outcomes of cancer surgery.8-11 In addition, studies have examined the association between hospital characteristics and readmission after cancer surgical procedures.12,13 However, to our knowledge, mortality and readmission have not been considered in tandem, which may be problematic because a hospital with low readmission rates and high postoperative mortality rates requires a different quality improvement approach than one with high readmission rates and low postoperative mortality rates.14,15 Motivated by these considerations, we quantify between-hospital variation in in-hospital mortality, postdischarge readmission, and postdischarge mortality for patients undergoing primary surgery for early-stage cancer in California.
Methods
We conducted a retrospective cohort study using data from the California Cancer Registry linked to hospital discharge records from all California Department of Public Health–licensed health care facilities, maintained by the California Office of Statewide Health Planning and Development. This study was approved by the Dana-Farber Cancer Institute Institutional Review Board and the California Department of Public Health. Informed consent was waived for this retrospective, minimal-risk study of deidentified data. Throughout the study, we followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline.16
Adults identified in the California Cancer Registry who underwent cancer surgery between January 1, 2007, and December 31, 2011, for 1 of the 11 most common solid tumors (colorectal, breast, lung, prostate, bladder, thyroid, kidney, endometrial, pancreatic, liver, and esophageal) within 6 months of diagnosis were included. Only patients with an American Joint Committee on Cancer (AJCC) stage (6th edition) of I, II, or III at diagnosis were included.17 Patients with stage IV tumors at diagnosis or recurrent metastatic cancer were excluded because symptom palliation is typically the primary intent of surgery, rather than cure.
Outcomes and Performance Measures
We considered in-hospital mortality among all patients, and 90-day and 30-day events for both postdischarge readmission and mortality. Readmission was defined as any admission to a California acute care hospital, regardless of where the patient underwent surgery. Mortality was defined as all-cause mortality. For the postdischarge outcomes, we chose 90-day event rates in the main analysis because the CMS considers 90 days as the duration for delivery of comprehensive care for cancer operations18 and studies have advocated this window.19-22
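As an operational illustration only, the following is a minimal sketch in R of how such 90-day (and 30-day) readmission flags might be derived; the data frame surg and the fields next_admit_date and index_discharge_date are hypothetical, not the study's actual variables:

```r
# Sketch with hypothetical field names: flag any admission to a California
# acute care hospital within 90 (or 30) days of the index discharge,
# regardless of where the index surgery took place.
days_to_readmit <- as.numeric(surg$next_admit_date - surg$index_discharge_date)
surg$readmit90 <- !is.na(days_to_readmit) & days_to_readmit >= 0 & days_to_readmit <= 90
surg$readmit30 <- !is.na(days_to_readmit) & days_to_readmit >= 0 & days_to_readmit <= 30
```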
For all 3 outcomes, we report estimates of hospital-specific risk-adjusted rates, based on models that adjust for patient and tumor covariates (see the Statistical Analysis section), as absolute measures of performance. We also report model-based risk-adjusted standardized rate (RASR) ratios for each outcome.23 These quantities provide a relative comparison between observed and expected rates for each hospital via internal standardization with respect to the specific patients who underwent surgery at the hospital. Thus, a RASR ratio greater than 1.0 may be interpreted as reflecting higher-than-expected outcome rates for the patients who underwent cancer surgery at that hospital, after adjustment for patient-level covariates.
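In notation of our own (not the authors'), with $\hat{p}_{hi}(b)$ denoting the model-based risk for patient $i$ of the $n_h$ patients treated at hospital $h$ given random effect $b$, $\hat{b}_h$ the estimated effect for hospital $h$, and $\hat{\tau}$ the estimated random-effects SD, the RASR ratio can be written as:

```latex
\mathrm{RASR}_h \;=\;
  \frac{\frac{1}{n_h}\sum_{i=1}^{n_h}\hat{p}_{hi}\!\left(\hat{b}_h\right)}
       {\frac{1}{n_h}\sum_{i=1}^{n_h}\mathrm{E}_{b\sim N(0,\hat{\tau}^{2})}\!\left[\hat{p}_{hi}(b)\right]}
```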
Hospital Characteristics
Hospital characteristics abstracted from the California Office of Statewide Health Planning and Development databases included ownership type (nonprofit, for-profit, or public), teaching status based on affiliation with a medical school (yes/no), and safety-net hospital status (yes/no), designated if 20% or more of surgical discharges were paid for by the Medi-Cal program. Mean annual hospital surgical volumes for the specified tumors were calculated on the basis of all years (≤5) during which at least 1 surgery was performed and were categorized as fewer than 10, 10 to 49, 50 to 99, 100 to 199, and 200 or more. Finally, status as a critical access hospital (yes/no) was ascertained from the California Hospital Association24; status as a National Cancer Institute (NCI)-Designated Cancer Center (yes/no) was also ascertained.25
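A minimal sketch of this volume derivation in R, assuming hypothetical columns hospital_id and year in an analysis data frame surg:

```r
# Hypothetical column names: mean annual surgical volume per hospital over
# only the years (<= 5) with at least 1 surgery, then the analysis categories.
counts   <- table(surg$hospital_id, surg$year)            # surgeries per hospital-year
mean_vol <- apply(counts, 1, function(x) mean(x[x > 0]))  # ignore zero-surgery years
vol_cat  <- cut(mean_vol,
                breaks = c(0, 10, 50, 100, 200, Inf), right = FALSE,
                labels = c("<10", "10-49", "50-99", "100-199", ">=200"))
```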
Patient and Tumor Characteristics
Patient demographics included age at surgery; sex; race (white, black, Asian, American Indian, other); Hispanic ethnicity (yes/no); primary insurance type/payer (Medicare, Medicaid, commercial, other indigent, self-pay/other/unknown); and whether the admission was scheduled at least 24 hours in advance. A modified Charlson-Deyo comorbidity score (range, 0-25; with higher scores indicating greater comorbidity burden) was calculated using inpatient claims from the California Office of Statewide Health Planning and Development for the 12 months prior to cancer surgery.26 Disposition at discharge was categorized as home without services, home with services, skilled nursing/intermediate care, and other (including acute care, residential care facility, and other care). Patient socioeconomic status was characterized using the median household income and the percentage of people of all ages in poverty in the patient's 2010 census tract of residence. Tumor characteristics included primary cancer type and AJCC stage.
Statistical Analysis
All statistical analyses were performed using R,27 version 3.5.0 (R Foundation), and finalized July 15, 2018. Descriptive statistics were calculated for all characteristics. Following methods currently used by the American College of Surgeons National Surgical Quality Improvement Program28 and the CMS,23 for each outcome we fit 2 sets of hierarchical logistic regression models with normally distributed, hospital-specific random effects.29 The first set solely included patient and tumor characteristics. The second set additionally included hospital-specific characteristics.
To formally evaluate between-hospital variation in risk, we used a likelihood ratio test for the variance component of the first set of hierarchical logistic regression models, which solely adjust for patient and tumor characteristics.30 We used the estimated random-effects SD to quantify between-hospital variation in risk and to compute the hospital odds ratio (OR), which compares the odds of the outcome for a patient treated at a hospital 1 SD below average quality with the corresponding odds for the same patient treated at a hospital 1 SD above average quality (average quality corresponding to a hospital-specific random effect of 0). Finally, based on the second set of models, we report adjusted ORs and 95% CIs for hospital-specific characteristics.
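For concreteness, the following is a minimal sketch in R (the software used for the analyses) of this first set of models and the derived quantities; the data frame surg and the simplified covariate names are hypothetical, and the study's actual models include the full set of patient and tumor characteristics:

```r
# Minimal sketch, not the authors' code: hierarchical logistic regression
# with normally distributed hospital-specific random intercepts (lme4).
library(lme4)

# First set of models: patient and tumor characteristics only.
fit <- glmer(
  readmit90 ~ age + sex + comorbidity + cancer_type + ajcc_stage +
    (1 | hospital_id),
  data = surg, family = binomial
)

# Likelihood ratio test for the variance component: compare against the
# ordinary logistic model without the hospital random effect. (The usual
# chi-squared reference is conservative for a variance on the boundary.)
fit0 <- glm(
  readmit90 ~ age + sex + comorbidity + cancer_type + ajcc_stage,
  data = surg, family = binomial
)
anova(fit, fit0)

# Hospital OR: odds for the same patient at a hospital 1 SD above average
# quality vs 1 SD below, ie, exp(2 * tau).
tau <- attr(VarCorr(fit)$hospital_id, "stddev")
unname(exp(2 * tau))
```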
Estimates of hospital-specific risk-adjusted outcome rates were computed as the mean model-based predicted risk across the patients who underwent surgery at the hospital, conditional on the estimated hospital-specific random effect. Hospital-specific RASR ratios were calculated by dividing the estimated hospital-specific risk-adjusted rate by the overall statewide standard for the patients treated at the hospital. The latter quantity was calculated by averaging the hospital-specific rate over the estimated distribution of the random effects. Consistent with current policy, these metrics were calculated on the basis of the fitted hierarchical models that included cancer and patient characteristics.23 Although these measures were computed for all 351 hospitals in the study, we report only those for the 260 hospitals with a mean annual surgical volume of 10 or more.
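Under the same hypothetical setup as the sketch above, the hospital-specific risk-adjusted rates and RASR ratios could be computed along these lines, with simple Monte Carlo averaging over the random-effect distribution as one way to obtain the statewide standard:

```r
# Continues the earlier sketch: hospital-specific risk-adjusted rates and
# RASR ratios from the fitted model `fit` (hypothetical names throughout).
eta_h  <- predict(fit)                       # link scale, includes estimated random effect
rate_h <- tapply(plogis(eta_h), surg$hospital_id, mean)   # risk-adjusted rate

# Statewide standard: average each patient's risk over the estimated
# N(0, tau^2) random-effect distribution, here by simple Monte Carlo.
eta0  <- predict(fit, re.form = NA)          # link scale, random effect set to 0
tau   <- attr(VarCorr(fit)$hospital_id, "stddev")
z     <- rnorm(200)                          # modest number of draws for a sketch
p_std <- Reduce(`+`, lapply(tau * z, function(b) plogis(eta0 + b))) / length(z)
std_h <- tapply(p_std, surg$hospital_id, mean)

rasr <- rate_h / std_h   # RASR ratio; > 1.0 indicates a higher-than-expected rate
```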
Results
Between 2007 and 2011, 138 799 adults were diagnosed with 1 of the 11 tumor types considered, at AJCC stage I to III, and underwent curative intent surgery at 1 of 351 hospitals in California within 6 months of diagnosis. After surgery, 137 559 patients (99.1%) were discharged alive and 1240 patients (0.9%) died during the index admission (eFigure 1 in the Supplement). Among the 138 799 patients who underwent curative intent surgery, most had surgery for colorectal, breast, or prostate cancer (20.1%, 21.9%, and 19.7%, respectively) and had an AJCC stage of I or II (78.3%) (Table 1). Among these patients, 8.9% were aged 18 to 44 years and 45.9% were aged 65 years or older. The median age at surgery was 63 years (interquartile range, 54-72 years); 81.8% of patients were white and 10.9% were Asian (18.2% overall were nonwhite), and 57.4% of patients were women. Approximately half of the patients had commercial insurance (48.9%), with most treated at nonprofit (84.0%), nonteaching (78.0%), and non–safety-net (94.1%) hospitals.
Of the 137 559 patients who were discharged alive, 19 670 (14.3%) had a readmission within 90 days of discharge (Table 1), and 1754 (1.3%) died within 90 days. At 30 days, these rates were 7.9% and 0.6%, respectively (eTable 1 in the Supplement).
Most hospitals were nonprofit (61.8% [217 of 351]), and only a minority had teaching hospital (7.7% [27 of 351]), safety-net hospital (9.1% [32 of 351]), critical access hospital (6.6% [23 of 351]), or NCI-designated cancer center (2.3% [8 of 351]) status (Table 2). Over the 5-year study period, most hospitals had a mean of 50 or fewer patients per year (57.8% [203 of 351]), while 39 (11.1%) had a mean of 200 or more patients per year. None of the 92 hospitals with a mean annual surgical volume less than 10 had teaching, safety-net, or NCI-designated cancer center status.
Hierarchical Regression Modeling
Detailed results from the hierarchical logistic regression models are provided in eTables 2, 3, and 4 in the Supplement. After adjusting for patient characteristics, we found evidence of an association between ownership type and in-hospital mortality (P < .001) (Table 3). For-profit status was associated with 38% higher odds of in-hospital mortality compared with nonprofit status (OR, 1.38; 95% CI, 1.14-1.67); public status was associated with 21% higher odds of in-hospital mortality compared with nonprofit status (OR, 1.21; 95% CI, 0.95-1.53). Although we found no evidence of an association between mean annual surgical volume and either 90-day readmission or 90-day mortality, higher mean annual surgical volume was associated with decreased in-hospital mortality (Table 3). Teaching and safety-net status were both associated with increased 90-day readmission (OR, 1.13; 95% CI, 1.03-1.25, and OR, 1.14; 95% CI, 1.01-1.28, respectively) but not with 90-day mortality. The estimated ORs for the associations between NCI-designated cancer center status and in-hospital mortality, 90-day readmission, and 90-day mortality were 0.81 (95% CI, 0.60-1.09), 0.92 (95% CI, 0.80-1.07), and 0.76 (95% CI, 0.57-1.01), respectively, although none reached statistical significance.
After adjusting for patient case-mix, we found evidence of statistically significant variation in risk across hospitals for all 3 outcomes (P < .001 based on a likelihood ratio test for the variance component). Comparing a patient who underwent surgery at a hospital 1 SD below average quality with a patient who underwent surgery at a hospital 1 SD above average quality (average quality corresponding to a hospital-specific random effect of 0), after adjusting for differences in patient characteristics between the hospitals, the estimated adjusted hospital OR for in-hospital mortality was 1.62 (95% CI, 1.33-1.98) (eTable 5 in the Supplement). The corresponding hospital ORs for 90-day readmission and 90-day mortality were 1.45 (95% CI, 1.37-1.53) and 1.68 (95% CI, 1.44-1.95), respectively (eTable 5 in the Supplement). Thus, estimated risk differences for 90-day readmission comparing 2 hospitals that are 2 SDs apart are substantially larger than those observed between teaching and nonteaching hospitals (OR, 1.13), as well as between safety-net and non–safety-net hospitals (OR, 1.14) (Table 3).
Across the 260 hospitals with a mean annual surgical volume of 10 or more, model-based in-hospital mortality risk-adjusted rates varied from 0.0% to 4.1% (Figure, A). Furthermore, 90-day readmission and 90-day mortality risk-adjusted rates varied from 7.7% to 23.7% and from 0.1% to 4.2%, respectively. For 42 hospitals (16.1%), the 95% CI for the 90-day readmission risk-adjusted rate was entirely below the statewide overall mean; for 58 hospitals (22.3%), the 95% CI was above the statewide overall rate. Similar results were observed for in-hospital and 90-day mortality.
The distributions of RASR ratios varied from 0.73 to 1.41 for in-hospital mortality, 0.76 to 1.46 for 90-day readmission, and 0.71 to 1.62 for 90-day mortality (Figure, B). Two-way and 3-way classifications of hospitals by RASR ratios indicate a generally positive correlation but also substantial variation in hospital-specific profiles (Figure, C, and Table 4). Moreover, 7.3% of the hospitals with a mean annual surgical volume of 10 or more (19 of 260) had poor performance, as indicated by a RASR ratio greater than 1.0 (ie, a higher-than-expected rate), for all 3 outcomes, while 22.7% (59 of 260) had good performance, as indicated by a RASR ratio less than 1.0 (ie, a lower-than-expected rate), for all 3 outcomes (Table 4). The remaining hospitals had mixed performance, with 40.4% (105 of 260) having good performance for 2 outcomes and 29.6% (77 of 260) having good performance for only 1 outcome.
Hospital performance profiles varied across hospital characteristics (Table 4). Among 179 nonprofit hospitals with a mean annual surgical volume of 10 or more, 26.8% (48) had good performance for all 3 outcomes; only 15.2% (7 of 46) of for-profit and 11.4% (4 of 35) of public hospitals fell in this optimal category. Similarly, hospitals in this optimal category made up an increasingly large proportion of hospitals as mean annual volume increased (from 20.5% [23 of 112] of hospitals with a volume of 10-49 to 25.6% [10 of 39] of hospitals with a volume ≥200). While teaching and nonteaching hospitals had similar frequencies of suboptimal performers (ie, those with poor performance on ≥2 outcomes), at 40.7% (11 of 27) and 36.5% (85 of 233), respectively, they differed substantially in that 66.7% (18 of 27) of teaching hospitals had poor performance with respect to 90-day readmission, compared with only 45.5% (106 of 233) of nonteaching hospitals. In contrast, 25.9% (7 of 27) of teaching hospitals and 39.1% (91 of 233) of nonteaching hospitals had poor performance with respect to 90-day mortality. Performance profiles stratified by status as a critical access hospital or as an NCI-designated cancer center are not presented owing to the small number of such hospitals (Table 2).
For the postdischarge outcomes, although event rates at 30 days are approximately half those at 90 days, the variation in hospital-specific rates and RASR ratios across hospitals was similar to that observed in the 90-day analyses (eTable 1 and eFigure 2 in the Supplement).
Discussion
Although not without controversy,31 readmission and mortality are entrenched as hospital quality metrics.5,12,15,32-37 Although current federal policies focus on a relatively narrow set of conditions, whether readmission and mortality rates are relevant more broadly is the subject of debate.38-40 In this study, we found substantial and statistically significant variation in these outcomes, as well as in in-hospital mortality, across acute care hospitals in California that perform cancer surgery. Furthermore, we found evidence of substantial variation in performance profiles across hospitals, with more than three-quarters of hospitals having higher-than-expected rates for at least 1 of in-hospital mortality, 90-day readmission, and 90-day mortality. Although the absolute rates were lower, findings regarding variation in 30-day readmission and mortality rates were similar.
A crucial feature of our analysis was to consider the 11 cancers simultaneously and, in particular, not to stratify by cancer type. We made this decision for 3 reasons. First, we do not believe that the development and implementation of separate quality improvement initiatives across cancer types represents a viable policy goal. Practically, separate cancer-specific initiatives would likely represent too great a burden on hospitals as well as on decision makers as they seek to reward or penalize high or low quality of care. Furthermore, sample size considerations for a strategy focused on separate initiatives would systematically exclude less common cancers and low-volume hospitals from potentially benefiting. Second, variation in patient outcomes across cancer types, as well as tumor characteristics, is acknowledged and accounted for in our analyses through their inclusion as adjustment variables in the models. In addition, variation across hospitals in the case-mix of cancer types is the key motivation for reporting the RASR ratio, which essentially compares a hospital with itself. Finally, after adjusting for differences in patient case-mix, postoperative quality of care should arguably be agnostic to the procedure, especially after discharge.
Collectively, our results suggest that in-hospital mortality and postdischarge 90-day readmission and mortality are relevant and important targets for the development of tailored interventions and incentive policies regarding quality of care after curative intent surgery. Moreover, although some research has been done on developing and evaluating interventions for reducing readmissions,41,42 our results suggest that such interventions and policies should acknowledge the multidimensional nature of quality and target hospitals based on individual performance profiles. For example, policies that encourage careful examination of postdischarge monitoring to anticipate complications and to shift care to the outpatient setting may be appropriate for hospitals with high readmission rates but low mortality rates. In contrast, policies that focus on early recognition of potentially life-threatening postoperative complications, such as sepsis, pulmonary embolism, and dehydration, may be more appropriate for hospitals with high postdischarge mortality. For these hospitals, incentivizing focus on decreasing readmission rates could have the unintended effect of further increasing mortality. From the perspective of individual institutions, simultaneous assessment of RASR ratios across the 3 outcomes may help hospitals better understand their performance with respect to their cancer surgery populations and serve as a useful benchmark for quality improvement initiatives.
A study by Dimick et al38 showed that profiling hospital surgical performance on the basis of composite metrics, including morbidity, length of stay, and reoperation rates, can assist hospital performance benchmarking. Our approach did not consider a single composite metric because distinguishing between the 3 events may help hospitals prioritize areas for quality improvement and identify tailored strategies to optimize their performance. As the CMS and other health insurers move toward consideration of defined global episodes and outcomes-based reimbursement, we suggest that quality assessments simultaneously consider the in-hospital experience of patients, which is under the direct control of the hospital, together with the vulnerable postdischarge period, which will require hospitals to develop risk-tailored strategies for the frequency of home monitoring and support.
Additional research is needed to better understand heterogeneity in performance profiles across hospitals, in particular why some hospitals perform well on 1 or 2 metrics but not on all 3. In line with other work,33,43 we found evidence that certain hospital characteristics are associated with both readmission and mortality, most notably hospital volume. Similar to results presented by Krumholz et al33 and Hollis et al,44 however, the inclusion of such factors into the hierarchical models only minimally helped explain between-hospital variation. This finding suggests that strategies for improving hospital performance should be customized on the basis of key hospital attributes43,45-47 as well as their individual performance profiles.
Limitations
Our study has a number of limitations. First, we did not distinguish between planned and unplanned readmissions, nor did we restrict the analysis to readmissions to the same hospital at which patients were initially treated.48 This, however, is in line with the approach currently used by the CMS. Second, some hospitals may have closed or merged during the study period. However, of the 260 hospitals with a mean annual surgical volume of 10 or more, 93.4% contributed to all 5 years of the study period and 95.4% contributed in the final year. Thus, closure and merging would likely have had only minimal influence on the characterization of variation in performance across hospitals. Third, our study focused on hospitals in California, and we cannot guarantee that the findings generalize to other regions. However, our analyses complement recently published work by Whitney et al,49 who used the same data source to describe the high burden of hospitalization within a year of cancer diagnosis.
Conclusions
After accounting for patient case-mix differences, there is substantial between-hospital variation in in-hospital mortality, 90-day readmission, and 90-day mortality rates among patients undergoing cancer surgery. Policies aimed at improving quality of cancer care should target hospitals based on individual performance profiles.
Accepted for Publication: July 31, 2018.
Published: October 5, 2018. doi:10.1001/jamanetworkopen.2018.3038
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2018 Haneuse S et al. JAMA Network Open.
Corresponding Author: Sebastien Haneuse, PhD, Department of Biostatistics, Harvard T.H. Chan School of Public Health, 655 Huntington Ave, Boston, MA 02115 (shaneuse@hsph.harvard.edu).
Author Contributions: Drs Haneuse and Schrag had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Concept and design: Haneuse, Schrag.
Acquisition, analysis, or interpretation of data: Haneuse, Dominici, Normand.
Drafting of the manuscript: Haneuse.
Critical revision of the manuscript for important intellectual content: Dominici, Normand, Schrag.
Statistical analysis: Haneuse, Dominici, Normand.
Obtained funding: Dominici.
Administrative, technical, or material support: Schrag.
Supervision: Schrag.
Conflict of Interest Disclosures: Dr Haneuse reported grants from the National Cancer Institute during the conduct of the study. Dr Normand reported a patent 201810345624.5 pending. No other disclosures were reported.
Disclaimer: Dr Haneuse, a JAMA Network Open statistical editor, was not involved in any of the decisions regarding review of the manuscript or its acceptance.
References
5. Eskander RN, Chang J, Ziogas A, Anton-Culver H, Bristow RE. Evaluation of 30-day hospital readmission after surgery for advanced-stage ovarian cancer in a Medicare population. J Clin Oncol. 2014;32(36):4113-4119. doi:10.1200/JCO.2014.56.7743
7. Zafar SN, Shah AA, Raoof M, Wilson LL, Wasif N. Potentially preventable readmissions after complex cancer surgery: analysis of the national readmissions dataset. J Clin Oncol. 2017;35(4)(suppl):109.
13. Zheng C, Habermann EB, Shara NM, et al. Fragmentation of care after surgical discharge: non-index readmission after major cancer surgery. J Am Coll Surg. 2016;222(5):780-789.e2.
14. Krumholz HM, Lin Z, Keenan PS, et al. Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587-593. doi:10.1001/jama.2013.333
16. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP; STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4(10):e296. doi:10.1371/journal.pmed.0040296
17. Greene FL, Page DL, Fleming ID, et al, eds. AJCC Cancer Staging Manual. 6th ed. New York, NY: Springer-Verlag; 2002.
23. Ash A, Fienberg S, Louis T, Normand S, Stukel T, Utts J. Statistical Issues in Assessing Hospital Performance. Washington, DC: Centers for Medicare & Medicaid Services; COPSS-CMS White Paper; 2012.
28. Cohen ME, Ko CY, Bilimoria KY, et al. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus. J Am Coll Surg. 2013;217(2):336-346.e1.
29. McCulloch CE, Searle SR, Neuhaus JM. Generalized, Linear, and Mixed Models. Hoboken, NJ: John Wiley & Sons; 2008.
37. Altobelli E, Buscarini M, Gill HS, Skinner EC. Readmission rate and causes at 90-day after radical cystectomy in patients on early recovery after surgery protocol. Bladder Cancer. 2017;3(1):51-56. doi:10.3233/BLC-160061
40. Staudenmayer KL, Hawn MT. The hospital readmission reduction program for surgical conditions: impactful or harmful? Ann Surg. 2018;267(4):606-607.
49. Whitney RL, Bell JF, Tancredi DJ, Romano PS, Bold RJ, Joseph JG. Hospitalization rates and predictors of rehospitalization among individuals with advanced cancer in the year after diagnosis. J Clin Oncol. 2017;35(31):3610-3617.