Preintervention period cohort (left) and intervention period cohort (right). FFS indicates fee for service.
eTable. Full Multivariable Model for Difference in Differences Analysis
Jenq GY, Doyle MM, Belton BM, Herrin J, Horwitz LI. Quasi-Experimental Evaluation of the Effectiveness of a Large-Scale Readmission Reduction Program. JAMA Intern Med. 2016;176(5):681–690. doi:10.1001/jamainternmed.2016.0833
Importance
Feasibility, effectiveness, and sustainability of large-scale readmission reduction efforts are uncertain. The Greater New Haven Coalition for Safe Transitions and Readmission Reductions was funded by the Center for Medicare & Medicaid Services (CMS) to reduce readmissions among all discharged Medicare fee-for-service (FFS) patients.
Objective
To evaluate whether overall Medicare FFS readmissions were reduced through an intervention applied to high-risk discharge patients.
Design, Setting, and Participants
This quasi-experimental evaluation took place at an urban academic medical center. Target discharge patients were older than 64 years with Medicare FFS insurance, residing in nearby zip codes, and discharged alive to home or facility and not against medical advice or to hospice; control discharge patients were older than 54 years with the same zip codes and discharge disposition but without Medicare FFS insurance if older than 64 years. High-risk target discharge patients were selectively enrolled in the program.
Interventions
Personalized transitional care, including education, medication reconciliation, follow-up telephone calls, and linkage to community resources.
Main Outcomes and Measures
We measured the 30-day unplanned same-hospital readmission rates in the baseline period (May 1, 2011, through April 30, 2012) and intervention period (October 1, 2012, through May 31, 2014).
Results
We enrolled 10 621 (58.3%) of 18 223 target discharge patients (73.9% of discharge patients screened as high risk) and included all target discharge patients in the analysis. The mean (SD) age of the target discharge patients was 79.7 (8.8) years. The adjusted readmission rate decreased from 21.5% to 19.5% in the target population and from 21.1% to 21.0% in the control population, a relative reduction of 9.3%. The number needed to treat to avoid 1 readmission was 50. In a difference-in-differences analysis using a logistic regression model, the odds of readmission in the target population decreased significantly more than those of the control population in the intervention period (odds ratio, 0.90; 95% CI, 0.83-0.99; P = .03). In a comparative interrupted time series analysis, the difference in monthly adjusted readmission rates between the target and control populations changed by −3.09 percentage points (95% CI, −6.47 to 0.29; P = .07), a similar but nonsignificant effect.
Conclusions and Relevance
This large-scale readmission reduction program reduced readmissions by 9.3% among the full population targeted by the CMS despite being delivered only to high-risk patients. However, it did not achieve the goal reduction set by the CMS.
Financial penalties from the Readmission Reduction Program of the Patient Protection and Affordable Care Act1 now affect more than half of US hospitals,2 making readmission reduction a top priority. Hospitals have adopted approaches found to be effective in small randomized clinical trials.1,3-5 However, the effectiveness of readmission reduction efforts outside the pilot or clinical trial setting remains uncertain: few evaluations of large-scale, system-wide care redesign have been performed.6,7 Randomized clinical trials that support specific programs have each included fewer than 400 patients in the intervention arm, have excluded large proportions of patients, have rarely been replicated, and have not reported on sustainability.3,4,8-12 Readmission reduction efforts have been largely unsuccessful6,12-14 and often rely on uncontrolled pre/post analyses that do not account for secular trends.15
Through the Community-based Care Transitions Program (CCTP), the Center for Medicare & Medicaid Services (CMS) reimburses participants for transitional care services.16 Hospitals with above average readmission rates that have partnered with community-based organizations are eligible for participation. The national readmission rate for Medicare fee-for-service (FFS) patients older than 64 years was 15.5% in the 2012 to 2013 period.17 From 2012 to 2014, the CCTP funded the Greater New Haven Coalition for Safe Transitions and Readmission Reductions (Co-STARR) program to intervene in high-risk patients with a goal of reducing all-cause Medicare readmissions by 20%. In this study, we use a quasi-experimental design to evaluate the effectiveness and sustainability of the Co-STARR program in reducing same-hospital readmissions during 2 years. Such rigorous evaluation of full-scale, real-world programs is essential to understanding the scalability and effectiveness of delivery system redesign.
Question How effective was the program in reducing readmissions among patients 65 years and older with Medicare fee-for-service insurance?
Findings This quasi-experimental analysis of a readmission reduction program compared the change in readmissions over time among all eligible patients 65 years and older with Medicare fee-for-service insurance ("target" population) with the change in readmissions among patients 55 years and older not otherwise in the target population. The program was associated with a significant 9% decline in readmissions among the target population relative to controls during the 19-month intervention period compared with the baseline year, even though only 58% of target discharge patients received the intervention.
Meaning A comprehensive readmission reduction program can feasibly be implemented to scale and sustained over time and can provide small improvements in outcomes.
Co-STARR began in May 2012 as a partnership among Yale–New Haven Hospital (YNHH), an academic hospital with 990 beds; the Hospital of Saint Raphael, a community hospital with 550 beds; and the Agency on Aging of South Central Connecticut. In October 2012, the YNHH acquired the Hospital of Saint Raphael, retaining a total of 1541 beds on 2 campuses (the Saint Raphael campus and the York Street campus), which represent all the hospital beds in the city of New Haven. The YNHH has previously conducted large-scale systems-level improvements and is a Magnet-recognized hospital in recognition of its high nursing standards.
The program was implemented hospital-wide and was fully supported by senior executive leaders, who made readmission reduction a hospital-wide quality improvement priority, included it in the annual performance incentive plan, and contributed financial support to the community partner when the CMS funds were delayed. The YNHH medical director of inpatient services (G.Y.J.) assumed leadership of the program as a full-time responsibility during the intervention period, supported by hospital leaders, who reassigned her prior responsibilities to others during the program.
We based the program on published transitional care strategies adapted to our local setting and tailored to the needs of each patient.3,18 We selectively enrolled high-risk discharge patients using a modified risk assessment tool developed by the Society of Hospital Medicine’s Project BOOST (Better Outcomes by Optimizing Safe Transitions).18 We used the BOOST risk factors of 10 or more routine medications on admission or use of high-risk medications (insulin, anticoagulants, oral hypoglycemic agents, dual antiplatelet therapy, digoxin, or narcotics); principal diagnosis of cancer, chronic obstructive pulmonary disease, stroke, heart failure, or diabetes mellitus; depressive symptoms or diagnosis of depression; physical limitations; poor patient support; need for palliative care; or previous nonelective 30-day readmission in the past 6-month period. We omitted the BOOST risk factor of poor health literacy because we were unable to conduct routine health literacy screening on every patient. Care coordinators on each hospital unit screened all target patients for high-risk criteria on admission. Patients with any risk factor were considered high risk. Because of software limitations, only screens with positive results were recorded; therefore, we could not distinguish between patients who screened negative for all risk factors and those who were not screened.
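For concreteness, the any-positive-factor screening rule described above can be sketched as follows. This is an illustrative Python sketch, not the hospital's actual screening software; all field names are hypothetical, and in practice care coordinators screened via electronic medical record review.

```python
# Hypothetical sketch of the modified-BOOST screening rule: a patient is
# flagged high risk if ANY risk factor is present. Field names are invented
# for illustration only.

HIGH_RISK_MEDS = {"insulin", "anticoagulant", "oral hypoglycemic",
                  "dual antiplatelet", "digoxin", "narcotic"}
HIGH_RISK_DIAGNOSES = {"cancer", "COPD", "stroke", "heart failure", "diabetes"}

def is_high_risk(patient: dict) -> bool:
    """Return True if any modified-BOOST risk factor is present."""
    risk_factors = [
        patient.get("routine_medication_count", 0) >= 10,
        bool(HIGH_RISK_MEDS & set(patient.get("medications", []))),
        patient.get("principal_diagnosis") in HIGH_RISK_DIAGNOSES,
        patient.get("depression", False),
        patient.get("physical_limitations", False),
        patient.get("poor_support", False),
        patient.get("palliative_care_need", False),
        patient.get("readmitted_within_6_months", False),
    ]
    return any(risk_factors)
```

Note that, as described above, health literacy is deliberately omitted from the rule, and a patient with no recorded positives is indistinguishable from one who was never screened.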
The likely discharge disposition of the patients was determined on a daily basis to facilitate planning. Our program was designed differently for each of these discharge dispositions.
Patients discharged home were followed up by transitional care consultants (TCCs) hired specifically for this program. In preliminary case reviews, we had noted that our older patients were not always connected to the full set of community services for which they were eligible. For this reason, our TCCs were social workers from the Area Agency on Aging who were knowledgeable about local resources. Approximately 4 TCCs were employed by the program at any given time with a target caseload of 40 patients per TCC. The TCCs received a daily list of high-risk target patients who were anticipated to be discharged home. The TCCs met with the patients and/or family during hospitalization to explain the program and enroll patients. The TCCs also performed bedside assessments of patients’ social, cognitive, functional status, and postdischarge needs.
After discharge, the TCCs made follow-up telephone calls, which were primarily geared toward ensuring that patients were engaged in their care. They focused conversations around patient understanding of the discharge instructions, medication management, follow-up with health care professionals in the community, support services, and clinical symptoms and signs that the patient should monitor. When they identified problems, they took action or coached patients to do so. The follow-up telephone calls took place at least once a week for 30 days after discharge and more often as needed. When necessary, the TCCs made home visits to further assess the patients’ social, cognitive, and functional needs. Standard practice for patients discharged home consists of a printed set of discharge instructions generated by the discharging team and 1 follow-up telephone call to assess any urgent postdischarge needs.
For enrolled patients discharged to a skilled nursing facility or long-term care facility, 3 dedicated care coordinators for the Co-STARR program called the post–acute care primary nurse within 48 hours of discharge. In these calls, the care coordinators discussed the care plan and addressed any questions. They also discussed goals of care if the patient had palliative care or potential hospice needs.
In preliminary case reviews, we had noted that patients who were readmitted more than 2 to 3 weeks after discharge often had experienced a secondary, suboptimal care transition from a postdischarge facility back to the community before readmission. Accordingly, we developed a third variation of our program for patients discharged to short-term inpatient rehabilitation in which we also focused on improving the safety of the secondary transition from rehabilitation back home. For these patients, care coordinators conducted the same telephone call with the receiving nurse as they did for patients discharged to permanent facility settings but continued to follow up weekly. On discharge from rehabilitation, the Co-STARR care coordinator referred the patient to the TCC to ensure the transition from rehabilitation to community was well managed. By comparison, the standard of care for nonenrolled patients discharged to a facility was to send the patient with printed discharge instructions but without a telephone call and continued follow-up.
During the first 5 months of the intervention period, we developed a basic orientation program to educate program staff about interventions and strategies known to improve care transitions and logistics and operations of the program. We taught staff how to screen patients for readmission risk factors through electronic medical record review, how to approach patients, how to assess their psychosocial and medical needs, and how to empower patients and families to attend postdischarge follow-up appointments, manage medications, and identify and manage symptoms. Staff met on a weekly basis to troubleshoot and manage issues with the program. The staff also participated in quarterly learning collaboratives offered through the national CCTP.
Enrollment in Co-STARR lasted 180 days, although activity was concentrated on the first discharge. Patients could be enrolled again after 180 days if they had another qualifying hospitalization. Patients provided verbal consent for participation, and the evaluation was approved by the Yale School of Medicine Institutional Review Board, which granted a Health Insurance Portability and Accountability Act waiver for the evaluation.
Because the first 5 months of the intervention period were largely devoted to hiring, training, and pilot testing of screening and intervention protocols, we defined the intervention period as including patients discharged from October 1, 2012, through May 31, 2014, when the CMS ended our program. We defined the preintervention period as the 12 months before the intervention (May 1, 2011, through April 30, 2012). To account for a run-in period, we excluded patients discharged from May 1 through September 30, 2012. The data were analyzed from December 2014 through October 2015.
We included all discharges of patients targeted by the CMS in our analysis of the program effect—analogous to an intent-to-treat analysis. This approach reduces the bias created by selectively enrolling high-risk target discharge patients without randomization. Thus, our target population in the baseline and intervention period included all discharges of patients older than 64 years with Medicare FFS insurance discharged alive and to eligible locations (ie, not another acute care facility, psychiatric hospital, correctional facility, or hospice) who lived in target zip codes. Note that the target population is therefore a heterogeneous mix of discharges of patients who were enrolled on that admission, those who had been enrolled in Co-STARR in the previous 180 days, and those who were not enrolled because they screened low risk, refused, or were not screened.
We could not prospectively create a high-risk control group because the mandate of the program was to enroll as many high-risk patients as possible. We had initially planned to identify a high-risk control group post hoc for a matched analysis; however, because we selectively enrolled high-risk patients and because more than half of all target discharges were enrolled in the program, we did not have a sufficient number of unenrolled high-risk discharge patients in the intervention period to do so. The high enrollment rate also created concerns that the program may have influenced practice for nonenrolled target patients. Moreover, we did not have risk assessments for all nonenrolled patients and could not reliably determine which nonenrolled patients were high risk. We could not identify a comparable group to the enrolled patients in the preintervention period because we were not then conducting high-risk screening. Therefore, we defined as our control population all discharges of patients older than 54 years to eligible locations and living in the target zip codes who otherwise did not meet inclusion criteria for the target population (ie, did not have Medicare FFS insurance if older than 64 years). Discharges of the same patient could be included in the control and target populations if insurance status changed over time. We also subsequently conducted a secondary analysis restricting our control population to patients 64 years and older; we did not a priori make this our primary control group because most patients older than 64 years were already included in the target population.
We did not have patient-level all-hospital readmission data. Accordingly, our primary outcome was same-hospital, unplanned, 30-day readmission. Data from the CMS indicate that in the 2011 to 2012 period a total of 83% of readmissions after discharge from YNHH returned to YNHH. In the 2013 to 2014 period, after the hospital merger, 89% of Medicare readmissions returned to YNHH. We used the CMS planned readmission algorithm to identify and exclude planned readmissions.19
The primary exposure variable of interest was the intervention period. We also included the following as covariates: age, sex, race/ethnicity, principal diagnosis (grouped using the Agency for Healthcare Research and Quality Clinical Classification Software20), the Elixhauser comorbidity variables (identified during the index admission or any admission in the prior 6 months),21 number of admissions in prior 6 months, hospital campus, discharge disposition (home or to facility), and Rothman Index Score (a severity of illness score based on vital signs, laboratory tests, and nursing assessments for which higher scores indicate better health status and lower readmission risk).22,23 Missing Rothman scores were imputed using multiple imputation with 30 imputations.
We conducted 2 complementary, quasi-experimental analyses to assess the effect of the program: difference in differences and interrupted time series (ITS).
To determine whether there was an overall effect of the program, we estimated a patient-level logistic regression model that included all target and control discharges, all risk adjusters listed above, a variable for elapsed months to account for secular trends, and indicators for intervention period, target population, and their interaction. The dependent variable was unplanned readmission within 30 days. By testing for an interaction between intervention period and target population, we assessed whether there was a difference in the change in readmission rate over time between the 2 populations (difference in differences). Note that the difference-in-differences design does not require that the control and intervention groups have similar baseline characteristics but rather assumes that both groups would have experienced similar changes in outcomes over time without the program.
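As a concrete illustration, the interaction-term estimation can be sketched with a logistic regression in Python's statsmodels (the authors used SAS and Stata); the data below are simulated for illustration, not the study's:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated discharge-level data (NOT the study data): an indicator for the
# target population, an indicator for the intervention period, and elapsed
# months to capture the secular trend, as in the model described above.
rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "target": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
    "month": rng.integers(0, 31, n),
})
# Simulate ~21% readmission with a small extra decline for target x post
lin = -1.3 - 0.003 * df["month"] - 0.10 * df["target"] * df["post"]
df["readmit"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

# The target:post interaction carries the difference-in-differences effect
fit = smf.logit("readmit ~ target * post + month", data=df).fit(disp=0)
did_or = np.exp(fit.params["target:post"])  # odds ratio for the DiD term
print(f"difference-in-differences odds ratio: {did_or:.2f}")
```

The study's actual model additionally included the full covariate set and robust standard errors clustered by patient; this sketch shows only the skeleton of the interaction design.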
We also conducted an ITS analysis. This approach has the advantage of being able to distinguish an effect of the program from a difference in underlying secular trends in the control and target populations (which could produce a misleadingly significant difference-in-differences result) and can also help to determine whether the program effect was sustained. We calculated monthly adjusted readmission rates using a linear probability model that included all discharges from the target and control populations and all risk variables. This model also included indicators for each month and interactions of these with an indicator for the control population. We centered all risk factors on their overall means and suppressed the intercept to avoid omitting any month indicators. We graphed these monthly rates for the 2 groups over time and used the estimated monthly rates for target and control populations to calculate a monthly difference between the 2 populations (Dt). We then determined whether there was any overall decrease in adjusted monthly readmission rates in the intervention period and whether there was a time trend effect caused by the program by estimating a model of the monthly difference Dt as follows:
Dt = β0 + βt × Time + βI × Intervention + βtxI × (Time − Intervention Month) × Intervention + εt.
In this formula, βt reflects the overall effect of time (measured in elapsed months), βI reflects the effect of the program on the difference between the target and control populations, and βtxI reflects any change in time trend effect caused by the program.24 This model was estimated using an ARIMA(8,0,0) error structure, with an 8-month autoregressive term. We selected the best autoregressive model by comparing Akaike information criteria across values of the autocorrelation parameter for a model that contained only the dependent variable and calendar time.
All patient-level models used robust (sandwich) estimators to adjust SEs for clustering by patient. Data management and analyses were performed with SAS statistical software, version 9.4 (SAS Institute Inc), and STATA statistical software, version 14 (StataCorp).
In the preintervention period, there were 12 969 discharges of patients with Medicare FFS insurance 65 years and older living in our target zip codes; of these, 624 (4.8%) died during hospitalization or were discharged to an ineligible location, leaving 12 345 discharges (95.2%) in the preintervention target population cohort (Figure 1). During the intervention period, there were 19 694 discharges of patients with Medicare FFS insurance 65 years and older living in our target zip codes. Of these, 1471 (7.5%) died during hospitalization, were transferred to another acute care facility, or left against medical advice, leaving a target population of 18 223 discharges in the intervention period (92.5%). All these discharges were included in the analysis. Of these, 12 482 (68.5%) were screened as high risk during admission. We enrolled 10 621 target population discharges (58.3%): 7021 (38.5%) were of patients enrolled into Co-STARR during that hospitalization, and 3600 (19.8%) were discharges of patients who had been enrolled within the previous 180 days. Of the 12 482 high-risk discharge patients, we enrolled 9229 (73.9%). Patients received a mean of 4.3 interventions (range, 0-16) each.
Our control population comprised 11 775 discharges in the preintervention period (468 [3.8%] excluded for in-hospital death or discharge to ineligible location) and 20 077 discharges in the postintervention period (997 [4.7%] excluded for in-hospital death or discharge to ineligible location). Our restricted control population of those older than 64 years comprised 5116 discharges in the preintervention period and 8976 discharges in the postintervention period.
As reported in Table 1, the target population differed significantly from the control population in all variables but was clinically most different in terms of age (older target population), sex (more women in the target population), race (more whites in the target population), and discharge disposition (more often discharged to a facility in the target population).
The adjusted readmission rate decreased from 21.5% to 19.5% in the target population and from 21.1% to 21.0% in the control population, a relative reduction of 9.3%. The number needed to treat (NNT) to avoid 1 readmission was 50.
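The arithmetic behind these figures is straightforward: the 2.0-percentage-point absolute reduction in the target population (21.5% to 19.5%) yields the quoted relative reduction and NNT.

```python
# Adjusted readmission rates quoted above, in percent
target_pre, target_post = 21.5, 19.5
control_pre, control_post = 21.1, 21.0

# Difference in differences, in absolute percentage points (~ -1.9)
did = (target_post - target_pre) - (control_post - control_pre)

# Relative reduction in the target population: 2.0 / 21.5 ~ 9.3%
relative_reduction = (target_pre - target_post) / target_pre * 100

# Number needed to treat = 1 / absolute risk reduction (2.0 points = 0.02)
nnt = 1 / ((target_pre - target_post) / 100)
print(round(relative_reduction, 1), round(nnt), round(did, 1))
```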
The difference-in-differences analysis revealed that the odds of readmission in the target population decreased significantly more than that of the control population in the intervention period (odds ratio, 0.90; 95% CI, 0.83-0.99; P = .03) (Table 2; full model results are shown in the eTable in the Supplement). Using the restricted control group limited to those older than 64 years, the odds ratio was similar but nonsignificant (odds ratio, 0.94; 95% CI, 0.84-1.05; P = .27).
The ITS difference analysis revealed that the readmission rate in the target population, after accounting for concurrent changes in the control population, changed by −3.09 percentage points (95% CI, −6.47 to 0.29; P = .07). Relative to the restricted control group older than 64 years, the absolute change was −2.39 percentage points (95% CI, −4.47 to −0.31; P = .02); in addition, in this comparison, the monthly trend changed by −0.28 percentage points per month relative to controls (95% CI, −0.52 to −0.03; P = .03) (Table 3).
Figure 2 illustrates the adjusted monthly readmission rates in the target and control populations during the full study period and the overall mean monthly preintervention and intervention period rates in both populations. The program costs were approximately $1.5 million per year ($134 for every target discharge); therefore, the program cost approximately $7000 to avoid 1 readmission in the larger population.
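As a rough check of these cost figures, using only numbers quoted in the text (the small discrepancies from the quoted $134 per discharge and approximately $7000 per readmission avoided presumably reflect rounding of the underlying annual cost):

```python
# All inputs are figures quoted in the text
annual_cost = 1_500_000       # approximate program cost per year
program_months = 19           # October 2012 through May 2014
target_discharges = 18_223
nnt = 50                      # number needed to treat from the analysis

total_cost = annual_cost * program_months / 12
cost_per_discharge = total_cost / target_discharges        # ~$130
readmissions_avoided = target_discharges / nnt             # ~364
cost_per_avoided = total_cost / readmissions_avoided       # ~$6500
print(round(cost_per_discharge), round(cost_per_avoided))
```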
We conducted a large-scale intervention, reaching more than half of the population targeted by the CMS, sustained the work for 2 years, and used 2 quasi-experimental approaches and 2 control groups to rigorously evaluate the effect of the intervention in the context of national decreases in admission and readmission rates. Our analysis revealed a fairly consistent and sustained but small, beneficial effect of the intervention on the target population as a whole.
Our findings are consistent with other readmission reduction efforts.25 In a systematic review of 43 studies,6 only 6 included more than 500 intervention patients. Among these studies, all but 1 achieved an absolute reduction of 2.5% or less. Collectively, these studies highlight one of the greatest challenges of systemic readmission reduction efforts: achieving a substantial reduction in readmissions for all patients may require reaching so many patients as to be prohibitively resource intensive or producing improbably large decreases in readmissions among the subset of those reached. The CMS goals of reducing all readmissions by 20% may be overambitious. Indeed, an early report26 of the overall outcomes of the CCTP program is not particularly encouraging. In preliminary analyses, only 4 of the initial 48 CCTP sites achieved an early, significant reduction in readmission rates of all Medicare patients.
The NNT of 50, which is quite high, emphasizes this conundrum. We might have been able to achieve a lower NNT if we had conducted a more intensive intervention among even higher-risk patients,27 but we would not in that case have been able to reach as many patients and might still not have achieved better hospital-wide outcomes. Nonetheless, even this NNT may be cost-effective for Medicare. The mean Medicare payment for a hospitalization in 2012 was $12 200, which compares favorably to our estimated $7000 program cost to avoid 1 readmission.28 Although this program was jointly conducted between a community-based organization and a hospital, it did not fundamentally alter community-based resources. Alternate strategies that involve more comprehensive community-based interventions may be necessary to achieve more substantial improvements. Of note, despite these concerns, the YNHH has elected to continue to self-fund the program, albeit on a smaller scale.
The improvements in readmission rates we observed are notable because our real-world intervention experienced many challenges. We did not adopt any existing intervention with perfect fidelity, we had difficulty consistently providing interventions to all enrolled patients, reaching 300 to 500 patients per month was taxing, and our intervention was hampered by high staff turnover, a hospital merger, and leadership changes at our community partner organization. Moreover, our intervention may not have been able to materially influence socioeconomic factors that might be contributing to readmissions. Such barriers are common, and efforts to extend small randomized clinical trial–proven interventions to scale should be mindful of operational challenges.
Our study has several limitations. First, as a single-site study, it may not be generalizable. We measured outcomes in the entire target population, even though only 58.3% were enrolled in the program. This approach likely diluted the measured effect, resulting in a conservative estimate of intervention effect, but accurately reflects the overall population effect of a program that targets high-risk patients.27 Second, because of the hospital merger, our same-hospital Medicare readmissions increased by 7.7% in the intervention period, differentially improving our ability to identify readmissions and likely obscuring the effect of the intervention. Third, this is an observational study that cannot establish causality and carries a risk of bias from unmeasured confounders. Our control population was necessarily defined differently than the intervention population with regard to age and insurance status. Nevertheless, our difference-in-differences and difference-in-trend analyses should have accounted for any systematic differences in risk that persisted over time. In addition, simultaneous changes, such as an increase in observation volume and a decrease in admission volume, may have played a role in the results. These changes might have had positive or negative effects on overall readmission rates. Some readmissions may have been replaced by observation stays, reducing readmission rates, but the acuity of index admissions may have increased as less severely ill patients were diverted to observation status, potentially increasing the overall readmission risk of the population. We noted, for instance, an increase in in-hospital mortality in the target population in the intervention period (Figure 1). Although the difference-in-differences analyses should have helped account for such secular trends, it is possible that these changes differentially affected Medicare FFS patients.
It is also possible that Medicare Advantage or commercially insured patients in the control group were receiving payer-driven readmission reduction interventions during the study period, although if so this should have been a conservative bias, reducing any apparent treatment effect. Fourth, the coincident hospital merger at the start of the intervention may have improved outcomes, although we would have expected such improvements to be similar in the control population.
We found that a large-scale readmission reduction program could feasibly be implemented and sustained despite a host of operational challenges and that it was associated with a small decrease in readmission rates in the full population, even though many were not directly touched by the intervention. Our results also highlight the importance of quasi-experimental designs in assessing large-scale practice changes when randomization is not feasible.
Accepted for Publication: February 19, 2015.
Corresponding Author: Leora I. Horwitz, MD, MHS, New York University School of Medicine, 550 First Ave, Translational Research Bldg, Room 607, New York, NY 10016 (email@example.com).
Published Online: April 11, 2016. doi:10.1001/jamainternmed.2016.0833.
Author Contributions: Ms Doyle and Dr Herrin had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Jenq, Belton, Herrin, Horwitz.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Jenq, Belton, Herrin, Horwitz.
Critical revision of the manuscript for important intellectual content: Jenq, Doyle, Belton, Herrin.
Statistical analysis: Jenq, Belton, Herrin.
Obtained funding: Jenq, Belton, Horwitz.
Administrative, technical, or material support: Jenq, Doyle, Belton, Horwitz.
Study supervision: Jenq, Belton, Horwitz.
Conflict of Interest Disclosures: Dr Horwitz reported receiving funding under contract from the CMS to develop hospital quality measures, including measures of hospital readmissions. No other disclosures were reported.
Funding/Support: This evaluation was supported by the Robert E. Leet and Clara Guthrie Patterson Trust Awards Program in Clinical Research (Dr Horwitz). The Greater New Haven Coalition to Reduce Readmissions was funded by CMS award CT-0811-0024. The CMS reviewed and approved the initial submission of the manuscript.
Role of the Funder/Sponsor: The funding source (or sources) had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the Patterson Trust or the CMS.
Additional Contributions: We thank the staff of the Agency on Aging of South Central Connecticut for conducting the program and Ronald Gibson at Yale New Haven Hospital for providing the data. No compensation was provided for the contributions.