Under Section 1115 of the Social Security Act, the Secretary of Health and Human Services may waive Medicaid requirements and permit states to pilot experimental models of health care delivery. In 2015, these programs accounted for more than $100 billion in federal expenditures. To date, 43 states have secured 55 active waivers to test various Medicaid program changes, including work requirements, benefit restrictions, and managed long-term services and supports systems.1-3 To promote transparency and accountability of these publicly funded “demonstration” programs, the Patient Protection and Affordable Care Act (ACA) mandated in 2012 that states receiving waivers publish both annual progress reports and periodic program evaluations.4 In 2014, the Centers for Medicare & Medicaid Services (CMS) provided explicit guidance on the expected content of these evaluation reports and, in 2015, established an office to monitor their quality.3 Our objective was to examine states’ compliance with both ACA demonstration program annual reporting requirements and CMS guidelines for program evaluations.
Using publicly available CMS administrative records, we conducted a 2-part cross-sectional study of state Medicaid 1115 waiver demonstration programs. In the first part, we used CMS administrative records to identify all demonstration programs active during 2011-2013 and 2016-2018. These years correspond to before ACA reporting requirements and after CMS reporting guidance, respectively. In any given year, we excluded programs in their first year of approval because insufficient time had elapsed for required reporting. Among remaining programs, we identified the presence or absence of a publicly available annual report in each year by searching the CMS.gov website, on which the agency is required by law to publish annual reports and evaluations.4 If none was identified, we also searched state Medicaid websites. We determined the overall percentage of program-years with publicly available annual reports for both 2011-2013 and 2016-2018, and compared percentages with χ2 testing.
The second part focused on the 2016-2018 publicly available annual reports. We classified a report as a program evaluation if it was identified as an “evaluation” in the CMS administrative records for the respective state. By reviewing each evaluation, we determined the presence or absence of the following report components outlined in the CMS guidance5: executive summary, background, hypothesis, methods, results, conclusions, interpretations and policy implications, and lessons learned and recommendations. We used descriptive statistics to summarize the percentage of evaluation reports in which each component was present. This study follows the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guideline for cross-sectional studies.
From 2011-2013, we identified 45 active demonstration programs across 32 states. Based on years of program activity, a maximum of 126 annual reports from these programs would be expected; however, only 6 (4.8%) were publicly available. From 2016-2018, we identified 36 active demonstration programs across 24 states. Based on years of program activity, a maximum of 93 annual reports from these programs would be expected; 54 annual reports (58%) were publicly available (P < .001) (Table).
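As an illustration, the two-period comparison reported above can be reproduced with a standard χ2 test on the 2 × 2 contingency table of report availability (6 of 126 expected reports for 2011-2013 vs 54 of 93 for 2016-2018). This is a sketch using scipy, which is an assumed tool; the article does not specify the software used for its analysis.

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = period, columns = [reports available, reports not available]
# 2011-2013: 6 of a maximum 126 expected annual reports were publicly available
# 2016-2018: 54 of a maximum 93 expected annual reports were publicly available
table = [
    [6, 126 - 6],
    [54, 93 - 54],
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2e}")
```

The resulting p value falls far below .001, consistent with the P < .001 reported in the results.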
From among publicly available annual reports between 2016 and 2018, we identified 20 that were program evaluations, representing 36 demonstration programs (24 states). The median number of components reported per program evaluation was 5 (interquartile range, 2-6). The most commonly reported program evaluation components were results (n = 18; 90%) and general background information (n = 15; 75%), whereas interpretations and policy implications and lessons learned and recommendations were least commonly reported (n = 4 for both; 20%). None of the 11 evaluations that were submitted to CMS for renewal reported interpretations and policy implications.
Despite an increase in the public availability of annual reports for Section 1115 demonstration programs after the ACA and CMS’s evaluation guidance, more than 40% were not publicly available. Moreover, when demonstration program evaluations were made publicly available, they consistently failed to report key evaluation components.
There are limitations to consider, including our inability to account for special terms and conditions negotiated between CMS and individual states that may dictate requirements for reporting frequency. In addition, we did not contact federal or state officials when annual reports or evaluations were not identified. However, given the importance of Medicaid Section 1115 waivers to pilot innovative models of health care delivery, and the amount of federal funding involved, enforcement of existing CMS rules to ensure accountability is needed, particularly for program renewals. Availability of program reports and evaluations allows the broader public to better understand the effect of these programs on Medicaid beneficiaries.
Accepted for Publication: August 18, 2020.
Published: October 26, 2020. doi:10.1001/jamanetworkopen.2020.22035
Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2020 Lopez L III et al. JAMA Network Open.
Corresponding Author: Leo Lopez III, MD, 333 Cedar St, New Haven, CT 06510 (firstname.lastname@example.org).
Conflict of Interest Disclosures: Dr Ross reports receiving research support through Yale University from Johnson & Johnson to develop methods of clinical trial data sharing; from Medtronic, Inc and the Food and Drug Administration (FDA) to develop methods for postmarket surveillance of medical devices (U01FD004585); from the FDA to establish a Yale–Mayo Clinic Center for Excellence in Regulatory Science and Innovation program (U01FD005938); from the Medical Device Innovation Consortium to support collaboration in the National Evaluation System for health Technology; from the Blue Cross Blue Shield Association to better understand medical technology evaluation; from the Centers for Medicare & Medicaid Services to develop and maintain performance measures that are used for public reporting (HHSM-500-2013-13018I); from the Agency for Healthcare Research and Quality (R01HS022882); from the National Heart, Lung, and Blood Institute of the National Institutes of Health (NIH) (R01HS025164); and from the Laura and John Arnold Foundation to establish the Good Pharma Scorecard at Bioethics International and to establish the Collaboration for Research Integrity and Transparency at Yale. No other disclosures were reported.
Funding/Support: Dr Silvestri is currently employed by NYC Health + Hospitals but completed this work at Yale University while part of the National Clinician Scholars Program at Yale, in which Dr Lopez is also enrolled; both were supported in part through Clinical and Translational Science Awards grant TL1 TR001864 from the National Center for Advancing Translational Science, a component of the NIH.
Role of the Funder/Sponsor: The NIH had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; or decision to submit the manuscript for publication.
Disclaimer: The contents of this work are solely the responsibility of the authors and do not necessarily reflect the official views of NYC Health + Hospitals, Yale University, or the NIH.
US Government Accountability Office. Medicaid Demonstrations: Evaluations Yielded Limited Results, Underscoring Need for Changes to Federal Policies and Procedures. 2018. Accessed January 26, 2019. https://www.gao.gov/assets/690/689506.pdf
Department of Health and Human Services; Centers for Medicare & Medicaid Services. Medicaid program: review and approval process for Section 1115 demonstrations; application, review, and reporting process for waivers for state innovation: final rules. Fed Regist. 2012. Accessed August 12, 2020. https://www.govinfo.gov/content/pkg/FR-2012-02-27/html/2012-4354.htm