Bridging research to policy can be a noble but daunting task. We all hope that the body of research we develop, if disseminated well, will inform policies for children. We often confront barriers to this, such as incongruence of research and policy time frames, misalignment of stakeholders, and disagreement on goals or objectives, but what does one do when the gravity of an issue requires an urgent policy response but the quality of evidence is lacking? Such was my experience as a White House appointee on the Commission to Eliminate Child Abuse and Neglect Fatalities. The commission, created through the Protect Our Kids Act of 2012, was charged with developing a national strategy for reducing child abuse fatalities. After visiting state officials, community leaders, and families across the country over 2 years, the commission concluded that an innovative cross-sector public health approach was needed. However, arriving at that conclusion was not simple.1
The commission’s task to weigh the strength of the evidence for preventing child abuse fatalities seemed straightforward. The statistics on child abuse fatalities were compelling: 4 to 8 children die of abuse and neglect every day in this country, and for every fatality of an infant younger than 1 year, 10 other infants require hospitalization.2
As we weighed the individual stories of these children and the anecdotes of what went wrong, it became apparent that little empirical evidence exists about what interventions could best prevent child abuse fatalities. Many clinical trials, such as those featuring trauma treatment, parenting skill building, or parental and child behavioral health screening and treatment, reduced child welfare involvement or improved parenting practices but did not measure abuse fatalities. Families who are repeatedly reported to the child welfare system share some characteristics with those who ultimately kill their children, but fatal child abuse has its own epidemiology. Fatalities principally occur in younger children3 and often through unique mechanisms (eg, shaken baby syndrome, murder-suicides within families, infanticide, or neglect fatalities related to poor supervision or parent intoxication). The sobering reality is that when one distills the evidence to focus solely on fatalities, there is only 1 clinical trial4 of an early infancy home visitation program that has reported a reduction in abuse-related mortality, albeit with small numbers. The prevention trials we reviewed generally had insufficient sample sizes to examine fatalities.
The truth is that public policy frequently requires leaders to make their best assessment in the absence of definitive evidence. Even when randomized clinical trials are available, their generalizability to real-world practice at scale may be questionable. Implementation science theory supports intervention context—community and organizational resources, networks and processes, and patient/client efficacy and engagement—as a critically important influencer of outcomes in the implementation of evidence-based models.5 For example, while some early trial evidence found that home visitation might reduce child abuse,6 other trial and implementation evidence has been mixed.7-9 Regardless, the government is funding and growing a national program of home visiting (ie, Maternal, Infant, and Early Childhood Home Visiting) with an aim of maltreatment prevention based on a minority of old trials. In fact, a US Preventive Services Task Force systematic review10 concluded that the evidence that home visiting reduces child abuse is weak. This is not to say that home visiting programs are ineffective—they affect a range of maternal and child outcomes—but it is not clear that these programs can help reduce child abuse.
How does one develop policy when evidence is lacking? I pondered that a lot during commission hearings and came to realize that there was a power in the cumulative testimony we were receiving across the country. We heard of substance abuse treatment programs in Hillsborough County, Florida, that reduced cosleeping deaths associated with intoxication by providing expectant mothers with cardboard bassinets. We encountered a public health program in Wisconsin that offered voluntary services to families who were reported to child welfare systems but not substantiated for maltreatment. Child welfare social workers in Vermont routinely visited families alongside domestic violence counselors. Coordinated care organizations in Oregon’s Medicaid program worked with child welfare professionals to identify risk and refer families for treatment as early as possible. Military authorities in Colorado Springs shared data proactively with civilian child welfare professionals to identify high-risk military families who were using both military and civilian service systems. As we traveled across the country, numerous communities anecdotally reported fewer fatalities after integrating their service delivery systems with single-case management, reducing barriers for families to access services, and working collaboratively across disciplines to mitigate risk.
Because the testimonials were anecdotes, not rigorous scientific investigation, the question is whether they should have been disregarded. I would argue not. The richness of scientific evidence may not have been a problem when deliberating vaccine policy, but vaccine policy may be the exception rather than the rule. When it comes to most child health policy, it would be a mistake to disregard community testimony that lacks P values. One could argue that, distilled down, the commission conducted a rich qualitative experiment in which we received local testimony and analyzed common themes among a purposively sampled small set of communities. Given that every community reporting a reduction in fatalities revealed some connectivity across public systems (eg, public health and child welfare), I would argue that there is an associated P value, even if it is not measurable.
Ultimately, in light of the consistency of testimony we received across the country, the commission concluded that local communities were best positioned to respond to the crisis of child abuse fatalities. The central recommendation in the commission’s report calls for states to develop public health approaches to reduce fatalities through plans that share accountability across multiple systems that provide services to children locally. At the same time, we recommended that states should be provided more flexibility in how they can tap federal funding to invest in upstream prevention. States would then be required to measure the value of these investments by reporting child abuse fatalities more consistently over time.
Did we get it right? Only time will tell. But the lesson about the value of scientific evidence was instructive. Translating scientific evidence to policy remains the goal for the work we do, but we need to place this effort in context against the pressures and urgency of a policy arena, which is often asked to deliberate in a gray area. It often requires an open mind that is willing to embrace the uncertainty of anecdote in a real and systematic way.
Corresponding Author: David Rubin, MD, MSCE, PolicyLab, Children’s Hospital of Philadelphia, 34th Street and Civic Center Boulevard, Attn: CHOP North 3535 Market, Ste 1544, Philadelphia, PA 19104 (firstname.lastname@example.org).
Published Online: August 8, 2016. doi:10.1001/jamapediatrics.2016.1945.
Conflict of Interest Disclosures: None reported.
Rubin D. Developing Policy When Evidence Is Lacking. JAMA Pediatr. 2016;170(10):929–930. doi:10.1001/jamapediatrics.2016.1945