Viewpoint
October 2016

Developing Policy When Evidence Is Lacking

Author Affiliations
  • 1PolicyLab, Children’s Hospital of Philadelphia, Philadelphia, Pennsylvania
  • 2Department of Pediatrics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
JAMA Pediatr. 2016;170(10):929-930. doi:10.1001/jamapediatrics.2016.1945

Bridging research to policy can be a noble but daunting task. We all hope that the body of research we develop, if disseminated well, will inform policies for children. We often confront barriers to this, such as incongruence of research and policy time frames, misalignment of stakeholders, and disagreement on goals and objectives, but what does one do when the gravity of an issue requires an urgent policy response but the quality of evidence is lacking? Such was my experience as a White House appointee on the Commission to Eliminate Child Abuse and Neglect Fatalities. The commission, created through the Protect Our Kids Act of 2012, was charged with developing a national strategy for reducing child abuse fatalities. After visiting state officials, community leaders, and families across the country over 2 years, the commission concluded that an innovative cross-sector public health approach was needed. However, arriving at that conclusion was not simple.1

The commission’s task to weigh the strength of the evidence for preventing child abuse fatalities seemed straightforward. The statistics on child abuse fatalities were compelling: 4 to 8 children die of abuse and neglect every day in this country, and for every fatality of an infant younger than 1 year, 10 other infants require hospitalization.2

As we weighed the individual stories of these children and the anecdotes of what went wrong, it became apparent that little empirical evidence exists about what interventions could best prevent child abuse fatalities. Many clinical trials, such as those featuring trauma treatment, parenting skill building, or parental and child behavioral health screening and treatment, reduced child welfare involvement or improved parenting practices but did not measure abuse fatalities. Families who are repeatedly reported to the child welfare system share some characteristics with those who ultimately kill their children, but fatal child abuse has its own epidemiology. Fatalities principally occur in younger children3 and often through unique mechanisms (eg, shaken baby syndrome, murder-suicides within families, infanticide, or neglect fatalities related to poor supervision or parent intoxication). The sobering reality is that when one distills the evidence to focus solely on fatalities, there is only 1 clinical trial4 of an early infancy home visitation program that has reported a reduction in abuse-related mortality, albeit with small numbers. The prevention trials we reviewed generally had insufficient sample sizes to examine fatalities.

The truth is that public policy frequently requires leaders to make their best assessment in the absence of definitive evidence. Even when randomized clinical trials are available, their generalizability to real-world practice at scale may be questionable. Implementation science theory supports intervention context—community and organizational resources, networks and processes, and patient/client efficacy and engagement—as a critically important influencer of outcomes in the implementation of evidence-based models.5 For example, while some early trial evidence found that home visitation might reduce child abuse,6 other trial and implementation evidence has been mixed.7-9 Regardless, the government is funding and growing a national program of home visiting (ie, Maternal, Infant, and Early Childhood Home Visiting) with an aim of maltreatment prevention based on a minority of old trials. In fact, a US Preventive Services Task Force systematic review10 concluded that the evidence that home visiting reduces child abuse is weak. This is not to say that home visiting programs are ineffective—they affect a range of maternal and child outcomes—but it is not clear that these programs can help reduce child abuse.

How does one develop policy when evidence is lacking? I pondered that a lot during commission hearings and came to realize that there was a power in the cumulative testimony we were receiving across the country. We heard of substance abuse treatment programs in Hillsborough County, Florida, that reduced cosleeping deaths associated with intoxication by providing expectant mothers with cardboard bassinets. We encountered a public health program in Wisconsin that offered voluntary services to families who were reported to child welfare systems but not substantiated for maltreatment. Child welfare social workers in Vermont routinely visited families alongside domestic violence counselors. Coordinated care organizations in Oregon’s Medicaid program worked with child welfare professionals to identify risk and refer families for treatment as early as possible. Military authorities in Colorado Springs shared data proactively with civilian child welfare professionals to identify high-risk military families who were using both military and civilian service systems. As we traveled across the country, numerous communities anecdotally reported fewer fatalities after integrating their service delivery systems with single-case management, reducing barriers for families to access services, and working collaboratively across disciplines to mitigate risk.

Because the testimonials were anecdotes, not rigorous scientific investigation, the question is whether they should have been disregarded. I would argue not. The richness of scientific evidence may not have been a problem when deliberating vaccine policy, but vaccine policy may be the exception rather than the rule. When it comes to most child health policy, it would be a mistake to disregard community testimony that lacks P values. One could argue that, distilled down, the commission conducted a rich qualitative experiment in which we received local testimony and analyzed common themes among a purposively sampled small set of communities. Given that every community that reported a reduction in fatalities revealed some connectivity across public systems (eg, public health and child welfare), I would argue that there is an associated P value, even if it is not measurable.

Ultimately, in light of the consistency of testimony we received across the country, the commission concluded that local communities were best positioned to respond to the crisis of child abuse fatalities. The central recommendation in the commission’s report calls for states to develop public health approaches to reduce fatalities through plans that share accountability across multiple systems that provide services to children locally. At the same time, we recommended that states should be provided more flexibility in how they can tap federal funding to invest in upstream prevention. States would then be required to measure the value of these investments by reporting child abuse fatalities more consistently over time.

Did we get it right? Only time will tell. But the lesson about the value of scientific evidence was instructive. Translating scientific evidence to policy remains the goal for the work we do, but we need to place this effort in context against the pressures and urgency of a policy arena, which is often asked to deliberate in a gray area. It often requires an open mind that is willing to embrace the uncertainty of anecdote in a real and systematic way.

Article Information

Corresponding Author: David Rubin, MD, MSCE, PolicyLab, Children’s Hospital of Philadelphia, 34th Street and Civic Center Boulevard, Attn: CHOP North 3535 Market, Ste 1544, Philadelphia, PA 19104 (rubin@email.chop.edu).

Published Online: August 8, 2016. doi:10.1001/jamapediatrics.2016.1945.

Conflict of Interest Disclosures: None reported.

References
1. Commission to Eliminate Child Abuse and Neglect Fatalities. Within our reach: a national strategy to eliminate child abuse and neglect fatalities. https://eliminatechildabusefatalities.sites.usa.gov/files/2016/03/CECANF-final-report.pdf. Accessed July 7, 2016.
2. Leventhal JM, Martin KD, Gaither JR. Using US data to estimate the incidence of serious physical abuse in children. Pediatrics. 2012;129(3):458-464.
3. US Department of Health and Human Services, Administration for Children and Families, Children’s Bureau. Child maltreatment 2014. http://www.acf.hhs.gov/programs/cb/resource/child-maltreatment-2014. Accessed July 7, 2016.
4. Olds DL, Kitzman H, Knudtson MD, Anson E, Smith JA, Cole R. Effect of home visiting by nurses on maternal and child mortality: results of a 2-decade follow-up of a randomized clinical trial. JAMA Pediatr. 2014;168(9):800-806.
5. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
6. Olds DL, Henderson CR Jr, Chamberlin R, Tatelbaum R. Preventing child abuse and neglect: a randomized trial of nurse home visitation. Pediatrics. 1986;78(1):65-78.
7. Matone M, O’Reilly AL, Luan X, Localio AR, Rubin DM. Emergency department visits and hospitalizations for injuries among infants and children following statewide implementation of a home visitation model. Matern Child Health J. 2012;16(9):1754-1761.
8. Tufts Interdisciplinary Evaluation Research. The Massachusetts Healthy Families Evaluation-2 (MHFE-2): a randomized controlled trial of a statewide home visiting program for young parents: final report to the Children’s Trust of Massachusetts. http://ase.tufts.edu/tier/documents/2015_MHFE2finalReport.pdf. Accessed July 7, 2016.
9. Meghea CI, You Z, Roman LA. A statewide Medicaid enhanced prenatal and postnatal care program and infant injuries. Matern Child Health J. 2015;19(10):2119-2127.
10. Selph SS, Bougatsos C, Blazina I, Nelson HD. Behavioral interventions and counseling to prevent child abuse and neglect: a systematic review to update the US Preventive Services Task Force recommendation. Ann Intern Med. 2013;158(3):179-190.