Objective
To describe the process and outcomes of local institutional review board (IRB) review for 2 Pediatric Research in Office Settings (PROS) studies.
Design
Pediatric Research in Office Settings conducted 2 national studies concerning sensitive topics: (1) the Child Abuse Recognition Experience Study (CARES), an observational study of physician decision making, and (2) Safety Check, a violence prevention intervention trial. Institutional review board approval was secured by investigators' sites, the American Academy of Pediatrics, and practices with local IRBs. Practices were queried about IRB rules at PROS enrollment and study recruitment.
Setting
Pediatric Research in Office Settings practices in 29 states.
Participants
Eighty-eight PROS practices (75 IRBs).
Main Exposure
Local IRB presence.
Main Outcome Measures
Local IRB presence, level of PROS assistance, IRB process, study participation, data collection completion, and minority enrollment.
Results
Practices requiring additional local IRB approval agreed to participate less often than those that did not (CARES: 33% vs 52%; Safety Check: 41% vs 56%). Of the 88 practices requiring local IRB approval, 55 received approval; nearly 50% needed active PROS help, many required consent changes (eg, contact name additions, local IRB approval stamps), and 87% began data collection. Median days to obtain approval were 81 (CARES) and 109 (Safety Check). Practices requiring local IRB approval were less likely to complete data collection but more likely to enroll minority patients.
Conclusions
Local IRB review was associated with lower participation rates, substantial effort spent navigating the process (with approval universally granted without substantive changes), and data collection delays. When considering future reforms, the national human subject protections system should consider the potential redundancy of local IRB review and its effect on generalizability, particularly regarding enrollment of poor urban children.
Over the past 2 decades, primary care practice-based research networks (PBRNs) have arisen to conduct multisite studies addressing community-based practice and aimed at improving the effectiveness of primary care.1 Concurrently, problems in the protection of human subjects at a few large research universities have resulted in increased scrutiny of research by local institutional review boards (IRBs), which are charged with assuring that studies are ethically conducted and meet the federal requirements governing the conduct of investigations involving human subjects (the Common Rule).2 Although the regulations themselves are national, the interpretation and implementation of these rules are left to local discretion. As a result, PBRN research typically requires that multiple IRB applications be submitted for local review, even when the studies are of minimal risk and have received prior approval by the investigators' IRBs.
Pediatric Research in Office Settings (PROS), the American Academy of Pediatrics (AAP) national PBRN, includes more than 1800 practitioners in more than 700 practices located in all 50 states, Canada, Puerto Rico, and the District of Columbia. Subject enrollment and data collection by multiple practitioners in geographically dispersed practices present challenging IRB logistics for all PROS studies. Local IRBs can vary considerably on important issues, even when assessing identical clinical trial protocols.3 Institutional review board review processes at different sites may be divergent enough to prevent national clinical research projects from obtaining approval in a timely and efficient manner,4 which can, in turn, compromise the size and diversity of the study's sample as well as introduce geographic bias among participating practices and patients. Variations in IRB requirements (eg, length of time between IRB submission and approval) have also been shown to affect response rates and the generalizability of health services research studies.5,6
Little has been done to research the problem or determine solutions. The National Institutes of Health (NIH), the Office of Human Research Protections, and other key groups convened workshops on alternative models of IRB review in 2005 and 20067,8 and the NIH Roadmap has cited the need for PBRNs to facilitate large-scale studies in a more timely fashion.9 Practice-based research networks have attempted collaboration to improve the IRB process (eg, the Agency for Healthcare Research and Quality hosts a yearly 3-day conference that routinely includes sessions dedicated to IRB issues). One PBRN, the American Academy of Family Physicians National Research Network, has begun discussions of an alternative model of central IRB review.10
Little empirical research has systematically explored the process and outcomes of review by local IRBs. This study reports the process and outcomes of local IRB review for 2 large federally funded PROS studies, both of which had first received IRB approval from at least 2 IRBs, the principal investigator's institution(s) and the AAP IRB. Analysis was directed at determining: (1) local IRB prevalence among participating practices; (2) level of PROS staff assistance required to apply to local IRBs; (3) local IRB process (eg, submission/approval time, consent changes); and (4) the impact on study participation, data collection completion, and minority patient enrollment by practices covered by local IRBs.
Two PROS studies were included in our assessment: the Child Abuse Recognition Experience Study (CARES), an observational study of physician decision making,11 and Safety Check, a violence prevention intervention trial12 that included a counseling intervention and the distribution of materials (firearm locks, kitchen timers). The CARES subjects were the providers themselves, and the study required consent only from practitioners enrolled in the study. Safety Check included both patient and provider outcomes and required consent from families and participating practitioners. Both studies were considered minimal risk. The AAP IRB served as the responsible IRB for unaffiliated practices.
The aim of CARES was to describe factors influencing the diagnosis and reporting of child physical abuse. Participants provided data concerning themselves, their practice environments, and their management of 40 consecutive injured children using pocket-sized encounter cards. No patient data were identifiable either on the encounter card or in a database. A subsample of practitioner participants provided more detailed information through surveys, telephone interviews, and medical record audits.
Practitioners gave written consent prior to beginning data collection. Institutional review board approval, including waiver of parental consent, was given by the AAP and multiple investigator institutions (Tufts New England Medical Center [Boston, Massachusetts] and Children's Memorial Hospital and University of Illinois at Chicago).
Safety Check tested the effectiveness of an office-based violence prevention intervention on patient behavior over time using a cluster randomized controlled trial design. Pediatric Research in Office Settings practices that agreed to participate were randomly assigned and initiated patient recruitment of 30 consecutive patients for either an office-based violence prevention intervention or reading promotion control arm (educational handout on literacy promotion provided). Consent was obtained from practitioners after they had been randomly assigned to a group.
Eligible parents were asked to consent to join the study in the office at the time of check in. Parents of children 6 years and older who agreed to participate and signed the consent were asked to tell their children about the study and encouraged to get their children's ideas about the survey questions. Assent from age-appropriate children was obtained at this time. Institutional review board approval for the study was obtained from the AAP and the principal investigator's institution (Wake Forest University School of Medicine [Winston-Salem, North Carolina]).
Prior to study participation, practices were asked to complete and return an IRB Assurance Form indicating whether they required local IRB review. Unaffiliated practices were covered by the AAP IRB, which also required that participating practitioners complete human subjects training and sign unaffiliated investigator agreements. Practices owned by, or affiliated with, entities that had their own IRBs (eg, universities and hospitals) were required to obtain local approval before enrolling in a study. To facilitate the local approval process, PROS staff developed a comprehensive “IRB Packet,” an electronic compilation based on the AAP IRB application that contained all study information needed to prepare the local application, and mailed it to the practices requiring local IRB approval. Within a week of the mailing, PROS staff contacted these practices to determine whether someone on site could complete the application or whether they wanted assistance from PROS staff. At the request of local practitioners, PROS staff facilitated and coordinated the application process for the practice and communicated directly with the IRB administrator or another contact person until final approval was secured at each local site.
Pediatric Research in Office Settings staff tracked practice recruitment data, IRB specifics (eg, correspondences, level of aid, changes to protocol and/or consent), data collection (eg, completion status, number enrolled), and practice and patient demographic data (eg, practice setting, patient race/ethnicity).
Time to local IRB approval was calculated as the number of days from PROS sending the comprehensive IRB Packet to practices until the date of IRB approval, as indicated on the IRB letter.
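The elapsed-time measure described above is simple date arithmetic. A minimal sketch, using hypothetical dates (the function name and example dates are illustrative, not from the study's records):

```python
from datetime import date

def days_to_approval(packet_mailed: date, approval_letter: date) -> int:
    """Days from PROS mailing the IRB Packet to the approval date
    shown on the local IRB letter."""
    return (approval_letter - packet_mailed).days

# Hypothetical example: packet mailed March 1, approval letter dated May 21
elapsed = days_to_approval(date(2005, 3, 1), date(2005, 5, 21))  # 81 days
```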
Pediatric Research in Office Settings staff offered different levels of assistance in completing the local IRB process, ranging from sending the IRB Packet to actively completing the application and communicating directly with the IRB administrator or another contact person (estimated at 15 hours/application).
The type and number of changes a local IRB required to the informed consent were identified by comparing the consent originally approved by the AAP IRB with the locally approved consent.
Practice/practitioner participation and data collection completion were measured by the number of patients enrolled. A practice was considered to have begun data collection once it enrolled at least 1 patient in the respective study. Minority patients were defined as being Hispanic/Latino, African American, American Indian or Alaska Native, Native Hawaiian or other Pacific Islander, or Asian.
Three research assistants and 2 project managers reviewed the consents for changes and tracked participation status, local IRB need, level of assistance, and data collection status in an extensive database. Frequencies and other descriptive statistics were calculated for number of practices requiring local IRB approval, assistance in completing the IRB process, changes to consent, and IRB approval time. χ2 Tests were conducted to determine the significance of observed differences in study participation, data collection completion, minority patient enrollment, and practice setting between participating practices that obtained local IRB approval vs those that were covered by the AAP IRB.
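The χ2 comparisons described above reduce to Pearson tests on 2 × 2 tables (eg, participated/declined by local-IRB-required/AAP-IRB-covered). A minimal sketch follows; the helper function and counts are illustrative assumptions, not the study's data, and a statistics package would normally be used:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]: rows = IRB requirement group,
    columns = participated / declined."""
    n = a + b + c + d
    observed = [a, b, c, d]
    # Expected cell count = (row total * column total) / grand total
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 29 of 88 local-IRB practices participated vs
# 390 of 750 practices covered by the AAP IRB.
stat = chi_square_2x2(29, 59, 390, 360)
```

The statistic is then compared against the χ2 distribution with 1 degree of freedom to obtain the P value.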
Table 1 provides detail for each study regarding participation rates according to whether local IRB approval was necessary. Eighty-eight of the 838 practices approached regarding participation in CARES and/or Safety Check (a total that includes duplicates, because some practices were approached for both studies) required additional local IRB approval. A slightly higher percentage of Safety Check practices agreed to participate compared with CARES practices (41% vs 33%).
The 88 participating practices requiring local IRB approval were located in 29 states and associated with 75 different IRBs. The majority of the participating practices were located in urban areas (68%) and connected to medical schools or hospitals/clinics (48%). Of these 88 practices, 33 (CARES = 12; Safety Check = 21) dropped out of the study and 55 (CARES = 25; Safety Check = 30) received local approval. χ2 Analyses indicate that for both studies the requirement of additional local IRB approval made a difference in practice study participation (CARES: χ2 = 11.73; P = .001; Safety Check: χ2 = 7.82; P < .01).
Local IRB approval took a median of 81 days for CARES and 109 days for Safety Check (Table 2); the shortest approval times were 13 days for CARES and 25 days for Safety Check. Longer approval times were related to the number of changes requested and follow-up questions asked by the local IRB. Other factors associated with extended approval times included practice changes (eg, staff turnover), holidays, the timing of IRB meetings, and delays in producing and submitting the IRB application.
The Safety Check practices were more likely to require active application help (57% vs 36%) (Table 2), possibly reflecting the higher complexity of an intervention (vs observational) study design.
Table 2 also indicates that Safety Check practices required more changes to the informed consents (24 vs 8), which included mostly additions of contact names and approval stamps. The eAppendix includes examples of requested changes according to study. None of the IRBs requested revisions that questioned either the scientific merit of the studies or whether their protocols were ethical.
Of the 55 practices receiving local IRB approval, 87% began data collection (CARES = 22; Safety Check = 26). Overall, 64% of the CARES practices and 60% of the Safety Check practices completed data collection. Table 3 indicates that significant differences in completing data collection between practices requiring local IRB approval and their counterparts were found only for CARES (CARES: χ2 = 26.72; P < .001; Safety Check: χ2 = 0.56; P = .45).
Analyses also indicate differences in patient populations and practice setting (eg, urban, suburban, rural) depending on need for local IRB approval. The CARES sites requiring local IRB approval enrolled more minorities as part of their patient sample compared with sites covered by the AAP IRB (39% vs 23%; P < .001). Minority patients were also more frequently enrolled in Safety Check practices that required local IRB approval (51% vs 36%; P < .001). In addition, practices requiring local IRB approval were more likely to be located in an urban setting than those covered by the AAP IRB (CARES: 76% vs 28%; P < .001; Safety Check: 69% vs 34%; P < .001).
This study suggests that the need for local IRB approval appears to be an impediment to participation in PBRN-based research, may discourage the inclusion of minority and urban patients, and seems to result in little if any significant change in the research protocols. Similar effects were seen in these 2 studies, although they differed greatly in design, one an observational study of provider behavior and the other a randomized controlled intervention trial that assessed both provider and patient outcomes. The latter study required a longer time for IRB approval and a larger dedication of study resources in the form of staff time for active assistance. In no case did any IRB determine that either study posed more than minimal risk to subjects.
Many poor and minority patients receive primary care at university- or hospital-affiliated sites. These are the sites that typically require local IRB review, even if other IRBs have previously reviewed and endorsed the protocol. Moreover, practices requiring additional local IRB review were more likely to terminate participation in the research study. These findings raise concern about limited participation from diverse patient populations. For the 2 PROS studies reviewed herein, significant additional resources in both time and money were needed to achieve an adequate balance of urban sites and minority patients to reduce these disparities. In many studies, additional funding is not available for such efforts, leaving a significant portion of American children underrepresented in clinical research. Pediatric Research in Office Settings studies, like other PBRN-conducted multisite studies, offer specific benefits, such as enhanced external validity through increased generalizability of research results and rapid patient recruitment. However, as previously suggested by Gold and Dewa,13 delays resulting from an unnecessary multisite ethics review system can influence the timeliness of this research, which ultimately affects the subsequent health care policy and decision-making process.
To achieve the goal of facilitating large-scale PBRN studies effectively and efficiently, and to ensure that research results have high external validity, IRB processes need to be streamlined, particularly in the case of minimal-risk studies. The local IRBs that approved practices to participate in CARES and Safety Check undoubtedly spent time and effort in reviewing these applications, resulting in financial costs. Although local control of the human subjects process must be maintained, several steps could be taken to improve the process. These include (1) consideration by local IRBs of multiple prior IRB approvals, with the text of those reviews made available to local IRBs; (2) standing arrangements between local and central IRBs ceding detailed scrutiny of proposals to the central body, with final review by the local IRB; and (3) creation of local IRB expertise in PBRN studies to facilitate more rapid local review. Federal initiatives should be undertaken to identify innovative protocol review policies and processes for multisite network trials that reduce redundancy of efforts. The NIH Roadmap has identified this issue as well. In fact, the Clinical and Translational Science Awards sites are working together to identify alternative IRB models that would speed multi–Clinical and Translational Science Awards institutional site implementation of clinical research protocols.14
This report covers 2 recent national PBRN studies involving child health. Our participating practices and study sample may not be representative because the practices were chosen from a pool of volunteer practices in a PBRN. However, research has shown that patients seen in these practices do not differ significantly from national samples of patients.15 Studies involving other conditions and other populations may not have the same experience. The additional effort and cost of local IRB reviews were not quantified in this study beyond tracking the time taken to review the protocol. In future studies, quantifying the burden of these efforts relative to the changes made in protocols is warranted.
When considering future reforms, the national human subject protections system should consider the potentially unproductive redundancy of local IRB review and its effect on generalizability, especially regarding enrollment of poor urban children, particularly in PBRN studies that are deemed minimal risk. As per the NIH Roadmap, this is the time to create efficiency in the research process so that scientific results can be developed and applied to improve health in a timely manner. The time for creating a streamlined IRB process is now.
Correspondence: Stacia A. Finch, MA, American Academy of Pediatrics, Pediatric Research in Office Settings, 141 Northwest Point Blvd, Elk Grove Village, IL 60007 (firstname.lastname@example.org).
Accepted for Publication: May 6, 2009.
Author Contributions: Ms Finch had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Finch, Wasserman, and Sege. Acquisition of data: Finch, Barkin, Wasserman, and Dhepyasuwan. Analysis and interpretation of data: Finch, Wasserman, Slora, and Sege. Drafting of the manuscript: Finch, Barkin, and Sege. Critical revision of the manuscript for important intellectual content: Finch, Barkin, Wasserman, Dhepyasuwan, Slora, and Sege. Statistical analysis: Finch. Obtained funding: Finch, Barkin, Wasserman, Slora, and Sege. Administrative, technical, and material support: Finch, Wasserman, and Dhepyasuwan. Study supervision: Barkin, Wasserman, Slora, and Sege.
Financial Disclosure: None reported.
Funding/Support: CARES was supported by grant R01 HS10746 from the Agency for Healthcare Research and Quality, and Safety Check was supported by grant R01 HD42260 from the National Institute of Child Health and Human Development/Agency for Healthcare Research and Quality, the Robert Wood Johnson Generalist Faculty Scholars Program, and AAP Friends of Children Fund. In addition, PROS receives core funding from the Health Resources and Services Administration Maternal and Child Health Bureau and the AAP.
Disclaimer: The views presented herein are those of the authors and not the organizations where they work.
Additional Contributions: We especially appreciate the efforts of the PROS practices and practitioners. The pediatric practices or individual practitioners who participated in this study and received local IRB approval are listed here by AAP Chapter: California-1: Sierra Park Pediatrics (Mammoth Lakes), Palo Alto Medical Foundation (Palo Alto); California-2: UCLA West Los Angeles Office (Los Angeles), Loma Linda University Health Care (Moreno Valley); Florida: Family Health Center East and Oviedo Children's Health Center (Orlando), Sacred Heart Pediatric Care Center (Pensacola); Iowa: Children's Hospital Physicians (Des Moines); Illinois: Stroger Hospital of Cook County (Chicago), Yacktman Children's Pavillion (Park Ridge); Massachusetts: Tri-River Family Health Center (Uxbridge), Mary Lane Pediatric Associates (Ware), South County Pediatrics (Webster); Maine: Kennebec Pediatrics (Augusta); Michigan: Children's Hospital of Michigan (Detroit); Missouri: Children's Mercy Hospital Pediatric Care Center (Kansas City); North Carolina: Carolinas Medical Center (Charlotte), Eastover Pediatrics (Charlotte), Elizabeth Pediatrics (Charlotte), Randolph Pediatrics Associates (Charlotte), Matthews Children's Clinic, PA (Matthews); North Dakota: Altru Clinic (Grand Forks), Trinity–MAC (Minot); New Hampshire: Exeter Pediatric Associates (Exeter); New York-1: SUNY Upstate Medical University (Syracuse), Wayne Medical Group (Williamson); New York-3: Pediatric Primary Care, Montefiore Medical Center (Bronx), Cardinal McCloskey Services (Bronx), Pediatric Practice, Bronx-Lebanon Hospital (Bronx), Westchester Avenue Medical and Dental Center (Bronx), Bronx-Lebanon Pediatric Clinic, Third Avenue (Bronx), Saint Barnabas Hospital (Bronx); New Mexico: Presbyterian Family Healthcare, Rio Bravo (Albuquerque), Northside Pediatrics (Albuquerque), University of New Mexico Hospital (Albuquerque); Ohio: Medical University of Ohio at Toledo (Toledo); Oklahoma: Oklahoma State University Center for 
Health Sciences (Tulsa); Pennsylvania: St. Chris Care at Northeast Pediatrics (Philadelphia); Rhode Island: Northstar Pediatrics (Providence), Rainbow Pediatrics (Providence); South Carolina: Children's Clinic (Greenville); Tennessee: East Tennessee State University Physicians and Associates (Johnson City); Texas: Parkland Health & Hospital System (Dallas), Child Wellness Center (Horizon City); Utah: University of Utah Healthcare (Park City), University of Utah Health Sciences Center (Salt Lake City), IHC Memorial Health Center (Salt Lake City); Vermont: University Pediatrics, UHC Campus (Burlington), University Pediatrics (Williston).
Finch SA, Barkin SL, Wasserman RC, Dhepyasuwan N, Slora EJ, Sege RD. Effects of Local Institutional Review Board Review on Participation in National Practice-Based Research Network Studies. Arch Pediatr Adolesc Med. 2009;163(12):1130–1134. doi:10.1001/archpediatrics.2009.206