May 2005

Impact of a Quality Improvement Program on Care and Outcomes for Children With Asthma

Author Affiliations

Author Affiliations: National Initiative for Children’s Health Care Quality, Boston, Mass (Dr Homer and Mss Horvitz, Peterson and Heinrich); Department of Pediatrics (Dr Homer), Clinical Research Program (Drs Forbes and Wypij), and Department of Cardiology (Dr Wypij), Children’s Hospital, Boston; Department of Pediatrics, Harvard Medical School, and Department of Biostatistics, Harvard School of Public Health, Boston (Dr Wypij).

Arch Pediatr Adolesc Med. 2005;159(5):464-469. doi:10.1001/archpedi.159.5.464

Objective  To test a quality improvement intervention, a learning collaborative based on the Institute for Healthcare Improvement’s Breakthrough Series methodology, specifically intended to improve care and outcomes for patients with childhood asthma.

Design  Randomized trial in primary care practices.

Setting  Practices in greater Boston, Mass, and greater Detroit, Mich.

Participants  Forty-three practices, with 13 878 pediatric patients with asthma, randomized to intervention and control groups.

Intervention  Participation in a learning collaborative project based on the Breakthrough Series methodology of continuous quality improvement.

Main Outcome Measures  Change from baseline in the proportion of children with persistent asthma who received appropriate medication therapy for asthma, and in the proportion of children whose parent received a written management plan for their child’s asthma, as determined by telephone interviews with parents of 631 children.

Results  After adjusting for state, practice size, child age, sex, and within-practice clustering, no overall effect of the intervention was found.

Conclusions  This methodologically rigorous assessment of a widely used quality improvement technique did not demonstrate a significant effect on processes or outcomes of care for children with asthma. Potential deficiencies in program implementation, project duration, sample selection, and data sources preclude making the general inference that this type of improvement program is ineffective. Additional rigorous studies should be undertaken under more optimal settings to assess the efficacy of this method for improving care.

Widespread gaps exist in health care quality in the United States.1 The development and dissemination of guidelines have proven insufficient by themselves to change care.2,3 One strategy has been to develop more intensive programs intended to change clinical behavior.4,5 These efforts can best be viewed in the context of diffusion of innovation, which provides a theoretical framework to guide quality improvement activity.6 One of the most fully developed of these comprehensive programs is termed continuous quality improvement, an improvement approach adapted from industry to the health care setting.7,8

Multiple reports on 1 continuous quality improvement intervention, the Breakthrough Series developed by the Institute for Healthcare Improvement, Cambridge, Mass,9,10 have been published for a variety of clinical topics,11-17 but no randomized, controlled trial has been reported to date. Here, we report a trial of a continuous quality improvement intervention, participation in a learning collaborative project based on the Breakthrough Series method, undertaken with primary care practices caring for patients with childhood asthma. The aim of the project was to implement a quality improvement intervention in a large number of primary care sites and evaluate the impact on key processes and outcomes. Childhood asthma represents a good test of the effectiveness of this intervention, in that it is a condition commonly encountered in primary care settings. Strong evidence links a number of key care processes (such as the use of anti-inflammatory medications) with improved outcomes, but deficits persist in the quality of care provided to children with this condition.18-21


Methods

Study approval was granted by each participating practice's respective institutional review board and by the New England institutional review board for the survey firm.

Recruitment and randomization of practices

Practices were recruited in 2 geographic areas within which those providing more than 10 visits per year for pediatric patients with asthma were eligible. Greater Detroit (Mich) practices were affiliated with the Henry Ford Health System and recruited by its chair of the Department of Pediatrics. Greater Boston (Mass) practices were recruited through presentations and contacts by opinion leaders of health plan and hospital-affiliated practice networks, and by direct mailings to all pediatricians within approximately 25 miles of Boston. All eligible practices agreeing to participate were included. After stratification by state and size of practice, half of the practices were randomly assigned to receive the intervention in 2001, and the remainder to receive it in 2002. The latter delayed intervention group served as the control practices in 2001.
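The assignment scheme described above (random allocation within strata defined by state and practice size) can be sketched as follows. This is a minimal illustration, not the study's actual code; the practice records, labels, and function name are hypothetical.

```python
import random

def stratified_randomize(practices, seed=0):
    """Randomly assign half of each (state, size) stratum to the 2001
    intervention group and the remainder to the delayed (2002) group,
    which serves as the control group in 2001.

    `practices` is a list of (name, state, size_category) tuples; all
    values here are hypothetical illustrations, not study data.
    """
    rng = random.Random(seed)

    # Group practices into strata by state and practice-size category.
    strata = {}
    for name, state, size in practices:
        strata.setdefault((state, size), []).append(name)

    # Shuffle each stratum and split it in half.
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2
        for name in members[:half]:
            assignment[name] = "intervention-2001"
        for name in members[half:]:
            assignment[name] = "delayed-2002"
    return assignment
```

With an odd number of practices in a stratum (the 43 study practices split 22 vs 21 overall), the extra practice falls to the delayed group in this sketch; how the study broke such ties is not described.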

Participating practices and the faculty who coached them were aware of their group assignment. Parents and their interviewers were unaware of randomization status.


Intervention

Intervention practices participated in a learning collaborative project for 12 months. Each was requested to send a 3-member multidisciplinary team (composed of a physician, a nurse, and a front office staff person) to three 1-day learning sessions during the course of the collaborative. In January 2001, intervention practices were asked to collect baseline data to identify “performance gaps” (the difference between current and desirable performance) in their practice. At the first learning session in February, teams were taught a comprehensive method to proactively care for patients with asthma using the Chronic Care Model22,23 and concepts of quality improvement, including the Model for Improvement (a specific approach to quality improvement that emphasizes small, incremental tests of change).24 They were provided materials and information based on the guidelines from the National Asthma Education and Prevention Program, Bethesda, Md, and tools to support implementation of these practices (such as encounter forms and an electronic patient registry). During the next 10 months, coaching and support were provided through 2 additional learning sessions, biweekly conference calls, an active e-mail list, and periodic performance feedback based on expert review of monthly project team reports.

Practices randomized to intervention in 2002 continued usual care during 2001.

Outcome measures

We hypothesized that the intervention would result in improvements in process and outcome measures in intervention vs control sites. Prior to the study, we defined the following as primary study measures: any written asthma management plan reported as received by a parent in the past 12 months, daily use of inhaled steroids in the past 4 weeks, and daily use of controller medications in the past 4 weeks. The following were defined as secondary measures: any asthma hospitalization and any emergency department visit for asthma in the past 12 months, any asthma attack in the past 12 months, and parent report of how much asthma had limited the child from very strenuous activities (such as running fast or playing hard) in the previous 2 weeks. Other measures included patient experience of care and parent-reported functional status.25

Data collection

Lists of potentially eligible patients were obtained for each practice. Children aged 2 to 16 years with at least 1 visit with an International Classification of Diseases, Ninth Revision (ICD-9) code of asthma or chronic asthmatic bronchitis between January 1, 1998, and December 31, 2000, and without another complicating respiratory condition, were potentially eligible. Henry Ford Health System lists were provided by its central information service, while each Massachusetts practice produced its own list.

Telephone interviews of samples of parents of children with asthma were conducted for each study practice at baseline and at the end of the intervention. A quota for interviews, ranging from 5 to 25, was allocated to each practice depending on practice size. Samples were drawn randomly from each practice list until the practice interview quota was met.

The baseline interview contained screening questions to further establish eligibility. Eligible children were those whose parent informant had been told by a physician that the child had asthma, but not another chronic lung condition, and who intended to continue receiving care at the study practice for the coming year. Eligible children also met 1 or more of the following criteria regarding medication use in the past 12 months: 2 or more uses of oral steroids, 2 or more refills of a bronchodilator, use of a corticosteroid inhaler, or use of cromolyn sodium (the study began prior to widespread use of montelukast).
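The screening logic above amounts to a conjunction of eligibility conditions plus a disjunctive medication-use test. A minimal sketch follows; the function and parameter names are our hypothetical illustration, not the study's screening instrument.

```python
def screen_eligible(told_has_asthma, other_chronic_lung_condition,
                    continuing_at_practice, oral_steroid_courses,
                    bronchodilator_refills, used_steroid_inhaler,
                    used_cromolyn):
    """Apply the baseline-interview eligibility screen described in the
    text. Medication counts and flags refer to the past 12 months.

    All names are hypothetical illustrations of the published criteria.
    """
    # A child must meet at least 1 of the 4 medication-use criteria.
    meets_medication_criterion = (
        oral_steroid_courses >= 2
        or bronchodilator_refills >= 2
        or used_steroid_inhaler
        or used_cromolyn
    )
    # All three screening conditions must also hold.
    return (told_has_asthma
            and not other_chronic_lung_condition
            and continuing_at_practice
            and meets_medication_criterion)
```

For example, a child with a physician diagnosis of asthma, no other lung condition, continuing care at the practice, and 2 bronchodilator refills passes the screen; a similar child with only 1 refill and no other medication use does not.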

Statistical methods

We conducted an intention-to-treat analysis in which all intervention and control practices were included as randomized. The effect of intervention was assessed by comparing change from baseline in the intervention children with change from baseline in the control children. Specifically, a group × time interaction was used to test whether there was an effect of intervention over and above the effect of time on the controls.
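On the log-odds scale, the group × time interaction is simply the change from baseline in the intervention group minus the change in the controls. The sketch below illustrates that quantity in isolation; the proportions are hypothetical, and the study estimated this term within regression models that also adjusted for covariates and within-practice clustering.

```python
import math

def logit(p):
    """Log-odds of a proportion p (0 < p < 1)."""
    return math.log(p / (1.0 - p))

def group_by_time_interaction(int_base, int_follow, ctl_base, ctl_follow):
    """Change from baseline in the intervention group minus change in the
    control group, on the log-odds scale. A nonzero value indicates an
    intervention effect over and above the effect of time on the controls.
    """
    return ((logit(int_follow) - logit(int_base))
            - (logit(ctl_follow) - logit(ctl_base)))
```

If both groups improve by the same amount on the log-odds scale, the interaction is 0; the intervention shows an effect only when its change exceeds the secular change seen in the controls.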

Power calculations were based on tests of proportions and means, and accounted for within-practice clustering of patients.26,27 At baseline, 34% of patients were expected to be using anti-inflammatory medicine and 50% were expected to have a written asthma management plan. Clinically meaningful differences were defined as increases of at least 20% for proportions and at least 0.3 standard deviations for normally distributed continuous outcomes. A sample size of 600 subjects was found to have 90% power to detect clinically meaningful differences for binary or continuous outcomes when there were mild (ρ = 0.1) levels of correlations of responses within a practice. With moderate (ρ = 0.3) levels of correlation, larger differences (increases of 30% for proportions and 0.5 standard deviations for continuous outcomes) were required to detect significant differences between the groups with 90% power.
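The clustering adjustment in such calculations is typically the design effect 1 + (m - 1)ρ, where m is the average cluster size. The rough sketch below uses a normal approximation for comparing two proportions; the cluster size of 14 and the exact formula are our illustrative assumptions, not the study's published calculation, which followed Donner et al.26

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_proportions(p0, p1, n_per_arm, cluster_size, icc, z_alpha=1.96):
    """Approximate power to detect a difference between two proportions
    when responses are clustered within practices.

    The variance is inflated by the design effect 1 + (m - 1) * icc,
    equivalently shrinking each arm to an effective sample size.
    """
    deff = 1.0 + (cluster_size - 1) * icc
    n_eff = n_per_arm / deff  # effective sample size per arm
    p_bar = (p0 + p1) / 2.0
    se = math.sqrt(2.0 * p_bar * (1.0 - p_bar) / n_eff)
    return normal_cdf(abs(p1 - p0) / se - z_alpha)
```

With 300 children per arm, roughly 14 per practice, ρ = 0.1, and an increase from 34% to 54%, this approximation gives power near 0.9, consistent with the figures in the text; at ρ = 0.3 the same 20% increase falls well below 90% power, while a 30% increase restores it.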

Linear and logistic regression were used to analyze the parent interview data. To account for within-practice correlations, generalized estimating equation methods28 were used in all analyses. Regression models adjusted for the effects of state, practice size category, age, and sex.


Results

Practices and characteristics

Forty-three practices were randomized: 22 to the intervention group and 21 to control (Figure 1). Massachusetts practices were a mix of hospital clinics, independent community health centers, and private practices. In Michigan, all sites were a part of the Henry Ford Health System; 1 was hospital-based. As can be seen in Table 1, there were no significant differences in practice characteristics between intervention and comparison groups.

Figure 1. 
Recruitment process. Asterisk indicates that the 2 practices who dropped out after randomization did not submit patient lists for the parent survey.


Table 1. 
Characteristics of Practices by Intervention Status*

After randomization, but before the first learning session, 1 intervention practice dropped out, and did not participate in the intervention or supply patient names for the parent interview. An additional intervention practice closed in the middle of the intervention, and another dropped out after the second learning session.

Implementation of intervention

Although practices were expected to participate fully in the intervention, actual participation varied considerably. Attendance at the 3 learning sessions declined progressively from the first to the third in both states (eg, 34 participants at the first session in Boston; 24 at the third). On average, only 42% of the practices submitted performance data and 39% reported their progress in any given month of the intervention, with fewer practices reporting in the later months of the intervention.

Telephone interview data

The 43 practices identified a total of 13 878 pediatric patients with asthma who may have been eligible for this study. Baseline interviews were completed with 631 households. The steps in participant recruitment are documented in Figure 2. No significant demographic differences were identified between children in intervention and comparison groups (Table 2). Unexpectedly, at baseline, 53% of the children in the intervention group had a written asthma management plan, compared with 37% of the children in the control group (P<.001). We controlled for that difference when considering the effect of intervention on having a management plan. The groups were not different at baseline with respect to any other measure.

Figure 2. 
Parent interviews.


Table 2. 
Demographic Characteristics of Children in the Baseline Parent Interview by Intervention Status*

A follow-up interview was attempted with each of the 631 households that completed the baseline interview; 490 second interviews (78%) were completed between March and June 2002.

Table 3 presents the outcome measures from the parent interviews. After adjusting for state, practice size, child age, sex, and within-practice clustering, we found no overall effect of intervention. Daily use of a controller medication increased slightly in both groups. The proportion of children with any asthma attacks in the past 12 months decreased significantly (P<.001) in both the intervention and control groups, as did the proportion of children with an asthma emergency department visit (P<.001). We found no differences in patient experience of care or functional outcomes (data not shown). Adjustment for other factors, including child race, household income, and Medicaid insurance status, did not appreciably affect these results. Within-practice correlations were modest in magnitude (ranging from 0.0 to 0.06 across analyses).

Table 3. 
Primary and Secondary Outcome Measures for Children in the Parent Interview by Intervention Group*

Impact among engaged participants

To evaluate the possibility that the lack of an intervention effect could be due to limited engagement by some of the practices, the analysis was rerun including data only from practices that attended all 3 learning sessions (a minimal indicator of engagement). When compared with outcomes for children followed in the control sites, parent reports for 6 of the 7 outcomes still indicated no significant effect of intervention. The percentage of children requiring an emergency department visit, however, was reduced more in the intervention group than in the control group (P = .01): this percentage dropped from 36% to 22% in the entire intervention group (Table 3), but from 51% to 22% when only the children in practices attending all the learning sessions were retained. This restriction left only 9 of the 22 intervention practices (41%).


Comment

This study is the first of which we are aware to report a randomized controlled trial of a quality improvement intervention based on the Breakthrough Series method. We assessed the impact of this intervention on several key processes and intermediate outcomes for children with asthma and consistently found no substantial positive effect in the intervention group beyond that observed in the control practices.

Several possible explanations exist for our negative findings. A number of factors beyond the practice level that can influence care and outcomes, such as insurance coverage or access barriers, were outside the scope of this work. Alternatively, features of the study design may have made a true effect difficult to detect. The sampling of parents from practices may have been incomplete or biased, particularly in the Boston sites: although we specified the criteria for selecting children with asthma, the quality of each practice's information systems varied and we did not audit their patient identification. Because of delays in the screening process, many “baseline” surveys took place well after the start of the intervention; however, the data showed no trends associated with the timing of the baseline survey, suggesting this was not a large source of bias.

The risk of some degree of contamination between intervention and control practices was real. In Detroit, all of the practices were within a single organization. In the Boston area, many intervention practices were in the same practice network as control practices, and some were even owned by the same entity.

Physicians and staff other than the team members who participated in the learning collaborative project likely cared for many of the children whose parents were surveyed. Although such learning is intended to spread through a practice over time, that diffusion may be slow and somewhat attenuated. The duration of the intervention itself was also somewhat shorter than would be ideal; we believe 15 to 18 months are required to see changes, particularly in health status outcomes. Logistical constraints connected with the grant award precluded extending the project period.

Although all of the practices agreed to participate in some manner, many were not fully invested in the process of improvement. Attendance at learning sessions was less than desirable or typical in such initiatives. Undertaking the collaborative at 2 separate sites also minimized the interaction among practices, a process that we feel accelerates learning.

While the Breakthrough Series process is widely used, its extension into primary care practices, particularly practices in the private sector, is still relatively innovative. Mills and Weeks29 note that teams that are successful at quality improvement are characterized by (1) alignment of the team with organizational strategic priorities, (2) strong team leadership and team functioning skills, and (3) front-line staff support. It is unclear whether these describe typical primary care teams. It is also possible that teams willing to participate in a study such as this may be systematically different from teams from organizations that commit to improvement—especially given the need to accept randomization.

In addition, the underlying turmoil in the health care system in these 2 communities may have affected the ability of these sites to focus on improvement. Many of the Boston sites had been affiliated with CareGroup, an integrated delivery system that was undergoing severe financial distress at the time of this study. Similarly, in Detroit, the Henry Ford Health System lost a contract to provide services through Medicaid just prior to starting the intervention, a change that decreased pediatrician morale and compensation, affecting intervention practices’ ability to concentrate on quality improvement efforts. Improving care for children with asthma, particularly in the context of a study, may not have had sufficient salience to overcome these difficult organizational challenges.


This study affirms that changing clinical practice for children with a chronic condition is not simple. In the course of this intervention, we encountered the obstacles that confront researchers and impede efforts to improve care: changing practice networks, financially challenged managed care plans, and changes in staffing and contracts for providers. All of these affect both studies and quality improvement.

The effectiveness of the approach used in this study was not affirmed, but limitations in design and implementation preclude making the general inference that this type of improvement program is ineffective. Interventions with more engaged teams, extending for longer periods of time, are likely to be more effective. Programs that can better incorporate or facilitate the adoption of registries and information technologies may also demonstrate greater effectiveness.

Changing clinical care remains a high priority to meet the aims specified by the Institute of Medicine, Washington, DC. This particular learning collaborative project did not demonstrably improve care in participating practice sites. Additional efforts are needed to design, implement, and evaluate programs that will transform clinical care and improve outcomes.

Correspondence: Charles J. Homer, MD, MPH, Chief Executive Officer, National Initiative for Children’s Health Care Quality, 375 Longwood Ave, 3rd Floor, Boston, MA 02215 (chomer@nichq.org).

Article Information

Accepted for Publication: January 20, 2005.

Funding/Support: This project was supported by grant R01 HS10411 from the Agency for Healthcare Research and Quality, Rockville, Md, and by the National Heart, Lung, and Blood Institute, Bethesda, Md. Neither of the funding sources had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of this article; or in the decision to submit it for publication.

Acknowledgment: We thank the collaborative’s faculty (Charles Barone, MD, Daniel Hyman, MD, MMM, and John R. Meurer, MD) and the project Advisory Committee (James Glauber, MD, Donald Goldmann, MD, and Kevin Weiss, MD), without whom this study would not have been possible.

References

1. Institute of Medicine, Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: Institute of Medicine; 2001.
2. Lomas J, Anderson GM, Domnick-Pierre K, Vayda ED, Enkin MW, Hannah WJ. Do practice guidelines guide practice? The effect of a consensus statement on the practice of physicians. N Engl J Med. 1989;321:1306-1311.
3. Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458-1465.
4. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(suppl 2):II2-II45.
5. Oxman AD, Thomson MA, Davis DA, Haynes RB. No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995;153:1423-1431.
6. Rogers E. Diffusion of Innovations. New York, NY: Free Press; 1995.
7. Berwick D. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320:53-56.
8. Laffel G, Blumenthal D. The case for using industrial quality management science in health care organizations. JAMA. 1989;262:2869-2873.
9. Institute for Healthcare Improvement. The Breakthrough Series: IHI's Collaborative Model for Achieving Breakthrough Improvement. Boston, Mass: Institute for Healthcare Improvement; 2003.
10. Kilo CM. A framework for collaborative improvement: lessons from the Institute for Healthcare Improvement's Breakthrough Series. Qual Manag Health Care. 1998;6:1-13.
11. Flamm BL, Berwick DM, Kabcenell A. Reducing cesarean section rates safely: lessons from a “breakthrough series” collaborative. Birth. 1998;25:117-124.
12. Leape LL, Kabcenell AI, Gandhi TK, Carver P, Nolan TW, Berwick DM. Reducing adverse drug events: lessons from a breakthrough series collaborative. Jt Comm J Qual Improv. 2000;26:321-331.
13. Kosseff AL, Niemeier S. SSM Health Care clinical collaboratives: improving the value of patient care in a health care system. Jt Comm J Qual Improv. 2001;27:5-19.
14. Schiff GD, Wisniewski M, Bult J, Parada JP, Aggarwal H, Schwartz DN. Improving inpatient antibiotic prescribing: insights from participation in a national collaborative. Jt Comm J Qual Improv. 2001;27:387-402.
15. Bartlett J, Cameron P, Cisera M. The Victorian emergency department collaboration. Int J Qual Health Care. 2002;14:463-470.
16. Montoye CK, Mehta RH, Baker PL, et al; GAP Steering Committee. A rapid-cycle collaborative model to promote guidelines for acute myocardial infarction. Jt Comm J Qual Saf. 2003;29:468-478.
17. Landon BE, Wilson IB, McInnes K, et al. Effects of a quality improvement collaborative on the outcome of care of patients with HIV infection: the EQHIV study. Ann Intern Med. 2004;140:887-896.
18. National Heart, Lung, and Blood Institute. National Asthma Education and Prevention Program Expert Panel Report 2: Guidelines for the Diagnosis and Management of Asthma—Update on Selected Topics 2002. Bethesda, Md: National Institutes of Health; 2002. NIH publication 97-4051.
19. Braganza S, Sharif I, Ozuah PO. Documenting asthma severity: do we get it right? J Asthma. 2003;40:661-665.
20. Lozano P, Grothaus LC, Finkelstein JA, Hecht J, Farber HJ, Lieu TA. Variability in asthma care and services for low-income populations among practice sites in managed Medicaid systems. Health Serv Res. 2003;38:1563-1578.
21. Warman KL, Silver EJ, McCourt MP, Stein RE. How does home management of asthma exacerbations by parents of inner-city children differ from NHLBI guideline recommendations? Pediatrics. 1999;103:422-427.
22. Wagner EH, Austin BT, Davis C, Hindmarsh M, Schaefer J, Bonomi A. Improving chronic illness care: translating evidence into action. Health Aff (Millwood). 2001;20:64-78.
23. Glasgow RE, Funnell MM, Bonomi AE, Davis C, Beckham V, Wagner EH. Self-management aspects of the improving chronic illness care breakthrough series: implementation with diabetes and heart failure teams. Ann Behav Med. 2002;24:80-87.
24. Langley G, Nolan K, Nolan T, Norman C, Provost L. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. San Francisco, Calif: Jossey-Bass Publishers; 1996.
25. Asmussen L, Olson LM, Grant EN, Fagan J, Weiss KB. Reliability and validity of the Children's Health Survey for Asthma. Pediatrics. 1999;104:e71.
26. Donner A, Birkett N, Buck C. Randomization by cluster: sample size requirements and analysis. Am J Epidemiol. 1981;114:906-914.
27. Friedman L, Furberg C, DeMets DL. Fundamentals of Clinical Trials. 3rd ed. New York, NY: Springer-Verlag; 1996.
28. SAS/STAT Software: Changes and Enhancements Through Release 6.11. Cary, NC: SAS Institute Inc; 1996:231-315.
29. Mills PD, Weeks WB. Characteristics of successful quality improvement teams: lessons from five collaborative projects in the VHA. Jt Comm J Qual Saf. 2004;30:152-162.