Figure 1. Study Flow

Admissions and admissions analyzed for individual study phases (pairs of practices) do not sum to total admissions and total admissions analyzed over the entire trial because some patients were admitted to intensive care units (ICUs) during transitions between phases and could therefore be counted in both phases during the same admission.
aAdmissions analyzed refers to the number of admissions with available data for determining eligibility for and delivery of the targeted care practice.

Figure 2. Summary of Effects of Intervention Across All Targeted Care Practices During the Trial

Numbers are shown for first and last months of the 4-month trial for each targeted practice for intervention vs control groups. Numerators are number of patients or patient-days for which the targeted care practice was delivered; denominators are total eligible patients or patient-days during the month of study. Weight refers to the contribution of each practice to the overall estimate of the intervention's effect. DVT indicates deep vein thrombosis; CRBSI, catheter-related bloodstream infection; SBT, spontaneous breathing trial.

Figure 3. Change Over Time in Adoption Rates of Targeted Practices, Adjusted for Effects of Clustering Within ICUs, During the Trial

Each point in the graphs represents the adoption rates (adjusted for clustering) for all eligible patient-days or patients during the previous month of study. Numbers under the x-axis corresponding to each month of the trial are denominators of patients or patient-days, as appropriate, that were available to analyze performance during that month for intervention and control groups. The ratio of odds ratios (ORs) (with 95% confidence intervals [CIs]) describing improvement over time in intervention vs control intensive care units (ICUs) is shown in each graph (see “Methods” section of text for explanation). DVT indicates deep vein thrombosis; CRBSI, catheter-related bloodstream infection; SBT, spontaneous breathing trial.

Table 1. Components of the Quality Improvement Intervention
Table 2. Process-of-Care Indicators for Each Targeted Care Practice
Table 3. Characteristics of Participating ICUs and Patients During Trial
Table 4. Results of Active Intervention on Adoption of Targeted Care Practices During the Trial
1. Halpern NA, Pastores SM, Greenstein RJ. Critical care medicine in the United States 1985-2000: an analysis of bed numbers, use, and costs. Crit Care Med. 2004;32(6):1254-1259.
2. Rubenfeld GD, Caldwell E, Peabody E, et al. Incidence and outcomes of acute lung injury. N Engl J Med. 2005;353(16):1685-1693.
3. Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
4. Pronovost PJ, Rinke ML, Emery K, Dennison C, Blackledge C, Berenholtz SM. Interventions to reduce mortality among patients treated in intensive care units. J Crit Care. 2004;19(3):158-164.
5. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355(26):2725-2732.
6. Ferrer R, Artigas A, Levy MM, et al; Edusepsis Study Group. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
7. MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM. A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess. 2000;4(34):1-154.
8. Rubenfeld GD, Cooper C, Carter G, Thompson BT, Hudson LD. Barriers to providing lung-protective ventilation to patients with acute lung injury. Crit Care Med. 2004;32(6):1289-1293.
9. Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8 suppl 2):II2-II45.
10. Curtis JR, Cook DJ, Wall RJ, et al. Intensive care unit quality improvement: a “how-to” guide for the interdisciplinary team. Crit Care Med. 2006;34(1):211-218.
11. Kahn JM, Fuchs BD. Identifying and implementing quality improvement measures in the intensive care unit. Curr Opin Crit Care. 2007;13(6):709-713.
12. Bach PB, Carson SS, Leff A. Outcomes and resource utilization for patients with prolonged critical illness managed by university-based or community-based subspecialists. Am J Respir Crit Care Med. 1998;158(5 pt 1):1410-1415.
13. Scales DC, Dainty K, Hales B, et al. An innovative telemedicine knowledge translation program to improve quality of care in intensive care units: protocol for a cluster randomized pragmatic trial. Implement Sci. 2009;4:5.
14. Campbell MK, Elbourne DR, Altman DG; CONSORT Group. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328(7441):702-708.
15. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis. 1967;20(8):637-648.
16. Zwarenstein M, Treweek S, Gagnier JJ, et al; CONSORT Group; Pragmatic Trials in Healthcare Group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
17. Hauschke D, Pigeot I. Establishing efficacy of a new experimental treatment in the “gold standard” design. Biom J. 2005;47(6):782-786.
18. Berglund G, Bolund C, Gustafsson UL, Sjödén PO. Is the wish to participate in a cancer rehabilitation program an indicator of the need? Comparisons of participants and non-participants in a randomized study. Psychooncology. 1997;6(1):35-46.
19. Grimshaw J, Eccles M, Thomas R, et al. Toward evidence-based quality improvement: evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies, 1966-1998. J Gen Intern Med. 2006;21(suppl 2):S14-S20.
20. Lee SK, Aziz K, Singhal N, et al. Improving the quality of care for infants: a cluster randomized controlled trial. CMAJ. 2009;181(8):469-476.
21. Madden JM, Graves AJ, Zhang F, et al. Cost-related medication nonadherence and spending on basic needs following implementation of Medicare Part D. JAMA. 2008;299(16):1922-1928.
22. Cooper H, Hedges L, Valentine J. The Handbook of Research Synthesis and Meta-analysis. New York, NY: Russell Sage Foundation; 2009.
23. Hedges L, Olkin I. Statistical Methods for Meta-analysis. Orlando, FL: Academic Press; 1985.
24. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560.
25. Boeije H. A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002;36:391-409.
26. Murray D. Design and Analysis of Group-Randomized Trials. New York, NY: Oxford University Press; 1998.
27. Bergstrom N, Demuth PJ, Braden BJ. A clinical trial of the Braden Scale for Predicting Pressure Sore Risk. Nurs Clin North Am. 1987;22(2):417-428.
28. Ontario Critical Care LHIN Leadership Table. Inventory of Critical Care Services: An Analysis of LHIN-Level Capacities. Toronto: Ontario Ministry of Health and Long-Term Care; 2007. http://www.health.gov.on.ca/english/providers/program/critical_care/docs/report_cc_inventory.pdf. Accessed December 21, 2010.
29. Haydar Z, Gunderson J, Ballard DJ, Skoufalos A, Berman B, Nash DB. Accelerating best care in Pennsylvania: adapting a large academic system's quality improvement process to rural community hospitals. Am J Med Qual. 2008;23(4):252-258.
30. O’Grady MA, Gitelson E, Swaby RF, et al. Development and implementation of a medical oncology quality improvement tool for a regional community oncology network: the Fox Chase Cancer Center Partners initiative. J Natl Compr Canc Netw. 2007;5(9):875-882.
31. Patel MR, Chen AY, Roe MT, et al. A comparison of acute coronary syndrome care at academic and nonacademic hospitals. Am J Med. 2007;120(1):40-46.
32. Brindis RG, Spertus J. The role of academic medicine in improving health care quality. Acad Med. 2006;81(9):802-806.
33. Nedza SM. A call to leadership: the role of the academic medical center in driving sustainable health system improvement through performance measurement. Acad Med. 2009;84(12):1645-1647.
34. Martin CM, Doig GS, Heyland DK, Morrison T, Sibbald WJ; Southwestern Ontario Critical Care Research Network. Multicentre, cluster-randomized clinical trial of algorithms for critical-care enteral and parenteral therapy (ACCEPT). CMAJ. 2004;170(2):197-204.
35. Doig GS, Simpson F, Finfer S, et al; Nutrition Guidelines Investigators of the ANZICS Clinical Trials Group. Effect of evidence-based feeding guidelines on mortality of critically ill adults: a cluster randomized controlled trial. JAMA. 2008;300(23):2731-2741.
36. Kahn JM, Scales DC, Au DH, et al; American Thoracic Society Pay-for-Performance Working Group. An official American Thoracic Society policy statement: pay-for-performance in pulmonary, critical care, and sleep medicine. Am J Respir Crit Care Med. 2010;181(7):752-761.
37. Centers for Medicare & Medicaid Services. Physician Quality Reporting Initiative. http://www.cms.gov/pqri/. Accessed December 23, 2010.
38. Khanduja K, Scales DC, Adhikari NK. Pay for performance in the intensive care unit—opportunity or threat? Crit Care Med. 2009;37(3):852-858.
Caring for the Critically Ill Patient
January 26, 2011

A Multifaceted Intervention for Quality Improvement in a Network of Intensive Care Units: A Cluster Randomized Trial

Author Affiliations: Interdepartmental Division of Critical Care (Drs Scales, Fowler, and Adhikari) and Department of Health Policy, Management, and Evaluation, Faculty of Medicine (Dr Zwarenstein), University of Toronto, Departments of Critical Care Medicine (Drs Scales, Pinto, Fowler, and Adhikari), Quality and Patient Safety (Ms Hales), and Medicine (Dr Fowler), and Sunnybrook Research Institute (Drs Scales, Zwarenstein, Fowler, and Adhikari), Sunnybrook Health Sciences Centre, Institute for Clinical Evaluative Sciences (Drs Scales and Zwarenstein), and Rescu, Li Ka Shing Knowledge Institute, St Michael's Hospital (Dr Dainty), Toronto, Ontario, Canada.

JAMA. 2011;305(4):363-372. doi:10.1001/jama.2010.2000
Abstract

Context Evidence-based practices improve intensive care unit (ICU) outcomes, but eligible patients may not receive them. Community hospitals treat most critically ill patients but may have few resources dedicated to quality improvement.

Objective To determine the effectiveness of a multicenter quality improvement program to increase delivery of 6 evidence-based ICU practices.

Design, Setting, and Participants Pragmatic cluster-randomized trial among 15 community hospital ICUs in Ontario, Canada. A total of 9269 admissions occurred during the trial (November 2005 to October 2006) and 7141 admissions during a decay-monitoring period (December 2006 to August 2007).

Intervention We implemented a videoconference-based forum including audit and feedback, expert-led educational sessions, and dissemination of algorithms to sequentially improve delivery of 6 practices. We randomized ICUs into 2 groups. Each group received this intervention, targeting a new practice every 4 months, while acting as control for the other group, in which a different practice was targeted in the same period.

Main Outcome Measures The primary outcome was the summary ratio of odds ratios (ORs) for improvement in adoption (determined by daily data collection) of all 6 practices during the trial in intervention vs control ICUs.

Results Overall, adoption of the targeted practices was greater in intervention ICUs than in controls (summary ratio of ORs, 2.79; 95% confidence interval [CI], 1.00-7.74). Improved delivery in intervention ICUs was greatest for semirecumbent positioning to prevent ventilator-associated pneumonia (90.0% of patient-days in last month vs 50.0% in first month; OR, 6.35; 95% CI, 1.85-21.79) and precautions to prevent catheter-related bloodstream infection (70.0% of patients receiving central lines vs 10.6%; OR, 30.06; 95% CI, 11.00-82.17). Adoption of other practices, many with high baseline adherence, changed little.

Conclusion In a collaborative network of community ICUs, a multifaceted quality improvement intervention improved adoption of care practices.

Trial Registration clinicaltrials.gov Identifier: NCT00332982

Despite expensive life-sustaining technologies,1 mortality and complication rates in critically ill patients remain high.2,3 Such patients should therefore receive all evidence-based and cost-effective interventions that improve outcomes.4 Previous large-scale implementations of such interventions have focused on a single practice5 and have not been randomized,6 thus limiting causal inferences and generalizability.7

Changing clinical behavior to improve quality of care is difficult.8 Outside the intensive care unit (ICU), multifaceted interventions targeting different barriers to behavior change, including educational outreach, audit and feedback, and reminders, appear more effective than single interventions.9 These interventions generally target physician behavior, but in the ICU diverse clinicians in an interprofessional team provide care to patient populations that are defined by geographical location in the hospital rather than by a particular disease.10,11 Furthermore, nonacademic hospitals face larger barriers to implementing evidence-based care because of heavier individual clinician workloads and fewer personnel devoted to collaborative continuing educational activities.12

We designed and delivered a quality improvement intervention to 15 community ICUs in Ontario, Canada, and conducted a cluster-randomized pragmatic trial to determine whether this intervention could increase their adoption of 6 evidence-based care practices. For each practice, we hypothesized that patients admitted to ICUs receiving this quality improvement intervention would be more likely to receive it than patients admitted to control ICUs not concurrently implementing the same quality improvement intervention for that practice. The study was funded as a demonstration project by the Critical Care Secretariat of the Ontario Ministry of Health and Long-Term Care to improve quality of care and foster system integration. The study was approved by the research ethics boards of all participating hospitals. All waived the requirement for obtaining individual patient consent.

Methods
Study Design

A detailed description of our methods has been published.13 Randomization occurred at the level of the ICU14 to minimize contamination. The design was pragmatic and conducted in community hospital ICUs rather than tertiary academic ICUs, and included a wide range of facilities operating under usual care conditions.15 The quality improvement intervention was designed specifically to target the entire ICU team and to be feasible in a broad range of ICUs.16

Participating ICUs

The participating ICUs were of variable size (range of staffed beds, 4-19) and located within 15 geographically dispersed Ontario community hospitals (representing 15.5% of community hospitals and 19.9% of community hospital ICU beds in Ontario).13 One medical-surgical ICU from each hospital was involved in the study. The ICUs were selected for participation in the demonstration project by the Ontario Ministry of Health and Long-Term Care.

Randomization and Study Flow

The 15 ICUs were randomly allocated into 2 groups by a statistician using a computer-generated randomization scheme, with stratification by ICU size (≤10 vs >10 staffed beds). The trial ran from November 1, 2005, to October 31, 2006, during which the 2 groups of ICUs were randomly assigned to receive active interventions to improve adoption of the different care practices (eFigure 1). During each phase of the trial, each group of ICUs received the active behavior change intervention targeting one care practice and simultaneously acted as a control group for the other group of ICUs that received the active behavior change intervention targeting a different care practice.17 This avoided randomizing a group of ICUs to no intervention, which could have been demoralizing to the participating ICUs.18

The trial consisted of 3 phases, each lasting 4 months. The following 6 practices, chosen based on a prestudy survey of ICU directors,13 were paired to minimize the potential for quality improvement efforts targeting one practice to influence process measures related to the other practice. Pair 1 was prevention of ventilator-associated pneumonia (VAP) and prevention of deep vein thrombosis (DVT); pair 2 was sterile precautions for central venous catheter insertion to prevent catheter-related bloodstream infections and daily spontaneous breathing trials to decrease duration of mechanical ventilation; and pair 3 was early enteral nutrition and daily assessment of risk for developing decubitus (pressure) ulcers. The sequence of applying these pairs was determined randomly using a computer-generated allocation scheme before the start of the trial and was concealed from the participating ICUs until the start of each phase. Although blinding within ICUs was not possible, clinicians working in each group of ICUs were blinded to the care practices being targeted in the other (control) group of ICUs.

Between December 1, 2006, and August 31, 2007, each group of ICUs received interventions targeting the care practices that they had not received during the trial, thus ensuring that all ICUs received interventions for all 6 of the practices (eFigure 1). We continued to collect process data on performance in all ICUs during this period to monitor for decay in adoption of the active interventions to which they were originally assigned during the trial.

Behavior Change Interventions in the Active Intervention Group

For each targeted practice, we developed a multifaceted quality improvement strategy (Table 1) including educational outreach, audit and feedback, and reminders.19 We generated a bibliography of relevant literature and summarized relevant guidelines into easy-to-read formats. Local champions in each ICU provided educational rounds and conducted their own educational activities using these materials. Process-of-care indicators for each practice were recorded daily and summarized in monthly reports, with each ICU receiving a report that identified its own performance and allowed for deidentified comparisons with other ICUs that were also actively targeting the same practice. We provided examples of preprinted order sets for each evidence-based care practice that ICUs could modify and use. We also provided reminder materials such as posters and lapel buttons for each practice.

Telecommunication

We used the Ontario Telemedicine Network13 videoconferencing infrastructure to conduct the intervention, including live interactive educational sessions from content experts for each targeted care practice, monthly network meetings, and training sessions for site educators. The interactive educational sessions were recorded and available for subsequent Web-based access.

Data Collection

Trained data collectors assessed the process-of-care indicators (Table 2) for all patients in all ICUs using handheld wireless electronic devices that connected to a central database via a local server. Each participating ICU selected a data collector, typically either a nurse or a ward clerk not providing patient care. All received data collection training from the central coordinating office. We defined the delivery of each practice for a particular day by the presence of one process-of-care indicator and no contraindications to receiving the practice. Data were encrypted for privacy and collected once daily from Monday through Friday. Weekend and holiday data were either collected in real time or on the following working day, depending on site resources. The coordinating center conducted a site inspection and audit of data collection at each ICU during the trial.
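The daily definition above, in which a patient-day counts toward adherence only if it is free of contraindications and a process-of-care indicator was recorded, can be sketched as a simple classification rule. This is a hypothetical encoding for illustration; the trial's actual data model is not described in this detail.

```python
def classify_patient_day(has_indicator, has_contraindication):
    """Classify one patient-day for a targeted practice: contraindicated
    days leave the eligible denominator; eligible days count as delivered
    when at least one process-of-care indicator was recorded."""
    if has_contraindication:
        return "ineligible"
    return "delivered" if has_indicator else "eligible, not delivered"

def adherence_rate(days):
    """Proportion of eligible patient-days on which the practice was delivered."""
    eligible = [d for d in days if d != "ineligible"]
    return sum(d == "delivered" for d in eligible) / len(eligible) if eligible else None
```

Dropping contraindicated days from the denominator is what makes the monthly adherence figures reported below comparable across ICUs with different case mixes.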

Outcomes

During each 4-month phase of the trial, we determined the difference in the change in proportion of patients receiving each targeted care practice in the intervention ICUs compared with the same practice in control ICUs. This effect measure was calculated separately for each targeted care practice. We focused on comparing rates of change between intervention and control ICUs because the study interventions were expected to change behavior over time (and not instantaneously) and because ICU performance at the end of each phase must be adjusted for performance at the beginning.

We first calculated an odds ratio (OR) for improvement over time, separately in intervention ICUs and in control ICUs, using the proportion of eligible patients receiving each care practice during each month of each 4-month phase, adjusted for clustering within centers.20 The unit of analysis was the individual patient or patient-day, depending on the practice. For each intervention, we then calculated the ratio of these ORs for improvement over time (OR [intervention]/OR [control]).21
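As a rough illustration, the two ORs and their ratio can be computed from crude month-level counts. This is a minimal sketch using simple 2 × 2 odds; the published estimates are additionally adjusted for clustering within ICUs, so they differ from these unadjusted values, and the function name is ours.

```python
def odds_ratio(events_last, n_last, events_first, n_first):
    """Crude OR for receiving a practice in the last vs the first month."""
    odds_last = events_last / (n_last - events_last)
    odds_first = events_first / (n_first - events_first)
    return odds_last / odds_first

# Approximate counts from the semirecumbent-positioning phase
# (illustrative only; the published ORs are cluster-adjusted).
or_intervention = odds_ratio(233, 260, 148, 297)  # ~89.6% vs ~49.8% of patient-days
or_control = odds_ratio(513, 569, 398, 497)       # ~90.2% vs ~80.1% of patient-days
ratio_of_ors = or_intervention / or_control       # >1 favors faster improvement
```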

The primary outcome of the trial was the summary ratio of ORs (with 95% confidence interval [CI]) for all practices, calculated by pooling the ratios of ORs for individual practices. The underlying assumption is that the quality improvement intervention was the same throughout the trial, but might have different effects on rates of adoption depending on the targeted care practice. Using this method, ratios of ORs were aggregated on the logarithm scale, with each logarithm (ratio of ORs) weighted by the inverse of its variance; each variance was adjusted to account for heterogeneity in effect estimates among interventions using a random-effects approach, which generally provides wider CIs when heterogeneity is present.22,23 For each pooled analysis, heterogeneity is reported using I2, the proportion of variation due to between-practice variation rather than chance.24
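The pooling step can be sketched as follows, assuming the standard DerSimonian-Laird random-effects estimator (the text cites general meta-analysis methods but does not name the exact estimator, and the function name is hypothetical):

```python
def pool_log_ratios(log_rors, variances):
    """Pool per-practice log ratios of ORs with inverse-variance weights,
    inflating variances for between-practice heterogeneity (DerSimonian-Laird).
    Returns (pooled log ratio, pooled variance, I^2)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)
    # Cochran's Q; I^2 is the share of variation beyond chance
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))
    df = len(log_rors) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0  # between-practice variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, log_rors)) / sum(w_re)
    return pooled, 1.0 / sum(w_re), i2
```

When the per-practice estimates disagree, tau2 grows, every weight shrinks, and the pooled variance rises, which is why the random-effects CI is wider under heterogeneity.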

We conducted in-depth qualitative interviews of clinicians from participating ICUs to understand their perceptions of the study's effect on local practice and the effectiveness of individual components of the intervention. We recruited these individuals by invitation letters sent to all 15 ICUs and then used purposive sampling of respondents to obtain representation from roles in the ICU team. A semistructured interview guide was developed to facilitate the interviews. The interview transcripts were coded by 2 individuals, and major themes were identified using constant comparative analysis.25

Data Analysis

Data were analyzed using SAS, version 9.1 (SAS Institute Inc, Cary, North Carolina) and R, version 2.7 (R Foundation for Statistical Computing, Vienna, Austria). All tests were 2-sided with P ≤ .05 denoting statistical significance. The OR for receiving a particular care practice was calculated in both intervention and control groups using generalized linear mixed models with random effects (logit link, random intercept, and random slope with robust sandwich estimate for variance) to account for the hierarchical nature (clustering within centers) of the data.26 We present crude results for each practice; all ORs and results shown in the figures are adjusted for clustering using this model. We tested the random slope of the model using the Akaike information criterion; if not significant, we did not incorporate a random slope in the primary analysis. The change in proportion of eligible patients receiving each care practice was analyzed by testing for the effects of group (intervention vs control), time (during 4 months of intervention), and the interaction between group and time. We used the interaction between group and time to estimate the ratio of the ORs of improvement over time in the intervention group vs the control group. For each targeted care practice, we conducted sensitivity analyses using generalized estimating equations, which led to similar interpretations in all cases. The details of our secondary and exploratory analyses are described in the eAppendix and eTable 1.
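The role of the interaction term can be checked algebraically: in a logistic model with group, time, and their interaction, the exponentiated interaction coefficient is exactly the ratio of ORs. The toy calculation below uses hypothetical cell proportions, not trial data; a real fit would use the mixed-effects model described above.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# logit(p) = b0 + b1*group + b2*time + b3*group*time
# => b3 equals the difference of the two log ORs for improvement over time.
p = {("ctrl", "first"): 0.80, ("ctrl", "last"): 0.90,
     ("int", "first"): 0.50, ("int", "last"): 0.90}
b3 = (logit(p[("int", "last")]) - logit(p[("int", "first")])) \
   - (logit(p[("ctrl", "last")]) - logit(p[("ctrl", "first")]))
ratio_of_ors = math.exp(b3)  # OR_int / OR_ctrl = 9 / 2.25 = 4
```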

We expected to enroll 2000 patients per 4-month intervention phase. Assuming an average cluster size per phase of 250 patients and an intracluster (between-center) correlation coefficient (ρ) of 0.2 (variance inflation factor = 1 + (n − 1) × ρ = 50; power = 80%; α = .05), we anticipated adequate power to detect a 20% absolute increase in use of a targeted practice when baseline adherence was 25%, a 30% increase when baseline adherence was 50%, or a 22% increase when baseline adherence was 75%.
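The variance-inflation arithmetic in this power calculation can be reproduced directly (helper names are ours; the paper rounds the inflation factor to 50):

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster randomization: 1 + (m - 1) * rho."""
    return 1.0 + (cluster_size - 1) * icc

def effective_sample_size(total_n, cluster_size, icc):
    """Number of effectively independent observations after clustering."""
    return total_n / design_effect(cluster_size, icc)

deff = design_effect(250, 0.2)                 # ~50.8, close to the 50 used above
n_eff = effective_sample_size(2000, 250, 0.2)  # ~39 effectively independent patients
```

This is why, despite roughly 2000 patients per phase, only fairly large absolute increases in adherence were detectable.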

Results

All 15 community hospital ICUs completed the study, totaling 9269 ICU admissions during the trial (November 1, 2005, to October 31, 2006) (Figure 1 and Table 3) and 7141 ICU admissions during the decay-monitoring period (December 1, 2006, to August 31, 2007).

Summary Effects of Quality Improvement Activity (Primary Outcome)

Considering all hospitals and targeted care practices, patients in ICUs receiving active intervention were more likely to receive the targeted care practice than those in contemporaneous control ICUs receiving an active intervention for a different practice (summary ratio of ORs, 2.79; 95% CI, 1.00-7.74; P = .05). The overall effects are shown in Figure 2 and eFigure 2, and the effects on individual care practices are summarized in Table 4 and Figure 3.

Prevention of VAP and Prophylaxis Against DVT (Pair 1)

Prevention of VAP. There were 1624 admissions to 7 ICUs that received interventions to increase use of semirecumbent positioning to prevent VAP during the trial. Data were collected on the majority (n = 1417 [87.2%]) of patients in the intervention ICUs, accounting for 1151 mechanical ventilation days (Table 3). Few mechanical ventilation days (n = 37 [3.3%]) were not eligible for semirecumbent positioning, predominantly due to presence of spine or pelvic injury.

The overall rate of adherence to semirecumbent positioning in the intervention ICUs improved from 49.8% of 297 eligible patient-days in the first month to 89.6% of 260 eligible patient-days during the last month vs from 80.1% of 497 to 90.2% of 569 eligible patient-days in the control ICUs. The OR for receiving semirecumbent positioning during an eligible patient-day in the last month of the study (compared with the first month) was 6.35 (95% CI, 1.85-21.79; P = .007) in intervention ICUs and 2.04 (95% CI, 0.82-5.07; P = .12) in control ICUs. The rate of improvement in intervention ICUs was not significantly different from that in control ICUs (ratio of ORs, 3.12; 95% CI, 0.79-12.41; P = .11). During the decay-monitoring period, adherence to semirecumbent positioning remained high in intervention ICUs (96.4% of patient-days during the final 3 months).

Prophylaxis Against DVT. There were 1600 admissions to 8 ICUs receiving active strategies to increase use of anticoagulant prophylaxis (unfractionated heparin or low-molecular-weight heparin) against DVT (Table 3). No data were collected on 82 patients, leaving 1518 (94.9%) patients for analysis, of whom 1391 had data collected on at least 1 of the first 2 consecutive days of ICU admission. Many (n = 570 [41.0%]) had a prespecified acceptable contraindication to anticoagulation prophylaxis (eAppendix and eTable 2) during the first 48 hours of ICU admission. Considering all patient-days, a contraindication was recorded on 1388 days (22.2%).

Most (96.9%) of the 194 eligible patients admitted to an ICU for at least 2 consecutive days during the first month received anticoagulant prophylaxis within 48 hours of admission, and the observed rate remained high during the last month (97.5% of 202 patients in intervention ICUs and 93.5% of 184 patients in control ICUs). Overall, there was no significant change in the proportion of eligible patients receiving DVT prophylaxis among intervention ICUs (OR, 1.28; 95% CI, 0.67-2.45; P = .46) or among control ICUs (OR, 0.52; 95% CI, 0.20-1.30; P = .16); the rate of improvement did not differ significantly between groups (ratio of ORs, 2.49; 95% CI, 0.80-7.70; P = .11). Sensitivity analysis restricted to the first 24 hours of ICU admission showed similar results. During extended follow-up, rates of DVT prophylaxis during the first 2 days of ICU admission remained high in intervention ICUs (97.0% during the final 3 months of the decay-monitoring period).

Prevention of Catheter-Related Bloodstream Infections and Spontaneous Breathing Trials (Pair 2)

Prevention of Catheter-Related Bloodstream Infections. There were 1546 admissions to 7 ICUs receiving active interventions to reduce catheter-related bloodstream infections during the trial period, and data were collected from 1361 (88.0%). During the 4-month period, 180 (range, 5-48 per ICU) central venous catheters were inserted in intervention ICUs and 329 (range, 2-79 per ICU) in control ICUs. Completion of the catheter insertion checklist used to monitor adherence to the sterile catheter insertion bundle was imperfect but was not significantly different between groups (54.9% overall; details in electronic supplement).

The overall rate of adherence to all 7 components of the catheter insertion bundle improved from 10.0% of 30 eligible catheter insertions (with collection forms completed) during the first month to 70.6% of 34 during the last month in intervention ICUs vs 31.0% of 42 in the first month to 51.7% of 29 in the last month in control ICUs. The OR for receiving all 7 components of the bundle during the last month compared with the first month in intervention ICUs was 30.06 (95% CI, 11.00-82.17; P < .001). In contrast, there was no improvement in bundle adherence for the control group during the same period (OR, 1.71; 95% CI, 0.74-3.99; P = .21). The rate of improvement in actively targeted ICUs was significantly better than the rate of change in control ICUs (ratio of ORs, 17.55; 95% CI, 4.72-65.26; P < .001). During extended follow-up, the adherence to the bundle remained high in the intervention group (89.0% during the final 3 months of the decay-monitoring period).

Daily Spontaneous Breathing Trials. During the same period, 1601 patients were admitted to 8 ICUs receiving active strategies to increase use of spontaneous breathing trials, and data were collected on 1548 (96.7%). After excluding mechanical ventilation days that were associated with presence of tracheostomy (n = 729 [33.3%]), 1455 mechanical ventilation days remained available for analysis of daily spontaneous breathing trials.

Successful extubation or performance of a spontaneous breathing trial occurred during 626 (84.0%) of 744 eligible patient-days of mechanical ventilation. The most common reasons a patient-day was deemed ineligible for a spontaneous breathing trial were high positive end-expiratory pressure (71.1% of ineligible patient-days), use of continuous sedation infusion (58.4%), and hypoxemia (as defined by low ratio of PaO2 to FIO2; 46.8%).

Rates of spontaneous breathing trials during eligible mechanical ventilation days remained similar during the 4-month period (78.8% of 118 days during the first month and 85.1% of 255 during the last month in intervention ICUs; 90.9% of 143 days during the first month and 89.6% of 182 during the last month in control ICUs). The OR for receiving a spontaneous breathing trial during the last vs the first month of the study phase was 1.35 (95% CI, 0.44-4.12; P = .57) in intervention ICUs and 1.31 (95% CI, 0.34-4.97; P = .67) in control ICUs. There was no overall difference in this rate of improvement (ratio of ORs, 1.04; 95% CI, 0.21-5.03; P = .96). During extended follow-up, there was sustained use of daily spontaneous breathing trials (87.0% of eligible patient-days during the final 3 months of the decay-monitoring period).

Decubitus Ulcer Risk Assessment and Provision of Early Enteral Nutrition (Pair 3)

Decubitus Ulcer Risk Assessments. Fifteen hundred twenty-eight patients were admitted to 7 ICUs receiving active interventions promoting daily assessments of patients' risk of developing decubitus ulcers during the trial period. Data were collected on 1467 patients (96.0%) and 4182 (91.0%) of 4596 patient-days. In intervention ICUs, the rate of completed Braden risk assessment tools27 was 68.1% of 620 patient-days during the first month and 73.2% of 1282 patient-days during the last month, a difference that was not significant (OR, 6.54; 95% CI, 0.50-85.63; P = .14). There was likewise no difference over the same period in control ICUs (56.9% of 1382 days in last month vs 54.0% of 850 in first month; OR, 0.82; 95% CI, 0.16-4.17; P = .79) and no difference between intervention and control ICUs (ratio of ORs, 8.01; 95% CI, 0.51-126.91; P = .14). During extended follow-up, adherence with completing decubitus ulcer risk assessments remained high (92.3% of eligible patient-days during the final 3 months of the decay-monitoring period).

Provision of Early Enteral Nutrition. Fifteen hundred thirteen patients were admitted to ICUs receiving active quality improvement interventions targeting the provision of early enteral nutrition, defined as initiation of any enteral formula (regular diet or tube feeds) within the first 48 hours of ICU admission. Data were collected on 1464 patients (96.8%), of whom 1311 had data collected on at least 1 of the first 2 consecutive days of ICU admission. After considering appropriate contraindications, 1003 patients (76.5%) were potentially eligible to receive early enteral nutrition. We observed no improvements in this practice in ICUs receiving active interventions from the first month (95.6% of 247 eligible patients) to the last month (96.1% of 254 eligible patients; OR, 1.16; 95% CI, 0.42-3.20; P = .77). Similarly, no improvements were observed over time in control ICUs (96.7% of 303 eligible patients in last month vs 96.2% of 210 in first month; OR, 1.77; 95% CI, 0.69-4.51; P = .21), and rates of improvement were similar comparing active and control ICUs (ratio of ORs, 0.65; 95% CI, 0.16-2.61; P = .52). These findings were similar in sensitivity analyses evaluating the provision of enteral nutrition within 24 or 72 hours. During extended follow-up, overall adherence remained high (95.6% of eligible patients during the final 3 months of the decay-monitoring period).

Potential Mechanisms and Effect Modifiers

Perceptions From Frontline Clinicians. We conducted 32 interviews with a cross-section of ICU team members (3 physicians, 27 nurses, 1 respiratory therapist, and 1 dietician) from 12 of the 15 ICUs. Thematic analyses of these interviews revealed that (1) regular audit and feedback of performance including de-identified results from other hospitals was a key improvement driver through "friendly competition"; (2) participating in a large quality improvement project tended to increase within-ICU communication and elicit support from hospital leadership; (3) telecommunication was a useful education medium, although it was often still difficult for ICU staff to leave the bedside to attend sessions; (4) the direct relationships between ICUs in each group that resulted from the telecommunication networking were less valued and less evident; (5) the focus on process of care measures, rather than outcome measures, was appreciated because of the heterogeneity of patients; (6) in some cases, internal improvements had created a higher baseline adoption rate ("We were already working on that when the project started"); and (7) direct audit and feedback of process measures, evidence-based summaries, and availability of the central coordinating office seemed to be the most important components of the quality improvement intervention.

Effect Modification by Organizational Factors. We conducted several post hoc exploratory analyses to identify ICU-level effect modifiers of our intervention, considering the 3 care practices whose delivery improved the most during the trial. For semirecumbent positioning, 3 factors were associated with improved adoption (ratio of OR for last vs first month when factor present vs OR for last vs first month when factor absent): dedicated intensivist staffing (ratio of ORs, 7.42; 95% CI, 3.02-18.20; P < .001), more than 10 staffed ICU beds (ratio of ORs, 4.84; 95% CI, 1.11-21.12; P = .04), and no prior involvement in data collection for quality purposes (ratio of ORs, 8.39; 95% CI, 3.32-21.25; P < .001). No organizational factor was associated with significant improvements in use of the catheter insertion bundle or decubitus ulcer risk assessments.

Effect of Intervention Within Individual ICUs. Changes in adherence to care practices in individual ICUs are shown in eFigure 3. Many ICUs (intervention and control) had high performance at baseline; improvements were most apparent within ICUs with low baseline adherence to the targeted practices.

Comment

Our cluster-randomized pragmatic trial with active controls demonstrates that a multifaceted quality improvement intervention including education, reminders, and audit and feedback through a collaborative telecommunication network improved the delivery of evidence-based care practices in community ICUs. The improvements were greatest for practices to prevent catheter-related bloodstream infections and ventilator-associated pneumonia.

We focused on improving the quality of care for patients admitted to ICUs in community hospitals rather than academic hospitals. Community ICUs admit the majority of critically ill patients28 and have fewer resources for implementing quality improvement initiatives.29-31 Our videoconferencing network is one model for helping health care workers in geographically dispersed community hospitals to improve quality by accessing resources usually restricted to academic hospitals.32,33

To our knowledge, this is the first cluster-randomized controlled trial of a collaborative knowledge translation program that used a telecommunication strategy to organize a quality improvement network. This approach facilitated communication among geographically dispersed sites by providing regular virtual “face-to-face” interactions. While our intervention led to moderate improvements in quality of care, the infrastructure also helped to successfully engage and organize geographically separated ICUs to participate in education activities and to collect data related to quality of care.

Our post hoc analyses suggest that our intervention had its greatest effect on ICUs with low baseline adherence to specific practices, suggesting that similar large-scale quality improvement initiatives might target such ICUs and practices. We were unable to identify ICU organizational factors that consistently modified the effect of our intervention, and future research could examine the interaction between other ICU cultural and organizational features and the effectiveness of quality improvement strategies. Thematic analyses of our interviews of frontline staff suggested that audit-feedback reports containing deidentified summaries of other intervention hospitals' performance, provision of evidence-based literature summaries, and availability of the central coordinating office were perceived to be the most valuable components of our intervention. Respondents also observed that involvement in the network influenced local ICU culture by enhancing within-ICU communication and eliciting greater support from hospital leadership.

Previous large-scale studies of networks targeting ICU quality improvement5,6 have typically used before-after study designs, rendering them vulnerable to spurious causal inferences due to secular trends over time. One cluster-randomized trial of multifaceted strategies for quality improvement in neonatal ICUs in Canada found a reduction in bronchopulmonary dysplasia.20 In Canada, Australia, and New Zealand, cluster-randomized trials of interventions to implement nutrition algorithms using education sessions, reminders, and academic detailing improved use of enteral nutrition.34,35

Our study had several strengths compared with these studies. First, our intervention was a comprehensive quality improvement package that targeted multiple disparate care practices rather than a single quality measure. One potential risk of a single quality improvement intervention is that clinicians may focus on the quality indicator under study and thereby neglect other important quality indicators.36 We observed no decrease in adherence during the decay-monitoring period, when individual ICUs shifted their focus to new quality indicators. Second, the active control group ensured that all ICUs were always engaged in quality improvement activities and avoided perceptions of unfairness that could have arisen from randomizing individual ICUs to no quality improvement strategy. Third, the design ensured that all ICUs would receive active strategies targeting all 6 care practices by the end of the decay-monitoring period and allowed for assessment of decay in adherence to practices in ICUs receiving active interventions for these practices during the trial. Fourth, the cluster-randomized design helped adjust for unit-level factors that might affect utilization of care practices in individual patients and protected against inferences based on secular trends rather than the study intervention.

We focused on process measures rather than clinical outcomes because appropriately powered studies had previously demonstrated efficacy of each care practice. We also believed that studying the implementation of process-of-care measures would be highly relevant to practicing clinicians, given mandates to implement and publicly report such measures by accreditation organizations.37,38

Our study also had limitations. Although the trial included more than 9000 ICU admissions, the effective sample size of eligible patients for each study phase was smaller and was further reduced by adjustment for between-cluster variation and the infrequent nature of some targeted practices. It is possible that longer intervention phases and inclusion of more study centers would have narrowed the CIs. The observation of clinical practice for data collection may have changed behavior in both control and intervention ICUs. In particular, care practices requiring direct observation (eg, use of semirecumbent positioning) could be vulnerable to improvement simply because of increased monitoring. It is possible that such Hawthorne effects improved adherence in control ICUs and thus reduced the effect of the intervention. Similarly, for care practices measured using data from the medical chart (eg, DVT prophylaxis), we are unable to determine whether our intervention improved actual practice, documented practice, or both. Finally, we observed ceiling effects for some practices, rendering further improvements difficult. For example, rates of DVT prophylaxis among eligible patients exceeded 90% at baseline in most participating ICUs. We chose practices based on a prestudy survey of ICU directors, but the survey underestimated actual performance for some interventions.

In conclusion, we found that a collaborative network of ICUs linked by a telecommunication infrastructure improved the adoption of care practices. However, improvement was not uniform across the targeted practices. Future large-scale quality improvement initiatives should choose practices based on measured rather than reported care gaps, consider site-specific (vs aggregated) needs assessments to determine target care practices, and conduct baseline audits to focus on poorly performing ICUs, which have the greatest potential for improvement.

Article Information

Corresponding Author: Damon C. Scales, MD, PhD, Sunnybrook Health Sciences Centre, 2075 Bayview Ave, D108, Toronto, ON M4N 3M5, Canada (damon.scales@utoronto.ca).

Published Online: January 19, 2011. doi:10.1001/jama.2010.2000

Author Contributions: Drs Scales and Pinto had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Study concept and design: Scales, Dainty, Hales, Fowler, Adhikari, Zwarenstein.

Acquisition of data: Scales, Dainty.

Analysis and interpretation of data: Scales, Dainty, Pinto, Fowler, Adhikari.

Drafting of the manuscript: Scales.

Critical revision of the manuscript for important intellectual content: Scales, Dainty, Hales, Pinto, Fowler, Adhikari, Zwarenstein.

Statistical analysis: Scales, Pinto.

Obtained funding: Scales, Dainty.

Administrative, technical, or material support: Scales, Dainty, Hales.

Study supervision: Scales, Zwarenstein.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Scales reports receiving a stipend from the Ontario Ministry of Health and Long-Term Care (MOHLTC) for his role as physician lead, MOHLTC Ontario ICU Clinical Best Practices Project, and he currently holds a New Investigator Award from the Canadian Institutes of Health Research. Ms Hales reports receiving salary support from her institution that was derived in part by the funding source for this study. Dr Fowler reports that he held a career scientist award from the Ontario Ministry of Health and Long-Term Care during the conduct of this study and is a Clinician Scientist of the Heart and Stroke Foundation of Canada. No other authors reported any financial conflicts of interest.

Funding/Support: This study was funded by the MOHLTC Critical Care Transformation Strategy as the MOHLTC Ontario ICU Clinical Best Practices Demonstration Project.

Role of the Sponsor: The funding body approved the design of the study but had no role in the conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, or approval of the manuscript.

Disclaimer: The opinions, results, and conclusions reported in this article are those of the authors and are independent from the funding source.

Additional Contributions: We are greatly indebted to William J. Sibbald, MD, MPH, who originally conceived of this study and contributed greatly to its design and implementation. Dr Sibbald died on September 14, 2006, during the conduct of this study. We also acknowledge the following individuals for their input into the design of this study: Stephen Lapinsky, MB, BCh, MSc, Mount Sinai Hospital, Sherman Quan, University Health Network, Kevin Thorpe, PhD, Li Ka Shing Knowledge Institute, and Nathalie Danjoux Meth, MSc, Ontario Ministry of Health and Long-Term Care, Health Capital Investment Branch (none was compensated); content experts who gave presentations over the telecommunication network as part of the active intervention: John Muscedere, MD, Kingston General Hospital, William Geerts, MD, Sunnybrook Health Sciences Centre, Niall Ferguson, MD, MSc, University Health Network, Chris Hayes, MD, MSc, MEd, St Michael's Hospital, Daren Heyland, MD, MSc, Kingston General Hospital, John Drover, MD, Kingston General Hospital, David Keast, MD, MSc, Aging Rehabilitation and Geriatric Care Research Centre, Parkwood Hospital, and Myron Steinmann, RRT, BEd, London Health Sciences Centre (each received an honorarium); Linda Rozmovits, DPhil, for conducting semistructured interviews of project participants (received compensation); Taz Sinuff, MD, PhD, Sunnybrook Health Sciences Centre, for coding of interview transcripts (received no compensation); and the staff of the MOHLTC Critical Care Secretariat, especially Bernard Lawless, MD, MHSc, St Michael's Hospital, and Robert McKay, formerly with the MOHLTC (neither was compensated by the study). 
We wish to thank the staff of all participating ICUs, particularly these individuals who acted as local champions during the study (none was compensated): Toronto East General Hospital: Marilyn Lee, Carmine Simone, MD; North York General Hospital: Donna McRitchie, MD, Karen Johnson, Jo-Ann Correa, Marina Bitton, Catherine Badeau; Scarborough Hospital, General Campus: Howard Clasky, MD, Carol Shelton, Denise Edman; Scarborough Hospital, Birchmount Campus: David Rose, MD, Sandy Finkelstein, MD; Sault Area Hospital: Greg Berg, MD, Jack Willet, Mary Runde, Paul Nanne, Gigi Farrell; Northumberland Hills Hospital: Pat Busch, Brenda Weir; Muskoka Algonquin Healthcare, Huntsville site: Kathleen vom Scheidt, Laura Speelman, Paul Shisko; Orillia Soldiers’ Memorial Hospital: Diane Sofarelli, Tamara Smith, John MacFadyen, MD; Lakeridge Health Corp: Jonathan Eisenstat, MD, Margaret Campkin; Royal Victoria Hospital: Giulio DiDiodato, MD, MPH, Kari Simpson Adams; Lake of the Woods District Hospital: Donna Makowsky; West Parry Sound Health Center: Amanda Hill; Sudbury Regional Hospital: Peter Zalan, MD, David Boyle, MD, Kari Kostiw, Claire Gignac, Joanne Collin; Peterborough Regional Health Centre: David McMillan, MD, Lisa Milligan, Susan Dunford-Pickard; North Bay General Hospital: Sue Lebeau, Lori Bell; and Sunnybrook Health Sciences Centre (coordinating center): Dinah Manicat-Francis, Leasa Knechtel, Debra Carew, Svetlana Bojilov.

References
1.
Halpern NA, Pastores SM, Greenstein RJ. Critical care medicine in the United States 1985-2000: an analysis of bed numbers, use, and costs. Crit Care Med. 2004;32(6):1254-1259.
2.
Rubenfeld GD, Caldwell E, Peabody E, et al. Incidence and outcomes of acute lung injury. N Engl J Med. 2005;353(16):1685-1693.
3.
Angus DC, Linde-Zwirble WT, Lidicker J, Clermont G, Carcillo J, Pinsky MR. Epidemiology of severe sepsis in the United States: analysis of incidence, outcome, and associated costs of care. Crit Care Med. 2001;29(7):1303-1310.
4.
Pronovost PJ, Rinke ML, Emery K, Dennison C, Blackledge C, Berenholtz SM. Interventions to reduce mortality among patients treated in intensive care units. J Crit Care. 2004;19(3):158-164.
5.
Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355(26):2725-2732.
6.
Ferrer R, Artigas A, Levy MM, et al; Edusepsis Study Group. Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA. 2008;299(19):2294-2303.
7.
MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM. A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess. 2000;4(34):1-154.
8.
Rubenfeld GD, Cooper C, Carter G, Thompson BT, Hudson LD. Barriers to providing lung-protective ventilation to patients with acute lung injury. Crit Care Med. 2004;32(6):1289-1293.
9.
Grimshaw JM, Shirran L, Thomas R, et al. Changing provider behavior: an overview of systematic reviews of interventions. Med Care. 2001;39(8)(suppl 2):II2-II45.
10.
Curtis JR, Cook DJ, Wall RJ, et al. Intensive care unit quality improvement: a "how-to" guide for the interdisciplinary team. Crit Care Med. 2006;34(1):211-218.
11.
Kahn JM, Fuchs BD. Identifying and implementing quality improvement measures in the intensive care unit. Curr Opin Crit Care. 2007;13(6):709-713.
12.
Bach PB, Carson SS, Leff A. Outcomes and resource utilization for patients with prolonged critical illness managed by university-based or community-based subspecialists. Am J Respir Crit Care Med. 1998;158(5 pt 1):1410-1415.
13.
Scales DC, Dainty K, Hales B, et al. An innovative telemedicine knowledge translation program to improve quality of care in intensive care units: protocol for a cluster randomized pragmatic trial. Implement Sci. 2009;4:5.
14.
Campbell MK, Elbourne DR, Altman DG; CONSORT Group. CONSORT statement: extension to cluster randomised trials. BMJ. 2004;328(7441):702-708.
15.
Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in therapeutical trials. J Chronic Dis. 1967;20(8):637-648.
16.
Zwarenstein M, Treweek S, Gagnier JJ, et al; CONSORT Group; Pragmatic Trials in Healthcare Group. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
17.
Hauschke D, Pigeot I. Establishing efficacy of a new experimental treatment in the "gold standard" design. Biom J. 2005;47(6):782-786.
18.
Berglund G, Bolund C, Gustafsson UL, Sjödén PO. Is the wish to participate in a cancer rehabilitation program an indicator of the need? comparisons of participants and non-participants in a randomized study. Psychooncology. 1997;6(1):35-46.
19.
Grimshaw J, Eccles M, Thomas R, et al. Toward evidence-based quality improvement: evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies, 1966-1998. J Gen Intern Med. 2006;21(suppl 2):S14-S20.
20.
Lee SK, Aziz K, Singhal N, et al. Improving the quality of care for infants: a cluster randomized controlled trial. CMAJ. 2009;181(8):469-476.
21.
Madden JM, Graves AJ, Zhang F, et al. Cost-related medication nonadherence and spending on basic needs following implementation of Medicare Part D. JAMA. 2008;299(16):1922-1928.
22.
Cooper H, Hedges L, Valentine J. The Handbook of Research Synthesis and Meta-analysis. New York, NY: Russell Sage Foundation; 2009.
23.
Hedges L, Olkin I. Statistical Methods for Meta-analysis. Orlando, FL: Academic Press; 1985.
24.
Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557-560.
25.
Boeije H. A purposeful approach to the constant comparative method in the analysis of qualitative interviews. Qual Quant. 2002;36:391-409.
26.
Murray D. Design and Analysis of Group-Randomized Trials. New York, NY: Oxford University Press; 1998.
27.
Bergstrom N, Demuth PJ, Braden BJ. A clinical trial of the Braden Scale for Predicting Pressure Sore Risk. Nurs Clin North Am. 1987;22(2):417-428.
28.
Ontario Critical Care LHIN Leadership Table. Inventory of Critical Care Services: An Analysis of LHIN-Level Capacities. Toronto: Ontario Ministry of Health and Long-Term Care; 2007. http://www.health.gov.on.ca/english/providers/program/critical_care/docs/report_cc_inventory.pdf. Accessed December 21, 2010.
29.
Haydar Z, Gunderson J, Ballard DJ, Skoufalos A, Berman B, Nash DB. Accelerating best care in Pennsylvania: adapting a large academic system's quality improvement process to rural community hospitals. Am J Med Qual. 2008;23(4):252-258.
30.
O'Grady MA, Gitelson E, Swaby RF, et al. Development and implementation of a medical oncology quality improvement tool for a regional community oncology network: the Fox Chase Cancer Center Partners initiative. J Natl Compr Canc Netw. 2007;5(9):875-882.
31.
Patel MR, Chen AY, Roe MT, et al. A comparison of acute coronary syndrome care at academic and nonacademic hospitals. Am J Med. 2007;120(1):40-46.
32.
Brindis RG, Spertus J. The role of academic medicine in improving health care quality. Acad Med. 2006;81(9):802-806.
33.
Nedza SM. A call to leadership: the role of the academic medical center in driving sustainable health system improvement through performance measurement. Acad Med. 2009;84(12):1645-1647.
34.
Martin CM, Doig GS, Heyland DK, Morrison T, Sibbald WJ; Southwestern Ontario Critical Care Research Network. Multicentre, cluster-randomized clinical trial of algorithms for critical-care enteral and parenteral therapy (ACCEPT). CMAJ. 2004;170(2):197-204.
35.
Doig GS, Simpson F, Finfer S, et al; Nutrition Guidelines Investigators of the ANZICS Clinical Trials Group. Effect of evidence-based feeding guidelines on mortality of critically ill adults: a cluster randomized controlled trial. JAMA. 2008;300(23):2731-2741.
36.
Kahn JM, Scales DC, Au DH, et al; American Thoracic Society Pay-for-Performance Working Group. An official American Thoracic Society policy statement: pay-for-performance in pulmonary, critical care, and sleep medicine. Am J Respir Crit Care Med. 2010;181(7):752-761.
37.
Centers for Medicare & Medicaid Services. Physician Quality Reporting Initiative. http://www.cms.gov/pqri/. Accessed December 23, 2010.
38.
Khanduja K, Scales DC, Adhikari NK. Pay for performance in the intensive care unit—opportunity or threat? Crit Care Med. 2009;37(3):852-858.