Longo DR, Hewett JE, Ge B, Schubert S. The Long Road to Patient Safety: A Status Report on Patient Safety Systems. JAMA. 2005;294(22):2858-2865. doi:10.1001/jama.294.22.2858
Author Affiliations: Department of Family and Community Medicine (Dr Longo and Ms Schubert) and Biostatistics Group (Drs Hewett and Ge), School of Medicine, University of Missouri–Columbia.
Context Since the Institute of Medicine (IOM) reports on medical errors and quality, national attention has focused on improving patient safety through changes in “systems” of care. These reports resulted in a new paradigm that, rather than centering on individual errors, focuses on the “systems” necessary to facilitate and enhance quality and protect patients.
Objectives To assess the status of hospital patient safety systems since the release of the IOM reports and to identify changes over time in 2 states that collaborated on a patient safety project funded by the Agency for Healthcare Research and Quality.
Design, Setting, and Participants Survey of all acute care hospitals in Missouri and Utah at 2 points in time, in 2002 and 2004, using a 91-item comprehensive questionnaire (n = 126 for survey 1 and n = 128 for survey 2). To assess changes over time, we also studied the cohort of 107 hospitals that responded to both surveys.
Main Outcome Measures Responses to the 91-question survey as well as changes in responses to the survey questions over an 18-month period. Seven latent variables were constructed to represent the most important patient safety constructs studied: computerized physician order entry systems, computerized test results, and assessments of adverse events; specific patient safety policies; use of data in patient safety programs; drug storage, administration, and safety procedures; manner of handling adverse event/error reporting; prevention policies; and root cause analysis. For each hospital, the 7 latent variables were summed to give an overall measure of the patient safety status of the hospital.
Results Development and implementation of patient safety systems is at best modest. Self-reported regression in patient safety systems was also found. While 74% of hospitals reported full implementation of a written patient safety plan, nearly 9% reported no plan. The area of surgery appears to have the greatest level of patient safety systems. Other areas, such as medications, with a long history of efforts in patient safety and error prevention, showed improvements, but the percentage of hospitals with various safety systems was already high at baseline for many systems. Some findings are surprising, given the overall trends; for example, while a substantial percentage of hospitals have medication safety systems, only 34.1% reported full implementation at survey 2 of computerized physician order entry systems for medications, despite the growth of computer technology in general and in hospital billing systems in particular.
Conclusions The current status of hospital patient safety systems is not close to meeting IOM recommendations. Data are consistent with recent reports that patient safety system progress is slow and is a cause for great concern. Efforts for improvement must be accelerated.
The 1998 Institute of Medicine (IOM) National Roundtable on Health Care Quality and subsequent IOM reports ushered in a period of extensive research about the quality of the US health care system. The IOM reported that “serious and widespread problems occur in small and large communities alike, in all parts of the country, with approximately equal frequency in managed care and fee-for-service care.”1 In To Err Is Human, the IOM provided in-depth analyses of a wide range of patient safety problems and underscored the need for improvement.2 Subsequently, in Crossing the Quality Chasm, the IOM called for “fundamental change . . . to close the quality gap and save lives,” and proposed a national initiative to “provide a strategic direction for redesigning the health care system of the 21st century.”3,4 These documents indicate that successful implementation of change in the nation’s overall health care system requires change in specific patient safety systems at the hospital level.
In the current study, we conceptualize “systems” and “system problems” consistent with the philosophy and practical techniques that emerged in the 1990s as health care researchers and practitioners acknowledged the need for quality improvement and sought solutions to identified deficiencies. Many borrowed or adapted the framework used in the industrial sector, especially in the aviation and nuclear industries. The work of Perrow5 and Sagan,6 addressing “safety systems” in medicine, found followers in Galletly and Mushet,7 Chassin,8 Berwick,9 and the IOM National Roundtable1; ultimately, this approach influenced the IOM’s 2 quality-of-care reports and recommendations cited above. The “systems approach” is based on 2 fundamental premises described by Casarett and Helms10: “First, it assumes that the work environment can shape behavior, and can make certain kinds of errors more likely. . . . Second, a systems approach assumes that a certain number of errors are inevitable.” Given this background, we define patient safety systems as the various policies, procedures, technologies, services, and numerous interactions among them necessary for the proper functioning of hospital care. If implemented, these systems influence hospital environment, behavior, and actions; reduce the probability of error; and improve the probability of safety. The systems studied were selected through a comprehensive literature review and focus groups of hospital clinicians. The “inventory” nature of our study initially produced a comprehensive listing of variables, given the many technological and human components and interactions that comprise patient care. We then used factor analysis to identify from this inventory latent variables, constructs composed of a number of variables that in combination measure and summarize groupings of related data items. 
This recategorization simplifies (insofar as possible given the complexity of hospital care) the interpretation of our data and assists in describing clinically relevant concepts with a reasonable number of categories most important to patient safety.
We examine hospitals’ patient safety systems consistent with the framework of the IOM reports and the prevailing view of the patient safety literature,11 that “preventing errors and improving safety for patients require a systems approach in order to modify the conditions that contribute to errors. . . . The problem is not bad people; the problem is that the system needs to be made safer.”2 (Leape and Berwick recently referred to this as a “mantra in health care.”11) The report stresses a systems approach, using the definition first proposed by Cook12: “Safety is a characteristic of systems and not of their components. Safety is an emergent property of systems.” Leape and Berwick add, “In order for this property to arise, healthcare organizations must develop a systems orientation to patient safety, rather than one that finds and attaches blame to individuals.”2,11 Crossing the Quality Chasm’s recommendation 8.6, on “safety as a system property,” states that “patients should be safe from injury caused by the care system. Reducing risk and ensuring safety require greater attention to systems that help prevent and mitigate errors.”4
The Agency for Healthcare Research and Quality (AHRQ) responded to the IOM recommendations with an aggressive research agenda to generate new models and systems to improve care. As part of an AHRQ patient safety grant, Missouri and Utah collaborated in a study to examine hospital discharge data systems as a resource for patient safety improvement. Both states’ health departments have statutory authority to collect electronic individual discharge records from all licensed health care facilities and to disseminate analytical results to the facilities and the public. The administrative data are used to monitor population morbidity and health care issues such as access, quality, and cost. The 3-year window of the larger grant presented an ideal opportunity to study patient safety systems through a comprehensive inventory of safety systems and activities. Results were provided back to the hospitals for use as a benchmark, consistent with the continuous quality approach, to compare their own systems with the results of our study.
We surveyed hospitals at 2 points in time. This survey-resurvey approach provided an assessment of the state of the art of patient safety systems and an examination of changes from 2002 to 2004. While others have begun to discuss and examine changes since the IOM report,13-15 our study is, to our knowledge, the only in-depth analysis of hospitals’ safety systems over time,16 and begins to fill the evidence gap in understanding systematic changes in safety 5 years later.11
Development of our survey instrument followed generally accepted procedures, including an in-depth literature review, 6 focus groups conducted in Missouri and Utah, and instrument pretesting. Focus group participants identified a comprehensive list of patient safety systems that should be found in hospitals, especially in light of the IOM reports and subsequent national patient safety efforts, and provided input to assist in framing questions consistent with current concepts and terminology. Current reference material from the IOM and other nationally recognized groups on patient safety was made available to the focus groups so consensus could be reached on terminology and definitions. Focus group members included health care professionals responsible for patient safety, clinicians, and administrators, representing hospitals stratified by major characteristics such as location (rural vs nonrural) and organizational size. Focus group sessions were conducted consistent with the approach of Stewart and Shamdasani.17
A draft survey instrument was developed from issues identified by the focus groups. Our literature analysis indicates that the terminology and concepts, as well as the exact wording of our questions, are well established in the literature over a substantial time; each question is supported by 12 or more articles. Given our approach of identifying important concepts in the literature and then discussing these in depth with the focus groups, plus the fact that these concepts were identified in all focus groups, our questions have at least face validity and are reliable patient safety system markers.
Survey items with a dichotomous response indicated presence or absence of a particular characteristic. Ordinal response items used 7 levels, adapted from the validated Joint Commission on Accreditation of Healthcare Organizations (JCAHO) 7-point scale. The focus groups believed strongly that this approach took into account the steps, sequence, and reality of how hospital systems are developed. The 7 levels are as follows:
1. There has been no activity to initiate/create this policy characteristic.
2. There has been no activity to implement this policy characteristic.
3. This policy characteristic has been discussed for possible implementation but not implemented.
4. This policy characteristic has been partially implemented in some or all areas of the hospital.
5. This policy characteristic is fully implemented in some areas of the hospital.
6. This policy characteristic is fully implemented throughout the hospital.
7. This policy characteristic is fully implemented throughout the hospital and evaluated for effectiveness.
Items on the survey instrument were grouped into 5 areas of interest: plans, policies, and programs; leadership and environment; data and computerization; surgery; and medication. A few questions that go beyond our systems approach, such as questions regarding use of a “patient safety officer” and a designated patient safety budget, were added at the request of the focus groups because they represent important adjuncts or resources that many hospitals find important to ensure that systems are implemented. The literature also supports these.
Seven hospital representatives pretested and reviewed the instrument and suggested changes, resulting in 91 questions. The instrument and informed consent and survey protocol were approved by the University of Missouri Health Sciences Institutional Review Board. An invitation to participate in the Web-based survey was e-mailed to the chief executive officer of each acute care hospital in Missouri and Utah, with endorsement letters from each state’s health department and state hospital association. Chief executive officers were asked to have the survey completed by the individual most knowledgeable about the patient safety program. We do not know the extent to which these “official” reports reflect staff views of the actual situation in participating hospitals; this is a potential bias that other studies should examine.
We used descriptive statistics to examine the status of hospital patient safety systems at 2 points in time: survey 1 in June through September 2002 and survey 2 in September 2003 through March 2004. Then, given the vast array of variables, we created constructs, or latent variables, to better understand the overall and most important constructs that constitute patient safety systems. The latent variables were developed using factor analysis, which identifies the latent structure or dimensions of a set of variables and is useful with a large data set because it reduces a larger number of variables, especially where that number precludes modeling all the measures individually.18 The latent variables were used to answer 2 questions: (1) Did changes occur in the latent variables from survey 1 to survey 2? The Wilcoxon signed rank test was used for this analysis; and (2) What demographic variables contributed to change in the latent variables from survey 1 to survey 2? Multiple regression models were used to determine this, with the change in each of the latent variables used as the dependent variables. For all statistical analyses, SAS software, version 9.1.3 (SAS Institute Inc, Cary, NC) was used. P ≤.05 was considered statistically significant.
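The paired-change analysis described above can be sketched in Python. This is a normal-approximation version of the Wilcoxon signed rank test, not the SAS routine the authors used, and it omits tie and zero corrections; the variable names and scores are hypothetical:

```python
import math

def wilcoxon_signed_rank(before, after):
    """Wilcoxon signed rank test for paired scores (normal approximation,
    no tie/zero corrections; assumes at least one nonzero difference).
    Returns (W+, two-sided p value)."""
    # Drop zero differences, as the classical test does
    diffs = [a - b for b, a in zip(before, after) if a != b]
    n = len(diffs)
    # Rank |differences|, averaging ranks across ties
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_pos - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return w_pos, p
```

Applied to each hospital's latent-variable score at survey 1 and survey 2, a small p value would indicate a systematic shift in that safety construct across the cohort.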
Response rates were 76.8% (n = 126) for survey 1 and 78.0% (n = 128) for survey 2. For hospitals responding to both surveys (the “cohort” group), the response rate was 65.2%, with no difference between respondents and nonrespondents. Table 1 shows the distribution of bed size categories and hospital management type, which are similar to those of hospitals nationally. We chose these variables for comparison because they are considered a proxy for other important hospital characteristics in studies of hospital organizational issues.20-22 As such, they provide an indication of similarity between our study hospitals and hospitals nationally.23-28
Generalizability of the information is further supported if state differences are accounted for by urban-rural differences. To address this issue, regression models were estimated in which each of the 7-level questions serves as a dependent variable in a model and the potential predictor variables are state, urban-rural status, bed size, and affiliation with a multihospital system. For all but 1 of the 7-level questions, only 1 of the 2 variables (state or urban-rural) was a significant predictor. The Mantel-Haenszel test was used to investigate the dichotomous questions. After adjusting for possible urban-rural differences, there were state differences for only 3 of the 22 dichotomous questions. For almost all questions, state differences are accounted for by urban-rural differences. Thus, from 2 perspectives, we believe that hospitals in other states can view the combined data from these states as potential benchmarks for conducting an inventory of individual or groups of hospitals.
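The stratified adjustment described above can be illustrated with a short implementation of the Cochran-Mantel-Haenszel chi-square statistic. This is a sketch without continuity correction, not the authors' SAS code; here state membership would define the rows, a dichotomous survey response the columns, and urban-rural status the strata, with invented counts:

```python
import math

def cmh_test(tables):
    """Cochran-Mantel-Haenszel chi-square for a list of 2x2 strata,
    each given as ((a, b), (c, d)). Returns (statistic, p value);
    the statistic has 1 df. No continuity correction is applied."""
    num = var = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        row1, row2 = a + b, c + d
        col1, col2 = a + c, b + d
        num += a - row1 * col1 / n                      # observed - expected
        var += row1 * row2 * col1 * col2 / (n * n * (n - 1))
    stat = num * num / var
    p = math.erfc(math.sqrt(stat / 2))                  # chi-square sf, 1 df
    return stat, p

# Invented example: one 2x2 table per stratum
urban = ((30, 10), (20, 20))   # rows: state A / state B; cols: yes / no
rural = ((12, 8), (10, 10))
stat, p = cmh_test([urban, rural])
```

A large statistic (small p) would indicate a state difference that persists after stratifying on urban-rural status.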
Frequency distributions present all categorical responses, including dichotomous (Table 2) and 7-level questions (eTable 1). Responses to 7-level questions were collapsed into 3 levels before frequency distributions were calculated. Responses of 1 (no activity to initiate/create), 2 (no activity to implement), or 3 (policy discussed but not implemented) became new level 1, “no implementation.” Responses of 4 (partially implemented in some or all areas) or 5 (fully implemented in some areas) became new level 2, “partial implementation.” Responses of 6 (fully implemented throughout hospital) or 7 (fully implemented and evaluated) became new level 3, “full implementation” (eTable 1).
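The collapsing rule just described amounts to a simple recode. The sketch below is for illustration only (the function name is hypothetical, not the authors' code):

```python
def collapse_level(response: int) -> int:
    """Map a 1-7 implementation response to the collapsed scale:
    1 = no implementation, 2 = partial implementation, 3 = full implementation."""
    if response in (1, 2, 3):      # no activity, or discussed only
        return 1
    if response in (4, 5):         # partial, or full in only some areas
        return 2
    if response in (6, 7):         # full throughout hospital (+/- evaluated)
        return 3
    raise ValueError(f"response out of range: {response}")

# Example: collapse one hospital's responses to several 7-level items
responses = [1, 3, 4, 5, 6, 7]
print([collapse_level(r) for r in responses])  # [1, 1, 2, 2, 3, 3]
```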
In the pretest, participants were asked to delete any question that did not provide unique information necessary to obtain a complete, comprehensive profile of hospital patient safety systems. Thus, each of the 91 remaining questions was believed to provide important information. While this is a large number of variables, it is far less than the approximately 1200 variables collected in JCAHO accreditation surveys or the numerous variables collected by state health department licensure inspections, and it reflects the complexity and comprehensiveness of hospital patient safety. The focus groups and project advisory group felt strongly that our comprehensive inventory was necessary to provide useful patient safety benchmark data to hospitals.
Table 2 displays the frequency of hospitals responding as either having or not having a specific characteristic. eTable 1 displays the frequency of policy implementation based on the collapsed 7-level scale.
In examining hospitals’ patient safety plans, policies, and programs, we found that the majority of hospitals have a patient safety committee (95.1% and 96.1% in surveys 1 and 2, respectively) and conduct trend analyses on incidents (95.9% and 98.4%). Among hospitals that require root cause analysis to identify the underlying cause of problems following a near miss, nearly all (98.8% and 98.9%) reported that they ensure that actions are taken based on root cause analysis findings. Designated patient safety program budgets are not common (28.3% and 38.6%) (Table 2). A notable innovation is full implementation of “patient safety rounds,”29 which increased from 49.1% to 60.3% during the study period (eTable 1).
Although there were some declines in the percentage of hospitals with a given characteristic, the majority of items in the plans, policies, and programs area remained stable. The largest gains were in the percentage of hospitals showing a greater level of implementation of a written patient safety plan (from 55.0% to 74.4%) and use of standardized formats and methods to disseminate data (from 40.8% to 55.2%). Although 74.4% reported full implementation of a written patient safety plan, nearly 9% reported no plan. Thus, the data must be reviewed carefully: however high reported implementation of some systems may be, it must be asked why such a basic component of a safety system is not fully in place in all hospitals. And, despite repeated national calls for reductions in interns’ and other medical professionals’ work hours,30,31 responses in the leadership and environment area indicated problems in the work hours of professionals involved in the medication administration process.
Each survey showed considerable variation in aspects of hospital leadership and environment, with partial implementation of policies ranging from 6.5% to 33.6% at survey 2 and full implementation ranging from 33.6% to 86.8% at that same point in time (eTable 1).
Our data reflect the national focus on increased reporting of errors and near misses. eTable 1 shows an increase in the frequency of hospitals with full implementation of policies providing for voluntary reporting (from 60.9% to 69.9%), error reporting without fear of reprisal (from 63.9% to 77.6%), no demerits/points for making a medical error (from 73% to 86.8%), and thanks/praise for error detection/reports (from 23.1% to 33.6%).
Availability and use of timely, accurate data and computer technology are vital to the success of patient safety programs.32-34 Items in this area with the greatest percentage of full implementation were use of claims, compliments, complaints, and patient satisfaction data (72.5% and 78.4%) and assignment of billing data “e-codes” to reflect patient injury/adverse events (65.3% and 74.6%) (eTable 1). Computerized physician order entry (CPOE) systems are generally viewed as less prone to mistakes than written orders, especially when linked with laboratory data.35-39 However, a recent study suggests that CPOE may not be a panacea for error prevention.40 eTable 1 shows wide variation in levels of CPOE implementation, with laboratory CPOE most prevalent (69.4% full implementation at survey 2).
Five of 7 items related to surgical policies and procedures identified as vital by the literature and focus groups were implemented in the majority of hospitals (Table 2 and eTable 1; at survey 2). These include preanesthesia patient assessment and anesthesia plan (98.4%), all prediagnostic studies included in chart prior to surgery (97.6%), policy requiring the primary surgeon to verbally confirm the side for operation and mark the limb and/or site with a witness present (95.1%), policy requiring presurgical discussion of anesthesia options/risks with patient/family (94.3%), and assessing anesthesia adverse events/patterns (90.1%). Fewer (but still a majority at 75.9%) reported full policy implementation requiring each surgeon to obtain consent for multiple procedures conducted in 1 session. Identification of percentage of equipment failure vs surgical technical performance errors was found far less often (18.3%).
The compliance rate for survey items related to medication was 43% or higher for all 17 items (Table 2 and eTable 1). For all but 2 of these, the compliance rate was greater than 70%. Findings may reflect the great national attention paid to medication errors over the past 2 decades.37,41-47
To summarize information contained in questions addressing a possible common construct, standard factor analytic methods were used. This resulted in identification of 7 constructs and corresponding latent variables: CPOE systems, computerized test results, and assessments of adverse events; specific patient safety policies; use of data in patient safety programs; drug storage, administration, and safety procedures; manner of handling adverse event/error reporting; prevention policies; and root cause analysis. Specific items that compose the latent variables are listed in eTable 2.
Standard principal-components factor analysis with appropriate rotations was used to construct the latent variables. Items that loaded on a specific factor were summed to form the corresponding latent variable. Coefficient α values were computed for each latent variable to investigate the internal consistency of the items. Resulting α values are 0.89, 0.76, 0.82, 0.84, 0.69, 0.76, and 0.80 for the aforementioned factors, respectively, suggesting that internal consistency is quite good. eTable 2 shows that the specific variables making up a given latent variable appear to measure a common construct.
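Coefficient α itself is straightforward to compute from an item-score matrix. The sketch below is a generic implementation, not the authors' code, and the toy data are invented:

```python
def cronbach_alpha(items):
    """Coefficient alpha for k items scored across the same respondents.
    items: list of k equal-length lists, one list of scores per item."""
    k, n = len(items), len(items[0])

    def var(xs):
        # Sample variance with n - 1 denominator
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_var = sum(var(col) for col in items)
    # Total score per respondent, summing across items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum_item_var / var(totals))

# Two perfectly consistent items yield alpha = 1
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values near 1 indicate that the items composing a latent variable move together, supporting their summation into a single score, as done for the 7 latent variables here.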
These 7 latent variables were then used to determine if changes occurred between survey 1 and survey 2. Table 3 contains the means, medians, and standard deviations of the change variables. Since each of these changes is positive, we conclude that improvement occurred. All changes were significant with the exception of 2 latent variables: drug storage, administration, and safety procedures (the P value for this variable was nonsignificant at .11); and CPOE systems, computerized test results, and assessments of adverse events (P = .55).
In To Err Is Human, the IOM authors pose the question, “Must we wait another decade to be safer in our health system?” Much attention has been devoted to this vital topic. The professional literature is one indicator. Our extensive literature review from 1994-2004 identified 4836 related articles as of December 2003. In 1994, 240 articles were published, progressively increasing to a high of 853 in 2003 (complete 2004 listings were not yet available). The IOM report, issued at the midpoint of that decade, clearly catalyzed the proliferation of literature as well as hospitals’ advances in patient safety. Quality of care and patient safety knowledge has evolved, in many cases leading to improvements in structure, policies, and systems vital to patient safety, reported in this study and elsewhere, including the AHRQ’s recent National Healthcare Quality Report.13 We view medical error as primarily an organizational issue resulting from inadequate or nonexistent systems that evidence suggests would reduce the probability of errors; from this perspective, problems in care are largely the result of poorly organized care systems in which breakdowns occur in the transfer of complex information from physician orders to the patient bedside, with many clinicians and systems involved from start to completion. These systems must be addressed a priori to anticipate and avert problems before they cause patient harm.48 This view stands in marked contrast to past beliefs that “bad apples” are the cause of patient harm and poor quality.9
The 7 latent variables identified are well established in the IOM reports and other pertinent patient safety system literature as systems integral to a state-of-the-art patient safety program. While these variables and their terminology (eg, CPOE systems) are well known to most health care professionals working in hospitals, some terms, such as root cause analysis, may not be familiar to those who do not conduct specific patient safety system analyses on a regular basis. Root cause analysis, a technique developed in industries that take a systems approach, examines in detail medical errors in an attempt to find the real cause of the problem rather than simply continuing to deal with its symptoms, and to remove the root problem so the situation does not occur again.36,49-51 Given its success, it is not surprising that the factor analysis identified this technique, which, over time, will become routine terminology among health care professionals as they experience its beneficial effects.
Generally, the information contained in the latent variables suggests that some improvement in hospital patient safety systems has taken place between the 2 surveys. We identify many areas in which hospitals have developed and implemented vital patient safeguards over time. Data show improvements in 5 of the 7 latent variables studied, but the overall picture is mixed, and progress in general is at best modest. The combination of reviewing individual survey questions (Table 2 and eTable 1) and developing and testing the change in latent variables (Table 3 and eTable 2) gives a general profile of hospital patient safety systems. To understand specific aspects of these systems in terms of their prevalence and change, one may inspect the survey instrument’s individual variables. For example, the change in the latent variable of CPOE systems, computerized test results, and assessments of adverse events is not significant, yet eTable 1 shows that there are some very specific aspects of computerization that improved. Hospitals’ patient safety systems are stable overall, with improvements on some fronts. However, there are also hospital self-reports of decline in every area surveyed. For all items examined, a considerable number of hospitals did not have some important patient safety systems in place or even partially implemented.
The fact that some declines and absences were reported helps address the major limitation of this study, its self-reporting nature. Hospitals might be expected to respond to the survey so as to portray themselves in the most favorable light possible; that is, providing expected socially desirable responses. However, the data indicate, at least in general, that it is unlikely that the hospitals did so, particularly considering that they had access to their survey 1 responses for comparison when completing survey 2. It is possible that some hospital respondents may have been careless in their response to the second survey, but a telephone audit of hospitals that regressed indicated that, when regression occurred, it was the result of changes in hospital priorities, budgets, and patient safety system philosophies rather than instrument unreliability or carelessness. Our findings, though based on a very different data set, are consistent with key themes identified in the AHRQ’s quality report13: quality systems are improving, but such change takes time, progress is slow, and the gap between the best possible care and actual care remains large. This consistency provides further evidence that our study may be applicable to other hospitals nationally.
Our study is limited in that we report the presence or absence and level of development and implementation of patient safety systems, rather than actual problems in care. Given the complexity of hospital care, quality cannot be left to chance; fundamental systems must be in place that increase the potential for good quality. Our data are self-reports, and given the extensive nature of the study, we did not validate reported findings through either on-site review or requests for copies of supporting documents. Hospitals reported varying levels of system development and, in many cases, no systems. In some cases, hospitals reported in the second survey that they had eliminated a previously reported system. Finally, the sponsoring state health departments could, in theory, review the actual situation at a given hospital at licensure surveys. Consequently, we believe the results are reasonably reliable, especially in the trend identified.
Through implementation and improvements in policies and systems, hospitals may be safer today than when the IOM report was issued. However, work must be accelerated. For example, data in eTable 1 in many cases show improvements over time but in most cases it is modest; the percentages are largely in the single digits.
Response from within the health care system clearly has been slow. In part, this is because of the complexities involved in implementing systems and changing cultures; however, complexity can also be an excuse. Yet the complex area of surgery appears to have the greatest level of patient safety systems, while other areas, such as medications, which have a long history of patient safety and error-prevention efforts, show improvements from a baseline that was already high for many systems. Some findings are surprising, given overall trends; for example, while a substantial percentage of hospitals have medication safety systems, only 34% report full implementation at survey 2 of CPOE systems for medication. Given the growth of computer technology in general and in hospital billing systems in particular, it is disappointing to find such a high percentage of hospitals reporting no CPOE systems.
Based on our findings, we recommend that individual hospitals, including their boards of directors, medical staffs, administration, and staff, review the list of patient safety systems our expert focus groups identified as needed in all hospitals. They can conduct their own survey of where they stand with regard to development and implementation of each of these and report where they stand to the community. While the list may seem long, it is very manageable when viewed by individual hospital departments to which given system characteristics apply. We concur with the larger recommendations of others that nationally there must be a far more aggressive agenda.11 Furthermore, until those outside the system begin to play a new and more aggressive role, progress will continue at its slow pace. Patients must be made more knowledgeable and demanding of quality, in ways including but not limited to the use of coordinated national, regional, and local media campaigns and further development and dissemination of highly visible consumer guides and performance reporting systems.52,53 Communities must demand that hospital directors take their responsibility for quality seriously and make yearly reports to the public on progress in meeting the IOM recommendations. The public must demand that officials make patient safety a priority at local, state, and national levels. Such approaches will be uncomfortable to those inside the system, but such discomfort is a small matter compared with the devastating impact of even 1 fatal error on patients and their families. The road to hospital patient safety is long and complicated, and is an ongoing concern. As Florence Nightingale wrote, “It may seem a strange principle to enunciate as the very first requirement in a hospital that it should do the sick no harm.”54
Corresponding Author: Daniel R. Longo, OblSB, ScD, Department of Family and Community Medicine, University of Missouri–Columbia, MA306 Medical Sciences Bldg, Columbia, MO 65212 (firstname.lastname@example.org).
Author Contributions: Dr Longo had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Longo, Hewett.
Acquisition of data: Longo, Schubert.
Analysis and interpretation of data: Longo, Hewett, Ge.
Drafting of the manuscript: Longo, Hewett, Ge, Schubert.
Critical revision of the manuscript for important intellectual content: Longo, Hewett, Schubert.
Statistical analysis: Longo, Hewett, Ge.
Obtained funding: Longo.
Administrative, technical, or material support: Longo, Hewett, Ge, Schubert.
Study supervision: Longo.
Financial Disclosures: None reported.
Other Resources: eTable 1 and eTable 2 are available.
Funding/Support: Funding for this study was provided by the Agency for Healthcare Research and Quality grant 5 U18 HS011885 and through subcontracts with the Utah Department of Health (contract 026429) and the Missouri Department of Health and Senior Services (contract AOC02380132).
Role of the Sponsor: The funding organizations are public institutions and had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, and approval of the manuscript. The Utah and Missouri health departments provided practical support for the focus group and survey processes, including letters of endorsement, hospital contact information, and assistance with logistic arrangements for focus group sessions.
Acknowledgment: We appreciate the Utah Hospital Association and Missouri Hospital Association for their support and cooperation.
This article was corrected on 12/15/2005, prior to publication of the correction in print.