Barker KN, Flynn EA, Pepper GA, Bates DW, Mikeal RL. Medication Errors Observed in 36 Health Care Facilities. Arch Intern Med. 2002;162(16):1897-1903. doi:10.1001/archinte.162.16.1897
Copyright 2002 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
Context
Medication errors are a national concern.
Objective
To identify the prevalence of medication errors (doses administered differently than ordered).
Design
A prospective cohort study.
Setting
Hospitals accredited by the Joint Commission on Accreditation of Healthcare Organizations, nonaccredited hospitals, and skilled nursing facilities in Georgia and Colorado.
Participants
A stratified random sample of 36 institutions. Twenty-six declined, with random replacement. Medication doses given (or omitted) during at least 1 medication pass during a 1- to 4-day period by nurses on high medication–volume nursing units. The target sample was 50 day-shift doses per nursing unit or until all doses for that medication pass were administered.
Medication errors were witnessed by observation, and verified by a research pharmacist (E.A.F.). Clinical significance was judged by an expert panel of physicians.
Main Outcome Measure
Medication errors reaching patients.
Results
In the 36 institutions, 19% of the doses (605/3216) were in error. The most frequent errors by category were wrong time (43%), omission (30%), wrong dose (17%), and unauthorized drug (4%). Seven percent of the errors were judged potential adverse drug events. There was no significant difference between error rates in the 3 settings (P = .82) or by size (P = .39). Error rates were higher in Colorado than in Georgia (P = .04).
Conclusions
Medication errors were common (nearly 1 of every 5 doses in the typical hospital and skilled nursing facility). The percentage of errors rated potentially harmful was 7%, or more than 40 per day in a typical 300-patient facility. The problem of defective medication administration systems, although varied, is widespread.
THE 1999 Institute of Medicine report1 on the quality of care, entitled To Err Is Human: Building a Safer Health System, has drawn national attention to the occurrence, clinical consequences, and cost of adverse drug events (ADEs) in hospitals. The report calls for more systematic approaches to the prevention of injuries due to medical care. Many of these ADEs are viewed as originating from systems problems (ie, problems with the processes of the medication use system). We divide those processes into (1) prescribing and (2) delivery and administration. The focus of this article is on the latter.
Leape and associates2 studied ADEs involving medications using methods that included solicited self-report and daily medical record review by clinical nurse researchers. They found that 56% of the events they detected were due to prescribing errors and 44% involved administration. Obviously, drug therapy cannot be successful unless prescribing and delivery and administration are performed correctly.
A key variable in assessing the medication system in health care facilities is whether the patient receives the prescribed medication. A medication error was defined for this study as a discrepancy between the dose ordered and the dose received. This definition takes a systems view of medication error, because the focus is on the system outcome rather than on the actions of individual health care workers. Medication error is operationalized as an easily understood rate that is simply calculated: (doses in error/total doses given or omitted) × 100. This measure of medication error rate has been extensively used to test hypotheses about system improvements. For example, it was used to evaluate the impact of the unit dose system that was ultimately adopted by 90% of US hospitals.3-6
This report is part of a study to seek the best method for detecting and counting the frequency of medication errors in US hospitals and skilled nursing facilities, comparing validity with cost-effectiveness. This article reports the errors verified by a research pharmacist (E.A.F.) using the observation method data as a gold standard in a study comparing different methods and data collectors. Observation was superior to medical record review and to the examination of incident reports.7
Observation uses as the primary outcome measure the percentage of doses ordered that are in error when administered to the patient (or omitted). Uses of this measure have included benchmarking to help hospitals test and evaluate new systems (eg, unit dose) in 40 studies, comparing with "best practice" hospitals, evaluating expensive interventions (eg, automated pharmacy systems) before and after installation, and enforcing governmental standards and regulations.8,9
Because this measure may vary by accreditation status, whether the site is an acute-care or skilled nursing facility, or geographic location, we performed a study to assess the medication error rate in various hospitals and skilled nursing facilities in 2 states.
The areas from which the samples of each type of facility were drawn were the Atlanta, Ga, metropolitan statistical area and the Denver-Boulder-Greeley, Colo, consolidated metropolitan statistical area, using lists provided by the Health Care Financing Administration. Data provided for each facility included address, telephone number, accreditation status, and bed size. From these lists, 18 facilities were randomly selected for each of 3 facility types in each state: 6 accredited hospitals, 6 nonaccredited hospitals, and 6 skilled nursing facilities, for a total of 36 sites. Facilities were invited to participate via letter and telephone. When a facility declined, the hospital or skilled nursing facility in the same positional order in the next random sample was contacted in turn until enough facilities of that category agreed to participate. Facilities were required to have an incident report system in place (and all approached did). A minimum bed size requirement of 24 was established after it was found that nonaccredited hospitals with fewer beds often had too few patients; 6 such hospitals had to be excluded.
Based on previous experience, a sample size of 50 doses per nursing unit was chosen as large enough to obtain an adequate measure of an observation-based error rate for each of the 36 facilities. The doses were those occurring during a medication pass on a nursing unit identified as high volume by an official of the facility. Up to 4 different nursing units were included if available at each site, so that 200 doses per facility could have been observed.
The nonaccredited hospitals presented special sampling problems. Five achieved accreditation status during the study period (7 months). The judgment was made that the data from these hospitals should be analyzed as nonaccredited and then as accredited.
Another problem was the small number of doses per day in some of the nonaccredited facilities. Nonaccredited hospitals accounted for 21% of all acute-care hospitals in the United States in 1998, with a mean bed size of 67 (median, 44).10 In the sample, the mean bed size was 48, compared with 268 for the accredited hospitals. As a consequence, the research team (E.A.F. and G.A.P.), arriving on the previously negotiated day, sometimes encountered fewer than 50 doses available for study, and sometimes none for several days, because of unanticipated changes in the census. In contrast, the larger accredited hospitals and skilled nursing facilities offered many more than 50 doses per day for study, at minimal incremental cost. These additional data were collected to achieve a better description of the error rate in that facility type.
Two registered nurses, 2 licensed practical nurses, and 2 pharmacy technicians per state were sought from the general population by placing advertisements in the newspapers and on the Internet in Denver and Atlanta. Only 1 pharmacy technician was hired in Colorado because of a lack of qualified applicants. Applicants took a qualifying test to determine their base knowledge of medication and administration techniques.
Training in the observation technique required 20 hours and included classroom lectures, an interactive videotape program, practice observations on a nursing unit, and 2 examinations. Additional practice observations were performed after training. One registered nurse in Georgia withdrew after training for personal reasons.
The final examination included a paper test of the observer's ability to detect errors when provided with a typed list of drugs administered and a typed set of drug orders for the patients involved. A test set of 49 doses, which included 27 errors and 22 nonerrors, was constructed. The frequency of each error type was proportional to the occurrence of error categories in 12 previous observation-based studies (wrong time, 16; omissions, 8; wrong dose, 2; and unauthorized drug, 1). The scores on the examination served as the basis for interrater and intrarater reliability assessments. The percentage agreement on each question was used to calculate the interrater reliability score. A repeated-measures analysis of variance was performed on the split-halves test scores to determine intrarater reliability.
Direct observation was used to detect medication errors, based on the method of Barker and McConnell.11 An orientation to each site was provided by facility personnel before observations started. On each day of observation, the observer arrived on the nursing unit in time to attend the change-of-shift report, to meet the staff and allow nurses to ask questions about the study. An information sheet approved by the Auburn University Institutional Review Board was provided to the nurse subjects. The observer witnessed the preparation and administration of 50 doses by the first nurse encountered plus a second nurse if necessary. The period for the observation was 2 hours, or until all doses due were administered. The observer wrote down exactly what the subject did, including all details about the medication, and witnessed the administration to the patient. Data recorded included patient names (which were later coded), drug product, amount of drug, dose form, route of administration, time of administration, and medication-related procedures (such as measuring the patient's heart rate or giving with food). After the medication pass, the observer and research pharmacist made their own independent copies of the original medication orders for patients involved in the observation. Each dose observed was compared with what the prescriber ordered. If there was a difference, the error was described and categorized. After comparing all doses witnessed, the observer determined if any other drugs should have been given at the time of the observation based on what the prescriber ordered. If any were identified, they were recorded as omission errors unless a valid reason was discovered. Doses given based on orders judged difficult to interpret were excluded from the study (0.2% of the orders were deemed uninterpretable). The medication error rate was calculated as follows: [(number of errors, with no more than 1 error per dose)/(number of doses given + number of omissions)] × 100.
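The error rate formula above can be expressed as a short calculation. The function below is an illustrative sketch, not one of the study's instruments; it uses the study's overall totals (605 errors among 3216 opportunities), and the split between doses given and omissions shown in the example is hypothetical.

```python
def medication_error_rate(errors: int, doses_given: int, omissions: int) -> float:
    """Percentage of opportunities for error (doses given + omissions)
    that were in error, counting no more than 1 error per dose."""
    opportunities = doses_given + omissions
    return errors / opportunities * 100

# Overall totals reported in this study: 605 errors among 3216 opportunities.
# The split into 3023 doses given and 193 omissions is illustrative only.
rate = medication_error_rate(errors=605, doses_given=3023, omissions=193)
print(f"{rate:.1f}%")  # 18.8%, reported as 19%
```

Counting omissions in the denominator is what makes each scheduled dose, whether given or missed, a single "opportunity for error."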
After the observer finished the error determination, all data were turned over to a research pharmacist. The researcher made a blinded independent determination of errors by comparing each dose on the observer's drug pass worksheet with the pharmacist's copy of the prescriber's orders (correcting 210 false negatives and 87 false positives). The research pharmacist for the Colorado area (E.A.F.) reviewed the data collected at the Georgia sites to address inconsistencies. Only doses confirmed as in error or not in error by the research pharmacist are reported herein.
A medication error was defined in general as a dose administered differently than as ordered on the patient's medical record. Such medication errors were viewed as system defects (ie, outcomes different from those the system was designed to deliver and administer to the patients). Categories of medication errors were defined as follows.
1. Unauthorized drug: the administration of a dose of medication that had never been ordered for that patient.
2. Extra dose: any dose given in excess of the total number of times ordered by the physician, such as a dose given based on an expired order, after a drug had been discontinued, or after a drug had been put on hold.
3. Wrong dose: any dose of preformed dosage units (such as tablets) that contained the wrong strength or number; if an injectable product, then any dose that was ±10% or more different from the correct dosage; if any other dosage form, then any dose that was ±17% or more of the correct dose in the judgment of the observer. In judging dosage, measuring devices and graduations were those provided for routine use by the institution: graduations on the syringe for injections, graduations on medicine cups for oral liquids, and drops for the dropper provided. Wrong dose errors were counted for ointments, topical solutions, and similar medications only when the dose was specified quantitatively by the prescriber (eg, in inches of ointment).
4. Omission: failure to give an ordered dose. If no attempt was made to administer the dose, an omission error was counted. If the patient refused the medication, an opportunity for error was not counted provided the nurse responsible for administering the dose tried to give it. Doses withheld according to policies calling for the withholding of medication doses, such as nothing by mouth before surgery, were not counted as errors or opportunities for errors. Omissions were detected by comparing the medications administered at a given time with doses that should have been given at that time based on the physician's written order and protocols.
5. Wrong route: medication administered to a patient using a different route than ordered (eg, oral administration of a drug ordered intramuscularly). Included in this category were doses given in the wrong site, such as the right eye instead of the left eye.
6. Wrong form: the administration of a dose in a different form than ordered by the physician. If enteric-coated aspirin was ordered, but plain aspirin was administered, a wrong form error was counted.
7. Wrong technique: exclusion, or incorrect performance, of a procedure ordered by the prescriber immediately before administration of each dose of medication. Examples include lack of heart rate or blood pressure measurement before giving a dose.
8. Wrong time: administration of a dose more than 60 minutes before or after the scheduled administration time. A 30-minute window was used for medications that were ordered before, with, or after a meal. Routine administration times were obtained from each site, and times assigned on the medication administration record were used when no other policy was available.
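The wrong dose tolerances in item 3 above amount to a simple threshold rule. The function below is a hypothetical sketch of that rule, not the observers' actual instrument; treating preformed units (such as tablets) as requiring an exact match is an assumption drawn from the definition's wording.

```python
def is_wrong_dose(amount_given: float, amount_ordered: float, dosage_form: str) -> bool:
    """Apply the study's wrong-dose tolerances: preformed units (e.g., tablets)
    must match exactly; injectables are in error at >=10% deviation; other
    measured forms at >=17% deviation (per the observer's judgment)."""
    if dosage_form == "preformed":
        return amount_given != amount_ordered
    tolerance = 0.10 if dosage_form == "injectable" else 0.17
    deviation = abs(amount_given - amount_ordered) / amount_ordered
    return deviation >= tolerance

is_wrong_dose(9.2, 10.0, "injectable")   # 8% deviation, within +/-10%: not an error
is_wrong_dose(8.0, 10.0, "oral liquid")  # 20% deviation, exceeds +/-17%: an error
```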
Each dose observed to be given or omitted was operationally defined to be a dose (ie, opportunity for error), and is the basic unit of data. Any dose could be only in error or not in error. Doses included only those for which the preparation and administration of the medication were witnessed by an observer or that the observer was certain were not administered (ie, omitted). Doses labeled by the pharmaceutical manufacturers were assumed to be correct.
The overall medication error rates (with and without wrong time errors) for each site were compared between states, facility types, accreditation status, and facility size categories using an analysis of variance. The Tukey test was used to determine the means between which significant differences existed in the comparison of facility types. Computer software (SAS statistical software for Windows, version 6.12; SAS Institute Inc, Cary, NC) was used. The α level was set at .05.
A potential ADE was defined in general as a medication error that had the potential to cause a patient discomfort or jeopardize the patient's health and safety. In this study, the operational definition was the expert judgment (and majority decision) of a 3-physician advisory panel, each experienced in making such judgments, who evaluated the same descriptive information for each medication error detected. Health Care Financing Administration guidelines (available from the authors) for judging significance were provided to the panel.
The information sent to the physicians' panel (at Brigham and Women's Hospital) included a description of each individual error, the drug involved, and the error category. The information excluded the data collection method used and the data collector type so as to blind the panel to these factors. Information sent about each patient's condition included sex, age, allergies, disease states, selected laboratory data if associated with a medication, red flag drugs ordered, and physician or nurse progress notes when deemed noteworthy by the research pharmacist reviewing the patient's medical record.
An institutional review board application was submitted and approved by Auburn University and the Colorado Multiple Institutional Review Board.
Overall percentage agreement on the Drug Pass Examination was 96%, with the range of agreement on each individual question between 89% and 100%, indicating interrater reliability. The result of the repeated-measures analysis of variance test found that the split-halves test scores were not significantly different within subjects, indicating intrarater reliability (F1,8 = 2.26, P = .0541).
The mean error rate detected in the 36 sites in Atlanta and Denver was 19% (605 of 3216 doses). Excluding wrong time errors, the error rate was 10%. The range was 0% to 67%, with a 95% confidence interval of ±4.5%. All data were collected during 81 observation days from May 4 to November 11, 1999. The error rates by category (Table 1) demonstrate that the most frequent errors were wrong time (8%), omission (6%), and wrong dose (3%); as a percentage of all errors, the results included wrong time (43%), omission (30%), wrong dose (17%), and unauthorized drug (4%).
The distribution of error rates by error category was similar between accredited and nonaccredited hospitals and skilled nursing facilities (Table 1). When rate by site was compared (Table 2), however, substantial variation between sites was found, with error rates ranging from 0% at one site to 66.7% at another. To help maintain anonymity of the sites, size was categorized as large (>100) and small (≤100) based on number of certified beds.
There was no significant difference in error rates by type or size of facility. The statistics comparing the 3 types of sites were as follows: F2,33 = 0.20, P = .82; and excluding wrong time errors, F2,33 = 0.39, P = .68. The 17 large sites had a mean ± 95% confidence interval error rate of 16.5% ± 5.8% (excluding wrong time errors, 9.7% ± 2.9%), and the 19 small sites had a mean ± 95% confidence interval error rate of 20.5% ± 6.9% (excluding wrong time errors, 10.2% ± 3.0%), resulting in F1,34 = 0.75, P = .39 (excluding wrong time errors: F1,34 = 0.06, P = .80).
The one teaching hospital had a low error rate of 4.7% (8.2% including wrong time errors). The error rates for the accredited and nonaccredited facilities are shown by error category in Table 1. There was no significant difference in error rates by accreditation status, with (F1,22 = 0.30, P = .59) or without (F1,22 = 0, P = .95) wrong time errors. This also proved true when those 5 in transition were treated as nonaccredited (F1,22 = 0.05, P = .82) (excluding wrong time errors: F1,22 = 0, P = .96).
The mean ± 95% confidence interval error rate in the Colorado sites (Table 3), 23.4% ± 7.6% (excluding wrong time errors, 13.0% ± 3.1%), was significantly greater than that in the Georgia sites, 13.8% ± 4.0% (excluding wrong time errors, 7.0% ± 1.9%) (including wrong time errors: F1,34 = 4.75, P = .04; excluding them: F1,34 = 10.25, P = .003).
The 3-physician panel rated 7% of the errors detected (48 of 675 errors assessed) as potential ADEs. When wrong time errors are excluded, 10% of the errors were considered potential ADEs (45 of 448 errors). Table 4 lists examples of those errors rated as potential ADEs. Table 5 shows the potential clinical significance for each error type.
The results show that medication errors were common, occurring in 19% or nearly 1 of every 5 doses in the typical site. Assuming 10 doses per patient day, this would mean the typical patient was subject to about 2 errors every day. There was substantial variation by site and region, however; therefore, the results can only be described for the sample observed.
A panel of 3 physicians, experienced with such judgments, rated 7% of these errors as potential ADEs (10% if wrong time errors are excluded). This is comparable to the 8% of all errors found in the teaching hospital study by Bates et al.12 (The methods and definitions used, although not identical, were similar.) For 300 inpatients, assuming 10 doses per patient on 1 day, this would be almost 40 potential ADEs per day in that facility. Many drugs could have the potential for harm in some patients, but were judged safe herein because these particular patients were not susceptible. For example, enteric-coated aspirin, 325 mg, was administered to a patient without an order. This unauthorized drug error was rated as not significant by the 3-physician panel. However, if the patient were also receiving warfarin sodium therapy, this could have been a clinically significant error. When pharmacists in other studies were asked to judge the potential for harm from drugs involved in errors based on their pharmacological class alone, they judged 67% of the doses as threatening harm.13-15 Systems should be designed to eliminate threats to patients for the full range of clinical conditions that might be encountered.
Statistically, accreditation by the Joint Commission on Accreditation of Healthcare Organizations was irrelevant for differentiating the hospitals by error rate. The error rates, excluding wrong time errors, ranged from 0% (in 2 hospitals gaining accreditation during the study period) to 26.2% (also an accredited hospital). The Joint Commission on Accreditation of Healthcare Organizations has identified medication errors as one of the most frequent sentinel events.16
It is unclear why the error rates for the 3 types of sites were significantly higher in Colorado than in Georgia. The possibility that the difference was in part due to a difference in the skills of the observers was investigated, but no evidence of this was found. (All observers were checked by the same pharmacist.)
In general, the prescribers in the typical facility faced the reality that almost 1 in every 5 doses they ordered (605/3216) would be given in error, 30% of which would be omissions—the most common error type after wrong time errors. However, the rates across facilities differed widely.
The 36 institutions studied were selected at random (or via random replacement) from 2 metropolitan statistical areas and were limited to those agreeing to be studied. Some of the 26 institutions that declined remarked, in effect, that they might have poor scores and wanted to improve their performance first. Two institutions were prevented from participating as a matter of corporate policy. Two were planning to close. Most did not give reasons. Thus, the error rates reported likely represent a lower bound.
The doses selected for examination were a convenience sample of a medication pass from a nursing unit identified as high volume. The typical medication pass does not include contrast media, respiratory therapy, or most chemotherapy. The number of doses examined was less for the nonaccredited hospitals, because of the difficulty in anticipating medication workloads in these typically small hospitals.
The possibility of an effect of the presence of an observer on the subjects observed is always a concern, but it is not a severe problem when the subjects are observed doing an activity familiar to them, such as their regular jobs, and when the observer is trained to be unobtrusive and nonjudgmental.13,17-20 It is possible that some apparent errors were actually prescribing errors that pharmacists or nurses had detected and therefore deliberately did not carry out. However, there was no evidence to support this.
Medication errors were frequent, occurring at a rate of nearly 1 of every 5 doses in the typical hospital and skilled nursing facility. The percentage of errors rated potentially harmful was 7%, or more than 40 per day per 300 inpatients, on the average. Accreditation by the Joint Commission on Accreditation of Healthcare Organizations was not associated with significantly lower error rates. Error rates were higher in Colorado than in Georgia. Substantial variations in error rates by facility were identified. If the rates detected are durable over time, it should be possible to identify organizations that deserve closer study.
The error rates are likely to be understated because of the large proportion of facilities that declined to participate. This evidence of a high rate of medication errors in many of the institutions in the sample supports the implications of the Institute of Medicine report that the medication delivery and administration systems of the nation's hospitals and skilled nursing facilities have major systems problems. These results are especially valuable because they provide data from primarily nonteaching sites, complementing data from large teaching hospitals, and examine the association of accreditation with error rates.
Accepted for publication February 13, 2002.
This study was supported by grant 500-96-P605 from the Alabama Quality Assurance Foundation, Birmingham.
We thank Robert M. Cisneros, RPh, MS, for his valuable assistance at 16 sites in Georgia. We appreciate the input and advice of Samuel W. Kidder, PharmD, MPH, pharmacy consultant at Health Care Financing Administration. We thank Linda A. Pfaff, RN, MS, coordinator for operations in Georgia, for her valuable assistance. We also thank Helen Deere-Powell, RPh; Lucian L. Leape, MD; Loriann E. DeMartini, PharmD; G. Neil Libby, PhD, RPh; Richard Shannon, RPh; Robert E. Pearson, RPh, MS; Tejal Gandhi, MD; Rainu Kaushal, MD; and Jeffrey Rothschild, MD, for the various roles they played in the preparation of the manuscript.