The timing of ED discharges among the 304 980 patients ultimately diagnosed as having acute myocardial infarction. The height of each vertical bar represents the observed number of daily ED discharges, and the regression line represents the number expected. The logistic regression was fitted using the unshaded portion of the graph (ie, days 365-46). The number of unrecognized diagnostic opportunities is represented as the net difference between the number of ED discharges observed (vertical bars) and the number expected (regression line) during the 45 days preceding the index hospital admission (shaded area). The eFigure in the Supplement provides similar illustrations for the other 4 conditions.
Shown are longitudinal trends. Yearly point estimates and 95% CIs are calculated in the same manner as for the results presented in Table 2 but using the subset of patients whose index hospital admission began during the calendar year.
eTable 1. ICD-9-CM Codes Used to Define the Index Hospital Admission
eTable 2. Effect of Excluding Recent Hospitalizations
eTable 3. Effect of Excluding Discharges After Observation Admissions
eTable 4. Effect of Varying Duration of “Pre-admission Window”
eTable 5. Discharge Diagnoses Associated With Unrecognized Emergencies
eFigure. Distribution of ED Visits
Waxman DA, Kanzaria HK, Schriger DL. Unrecognized Cardiovascular Emergencies Among Medicare Patients. JAMA Intern Med. 2018;178(4):477–484. doi:10.1001/jamainternmed.2017.8628
Of emergency department visits attributable to imminent ruptured abdominal aortic aneurysm, acute myocardial infarction, stroke, aortic dissection, and subarachnoid hemorrhage, what proportions end in discharge home without diagnosis?
In this cohort study of Medicare claims, the proportion of missed opportunities to diagnose these conditions in the emergency department ranged from 2.3% (ruptured abdominal aortic aneurysm) to 4.5% (aortic dissection). We found no evidence for improvement across the 2007 to 2014 study time frame.
Among Medicare patients, opportunities to diagnose these conditions in the emergency department are missed infrequently, but further improvement may prove difficult.
The Institute of Medicine described diagnostic error as the next frontier in patient safety and highlighted a critical need for better measurement tools.
To estimate the proportions of emergency department (ED) visits attributable to symptoms of imminent ruptured abdominal aortic aneurysm (AAA), acute myocardial infarction (AMI), stroke, aortic dissection, and subarachnoid hemorrhage (SAH) that end in discharge without diagnosis; to evaluate longitudinal trends; and to identify patient characteristics independently associated with missed diagnostic opportunities.
Design, Setting, and Participants
This was a retrospective cohort study of all Medicare claims for 2006 to 2014. The setting was hospital EDs in the United States. Participants included all fee-for-service Medicare patients admitted to the hospital during 2007 to 2014 for the conditions of interest. Hospice enrollees and patients with recent skilled nursing facility stays were excluded.
Main Outcomes and Measures
The proportion of potential diagnostic opportunities missed in the ED was estimated using the difference between observed and expected ED discharges within 45 days of the index hospital admissions as the numerator, basing expected discharges on ED use by the same patients in earlier months. The denominator was estimated as the number of recognized emergencies (index hospital admissions) plus unrecognized emergencies (excess discharges).
There were 1 561 940 patients, including 17 963 hospitalized for ruptured AAA, 304 980 for AMI, 1 181 648 for stroke, 19 675 for aortic dissection, and 37 674 for SAH. The mean (SD) age was 77.9 (10.3) years; 8.9% were younger than 65 years, and 54.1% were female. The proportions of diagnostic opportunities missed in the ED were as follows: ruptured AAA (3.4%; 95% CI, 2.9%-4.0%), AMI (2.3%; 95% CI, 2.1%-2.4%), stroke (4.1%; 95% CI, 4.0%-4.2%), aortic dissection (4.5%; 95% CI, 3.9%-5.1%), and SAH (3.5%; 95% CI, 3.1%-3.9%). Longitudinal trends were either nonsignificant (AMI and aortic dissection) or increasing (ruptured AAA, stroke, and SAH). Patient characteristics associated with unrecognized emergencies included age younger than 65 years, dual eligibility for Medicare and Medicaid coverage, female sex, and each of the following chronic conditions: end-stage renal disease, dementia, depression, diabetes, cerebrovascular disease, hypertension, coronary artery disease, and chronic obstructive pulmonary disease.
Conclusions and Relevance
Among Medicare patients, opportunities to diagnose ruptured AAA, AMI, stroke, aortic dissection, and SAH are missed in less than 1 in 20 ED presentations. Further improvement may prove difficult.
Accurate and timely diagnosis is a cornerstone of high-quality care. A report by the Institute of Medicine (IOM), Improving Diagnosis in Health Care, calls diagnostic error an underappreciated source of patient harm and “a blind spot” in the patient safety movement.1 The report cites the inherent difficulty in measuring the incidence of diagnostic error as an impediment to improvement and states that developing measurement tools should be an urgent research priority.2 As motivational examples, the report presents several vignettes of patients who are seen at the emergency department (ED) with symptoms of acute, life-threatening cardiovascular emergencies.1
The IOM committee suggests that to quantify diagnostic error “one would need an estimate of the number of opportunities to make a diagnosis each year (denominator) and the number of times the diagnosis (health problem) is not made in an accurate and timely manner or is not communicated to the patient.”1(p7) Although the word error carries a linguistic implication of fault, the committee distinguished between failure of the diagnostic process and the outcome of that process, choosing an outcome-based definition of error as more patient centric. They note that one does not necessarily imply the other. For example, a radiograph misread by a radiologist but correctly overread by the ordering physician would be a near miss but not a diagnostic error. Conversely, if a patient seen with an aortic dissection is discharged, a diagnostic error occurred, no matter how nonspecific the clinical presentation.
Defined this way, the frequency of diagnostic error can only be measured retrospectively. If the goal is to identify specific instances of diagnostic error, the presence of the disease during the encounters in question would need to be adjudicated after medical record review. However, to estimate the proportion of diagnostic opportunities missed, we suggest that a statistical approach may offer a more accurate, reproducible, and feasible alternative.
In this study, we consider the ED diagnosis of the following 5 life-threatening emergencies: ruptured abdominal aortic aneurysm (AAA), acute myocardial infarction (AMI), stroke, aortic dissection, and subarachnoid hemorrhage (SAH). Adapting the IOM’s formulation, we measure the proportion of patients discharged home among those seen at the ED with symptoms retrospectively attributable to the underlying acute cardiovascular pathology.
We defined the study population as all fee-for-service (FFS) Medicare patients newly diagnosed as having 1 of the 5 conditions of interest during a hospitalization that began in the ED during calendar years 2007 to 2014. The index conditions were defined according to the coded principal discharge diagnosis (eTable 1 in the Supplement). To ensure that the pathology was acute, that all preceding ED visits were captured, and that patients were not knowingly discharged home despite the acute condition, index cases were excluded for the following reasons: (1) lack of continuous FFS Medicare enrollment in the 12 months preceding the index hospital admission, (2) previous enrollment in hospice, (3) a claim for a skilled nursing facility stay within the preceding 30 days, (4) an ED visit within the preceding 365 days where the index condition was listed as a diagnosis, and (5) the absence of ED-specific charges (Medicare Provider Analysis and Review [MedPAR] file) or of a matching outpatient ED claim (outpatient file) indicating that admission occurred via an ED.3 The data sources were Medicare standard analytic files (100%) for the calendar years 2006 to 2014 (allowing for a 1-year look-back period).
The study was approved by RAND’s institutional review board. Informed consent was not applicable.
We performed separate analyses for each of the 5 conditions. Starting with the index hospital admission, we look back in time to identify all ED visits in the preceding year. We assume that the acute pathology giving rise to the index condition (eg, rupture of a coronary plaque preceding AMI) begins in the hours, days, or weeks before the index hospital admission. At some point after the acute pathology appears, patients are seen at the ED for symptoms that are in some way related (albeit sometimes nonspecific) and either are admitted to the hospital (or transferred) and diagnosed or are discharged with an unrecognized emergency. Those who are discharged return to the ED one or more times and are eventually admitted.
We treat the presence of the underlying acute pathology at ED discharge as unobservable in the data. Patients may coincidentally be seen at the ED for problems unrelated to the cardiovascular emergency at any time before the index hospital admission. We specify that the underlying pathology can start no more than 45 days before the index hospital admission (based on preliminary inspection of the data) and demonstrate in a sensitivity analysis that results are robust to choices of shorter or longer durations. We estimated the number of unrecognized emergencies in each population as the number of “excess ED discharges” within those 45 days. Excess discharges are defined as the difference between the number of ED discharges observed and the number expected, where the expected number is based on the mean number of daily ED discharges for the same patients earlier in the year. Logistic regression was used to adjust for a slightly increasing rate with each passing day owing to age and a survivorship bias (eg, an aggressive cancer is less likely to appear earlier in the year because we impose the constraint that all patients must survive to the index hospital admission).
The primary study outcome (for each condition) is formulated as the false-negative rate—the proportion of ED visits where the diagnosis was not made (ie, the patient was discharged)—among all visits where the acute pathology was present. The numerator (unrecognized emergencies) is estimated as the number of excess ED discharges. Because each index patient has only one ED visit leading to correct diagnosis, we estimated the denominator as the sum of index hospital admissions (recognized emergencies) plus excess discharges (unrecognized emergencies).
We aggregated data across all available years. To evaluate longitudinal trends, we also computed yearly estimates according to the calendar year of the index hospital admission. To calculate the expected number of discharges in the 45 days preceding the index hospital admission, we expanded the data to 365 observations for each patient. We fit a logistic regression model of whether an individual had an ED discharge on any given day as a function of the number of days to the index hospital admission, using observations that occurred 365 to 46 days before the index visit to fit the model. We calculated expected discharges as the aggregate of predicted probabilities for the 45 days previously excluded. Because each patient’s observations were correlated in time, model-based SEs were not valid; we used only point estimates from the model and obtained 95% CIs for the overall result via the bootstrap method.
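To illustrate the observed-vs-expected calculation described above, the following is a minimal sketch in Python on simulated patient-day data. All rates and counts here are hypothetical (the actual analysis used SAS and Stata on Medicare claims), and the simulation is only meant to show the mechanics: fit on days 365-46, predict days 45-1, and treat excess discharges as unrecognized emergencies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical simulated cohort: one observation per patient per day for the
# 365 days preceding the index admission, indicating whether an ED discharge
# occurred that day. A small excess is injected into days 1-45.
n_patients = 5000
days_before = np.arange(365, 0, -1)                  # 365, 364, ..., 1
baseline_p = 0.002 + 0.000002 * (365 - days_before)  # slight upward drift
excess_p = np.where(days_before <= 45, 0.001, 0.0)   # unrecognized emergencies
visits = rng.random((n_patients, 365)) < (baseline_p + excess_p)

# Expand to patient-day observations and fit the logistic model on the
# baseline period (days 365-46) only, as in the study.
X = np.tile(days_before, n_patients).reshape(-1, 1).astype(float)
y = visits.ravel().astype(int)
fit_mask = X[:, 0] >= 46
model = LogisticRegression().fit(X[fit_mask], y[fit_mask])

# Expected discharges in the excluded 45-day window are the aggregate of the
# model's predicted daily probabilities; excess = observed - expected.
window = X[:, 0] <= 45
expected = model.predict_proba(X[window])[:, 1].sum()
observed = y[window].sum()
excess = observed - expected

# Denominator: recognized emergencies (one index admission per patient)
# plus unrecognized emergencies (excess discharges).
missed = excess / (n_patients + excess)
print(f"observed={observed}, expected={expected:.0f}, missed={missed:.3f}")
```

With the injected excess of 0.001 per day over 45 days, the recovered proportion lands near the few-percent range reported in the study, which is what makes the baseline adjustment essential: most window visits are expected, not excess.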
To avoid counting official or unofficial transfers miscoded as discharges, ED discharges were excluded if they occurred on the same day as the index hospital admission (any) or if they occurred 1 day prior and the discharge diagnosis was for the index condition itself. Observation stays were treated as discharges if patients were discharged home at the end of observation and were treated as admissions if the patient was subsequently admitted to the hospital. As a sensitivity analysis, we show how results are affected by excluding ED visits that ended in observation and then discharge. We also show how results are affected by casewise exclusion of patients hospitalized for any other reason within 30 days of the index admission, to mitigate possible confounding (see the Discussion section).
As an exploratory analysis meant to illustrate expanded applications of our approach, we measured multivariable associations between patient characteristics or socioeconomic factors and unrecognized emergencies. To do so, we fit patient-level logistic models, with ED discharge during the 45-day preadmission window (one or more) as the dependent variable, as a function of age, sex, Medicare and Medicaid dual eligibility, race/ethnicity, and a history of any of the following conditions (chosen a priori on clinical grounds, defined by Chronic Conditions Data Warehouse indicators provided by the Centers for Medicare & Medicaid Services4): end-stage renal disease, dementia, depression, diabetes, hypertension, cerebrovascular disease, or ischemic heart disease. We also included the number of ED discharges earlier in the year (ie, the 320 days preceding the 45-day window) as a covariate, to account for possible correlation between patient characteristics and baseline ED use.
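A patient-level model of this form can be sketched as follows. The covariates, effect sizes, and simulated data are illustrative only (not the study's variables or estimates); the point is the structure: a binary outcome (any ED discharge in the 45-day window) regressed on patient characteristics plus baseline ED use, with exponentiated coefficients read as adjusted odds ratios.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical simulated covariates (names illustrative): age <65 y, female
# sex, dual eligibility, and ED discharges in the preceding 320 days.
n = 10000
under_65 = rng.integers(0, 2, n)
female = rng.integers(0, 2, n)
dual = rng.integers(0, 2, n)
baseline_visits = rng.poisson(0.8, n)
X = np.column_stack([under_65, female, dual, baseline_visits]).astype(float)

# Simulate the outcome with positive associations built in for age <65 y
# and baseline ED use; the other effects are null by construction.
logit = -3.0 + 0.4 * under_65 + 0.3 * baseline_visits
outcome = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Fit and report adjusted odds ratios (exponentiated coefficients).
model = LogisticRegression().fit(X, outcome)
odds_ratios = np.exp(model.coef_[0])
```

Including baseline ED use as a covariate, as the study does, keeps a characteristic from appearing "risky" merely because patients with that characteristic visit the ED more often at baseline.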
Similarly, we identified ED discharge diagnoses occurring with greater than expected frequency during the 45 days before index hospital admissions compared with the preceding 320 days. We report incidence rate ratios (IRRs) and exact 95% CIs for the 20 diagnoses that occur in greatest excess and the 20 diagnoses with the highest IRRs.
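For a single diagnosis, an IRR of this kind and an exact CI can be computed from the two window counts via the conditional binomial argument, sketched below in Python with SciPy. The counts are hypothetical, and this is one standard construction of an exact Poisson rate-ratio interval; the study does not specify which exact method it used.

```python
from scipy.stats import binomtest

# Hypothetical counts of a given ED discharge diagnosis: k1 in the 45-day
# preadmission window, k2 in the preceding 320 days.
k1, k2 = 149, 280
t1, t2 = 45.0, 320.0
irr = (k1 / t1) / (k2 / t2)  # incidence rate ratio

# Exact 95% CI: conditional on the total k1 + k2 events, k1 is binomial, so
# a Clopper-Pearson interval for the binomial proportion transforms into an
# interval for the rate ratio.
ci = binomtest(k1, k1 + k2).proportion_ci(confidence_level=0.95)
irr_low = (ci.low / (1 - ci.low)) * (t2 / t1)
irr_high = (ci.high / (1 - ci.high)) * (t2 / t1)
```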
Analyses were performed with SAS Enterprise Guide (version 7.1; SAS Institute Inc), hosted by the Centers for Medicare & Medicaid Services Virtual Research Data Center, and in Stata (version 13.1; StataCorp LP). We created 95% CIs for our main estimates by clustered bootstrap at the level of the patient, sampling individuals with replacement over 1000 replications, fitting the logistic model, and performing the post-fit calculations for each replication, using the 2.5th and 97.5th percentiles of these results for the 95% CI. To evaluate longitudinal trends, we fit a linear regression (with robust SEs) of the logarithm of each year’s estimate as a function of calendar year, and we report the mean annual percentage change with the exponentiated coefficient.
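The clustered bootstrap and the log-linear trend calculation can be sketched as follows. The per-patient summaries and yearly estimates below are simulated placeholders, and a constant per-patient expected rate stands in for the study's model-based predictions; only the resampling and trend logic mirrors the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-patient summaries: observed ED discharges in the 45-day
# window and a model-based expected count (a constant here for simplicity).
n = 5000
observed = rng.poisson(0.09, n)
expected = np.full(n, 0.05)

def missed_proportion(obs, exp):
    excess = obs.sum() - exp.sum()
    return excess / (len(obs) + excess)  # excess / (recognized + unrecognized)

point = missed_proportion(observed, expected)

# Clustered bootstrap at the patient level: resample patients (clusters)
# with replacement and recompute the estimate over 1000 replications; the
# 2.5th and 97.5th percentiles give the 95% CI.
reps = 1000
stats = np.array([
    missed_proportion(observed[idx], expected[idx])
    for idx in (rng.integers(0, n, n) for _ in range(reps))
])
ci_low, ci_high = np.percentile(stats, [2.5, 97.5])

# Longitudinal trend: slope of log(yearly estimate) on calendar year; the
# exponentiated slope minus 1 is the mean annual relative change.
years = np.arange(2007, 2015)
yearly = np.array([0.031, 0.032, 0.034, 0.033, 0.036, 0.037, 0.039, 0.040])
annual_change = np.exp(np.polyfit(years, np.log(yearly), 1)[0]) - 1
```

Resampling whole patients, rather than patient-days, preserves the within-patient correlation that makes model-based SEs invalid here.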
We identified 1 873 207 Medicare FFS beneficiaries with 1 of the 5 conditions of interest. Of these, 152 484 were excluded because index hospital admission via the ED could not be established, 146 621 because they had previously been enrolled in hospice (n = 19 852) and/or had a skilled nursing facility claim within the prior 30 days (n = 131 220), and 12 162 because they had a previous ED discharge for the index diagnosis. For the 5 conditions in aggregate, the mean (SD) age was 77.9 (10.6) years. The proportion younger than 65 years ranged from 3.1% for ruptured AAA to 14.5% for SAH (8.9% overall) (Table 1). Mortality during the index hospital admission ranged from 5.2% for stroke to 48.2% for ruptured AAA.
Among the 1 561 940 individuals remaining after exclusions, there were 902 159 ED discharges to home 1 to 365 days before the index hospital admission. Histograms showing the frequency of ED discharges as a function of the number of elapsed days before the index hospital admission are shown in Figure 1 and in the eFigure in the Supplement.
Table 2 lists the main study results. Estimates for the proportion of ED diagnostic opportunities that were missed are as follows: ruptured AAA (3.4%; 95% CI, 2.9%-4.0%), AMI (2.3%; 95% CI, 2.1%-2.4%), stroke (4.1%; 95% CI, 4.0%-4.2%), aortic dissection (4.5%; 95% CI, 3.9%-5.1%), and SAH (3.5%; 95% CI, 3.1%-3.9%).
Figure 2 shows longitudinal trends. The mean annual changes in the proportion of diagnostic opportunities missed (ie, the relative change) were as follows: ruptured AAA (8.7% per year increase; 95% CI, 1.5%-16.4%), AMI (2.3% per year decrease; 95% CI, 6.0% decrease to 1.6% increase), stroke (1.7% per year increase; 95% CI, 1.0%-2.4%), aortic dissection (1.1% per year decrease; 95% CI, 4.8% decrease to 2.7% increase), and SAH (6.5% per year increase; 95% CI, 0.0%-13.5%).
Results of sensitivity analyses are listed in eTables 2, 3, and 4 in the Supplement. Exclusion of patients who had been admitted to the hospital within 30 days before their index event (and elimination of ED discharges preceded by hospital admissions) reduced estimates slightly (eTable 2 in the Supplement). Exclusion of patients admitted under observation status and then discharged home (90 816 [10.1% of all ED discharges to home]) also reduced estimates slightly (eTable 3 in the Supplement). Using 30 or 60 days rather than 45 days as the duration of the pre–index event window for the observed vs expected calculation had a negligible effect on estimates (eTable 4 in the Supplement).
In multivariable analyses that adjusted for demographics, chronic conditions, and the number of visits at baseline, excess ED discharges within 45 days of index hospital admission were positively correlated with age younger than 65 years, female sex, Medicare and Medicaid dual eligibility, and a history of end-stage renal disease, dementia, depression, stroke or transient ischemic attack (TIA), hypertension, coronary artery disease, and chronic obstructive pulmonary disease (Table 3). The associations with race/ethnicity and diabetes varied across conditions.
Emergency department discharge diagnoses in greatest excess during the 45 days before admission were as follows: abdominal pain (unspecified site) for ruptured AAA (66 excess discharges; IRR for days 1-45 vs 46-365, 6.7; 95% CI, 4.5-9.4), chest pain (unspecified) for AMI (1194 excess discharges; IRR, 2.8; 95% CI, 2.6-2.9), unspecified transient cerebral ischemia for stroke (4532 excess discharges; IRR, 5.2; 95% CI, 5.0-5.4), chest pain (unspecified) for aortic dissection (149 excess discharges; IRR, 3.9; 95% CI, 3.3-4.7), and headache for SAH (290 excess discharges; IRR, 5.7; 95% CI, 4.9-6.6). eTable 5 in the Supplement lists the 20 diagnoses in greatest excess and the 20 diagnoses with the highest IRR, for each condition.
This study introduces what is to our knowledge a novel approach to measuring missed diagnostic opportunities in the ED. Among Medicare patients with the 5 diseases studied, we estimate that between 2.3% (AMI) and 4.5% (aortic dissection) of ED visits for symptoms relating to an imminent emergency end in discharge home without diagnosis. In a multivariable analysis, we demonstrate that patients who are younger than 65 years, female, or poor (ie, dually eligible for Medicare and Medicaid) or those who have chronic medical conditions were at increased risk. We see no evidence of improvement during the time frame of this study from 2007 to 2014.
Direct comparisons with previous reports are complicated by differing definitions of outcomes and metrics.5,6 For example, some studies6,7 define missed AMI as the actual presence of an AMI at the time of ED discharge. Herein, we define each condition as the presence of the underlying acute pathology, where a ruptured plaque, an intimal tear, a TIA, or a sentinel bleed would be counted. As recommended in the IOM report cited at the beginning of this article, we use the proportion of diagnostic opportunities missed (ie, the false-negative rate) as our metric of interest. Compared with commonly used alternatives, our formulation has the advantage of not being conditional on patients’ propensity to visit the ED, allowing comparison between populations or health systems.
We relax several assumptions common to previous work and demonstrate that they are often violated. Whereas it has been typical to consider ED discharges only within 1 or 2 weeks of an index hospital admission,8-12 we took a data-driven approach and found that excess visits can be detected earlier. Older literature supports this finding13-15; however, in an era when patients are quicker to seek emergency care, the possibility that acute cardiovascular pathology can remain undiagnosed for weeks may not be fully appreciated. A longer look-back period amplifies the importance of accounting for coincidental ED visits, which few previous studies do directly. We found that approximately two-thirds of the ED visits within the 45 days preceding an index hospital admission can be attributed to the population’s baseline rate of ED use. Failure to account for baseline use would substantially overestimate the frequency of events.
The ED discharge diagnosis has often been used to retrospectively adjudicate whether an unrecognized emergency was present at the time of an ED discharge.8,9,12,16,17 In our analysis, we found that the most common diagnoses associated with excess visits are what one might expect (eg, unspecified abdominal pain for ruptured AAA), but we also found that seemingly unrelated diagnoses are common. Published accounts of high-profile diagnostic errors support this contention. For example, the playwright Jonathan Larson was discharged from 2 EDs in the week before his death from an undiagnosed aortic dissection, with diagnoses of “food poisoning” and “viral syndrome.”18 While it is clear from subsequent investigation that his ED visits were prompted by symptoms of the dissection, discharge diagnosis alone could not have distinguished between misdiagnosis and coincidence.
Our method relies on a key assumption that nothing other than the acute cardiovascular pathology should increase ED visits just before the index hospital admission. One can imagine confounding by other factors that prompt ED visits and also cause cardiovascular emergencies to occur. For example, vascular surgery might prompt ED visits for wound complications and may increase the risk for postoperative myocardial infarction or stroke. Confounding conditions would only influence results if they arise acutely. For example, psychological stress might precipitate both ED visits and cardiovascular emergencies, but if chronic it would be accounted for in the baseline rate of ED use. Because the most plausible confounders (particularly high-risk surgery) would usually involve a hospitalization, we tested the effect of excluding patients whose index hospital admission was preceded by any other hospital admission within 30 days and found that doing so changed results only slightly. To the extent that confounding remains, it would bias estimates upward (ie, the true proportion of emergencies unrecognized would be lower than what we report).
In the other direction of bias (toward underestimating the diagnostic error rate), we do not account for patients who die before they are admitted to a hospital. This is a limitation of most previous work on the topic. Future study might take the approach of treating outpatient deaths as index events, although the inaccuracy of death certificate diagnoses (particularly for out-of-hospital deaths) presents a challenge.19 Another limitation is that we do not account for diagnoses that are not appreciated in the ED but made after admission (or after death).
Our task was simplified by choosing to focus on cardiovascular emergencies generally understood to mandate hospital admission from the ED (unless patients’ primary goal is palliation). However, we found TIA to be among ED discharge diagnoses occurring in excess in the weeks before stroke hospital admissions, suggesting that for TIA the risk reduction achieved with hospitalization might not always be perceived to be worth the costs, financial or otherwise. Stroke estimates must be interpreted with that caveat.
Among Medicare patients, opportunities to diagnose ruptured AAA, AMI, stroke, aortic dissection, and SAH are missed in less than 1 in 20 ED presentations. Further improvement may prove difficult.
The frequency of unrecognized emergencies can only be estimated retrospectively; therefore, the call for better measurement tools must be answered with studies such as this one. Our study was not designed to consider trade-offs between diagnostic sensitivity and specificity and thus cannot address clinical decision making. However, the absence of improvement over the 8-year time frame of the study raises the question of whether our ability to diagnose these acute emergencies has reached a plateau. Some researchers have suggested a natural asymptote, where the costs or risks of seeking additional diagnostic certainty become prohibitive.20,21 Studies of organization-level variation aimed at identifying high-performing or low-performing outliers might help us understand whether we can do better. Regardless, measurement is important because technological or practice innovation might improve or degrade diagnostic sensitivity in the future. Cardiovascular emergencies provide a straightforward example of how surveillance for excess ED visits can be used to monitor diagnostic error, much as it has been used in other public health contexts.22 More widespread application of probabilistic approaches—to other diagnoses and to other care venues—will require further methodological innovation but seems inevitable.
Accepted for Publication: December 18, 2017.
Corresponding Author: Daniel A. Waxman, MD, PhD, RAND, 1776 Main St, Santa Monica, CA 90407 (email@example.com).
Published Online: February 26, 2018. doi:10.1001/jamainternmed.2017.8628
Author Contributions: Dr Waxman had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: All authors.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: All authors.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: Waxman, Schriger.
Obtained funding: Waxman.
Administrative, technical, or material support: Kanzaria.
Study supervision: Waxman.
Conflict of Interest Disclosures: Dr Schriger received salary support through an unrestricted grant from the Korein Foundation. No other disclosures were reported.
Funding/Support: RAND provided programming support for this project. Data access for Dr Waxman was supported through an interagency agreement between the Office of the Assistant Secretary for Planning and Evaluation (ASPE) (US Department of Health & Human Services) and the Centers for Medicare & Medicaid Services for the research and analysis.
Role of the Funder/Sponsor: The funding sources had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Contributions: Asa Wilks, MPA, provided programming assistance, and Carolyn Rutter, PhD, helped develop the methods at an early phase in the project. Both are affiliated with RAND. No compensation was received.