Ranking of items by the technical expert panel (TEP): a list of tests, treatments, and disposition decisions was narrowed to a top-five list of items that are of little value, amenable to standardization, and actionable by emergency medicine clinicians.
eAppendix. Web-based survey tool
eTable. Initial 64 items solicited from technical expert panel and provider e-mails
Schuur JD, Carney DP, Lyn ET, Raja AS, Michael JA, Ross NG, Venkatesh AK. A Top-Five List for Emergency Medicine: A Pilot Project to Improve the Value of Emergency Care. JAMA Intern Med. 2014;174(4):509-515. doi:10.1001/jamainternmed.2013.12688
Copyright 2014 American Medical Association. All Rights Reserved. Applicable FARS/DFARS Restrictions Apply to Government Use.
The mean cost of medical care in the United States is growing at an unsustainable rate; from 2003 through 2011, the mean cost for an emergency department (ED) visit rose 142%, from $560 to $1354. The diagnostic tests, treatments, and hospitalizations that emergency clinicians order result in significant costs.
To create a “top-five” list of tests, treatments, and disposition decisions that are of little value, are amenable to standardization, and are actionable by emergency medicine clinicians.
Design, Setting, and Participants
Modified Delphi consensus process and survey of 283 emergency medicine clinicians (physicians, physician assistants, and nurse practitioners) from 6 EDs.
We assembled a technical expert panel (TEP) and conducted a modified Delphi process to identify a top-five list using a 4-step process. In phase 1, we generated a list of low-value clinical decisions from TEP brainstorming and e-mail solicitation of clinicians. In phase 2, the TEP ranked items on contribution to cost, benefit to patients, and actionability by clinicians. In phase 3, we surveyed all ordering clinicians from the 6 EDs regarding distinct aspects of each item. In phase 4, the TEP voted for a final top-five list based on survey results and discussion.
Main Outcomes and Measures
A top-five list for emergency medicine. The TEP ranked items on contribution to cost, benefit to patients, and actionability by clinicians. The survey asked clinicians to score items on the potential benefit or harm to patients and the provider actionability of each item. Voting and surveys used 5-point Likert scales. Pearson correlation coefficients were calculated between the benefit and actionability domains.
Phase 1 identified 64 low-value items. Phase 2 narrowed this list to 7 laboratory tests, 3 medications, 4 imaging studies, and 3 disposition decisions included in the phase 3 survey (71.0% response rate). All 17 items showed a significant positive correlation between benefit and actionability (r, 0.19-0.37 [P ≤ .01]). One item received unanimous TEP support, 4 received majority support, and 12 received at least 1 vote.
Conclusions and Relevance
Our TEP identified clinical actions that are of low value and within the control of ED health care providers. This method can be used to identify additional actionable targets of overuse in emergency medicine.
The cost of medical care in the United States is high relative to that of other industrialized countries and is growing1; emergency care is no exception. From 2003 through 2011, the mean cost for an emergency department (ED) visit rose 142%, from $560 to $1354.2,3 The diagnostic tests, treatments, and hospitalizations that emergency clinicians order result in significant costs, estimated to range from 5% to 10% of national health expenditures.4 Overuse of health care services is a major contributor to rising health care costs.5 The Choosing Wisely campaign, recently launched by the American Board of Internal Medicine Foundation, has led a number of specialty societies to develop lists of 5 tests or procedures that are of low value and may be avoidable. The American College of Emergency Physicians released a list of 5 low-value items in October 2013 when joining the Choosing Wisely campaign.
In 2011, Partners Healthcare, an integrated delivery system in Massachusetts, initiated a project to improve affordability of health care by charging all clinical specialties to develop and implement projects to reduce costs. The departments of emergency medicine sponsored a pilot project to prioritize future affordability projects. We aimed to identify a “top-five” list of tests, treatments, and disposition decisions that emergency clinicians order frequently, that have a significant cost, and that provide little or no benefit to a subset of patients.6 We sought to identify decisions that are amenable to standardization, actionable by emergency clinicians, and thus good targets for quality improvement.
We conducted a 4-phase consensus development project, illustrated in the Figure. We first convened a technical expert panel (TEP) and followed a modified Delphi technique7 using expert opinions and results of a clinician survey to rank potentially avoidable clinical actions. Costs of tests and treatments were not calculated for use in the project. The project was determined to be exempt from review by the Partners Human Research Committee. Informed consent was waived.
Partners Healthcare is an integrated delivery system in eastern Massachusetts that includes 2 academic and 4 community-hospital EDs. These 6 EDs account for more than 320 000 annual patient visits.
We convened a TEP to represent emergency medical practice in our system. The TEP included the chief or the physician quality director of each ED (J.D.S., E.T.L., J.A.M., and N.G.R.), 1 emergency physician (EP) executive sponsor of the affordability project (E.T.L.), 1 EP with research fellowship training and expertise in diagnostic imaging (A.S.R.), 1 EP with expertise in hospital admission and transfers, and 1 emergency medicine chief resident (A.K.V.). All TEP members except the chief resident were board certified in emergency medicine. The emergency clinician survey included all attending and resident physicians, physician assistants (PAs), and nurse practitioners (NPs) who practiced in the 6 EDs. Because our project focused on test ordering and admission to the hospital, we did not include nurses, respiratory therapists, or other ED staff.
We conducted a modified Delphi and survey process to identify low-value care items.7 In phase 1, we brainstormed an initial list of low-value clinical decisions that were under the control of emergency clinicians and were thought to have a potential for cost savings. All health care practitioners and the TEP were solicited by e-mail to suggest actions for the project. In phase 2 of the project, the TEP performed 2 rounds of review and ranking of the initial items. First, panelists ranked each item using a 5-point Likert scale on the following 3 dimensions: (1) perceived contribution to cost (ie, how commonly the item is ordered by emergency clinicians and the individual expense of the test/treatment/action); (2) benefit to patients (scientific evidence to support use of the item in the literature or in guidelines); and (3) actionability by an EM practitioner (ie, use decided by emergency clinicians [not other specialties] and ability to standardize the action). In the second round, panelists reviewed the panel’s mean first-round votes, and each TEP member selected 5 low-value tests within each prespecified domain of emergency care, including laboratory tests, medications and transfusions, imaging, and disposition decisions.
In phase 3 of the project, we surveyed all ED clinicians using a web-based survey tool (Supplement [eAppendix]). The survey included actions for which the TEP reached consensus that the care was high cost, low benefit, and highly actionable. Each item was rephrased into a specific overuse statement reflecting the action necessary to improve the value of care, for example: “Do not order amylase in order to diagnose acute pancreatitis (order lipase only).” Respondents were asked to score each overuse statement using two 5-point Likert scales, rating the impact of following the action statement in terms of the potential benefit or harm to the patient and the degree to which the practice is actionable by emergency clinicians (1, very beneficial/actionable; 2, somewhat beneficial/actionable; 3, neutral; 4, somewhat harmful/inactionable; or 5, very harmful/inactionable).
In phase 4, the TEP panelists reviewed the survey results, including the mean benefit, actionability, and composite scores, and voted to select 5 items for a top-five list. The TEP reviewed and discussed the final rankings, aiming to achieve consensus on 5 items that do not represent high-value care, defined as actions that offer significant potential savings without affecting quality and that are amenable to standardization.
Ranking and voting by the TEP were conducted using a spreadsheet program (Microsoft Excel; Microsoft Corp). The survey was administered using the Research Electronic Data Capture (REDCap) tool.8 Clinicians were invited to participate via e-mail in December 2011 and sent follow-up reminders 2 and 4 weeks later.
We describe the results of the consensus process listing the frequency of TEP votes for the top-five list. For the survey, we report mean scores of benefit and actionability and a composite mean score of both. We calculated Pearson product moment correlation coefficients between benefit and actionability scores for each item. We compared composite scores between practice settings (academic vs community hospital), clinician types (physician vs PA or NP), and clinician experience (0-2, 3-10, or >10 years) using unpaired, 2-tailed t tests and 1-way analyses of variance. All calculations were performed with commercially available software (Stata/MP, version 10.1; StataCorp).
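The per-item statistics described above can be sketched as follows. This is an illustrative example only, not the authors' analysis code (which was run in Stata): the response data are hypothetical, and it shows how a composite mean score and a Pearson product moment correlation between benefit and actionability ratings would be computed for a single survey item.

```python
# Illustrative sketch (hypothetical data, not the study's code): computing
# the per-item summary statistics described in the Methods. Each respondent
# scores an item on two 5-point Likert scales
# (1 = very beneficial/actionable ... 5 = very harmful/inactionable).
import math
from statistics import mean


def pearson_r(xs, ys):
    """Pearson product moment correlation coefficient of two samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical Likert responses for one overuse statement.
benefit = [1, 2, 1, 3, 2, 1, 2, 4, 1, 2]
actionability = [1, 2, 2, 3, 2, 1, 3, 4, 1, 2]

# Composite score: mean of the two domain means.
composite = mean([mean(benefit), mean(actionability)])
r = pearson_r(benefit, actionability)
print(f"composite={composite:.2f}, r={r:.2f}")  # composite=2.00, r=0.91
```

In practice each of the 17 items would get its own composite score and correlation coefficient, and between-group comparisons of the composite scores would follow with t tests and analyses of variance.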
The initial brainstorming exercise identified 64 unique items across the following 4 domains: laboratory tests, medications and transfusions, imaging, and disposition decisions (Supplement [eTable]). The 2-round TEP ranking process identified 17 items (7 laboratory tests, 3 medications, 4 imaging studies, and 3 disposition decisions) that group consensus defined as high cost, low benefit, and highly actionable. These 17 items were included in the clinician survey.
Of 283 clinicians in the survey sample, 201 (71.0%) responded to the survey, and 174 (61.5%) completed it. Completion rate did not differ among attending physicians (101 of 149 [67.8%]), PAs and NPs (44 of 78 [56.4%]), and residents (29 of 56 [51.8%]; P = .06). Academic ED practitioners (120 of 189 [63.5%]) had similar completion rates to those of community-hospital ED practitioners (54 of 94 [57.4%]). Item nonresponse ranged from 4% to 10% on clinical questions. Among survey respondents, 58.0% were attending physicians, 25.3% were PAs or NPs, and 16.7% were residents. Most respondents (120 [69.0%]) identified their primary practice setting as an academic ED, whereas 54 (31.0%) practiced in community-hospital EDs. Overall, 19.5% of respondents had 0 to 2 years in practice; 35.1%, 3 to 10 years; 21.3%, greater than 10 years; and 24.1%, not specified.
Table 1 shows the results of the survey and the final TEP voting. All 17 items had a mean and median score of very or somewhat beneficial and actionable. For all items, we found a significant positive correlation between scores on benefit and actionability (r, 0.19-0.37 [P ≤ .01]). Because benefit and actionability were closely correlated, we analyzed between-group differences comparing the mean scores for benefit and actionability (composite score). When ranked by composite score, general consensus was achieved regarding the relative importance of avoidable action items (Table 2). Except for several actions, composite scores were not significantly different when stratified by clinician type (attending physician or resident vs NP or PA), setting (academic vs community hospital), and experience (0-2, 3-10, or >10 years). Practitioners in community-hospital EDs scored several items as more beneficial and actionable than did clinicians in academic EDs, including not ordering magnetic resonance imaging of the lumbar spine for lower back pain (item 3) (1.2 vs 0.9 [P = .02]) and not mandating follow-up wound checks in the ED for uncomplicated abscesses or cellulitis (item 15) (2.6 vs 1.9 [P = .04]). Conversely, clinicians at academic EDs scored not admitting patients with low-risk chest pain (item 7) as more beneficial and actionable than clinicians in community-hospital EDs (1.3 vs 2.7 [P < .001]). Physicians, but not PAs or NPs, scored the following 2 items as more beneficial and actionable: not ordering blood cultures for cellulitis (item 6) (1.3 vs 1.9 [P = .02]) and not ordering screening chest radiography in stable patients with atraumatic chest pain (item 14) (2.2 vs 3.0 [P = .004]). 
The following 2 items were scored as less beneficial and actionable with increasing clinician experience (0-2, 3-10, or >10 years): not ordering coagulation studies without hemorrhage or suspected coagulopathy (item 5) (2.4 vs 1.8 vs 1.5 [P = .04]) and not mandating follow-up wound checks in the ED for uncomplicated abscesses or cellulitis (item 15) (2.8 vs 2.5 vs 1.8 [P = .04]). Attending physicians scored the same 5 items highest, as did the TEP. Resident physicians, NPs, and PAs included 4 of the final top 5 items among their respective 5 highest-scored items. Respondents from academic and community-hospital EDs likewise included 4 of the final top 5 items among their respective 5 highest-ranked items.
Of the 17 survey items, 12 items received at least 1 TEP member vote for potential inclusion in the top-five list. Only 1 item addressing imaging of the cervical spine was unanimously selected. The following final top-five list gained majority support from the TEP (Table 1):
Do not order computed tomography (CT) of the cervical spine for patients after trauma who do not meet the National Emergency X-ray Utilization Study (NEXUS) low-risk criteria9 or the Canadian C-Spine Rule.10
Do not order CT to diagnose pulmonary embolism without first risk stratifying for pulmonary embolism (pretest probability and D-dimer tests if low probability).
Do not order magnetic resonance imaging of the lumbar spine for patients with lower back pain without high-risk features.
Do not order CT of the head for patients with mild traumatic head injury who do not meet New Orleans Criteria11 or Canadian CT Head Rule.12
Do not order coagulation studies for patients without hemorrhage or suspected coagulopathy (eg, with anticoagulation therapy, clinical coagulopathy).
A top-five list is a new idea to engage clinicians in resource stewardship and to address rising health care costs. Historically, physicians in the United States have practiced with a focus on their patients, with little regard for cost. This paradigm was articulated in 1984 by Levinsky,13(p1575) who stated, “When practicing medicine, doctors cannot serve two masters. The doctor’s master must be the patient.” As the rising cost of medical care has threatened patient access to health care and forced society to choose between health care and other worthy expenditures, this paradigm has been questioned. In 2010, Brody6 proposed that physicians have an ethical obligation to take some responsibility for health care costs, and his call on medical specialty societies to develop top-five lists of tests and treatments that are frequently performed, are high cost, and are of no value to a significant proportion of the patients who undergo the tests was recently answered by many societies in the Choosing Wisely campaign.14 To develop an agenda for cost reduction in emergency care, we conducted a project to identify tests, treatments, and disposition decisions that are of low value to an explicit subset of patients. We engaged an expert panel of EPs and surveyed emergency clinicians in 6 EDs to determine a top-five list for emergency medicine.
This project used a method that can serve as a model for emergency medicine locally or on a wider scale to prioritize efforts to address overuse. Our project went beyond the current specialty society top-five lists by formally evaluating benefit and actionability in a large group of clinicians. The expert panel processes used by most specialties in the Choosing Wisely campaign do not appear to measure actionability, and some items do not appear directly actionable by the specialties.15 For example, the top-five list released by the American College of Radiology includes avoiding imaging for uncomplicated headache. This item may be challenging for radiologists to influence in light of current practice patterns, in which radiologists interpret images but do not directly influence the ordering process.15 The challenge of translating guidelines into practice has been well described, so measuring actionability is a critical initial step in developing a top-five list.16,17 With several exceptions, our top-five list received similar ratings by different groups of ED clinicians, including physicians and midlevel practitioners, clinicians in academic and community-hospital EDs, and practitioners with experience ranging from less than 3 to more than 10 years. The differences raise interesting questions that may reflect understanding of the evidence, different patient populations, or different practice environments. For example, health care clinicians at community-hospital EDs scored not admitting patients with low-risk chest pain less favorably than did academic clinicians. Because chest pain is a leading cause of hospitalization from the ED, this item merits further exploration. We found that a top-five list creates an arbitrary cutoff that may not be meaningful; our fifth- and sixth-rated items were laboratory tests that had 5 and 4 votes, respectively (Table 1). 
Although ordering blood cultures for cellulitis scored more favorably on the survey, the final TEP discussion and voting ranked ordering coagulation tests higher because of the frequency of use.
Some EPs may be hesitant to embrace stewardship efforts, such as Choosing Wisely, for fear of losing autonomy and because of medicolegal risk. However, if EPs, who best understand the clinical evidence and unique needs of our patients, do not define measures of overuse for our specialty, others will. Concerns have been expressed that the legal obligation of the Emergency Medical Treatment and Labor Act makes emergency practice unique and not amenable to addressing overuse18; however, our study suggests that consensus exists among emergency health care clinicians and with other specialty societies. Although initiated before the publication of the Choosing Wisely lists, 4 of 5 items on our top-five list are similar to recommendations advanced by specialty societies.15 Only item 1, ordering CT of the cervical spine for patients after trauma who do not meet evidence-based clinical decision rules, is not included in Choosing Wisely. Although we did not include formal evidence reviews in our TEP process, all items on our top-five list are supported by clinical guidelines or systematic reviews.19-23
Despite using accepted consensus techniques, our project has several limitations. First, the project was focused on a single health care delivery system that overrepresents academic EDs, and the top-five list was influenced by TEP members, thus limiting the generalizability to different systems. Second, we did not use formal cost data to guide the consensus process or survey because each ED has its own method to determine costs and a unique charge master. Obtaining and normalizing cost and charge data was beyond the resources of the project. Nevertheless, the TEP representatives had significant operational experience with cost of care and used this experience in their ranking. Finally, 2 affordability projects were begun in parallel with this project—reducing the use of CT for pulmonary embolus and reducing the use of head CT for mild traumatic head injury—and these projects may have biased TEP members’ rankings. However, the affordability projects should not have affected the clinician survey results because both projects were publicly launched after completion of the survey.
Emergency medicine is under immense pressure to improve the value of health care services delivered. Emergency physicians and the organizations that represent them have an obligation to their patients and to society to address the cost of emergency care directly. Our project piloted a method that EDs can use to identify actionable targets of overuse; we identified clinical actions that were of low value, within clinician control, and for which consensus existed among ED health care clinicians. Developing and addressing a top-five list is a first step to addressing the critical issue of the value of emergency care.
Accepted for Publication: June 19, 2013.
Corresponding Author: Jeremiah D. Schuur, MD, MHS, Department of Emergency Medicine, Brigham and Women’s Hospital, 75 Francis St, Boston, MA 02115 (email@example.com).
Published Online: February 17, 2014. doi:10.1001/jamainternmed.2013.12688.
Author Contributions: Dr Schuur had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Schuur, Carney, Ross, Venkatesh.
Acquisition of data: All authors.
Analysis and interpretation of data: Schuur, Carney, Lyn, Ross, Venkatesh.
Drafting of the manuscript: Schuur, Carney, Lyn, Michael, Venkatesh.
Critical revision of the manuscript for important intellectual content: Schuur, Carney, Lyn, Raja, Ross, Venkatesh.
Statistical analysis: Schuur, Carney.
Obtained funding: Schuur, Carney.
Administrative, technical, and material support: Schuur, Carney, Lyn, Raja, Ross, Venkatesh.
Study supervision: Schuur, Venkatesh.
Conflict of Interest Disclosures: None reported.
Funding/Support: This study was supported by an Emergency Medicine Residents’ Association student research grant (Mr Carney).
Role of the Sponsor: The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Additional Information: The Partners Emergency Medicine Top-Five Working Group Members include Anthony R. Berner, Newton-Wellesley Hospital, Newton, Massachusetts; Theodore I. Benzer, Massachusetts General Hospital, Boston; Richard E. Larson, Faulkner Hospital, Boston; Jeremiah D. Schuur, MD, MHS; Everett T. Lyn, MD; John A. Michael, MD, MS; Ali S. Raja, MD, MBA, MPH; Nicholas G. Ross, MD, MS; Arjun K. Venkatesh, MD, MBA; and Richard D. Zane, Brigham and Women’s Hospital, Boston.
Additional Contributions: Carmen Varga-Sen, MS, and Bhrunil Patel, MS, assisted in administering the project as part of their work at Partners Healthcare.
Correction: This article was corrected on March 5, 2014, to fix an error in the Introduction.