Background
Pain is a major quality-of-care issue in hospitals. The objective of this study was to evaluate the effectiveness of a series of interventions designed to improve pain management.
Methods
This controlled clinical trial (April 1, 2002, to February 28, 2003) involved the staggered implementation of 3 interventions into 2 blocks of matched hospital units at an 1171-bed hospital. A total of 3964 adults were studied. The interventions, preceded by staff education, were standardized pain assessment using a 1-item or 4-item (enhanced) pain scale, audit and feedback of pain scores to nursing staff, and a computerized decision support system. The main outcome measures were pain assessment, pain severity, and analgesic prescribing.
Results
Units using enhanced pain scales had significantly higher pain assessment rates than units using 1-item pain scales (64% vs 32%; P<.001). Audit and feedback of pain results was associated with increased pain assessment rates compared with units in which audit and feedback was not used (85% vs 64%; P<.001), and the addition of the computerized decision support system was associated with significant increases in pain assessment only when compared with units without audit and feedback (79% vs 64%; P<.001). The enhanced pain scale was associated with significant increases in prescribing of World Health Organization step 2 or 3 analgesics for patients with moderate or severe pain compared with the 1-item scale (83% vs 66%; P=.01). The interventions did not improve pain scores.
Conclusions
A clinically meaningful pain assessment instrument combined with either audit and feedback or a computerized decision support system improved pain documentation to more than 80%. The enhanced pain scale was associated with improved analgesic prescribing. Future interventions should be directed toward altering physician behavior related to titration of opioid analgesics.
Pain is the most common symptom experienced by hospitalized adults1,2 and is highly prevalent in multiple patient populations and settings.3-12 Although pain management guidelines exist,13-16 and the Joint Commission on Accreditation of Healthcare Organizations has mandated routine pain assessment in hospitals, these strategies have not been rigorously evaluated. Institutionwide quality improvement studies that have used these guidelines have met with only modest success in improving pain assessment rates17-19 and have yet to demonstrate improvements in pain intensity.17 These studies have been limited by small sample sizes, have targeted isolated hospital units or patient populations (eg, those with cancer), have lacked adequate control groups, or have used cross-sectional rather than longitudinal designs.17
In an effort to institutionalize improvement in pain management, this study evaluated the incremental effectiveness of a series of additive interventions designed to improve the detection and treatment of pain in hospitalized patients. Education in pain management was followed by a series of interventions modeled after strategies that have been shown to change practice patterns and improve health outcomes in other settings.
This controlled clinical trial enrolled a 25% random sample of all eligible subjects admitted to matched medical/surgical units at a large teaching hospital. Physician and nursing education in pain management was followed by the staggered implementation of combinations of additive interventions based on published pain guidelines16,20,21 into 2 blocks of matched medical/surgical hospital units at 6-month intervals. The interventions consisted of the following: (1) patient education and nursing pain assessment of current pain, worst pain, pain relief, and acceptability of pain; (2) audit and feedback to nursing staff of patients' pain intensity and staff compliance with assessment; and (3) a computerized clinical decision support system (CDSS) to guide analgesic prescribing. Table 1 summarizes the study design.
This study was performed at Mount Sinai Hospital, New York, a 1171-bed hospital. Nine medical/surgical units were selected for inclusion based on similar baseline patient demographics and pain scores (3 general medicine, 2 general surgery, 2 specialty surgery, 1 oncology, and 1 mixed oncology/general medicine). Each unit contained 32 to 36 beds, with the exception of 1 general medicine unit (18 beds). General medicine, general surgery, and specialty surgery units were each matched to a similar unit, and the smaller general medicine unit was combined with the oncology unit and matched to the mixed oncology/general medicine unit.
We prospectively screened a 25% random sample of all admissions to each of the 9 study units daily, Monday through Friday, from February 16, 2001, through February 15, 2003, for study enrollment and approached eligible patients for informed consent. All patients admitted to each study unit were exposed to the interventions, although data were collected only from those enrolled. Eligibility criteria and study enrollment details are in the Figure.
The intervention was divided into 4 phases (Table 1).
Phase 1: Education
Nurse educators conducted pain management training on all hospital nursing units and during orientation for newly hired nurses using modules developed from published guidelines.16,20,22-24 Three of us (R.S.M., D.E.M., and D.F.) conducted grand rounds for all clinical departments and led workshops on pain management for house staff.
Phase 2: Enhanced Pain Assessment
Hospital policy required that pain be assessed at least once per nursing shift using a 4-point scale (0 indicates none; 1, mild; 2, moderate; and 3, severe). During phase 2, an enhanced pain assessment that included more comprehensive information and that could better guide analgesic prescribing was implemented only on block B. Block B nurses asked patients to rate their current pain, worst pain, pain relief, and whether the level of pain was acceptable to them on 4-point scales. Pain scores were printed on vital sign reports.
Phase 3: Audit and Feedback
During phase 3, the enhanced pain assessment was implemented on block A and the audit and feedback intervention was implemented on block B. Three process measures and 1 outcome measure were selected for audit and feedback based on previous studies25,26 and guidelines16,20,21 (Table 1). The last 2 months of phase 2 were audited, and the first feedback reports were provided to the units' nursing managers at the beginning of phase 3 and monthly thereafter. Feedback reports detailed each unit's performance and benchmarked it against the other block B units and block B as a whole. Block A did not receive feedback.
Phase 4: Computerized Decision Support System
During phase 4, the CDSS was implemented simultaneously on both blocks. Because of the design of Mount Sinai Hospital's clinical information system, and similar to other studies27-30 involving CDSS linked to order entry systems, we were unable to randomize the implementation of CDSS by block. In addition, randomization by block presented the potential for contamination because physicians treating patients on CDSS units were likely to be simultaneously caring for patients on non–CDSS-exposed units. Contamination was unlikely in earlier phases because nurses were unit based.
The CDSS was modeled on a previously described program.31 Physicians ordering analgesics on the hospital's order entry system (TDS; Eclipsys Corporation, Atlanta, Ga) were provided with links to reports summarizing patient characteristics relevant for analgesic prescribing and to the CDSS. The CDSS provided recommendations for initiating analgesics, dose escalation, switching agents, patient-controlled analgesia, bowel regimens, and opioid adverse effect management. After accessing recommendations, physicians entered medication orders. The CDSS was available to nurses through their charting pathways. The algorithms for the CDSS were developed by the investigators based on published guidelines, reviewed by content experts in pain medicine, and piloted with physicians. All nonpediatric clinical departments received CDSS training.
Beginning in phase 2, trained clinical interviewers blinded to the study intervention interviewed enrolled subjects within 48 hours of admission and then once daily, Monday through Friday. Patients were asked to rate their current pain, their worst pain over 24 hours, their pain relief with analgesics, and whether their pain was acceptable to them. Pain and pain relief were rated on 4-point scales (Table 1). Data were gathered directly from patients rather than from nursing reports because of concerns that nursing compliance with pain assessment would not be high enough to ensure that reliable and valid pain reports could be obtained.
Outcomes included measures of pain assessment, pain severity, and analgesic prescribing. We evaluated (1) the percentage of patients whose pain was assessed during each nursing shift for the first 5 days after enrollment; (2) the percentage of patients who reported moderate to severe pain on the day of enrollment (medical patients) or on postoperative day 1 (surgical patients) and who continued to have moderate to severe pain 72 to 96 hours later; (3) mean pain scores for the first 72 hours after enrollment (medical patients) or on postoperative days 1 through 3 (surgical patients); (4) the percentage of patients with 2 or more days of nurse-recorded moderate to severe pain who received a World Health Organization (WHO) step 2 or 3 analgesic32; (5) the percentage of patients receiving standing opioids for 48 hours or more who were receiving a concomitant laxative; and (6) the percentage of patients receiving meperidine. A reduction in meperidine use was considered an important outcome given uniformity among professional guidelines that meperidine not be used as an analgesic because of its epileptogenic metabolites.15,16,21,33 Pain scores reported to clinical interviewers were used in all analyses.
Hierarchical multivariate linear and logistic regression models (HLMs) were used to examine the interventions' effects on outcomes. Because the intervention was assigned at the level of the hospital unit, but measures were at the individual patient level, there was the potential that data from patients on the same unit would be correlated, leading to incorrect standard errors.34,35 The HLMs took into account these nested data and allowed for correct inferences.36,37
A 3-level HLM model (shift, patient, and hospital unit) was used to test the effect of the intervention on nursing pain assessment using data collected for each patient for each of the 3 daily nursing shifts. We hypothesized that assessment might vary across shifts because of changes in nursing staff ratios and differences in unit activities. Two-level HLMs (patient and unit) were developed to examine the interventions' effects on pain severity and prescribing of analgesics and laxatives. Both 2- and 3-level models controlled for patient characteristics, block, and phase, and accounted for unmeasured (random) effects at patient and hospital unit levels. The 3-level model additionally accounted for the fixed effects of the nursing shifts.
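For illustration, the 3-level model for the binary outcome "pain assessed on a given shift" can be written as a random-intercept logistic regression (the notation is illustrative rather than taken from the original model output):
\[
\operatorname{logit}\,\Pr(Y_{sij}=1) \;=\; \beta_0 \;+\; \gamma_s \;+\; \boldsymbol{\beta}^{\top}\mathbf{x}_{ij} \;+\; v_{ij} \;+\; u_j,
\qquad v_{ij}\sim N(0,\sigma_v^2),\;\; u_j\sim N(0,\sigma_u^2),
\]
where \(Y_{sij}\) indicates whether pain was assessed on shift \(s\) for patient \(i\) on unit \(j\); \(\gamma_s\) is the fixed effect of nursing shift; \(\mathbf{x}_{ij}\) contains the patient characteristics and the block and phase indicators; and \(v_{ij}\) and \(u_j\) are the patient- and unit-level random intercepts. The 2-level models drop the shift index and the patient-level random intercept, so that patient-level variation enters through the residual (continuous outcomes) or the binomial variation (binary outcomes), with a single unit-level random intercept.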
Data were analyzed with SAS statistical software (PROC MIXED for continuous variables and GLIMMIX for discrete variables) (SAS Institute Inc, Cary, NC).38 The 3-level hierarchical model was estimated using a computer program (MLwiN).39,40
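As an illustration only, the following minimal sketch shows how models of this general form can be specified in SAS. The data set and variable names are hypothetical, and PROC GLIMMIX is shown for the 3-level binary model even though that model was estimated with the GLIMMIX macro and MLwiN in this study.

```sas
/* Minimal sketch; data set and variable names are hypothetical.            */

/* Two-level model (patients nested within units) for a continuous outcome, */
/* eg, mean pain score over the observation window.                         */
proc mixed data=pain method=reml;
  class unit block phase sex race insurance service;
  model mean_pain = block phase block*phase age sex race insurance service
                    / solution ddfm=satterthwaite;
  random intercept / subject=unit;   /* unit-level random intercept */
run;

/* Three-level model (shifts within patients within units) for the binary   */
/* outcome "pain assessed on this shift"; PROC GLIMMIX expresses the same   */
/* structure as the GLIMMIX macro and MLwiN program used in the study.      */
proc glimmix data=shifts;
  class unit patient shift block phase sex race insurance service;
  model assessed(event='1') = block phase block*phase shift
                              age sex race insurance service
                              / dist=binary link=logit solution;
  random intercept / subject=unit;            /* unit-level random intercept */
  random intercept / subject=patient(unit);   /* patient nested within unit  */
run;
```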
The analyses examined whether (1) the enhanced pain scale was associated with better outcomes than the 1-item pain scale, (2) the addition of audit and feedback to the enhanced pain scale was associated with better outcomes than the enhanced pain scale alone, (3) the addition of the CDSS to the enhanced pain scale was associated with improved outcomes, and (4) the addition of the CDSS to the enhanced pain scale with audit and feedback was associated with improved outcomes. Hypotheses were tested directly based on the significance levels of dummy variables representing the main effect and interaction of each block and phase, controlling for the patient characteristics of race, sex, age, insurance status, surgical vs medical patient, and diagnosis. Point estimates were determined for each block and phase subgroup along with tests of the joint hypotheses previously noted, based on the HLM analysis. The Mount Sinai School of Medicine institutional review board approved the study.
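To make the block-and-phase coding concrete, the fixed-effects portion of each model can be written (again, notation is illustrative) as
\[
\eta_{ij} \;=\; \beta_0 \;+\; \beta_B B_j \;+\; \sum_{p} \beta_p P_{pij} \;+\; \sum_{p} \beta_{Bp}\,(B_j \times P_{pij}) \;+\; \boldsymbol{\theta}^{\top}\mathbf{z}_{ij},
\]
where \(B_j=1\) for block B units, \(P_{pij}\) are indicator variables for study phase (with one phase as reference), and \(\mathbf{z}_{ij}\) contains the patient covariates. Each hypothesis then corresponds to a contrast among block-by-phase cells in which only the intervention component of interest differs; for example, the enhanced vs 1-item pain scale comparison draws on the cells in which block B had implemented the enhanced scale and block A had not.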
Patient characteristics are given in Table 2. There were few significant differences between blocks.
Controlling for the variables in Table 2, patients on units using the enhanced pain scale were significantly more likely to have their pain assessed than those on units using the 1-item pain scale (variable estimate, 0.62; SE, 0.14; P<.001). Audit and feedback of pain results was associated with significant increases in pain assessment rates compared with units without audit and feedback (variable estimate, 0.80; SE, 0.11; P<.001), and the addition of the CDSS was associated with significant increases in pain assessment only when compared with units that lacked audit and feedback (variable estimate, 1.13; SE, 0.14; P<.001) (Table 3). Overall, the percentage of patients who received at least 1 pain assessment per day improved from 32.1% with the standard pain assessment to 79.3% when the enhanced pain scale was combined with the CDSS, and to more than 80% for interventions that included audit and feedback.
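If these variable estimates are interpreted as log odds ratios from the logistic HLM (the usual scale for such models, although the scale is not stated explicitly here), they convert directly to odds ratios; for example, for the enhanced vs 1-item pain scale comparison,
\[
\mathrm{OR} = e^{0.62} \approx 1.9,\qquad 95\%\ \mathrm{CI}\colon\; e^{0.62 \,\pm\, 1.96\times 0.14} \approx 1.4\ \text{to}\ 2.4 .
\]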
The intervention did not alter pain severity (Table 3). The percentage of patients who had 72 to 96 hours of persistent pain following study enrollment and overall mean pain scores remained relatively constant across both blocks and all 3 phases. Mean pain scores recorded by nurses were on average 0.1 points lower than those recorded by research staff for the same interview day (P<.001).
Several interventions improved analgesic prescribing. The enhanced pain scale was associated with significant increases in WHO step 2 or 3 analgesic prescribing for patients with moderate to severe pain compared with the standard pain scale. The CDSS was associated with a significant reduction in the use of meperidine in block B. No intervention seemed to improve laxative prescribing. Use of the CDSS was low: physicians used the CDSS a mean of 3.3 times per day for enrolled patients.
We also examined changes in analgesic prescribing. For patients with persistent moderate to severe pain who were not receiving opioids on day 1 and who were subsequently prescribed an opioid, the average dose in parenteral morphine sulfate equivalents was 32 mg/d at day 3. For patients with persistent pain who were receiving an opioid on day 1, the mean percentage dose increase in parenteral morphine sulfate equivalents was 17% (51.4 mg on day 1 to 60.4 mg on day 3). There were no significant differences across either block or phase.
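As a rough check on these figures (our arithmetic), the change in group mean doses corresponds to
\[
\frac{60.4 - 51.4}{51.4} \approx 17.5\%,
\]
in line with the reported 17% mean dose increase, which was presumably calculated at the patient level.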
To our knowledge, this is the largest study to examine a series of previously well-described but relatively untested interventions to improve pain management in hospitals. Improving the treatment of pain requires that pain be routinely assessed and that, once identified, it be treated with appropriate analgesic medications. To our knowledge, no consistent and generalizable interventions have been demonstrated to achieve these goals.17-19 Our study suggests that routine assessment of pain severity, relief, and acceptability can be implemented effectively within a large medical center; is more likely to be completed than a simpler assessment that addresses only severity; and is associated with improved analgesic prescribing. In addition, our study suggests that audit and feedback of pain results to nursing staff can significantly increase assessment rates. Although the intervention improved several process measures related to analgesic prescribing (increases in WHO step 2 or 3 analgesic prescribing and reductions in meperidine prescribing), we did not observe reductions in pain severity.
Effect on pain identification and assessment
Improving the detection of pain in hospitalized patients is a fundamental first step in improving its treatment, and one that has been difficult to achieve.17 Consistent with prior reports,17 this study demonstrated that combining nursing education with the implementation of a 1-item pain severity question at vital sign assessment is ineffectual in ensuring universal pain assessment. Several interventions in this study increased pain assessment rates. An enhanced pain assessment that provided nurses with clinically meaningful information doubled the prevalence of daily pain assessment compared with a standard 1-item scale (64.0% vs 32.1%). The addition of audit and feedback or the CDSS to nursing units using the enhanced pain scale increased daily pain assessment rates to 79% or more, suggesting that regular nursing assessment of pain in hospitals can be achieved by combining our enhanced pain scale with either audit and feedback of nursing pain scores or a CDSS. Although nursing-recorded pain scores were statistically different from those obtained by research staff, the average difference was only 0.1 points on the 4-point scale and was not large enough to justify the added expense of assigning pain assessment to nonclinical staff for the purpose of audit. Which intervention (audit and feedback or CDSS) is most cost-effective will likely depend on local institutional factors.
Effect on analgesic prescribing
This study is one of the first large-scale studies to demonstrate improvement in analgesic prescribing. A recent systematic review17 of hospital interventions designed to improve pain management found that only 3 of 20 studies evaluated analgesic prescribing, and none of these studies demonstrated significant changes as a result of their interventions. In this study, the enhanced pain scale was associated with significant increases in the prescribing of WHO step 2 or 3 analgesics compared with the single-item scale.
The interventions used in this study did not lead to reductions in pain despite improved prescribing of WHO step 2 or 3 analgesics. Possible explanations include a failure to titrate analgesics to pain relief and the possibility that patients reporting severe pain declined additional analgesia. In a prior study,41 31% of patients with severe pain reported that this level of pain was acceptable to them in the setting of their illness.
Computerized decision support systems have been shown to be effective in influencing physicians' behaviors.29,42 A possible reason for the system's failure in this study is that the CDSS neither prompted nor required prescribers to use it and was used for only a few enrolled patients. Although we sought an active CDSS, the physician information technology advisory committee was willing to approve only a passive system. Two recent reviews42,43 of CDSS studies published after this study was initiated found that voluntarily activated decision support systems are relatively ineffectual in altering physician behavior.
There are limitations to this study. First, given the Joint Commission on Accreditation of Healthcare Organizations' pain standards, it was impossible to determine whether the routine assessment of pain was associated with improved pain management compared with no assessment. It is unlikely, however, that the rate of pain assessment in the absence of a standard scale would be higher than the 32.1% that we observed with the 1-item measure. Second, although we did not observe statistical differences among patient characteristics between the 2 blocks, it is possible that some unmeasured factor contributed to the differences observed. We believe that this is unlikely because our study design allowed us to compare each intervention's effect, with the exception of the CDSS, across blocks and within blocks across phases. Thus, if there were either block (unmeasured differences between units) or phase (unmeasured differences over time) effects, our analyses would have identified them. Third, it is possible that our study was underpowered to detect differences in pain severity. Nevertheless, even if the study was underpowered, the magnitude of any effect is likely too small to be clinically relevant. Fourth, our study was undertaken in a large academic medical center, and different results might have been obtained in a smaller hospital. Fifth, because physicians were not geographically based, it is possible that their exposure to the block B interventions influenced their behavior on block A. Finally, because each phase of this study lasted 6 to 8 months, it is unclear whether the observed results persisted after the study's close.
To our knowledge, this is the largest study to examine interventions to improve the assessment and management of pain in a large US hospital. By using a rigorous experimental design, this study identified successful and generalizable interventions that significantly improved nursing assessment of pain to more than 80% and prescribing of appropriate analgesics. The implementation of a passive CDSS designed to assist physician analgesic prescribing did not improve pain scores. Our data suggest that hospitals can accomplish the first step in improving the management of pain—ensuring routine pain detection and scaling—by implementing an enhanced pain scale, as described in this study, in combination with regular audit and feedback of pain assessment results to nursing staff. Future efforts to improve analgesic prescribing and relief of pain should target physicians and focus on opioid titration and adverse effect management. Possible strategies could include either an active CDSS or audit and feedback of pain scores and analgesic prescribing practices to physicians.
Correspondence: R. Sean Morrison, MD, Department of Geriatrics, Mount Sinai School of Medicine, Campus Box 1070, New York, NY 10029 (sean.morrison@mssm.edu).
Accepted for Publication: December 24, 2005.
Author Contributions: Dr Morrison had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Financial Disclosure: None.
Funding/Support: This study was supported by grant R01 HS10539 from the Agency for Healthcare Research and Quality. Drs Morrison and Siu are recipients of midcareer investigator awards in patient-oriented research from the National Institute on Aging, and Dr Morrison was a Paul Beeson Physician Faculty Scholar. Dr Meier is the recipient of an Academic Career Leadership Award from the National Institute on Aging. Dr Moore is the recipient of a Minority Supplement (R01 HS10539-03S1) from the Agency for Healthcare Research and Quality.
Role of the Sponsor: The funding bodies had no role in data extraction and analyses, in the writing of the manuscript, or in the decision to submit the manuscript for publication.
References
1. Harris L. Poll for National Council on Aging. Rochester, NY: Harris Interactive; 1997.
2. Warfield CA, Kahn CH. Acute pain management: programs in US hospitals and experiences and attitudes among US adults. Anesthesiology. 1995;83:1090-1094.
3. Kelsen DP, Portenoy RK, Thaler HT, et al. Pain and depression in patients with newly diagnosed pancreas cancer. J Clin Oncol. 1995;13:748-755.
6. Manfredi PL, Morrison RS, Morris J, Goldhirsch SL, Carter JM, Meier DE. Palliative care consultations: how do they impact the care of hospitalized patients? J Pain Symptom Manage. 2000;20:166-173.
7. Bernabei R, Gambassi G, Lapane K, et al; SAGE Study Group (Systematic Assessment of Geriatric Drug Use via Epidemiology). Management of pain in elderly patients with cancer. JAMA. 1998;279:1877-1882. [Published erratum appears in JAMA. 1999;281:136.]
8. Lynch EP, Lazor MA, Gellis JE, Orav J, Goldman L, Marcantonio ER. Patient experience of pain after elective noncardiac surgery. Anesth Analg. 1997;85:117-123.
9. Oates JD, Snowdon SL, Jayson DW. Failure of pain relief after surgery: attitudes of ward staff and patients to postoperative analgesia. Anaesthesia. 1994;49:755-758.
10. SUPPORT Principal Investigators. A controlled trial to improve care for seriously ill hospitalized patients: the Study to Understand Prognoses and Preferences for Outcomes and Risks of Treatments (SUPPORT). JAMA. 1995;274:1591-1598.
11. Cleeland CS, Gonin R, Hatfield AK, et al. Pain and its treatment in outpatients with metastatic cancer. N Engl J Med. 1994;330:592-596.
12. Morrison RS, Siu AL. A comparison of pain and its treatment in advanced dementia and cognitively intact patients with hip fracture. J Pain Symptom Manage. 2000;19:240-248.
13. American Pain Society. Principles of analgesic use in the treatment of acute pain and chronic cancer pain, 2nd edition. Clin Pharm. 1990;9:601-612.
14. Jacox A, Carr DB, Payne R, et al. Management of Cancer Pain: Clinical Practice Guideline No. 9. Rockville, Md: Agency for Health Care Policy and Research, US Dept of Health and Human Services, Public Health Service; 1994. AHCPR publication 94-0592.
16. Acute Pain Management Panel, Agency for Health Care Policy and Research. Acute Pain Management: Operative or Medical Procedures and Trauma. Washington, DC: US Dept of Health and Human Services; 1992.
17. Gordon DB, Pellino TA, Miaskowski C, et al. A 10-year review of quality improvement monitoring in pain management: recommendations for standardized outcome measures. Pain Manag Nurs. 2002;3:116-130.
18. Rischer JB, Childress SB. Cancer pain management: pilot implementation of the AHCPR guideline in Utah. Jt Comm J Qual Improv. 1996;22:683-700.
19. Wallace KG, Graham KM, Ventura MR, Burke R. Lessons learned from implementing a staff education program in pain management in the acute care setting. J Nurs Staff Dev. 1997;13:24-31.
20. Jacox A, Carr DB, Payne R. New clinical-practice guidelines for the management of pain in patients with cancer. N Engl J Med. 1994;330:651-655.
21. American Pain Society Quality of Care Committee. Quality improvement guidelines for the treatment of acute pain and cancer pain. JAMA. 1995;274:1874-1880.
22. EPEC. The EPEC project: education for physicians on end of life care. Available at: http://www.epec.net. Accessed June 6, 2005.
23. Portenoy RK, Cherny N. Opioid pharmacotherapy of cancer pain. In: Kirstein L, ed. The Network Project Teaching Module. New York, NY: Memorial Sloan-Kettering Cancer Center; 1996.
24. Portenoy RK, Cherny N, Breitbart W. Cancer pain: principles of assessment. In: Kirstein L, ed. The Network Project Teaching Module. New York, NY: Memorial Sloan-Kettering Cancer Center; 1996.
25. Bookbinder M, Coyle N, Kiss M, et al. Implementing national standards for cancer pain management: program model and evaluation. J Pain Symptom Manage. 1996;12:334-347.
26. Ward SE, Gordon DB. Patient satisfaction and pain severity as outcomes in pain management: a longitudinal view of one setting's experience. J Pain Symptom Manage. 1996;11:242-251.
27. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA. 1998;280:1311-1316.
28. Teich JM, Merchia PR, Schmiz JL, Kuperman GJ, Spurr CD, Bates DW. Effects of computerized physician order entry on prescribing practices. Arch Intern Med. 2000;160:2741-2747.
29. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163:1409-1416.
30. Bates DW, Teich JM, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc. 1999;6:313-321.
31. Evans RS, Pestotnik SL, Classen DC, et al. A computer-assisted management program for antibiotics and other antiinfective agents. N Engl J Med. 1998;338:232-238.
32. World Health Organization. Cancer Pain Relief. Geneva, Switzerland: World Health Organization; 1986.
33. Cancer Pain Guideline Panel, Agency for Health Care Policy and Research. Management of cancer pain: adults. Am Fam Physician. 1994;49:1853-1868.
35. Murray DM, Hannan PJ. Planning for the appropriate analysis in school-based drug-use prevention studies. J Consult Clin Psychol. 1990;58:458-468.
36. Bryk A, Raudenbush S. Hierarchical Linear Models. Newbury Park, Calif: Sage; 1992.
37. Goldstein H. Multilevel Statistical Models. New York, NY: John Wiley & Sons Inc; 1995.
38. Singer JD. Using SAS PROC MIXED to fit multilevel models, hierarchical models, and individual growth models. J Educ Behav Stat. 1998;24:323-355.
39. Rasbash J, Steele F, Browne W, Prosser B. A User's Guide to MLwiN Version 2.0. London, England: Institute of Education; 2004.
40. Goldstein H. Multilevel Statistical Methods. 3rd ed. London, England: Arnold; 2003.
41. Maroney CL, Litke A, Fischberg D, Moore C, Morrison RS. Acceptability of severe pain among hospitalized adults. J Palliat Med. 2004;7:443-450.
42. Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330:765.