Of 21 710 citations retrieved, 811 were selected for full-text screening, and 72 met the study inclusion criteria.
eTable 1. Study Characteristics
eTable 2. Information Needs by High-Level Category
eTable 3. Drug-Related Information Needs
eAppendix. PubMed Search Strategy
Del Fiol G, Workman TE, Gorman PN. Clinical Questions Raised by Clinicians at the Point of Care: A Systematic Review. JAMA Intern Med. 2014;174(5):710-718. doi:10.1001/jamainternmed.2014.368
In making decisions about patient care, clinicians raise questions and are unable to pursue or find answers to most of them. Unanswered questions may lead to suboptimal patient care decisions.
To systematically review studies that examined the questions clinicians raise in the context of patient care decision making.
MEDLINE (from 1966), CINAHL (from 1982), and Scopus (from 1947), all through May 26, 2011.
Studies that examined questions raised and observed by clinicians (physicians, medical residents, physician assistants, nurse practitioners, nurses, dentists, and care managers) in the context of patient care were independently screened and abstracted by 2 investigators. Of 21 710 citations, 72 met the selection criteria.
Data Extraction and Synthesis
Question frequency was estimated by pooling data from studies with similar methods.
Main Outcomes and Measures
Frequency of questions raised, pursued, and answered and questions by type according to a taxonomy of clinical questions. Thematic analysis of barriers to information seeking and the effects of information seeking on decision making.
In 11 studies, 7012 questions were elicited through short interviews with clinicians after each patient visit. The mean frequency of questions raised was 0.57 (95% CI, 0.38-0.77) per patient seen, and clinicians pursued 51% (36%-66%) of questions and found answers to 78% (67%-88%) of those they pursued. Overall, 34% of questions concerned drug treatment, and 24% concerned potential causes of a symptom, physical finding, or diagnostic test finding. Clinicians’ lack of time and doubt that a useful answer exists were the main barriers to information seeking.
Conclusions and Relevance
Clinicians frequently raise questions about patient care in their practice. Although they are effective at finding answers to questions they pursue, roughly half of the questions are never pursued. This picture has been fairly stable over time despite the broad availability of online evidence resources that can answer these questions. Technology-based solutions should enable clinicians to track their questions and provide just-in-time access to high-quality evidence in the context of patient care decision making. Opportunities for improvement include the recent adoption of electronic health record systems and maintenance of certification requirements.
A seminal 1985 study by Covell et al1 reported that internal medicine physicians raise 2 questions for every 3 patients they see in office practice. Since then, numerous studies have examined the questions clinicians raise during patient care. In general, these studies have confirmed that questions arise frequently and often go unanswered, but no systematic review of this literature exists to date. Unanswered questions are seen as an important opportunity to improve patient outcomes by filling gaps in medical knowledge in the context of clinical decisions.2-4 In addition, providing just-in-time answers to clinical questions offers an opportunity for effective adult learning.5 The challenge of maintaining current knowledge and practices is likely to be aggravated by the expansion of medical knowledge, the increasing complexity of health care delivery, and the aging of the population.6-8
Understanding clinicians’ questions is essential to guide the design of interventions aimed at providing the right information at the right time to improve care. To increase current understanding, we conducted a systematic review of the literature on clinicians’ questions. We focused on the need for general medical knowledge that might be obtained from books, journals, specialists, and online knowledge resources. The systematic review was guided by 4 primary questions: (1) how often do clinicians raise clinical questions; (2) how often do clinicians pursue questions they raise; (3) how often do clinicians succeed at answering the questions that they pursue; and (4) what types of questions are asked? We also conducted a thematic analysis of the barriers to clinicians’ information seeking and the potential effects of information seeking on clinicians’ decision making.
The methodology was based on the Standards for Systematic Reviews set by the Institute of Medicine.9 Study procedures were conducted based on formally defined processes and instruments that were drafted and piloted by one of us (G.D.F.) and refined with input from an expert review panel.
We searched MEDLINE (1966 through May 26, 2011), CINAHL (1982 through May 26, 2011), and Scopus (1947 through May 26, 2011); inspected the citations of included articles and previous relevant reviews; and requested citations from experts on this topic. Search strategies (eAppendix in the Supplement) were developed with the assistance of 2 medical librarians.
We searched for original studies that examined clinicians’ questions as defined by Ely et al,10 that is, “questions about medical knowledge that could potentially be answered by general sources such as textbooks and journals, not questions about patient data that would be answered by the medical record.” We used a broad definition for clinicians that included physicians, medical residents, physician assistants, nurse practitioners, nurses, dentists, and care managers. We included only studies that collected questions that arose in the care of real patients.
We excluded studies that met any of the following criteria: (1) data collection outside the context of patient care, such as surveys and focus groups; (2) focus on the use, awareness, satisfaction, impact, or quality of information resources without providing data on the frequency of information seeking or the nature of the questions asked; (3) questions of individuals not defined as clinicians in our study, such as patients, medical students, and administrators; (4) needs for specific patient data (eg, laboratory test results) that can be found in the patient’s medical record; (5) no data on at least 1 of the systematic review primary questions; and (6) articles not written in English.
One of us (G.D.F.) independently reviewed the title and abstract of all retrieved citations. Two others (T.E.W. and P.N.G.) independently reviewed 2 random samples of 100 citations. In this phase, articles were labeled as “not relevant” or “potentially relevant.”
Two of us (G.D.F. and T.E.W.) independently reviewed the full text of all citations labeled as potentially relevant. Included articles were classified into 1 of 5 categories based on the method used to collect clinical questions: (1) interviews with clinicians after each patient visit or at the end of a clinic session (after-visit interviews); (2) clinicians’ keeping records of questions as they are raised in the care of their patients (self-report); (3) direct observation of clinicians by a researcher who records questions clinicians raise during routine patient care activities (direct observation); (4) analysis of inquiries submitted to information services, such as drug information services (information services); and (5) analysis of online information resource use logs (search logs). Disagreements between the 2 reviewers were reconciled through consensus with a third (P.N.G.).
Two of us (G.D.F. and T.E.W.) independently reviewed the included articles to extract the data into a data abstraction spreadsheet and verified quantitative data for accuracy. Disagreements were reconciled with the assistance of a third reviewer (P.N.G.).
For quantitative measures, we aggregated data from published studies to determine descriptive statistics across these studies. Owing to large variation in study methods and measurements, a meta-analysis of methodologic features and contextual factors associated with the frequency of questions was not possible.
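As an illustration of this aggregation, per-study question frequencies can be pooled into an unweighted mean with a 95% confidence interval under a normal approximation. The sketch below uses hypothetical study-level frequencies (illustrative values only, not the per-study data from Table 1):

```python
import math

# Hypothetical per-patient question frequencies from individual studies
# (illustrative values only; the actual study-level data appear in Table 1).
frequencies = [0.22, 0.35, 0.48, 0.57, 0.62, 0.74, 0.81, 1.27]

n = len(frequencies)
mean = sum(frequencies) / n

# Sample standard deviation and standard error of the mean
variance = sum((f - mean) ** 2 for f in frequencies) / (n - 1)
se = math.sqrt(variance) / math.sqrt(n)

# 95% CI under a normal approximation (z = 1.96)
ci_low, ci_high = mean - 1.96 * se, mean + 1.96 * se

print(f"pooled mean: {mean:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```

This simple unweighted pooling treats each study equally; a weighted approach (eg, by number of patients observed) is an alternative, but the large variation in study methods noted above limits any such aggregation.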
Of 21 710 unique citations retrieved, 811 were selected for full-text screening; 72 articles met the study criteria (Figure). Clinical questions were collected in after-visit interviews in 19 studies, through clinician self-report in 11, by direct observation of patient care activities in 11, by analysis of questions submitted to an information service in 26, and by analysis of online information resource search logs in 8. Three studies used more than 1 method. Characteristics of included studies are listed in eTable 1 (in the Supplement). The search also identified a systematic review on clinicians’ information-seeking behavior11 and several informal literature reviews on related topics.6,12-17 No systematic review was found of the questions clinicians raise at the point of care. Agreement on abstract and full-text screening was 99% (κ = 0.88) and 95% (κ = 0.74), respectively.
Table 1 lists the number of questions raised by clinicians, the proportion pursued, and the proportion of pursued questions that were successfully answered. In 20 studies that provided sufficient data, the frequency of questions ranged from 0.16 to 1.85 per patient seen. The frequency varied according to study methods, with intermediate frequencies in 11 after-visit interview studies (median, 0.57; range, 0.22-1.27), lower frequencies in 4 self-report studies (median, 0.20; range, 0.16-0.23), and higher frequencies in 5 direct observation studies (median, 0.85; range, 0.24-1.85) (Table 2).
The proportion of questions that were pursued was available in 16 studies, with medians of 81% (range, 23%-82%) in 3 self-report studies, 47% (range, 28%-85%) in 11 after-visit interview studies, and 47% (range, 22%-71%) in 2 direct observation studies. Finally, the reported rates of successfully answered questions were the most consistent: when clinicians decided to pursue a clinical question, they were successful approximately 80% of the time across all study types (Table 2).
Sixty-four studies classified questions using various methods and classification systems. Of these studies, 48 (75%) used ad hoc and informal classification approaches, using general categories such as diagnosis, therapy, etiology, and prognosis. Although these categories had similar names, the definitions and methods used were poorly defined and varied substantially among studies, precluding meaningful comparison or aggregation. For simplicity, we have collapsed data from these studies into approximate categories (eTable 2 in the Supplement).
Five studies classified questions according to a formal taxonomy of 64 question types developed by Ely et al.56 The question types followed a Pareto distribution, with roughly 30% of the question types accounting for 80% of the questions clinicians asked. Table 3 lists the 13 most frequent question types across these 5 studies. Overall, 34% of the questions asked were about drug treatment, and 24% were related to the potential causes of a symptom, physical finding, or diagnostic test finding.
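The Pareto pattern described above can be made concrete with a short calculation: sort question types by frequency and count how many types are needed to cover 80% of all questions. The counts below are hypothetical, chosen only to illustrate the computation, not taken from the 5 studies:

```python
# Hypothetical counts of questions per question type, sorted in descending
# order (illustrative only; the actual taxonomy is from Ely et al).
counts = [340, 240, 90, 60, 50, 40, 30, 25, 20, 15, 15,
          10, 10, 10, 10, 5, 5, 5, 5, 5, 5, 5]

total = sum(counts)
covered = 0
for i, c in enumerate(counts, start=1):
    covered += c
    if covered / total >= 0.80:  # stop once 80% of questions are covered
        break

share_of_types = i / len(counts)
print(f"{i} of {len(counts)} question types "
      f"({share_of_types:.0%}) cover 80% of questions")
```

With these illustrative counts, 6 of 22 types (about 27% of types) cover 80% of the questions, mirroring the roughly 30%-for-80% pattern reported across the 5 studies.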
Studies that focused on drug-related questions classified questions according to various categories, such as dose and administration, contraindications, and adverse reactions. The most frequent categories were dose and administration, indication, and adverse reactions (eTable 3 in the Supplement).
The Box summarizes other substantial findings that were recurrent across studies. Clinicians cited several barriers to pursuing their questions, such as their lack of time (cited in 11 studies) and their perception that the question was not urgent (5 studies) or important (5 studies) for the patient’s care. Eleven studies reported that the information found by clinicians had some positive effect on clinical decision making. According to 4 studies, clinicians spent a mean of less than 2 to 3 minutes seeking an answer to a specific question. Two studies demonstrated that the perceived frequency of questions reported by clinicians in surveys was much lower than that obtained through patient care observations.
Barriers to pursuing clinical questions/reasons not to pursue questions
Lack of time*
Question not urgent5,10,20,24,28
Question not important23,27,28,31,58
Doubt that a useful answer exists10,20,23,24,26,27,58
Information found affected clinician and decision making, confirming or changing decisions†
Most questions are pursued when the patient is still in the practice20,24,25
Most questions are highly patient specific and nongeneralizable1,20,24
Clinicians used human and paper resources more often than computer resources1,10,19,20,23-25
Clinicians spend mean of <2-3 min seeking information28,44,47,58
Observed frequency of clinical questions much higher than clinicians’ own estimate (once per week vs 2 of every 3 patients seen for Covell et al1; once a week to once a month vs 10.5 questions per half-day period for Schaafsma et al59)
To our knowledge, this is the first systematic review of clinicians’ patient care questions. In the nearly 3 decades since the study by Covell et al,1 more than 20 additional studies have addressed these issues, using differing methods in a variety of settings. What has emerged from these efforts is a fairly stable picture: clinicians have many questions in practice (at least 1 for every 2 patients they see), and although they find answers to most (78% to 87%) of the questions they pursue, more than half of their questions are never pursued and thus remain unanswered. These unanswered questions continue to represent a significant opportunity to improve patient care and to foster self-directed learning by providing needed information to clinicians in the context of care.
The methods of the included studies varied substantially regarding the definition of clinical questions, data collection and analysis, care setting, and clinician background. We found important differences in the results that can be explained in part by these differences.
Direct observation studies can provide more information about the underlying context that motivates a clinical question and may identify questions that clinicians fail to articulate. However, the presence of an observer might artificially stimulate29 or inhibit62 articulation of clinical questions. Furthermore, there may be greater variation in how direct observation is performed.
After-visit interviews may have a smaller effect on artificially stimulating questions, but these studies might miss questions that clinicians fail to articulate. On the other hand, the after-visit method may be more consistently applied, resulting in more stable estimates of the frequency of questions.
The self-report method is the least expensive and intrusive but most susceptible to memory or saliency bias. On the other hand, given the difficult logistics and expense of direct observations and after-visit interviews, the self-report method may be a useful alternative when the goal is to collect a large number of clinical questions from a variety of settings.
Considering methodologic differences, we found fairly stable reports of the frequency of questions, the pursuit of information, and clinician success in finding answers to the questions they elected to pursue.
Most after-visit interview studies were conducted in community clinics and used similar methods. Except for 2 outliers, these studies reported similar results. Therefore, 0.57 (95% CI, 0.38-0.77) seems to be a reasonable estimate of the mean frequency of clinicians’ recognized questions in outpatient community settings. The 2 outliers can be explained by methodologic differences. At the lower extreme (a per-patient question frequency of 0.22), Norlin et al27 excluded simple drug reference questions and questions answered during the patient visit. At the other extreme (a per-patient frequency of 1.27), 25 residents asked twice as many questions as 11 faculty members, increasing the overall question frequency.28
The direct observation studies were the least uniform regarding data collection methods and observation setting, which may explain the wide 95% CIs for the frequency of questions. The per-patient question frequency for self-report studies ranged from 0.16 to 0.23, but this method is likely to underestimate the question frequency owing to recall bias.
According to 13 after-visit interview and direct observation studies, clinicians pursued roughly half of their questions. The percentage of questions they answered was similar across studies, with the median per study type ranging between 78% and 87%. This relatively high success rate may be explained by clinicians’ ability to selectively pursue questions that can be answered quickly.63 According to information foraging theory, humans constantly weigh the expected benefits vs the estimated cost of engaging in certain information-seeking activities.64 This process is more notably observed among experts and in time-sensitive environments.
According to the 5 studies that classified questions according to the taxonomy of Ely et al,56 a relatively small percentage of question types accounted for a large percentage of the questions asked. This finding has important implications in the design of information retrieval interventions, in the priorities for information resource development, and for the optimal structure of information resources. For example, although typical book chapters list the signs and symptoms under the description of a specific condition, clinicians most often ask which conditions can lead to a specific sign or symptom.
Despite the many studies that examined the nature of clinical questions, most studies did not follow a rigorous question classification method and lacked definitions for the question types used, making it challenging to aggregate the results across studies. Therefore, the findings reported in eTable 2 (in the Supplement) should be interpreted with caution.
We were unable to determine whether decision support tools and electronic health record (EHR) systems were available to clinicians in many of the included studies. In addition, no study directly compared clinical questions among clinicians with vs without access to an EHR. Therefore, we were unable to assess the effects of these tools on the rate of unanswered questions. However, there is some early evidence that EHR systems with clinical decision support tools and seamless access to online reference resources are helping clinicians answer simple questions more quickly,26,40,44-47,65,66 with some recent evidence of improvement in patient outcomes.67,68
Despite encouraging results obtained by recent information retrieval technology and online information resources,66-68 the rate of unanswered clinical questions has remained remarkably stable over time. It is possible that gains achieved through widespread use of online information resources are being offset by busier settings, more complex patients, and increasingly complex medical knowledge.8
As already discussed, clinicians may self-select simpler and more urgent questions because they estimate the value of information in terms of its perceived benefits and cost, with a high threshold for engaging in information seeking.63 Hence, information interventions should allow clinicians to easily estimate the benefits of the information available vs the cost of seeking the information as early as possible with minimal cognitive effort.
Our results also have implications for clinicians’ training and lifelong learning. A systematic review has shown decreasing physician knowledge and performance with increasing years in practice.69 Conventional approaches to address this issue include continuing medical education. However, the typical continuing medical education program follows a passive learning approach and fails to improve physician performance and patient outcomes.70,71 Alternate learning interventions could promote just-in-time and self-directed learning in the context of care as questions arise.72 In the United States, this kind of approach could be integrated as a part of the requirements for maintenance of certification, particularly the lifelong commitment to learning through ongoing knowledge self-assessment and practice performance improvement.73
Electronic health record systems are considered critical enablers of practice improvement driven by maintenance of certification.72,74 Hence, the EHR could be a natural environment for innovative tools that help clinicians identify knowledge gaps, address these gaps, and improve practice. In addition, questions and answers related to a particular patient could be tracked in the patient’s EHR in an application similar to the problem list. Questions and their answers could be made available to the entire care team, who could collaboratively seek optimal answers. In academic settings, such a “question tracker” could also be used as a teaching device, with interesting questions selected for broader discussion in venues such as grand rounds. This environment could also be used to automatically suggest unasked but relevant questions. These ideas can be implemented with currently available EHR and information retrieval technology. One important step toward this EHR vision is the recent inclusion of the Health Level Seven Context-Aware Knowledge Retrieval Standard75 as a requirement for EHR certification in the United States.76 This standard enables the just-in-time delivery of clinical evidence into EHR systems and may provide a foundation for innovations that can help transform EHR systems into practice improvement and learning environments.
This systematic review has several limitations. First, the heterogeneity of the studies compromised general observations about questions and precluded a meta-analysis of factors associated with the frequency with which questions are raised, pursued, and answered. Grouping methodologically similar studies minimized this limitation. Second, the studies included in this review focused on questions that clinicians recognize. Less is known about unrecognized questions, such as the availability of new evidence that may alter the management of a particular patient. Finally, the most recent study identified in our review was published in 2011,57 and most studies were at least 5 years old. Therefore, it was not possible to examine the influence of recent trends, such as the growing EHR adoption in the United States motivated by the EHR Meaningful Use Program.77
Several gaps in the literature regarding clinicians’ questions call for further research. Studies identified in our review were largely focused on primary care physicians in outpatient settings. Furthermore, none of the studies directly compared the questions raised and pursued according to factors such as clinicians’ specialty, years of practice, cognitive style, and care setting. Further research is needed to investigate these gaps. In-depth knowledge about these effects can be used to personalize information retrieval solutions to the characteristics of the clinician.
Our review identified only 1 study that systematically assessed the nature of questions that clinicians were unable to answer.78 Studies have shown that information seeking has a positive effect on clinicians’ performance66 and patient outcomes.67,68 However, we found no studies that assessed an association between unanswered questions and inferior clinician performance or patient outcomes.
Further research is also needed to investigate clinical questions in subpopulations of special interest, such as complex and aging patients. In particular, the study of patient complexity has gained recent traction, but the literature still lacks a clear understanding of clinicians’ questions and information-seeking behavior in the care of these patients.8,79 Although several studies in this review were conducted in the post-Web age, it is still unclear whether we are facing a change in the status quo given the new generation of clinicians, who may have incorporated the use of information resources as a natural component of their clinical practice. Finally, research is needed to assess the effect of growing EHR77 and online resource adoption on the rates at which questions are raised, pursued, and answered.
This systematic review estimates that the per-patient frequency of questions raised by clinicians ranges from 0.4 to 0.8 and that roughly two-thirds of these questions are left unanswered. This picture has been fairly stable over time despite the broad availability of online evidence resources that can answer these questions. Unanswered questions are missed opportunities for timely learning and practice improvement. Potential solutions should help clinicians keep track of their questions and provide seamless, just-in-time access to high-quality evidence in the context of patient care decision making. Opportunities for improvement include the increasing adoption of EHR systems and maintenance of certification requirements.
Accepted for Publication: January 24, 2014.
Corresponding Author: Guilherme Del Fiol, MD, PhD, Department of Biomedical Informatics, University of Utah, 421 Wakara Way, Ste 140, Salt Lake City, UT 84108 (firstname.lastname@example.org).
Published Online: March 24, 2014. doi:10.1001/jamainternmed.2014.368.
Author Contributions: Drs Del Fiol and Workman had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Del Fiol.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Del Fiol.
Critical revision of the manuscript for important intellectual content: All authors.
Statistical analysis: All authors.
Obtained funding: Del Fiol.
Administrative, technical, or material support: Del Fiol, Workman.
Study supervision: Del Fiol.
Conflict of Interest Disclosures: None reported.
Funding/Support: This project was supported by the Agency for Healthcare Research and Quality (grant K01HS018352).
Role of the Sponsors: The funding source had no role in the design and conduct of the study; collection, management, analysis, or interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Disclaimer: The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality.
Previous Presentation: A preliminary version of this study was presented at the American Medical Informatics Association Annual Symposium; November 7, 2012; Chicago, Illinois.
Additional Contributions: James J. Cimino, MD (Laboratory for Informatics Development, Clinical Center, National Institutes of Health, and Lister Hill National Center for Biomedical Communication, National Library of Medicine, Bethesda, Maryland; and Columbia College of Physicians and Surgeons, New York, New York), Kensaku Kawamoto, MD, PhD (Department of Biomedical Informatics, University of Utah, and University of Utah Health Sciences Center, Salt Lake City), and Leslie A. Lenert, MD (Medical University of South Carolina, Charleston), provided insights on the design of this study, and Dr Cimino and Charlene R. Weir, RN, PhD (Department of Biomedical Informatics, University of Utah, and IDEAS Center of Innovation, Salt Lake City VA Medical Center), reviewed the manuscript. None of them received compensation for their contributions.