
Published on 25.07.19 in Vol 21, No 7 (2019): July

Preprints (earlier versions) of this paper are available; first published Jan 05, 2019.



    Impact of Clinicians' Use of Electronic Knowledge Resources on Clinical and Learning Outcomes: Systematic Review and Meta-Analysis

    1Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD, United States

    2Division of General Internal Medicine, Mayo Clinic College of Medicine and Science, Rochester, MN, United States

    3Department of Biomedical Informatics, University of Utah School of Medicine, Salt Lake City, UT, United States

    4Center for Translational Informatics and Knowledge Management, Mayo Clinic, Rochester, MN, United States

    Corresponding Author:

    Lauren A Maggio, PhD

    Department of Medicine

    Uniformed Services University of the Health Sciences

    4301 Jones Bridge Road

    Bethesda, MD, 20814-4799

    United States

    Phone: 1 301 295 4371



    Background: Clinicians use electronic knowledge resources, such as Micromedex, UpToDate, and Wikipedia, to deliver evidence-based care and engage in point-of-care learning. Despite this use in clinical practice, their impact on patient care and learning outcomes is incompletely understood. A comprehensive synthesis of available evidence regarding the effectiveness of electronic knowledge resources would guide clinicians, health care system administrators, medical educators, and informaticians in making evidence-based decisions about their purchase, implementation, and use.

    Objective: The aim of this review is to quantify the impact of electronic knowledge resources on clinical and learning outcomes.

    Methods: We searched MEDLINE, Embase, PsycINFO, and the Cochrane Library for articles published from 1991 to 2017. Two authors independently screened studies for inclusion and extracted outcomes related to knowledge, skills, attitudes, behaviors, patient effects, and cost. We used random-effects meta-analysis to pool standardized mean differences (SMDs) across studies.

    Results: Of 10,811 studies screened, we identified 25 eligible studies published between 2003 and 2016. A total of 5 studies were randomized trials, 22 involved physicians in practice or training, and 10 reported potential conflicts of interest. A total of 15 studies compared electronic knowledge resources with no intervention. Of these, 7 reported clinician behaviors, with a pooled SMD of 0.47 (95% CI 0.27 to 0.67; P<.001), and 8 reported objective patient effects with a pooled SMD of 0.19 (95% CI 0.07 to 0.32; P=.003). Heterogeneity was large (I2>50%) across studies. When compared with other resources—7 studies, not amenable to meta-analytic pooling—the use of electronic knowledge resources was associated with increased frequency of answering questions and perceived benefits on patient care, with variable impact on time to find an answer. A total of 2 studies compared different implementations of the same electronic knowledge resource.

    Conclusions: Use of electronic knowledge resources is associated with a positive impact on clinician behaviors and patient effects. When compared with other resources, the use of electronic knowledge resources was associated with increased success in answering clinical questions, with variable impact on speed. Comparisons of different implementation strategies of the same electronic knowledge resource suggest that there are benefits from allowing clinicians to choose to access the resource, versus automated display of resource information, and from integrating patient-specific information. A total of 4 studies compared different commercial electronic knowledge resources, with variable results. Resource implementation strategies can significantly influence outcomes, but few studies have examined such factors.

    J Med Internet Res 2019;21(7):e13315

Introduction




    Clinicians and trainees frequently identify clinical questions while caring for patients [1]. They have been trained, and often attempt, to answer these questions using a variety of resources, including increasing use of electronic resources [2-4]. Electronic knowledge resources have been defined as “electronic (computer-based) resources comprising distilled (synthesized) or curated information that allows clinicians to select content germane to a specific patient to facilitate medical decision making” [5]. Commonly used electronic knowledge resources include commercial products, such as UpToDate, Micromedex, and Epocrates [6,7]; locally developed products, such as McMaster Premium LiteratUre Service (PLUS) [8]; and crowdsourced resources, such as Wikipedia [9]. Electronic knowledge resources are related to, but distinct from, decision-support tools that provide pop-up alerts, reminders, and other push notifications or databases of unsynthesized information, such as MEDLINE.

    Electronic knowledge resources are commonly used in clinical practice and typically require significant resources, including the financial investment in procuring access and clinicians' investment of time in learning to use them [10]. However, their impact on patient care and learning outcomes is incompletely understood [4,11]. Previous reviews of health information resources have, in general, broadly focused on clinical decision-support tools [12,13]. One review characterized features of clinical information retrieval technology that promote its use [14] but did not examine the specific knowledge resources themselves. Another review of clinicians' information-seeking behaviors identified textbooks, colleagues, journal articles, professional websites, and medical libraries as information sources but did not report the outcomes associated with using these sources [15]. A review of clinical questions noted the use of knowledge resources to answer such questions but did not directly address knowledge resources [1]. Moreover, the age of these reviews (ie, the most recent having been published in 2014) limits their application to current practice. An up-to-date, comprehensive synthesis of evidence regarding the effectiveness of electronic knowledge resources could guide clinicians, health care system administrators, medical educators, and informaticians in making evidence-based decisions about their purchase, implementation, and use. Thus, we conducted a systematic review to answer the following question: What is the impact of electronic knowledge resources for clinicians on clinical and learning outcomes?

Methods


    This study is part of a large systematic review of knowledge resources and point-of-care learning that was planned, conducted, and reported in adherence to standards of quality for reporting meta-analyses [16].

    Search Strategy and Study Selection

    With support from an experienced reference librarian, on February 14, 2017, we simultaneously searched MEDLINE, Embase, PsycINFO, and the Cochrane Library Database using Ovid’s integrated search interface for comparative studies of electronic knowledge resources. We used the databases’ controlled vocabulary thesauri, Web searches, the research teams' files, and previous reviews [1,6,13,14,17] to create and refine the search strategy and supplemented the database search by examining the full bibliography of these reviews. Search terms included a combination of keywords and controlled vocabulary terms (eg, information-seeking behavior, point-of-care systems, drug information services, UpToDate, and Micromedex). Multimedia Appendix 1 describes the complete search strategy. We limited our search to studies published after January 1, 1991, the year in which the World Wide Web was first described. We made no exclusions based on language.

    Article Selection

    We included all original, comparative studies that evaluated clinicians' use of an electronic knowledge resource, using quantitative outcomes of knowledge, skills in a test setting, attitudes, behaviors with real patients, patient effects, and costs. We required that outcome measures relate to a clinical decision for a specific patient or clinical vignette; we excluded studies measuring only general experiences or overall perceived impact. Measurements in a test setting had to be objectively assessed, as opposed to clinician-reported, and performed without immediate support from the knowledge resource (ie, evaluating sustained impact on knowledge after a period of access, rather than concurrent decision support). Measurements in the care of real patients could be clinician-reported (eg, “found an answer”) or objectively assessed and could reflect concurrent support or sustained impact.

    We defined electronic knowledge resource as quoted in the Introduction, which was adapted from the definition proposed by Lobach [12]. We defined clinicians as practitioners or students in a health-related field with direct responsibility for patient-related decisions; this included, but was not limited to, physicians, nurse practitioners, physician assistants, certified nurse anesthetists, pharmacists, midwives, dentists, and psychologists. We excluded studies focused solely on nurses and allied health professionals. We included studies making a comparison with a separate intervention, including randomized, nonrandomized, and crossover designs, or with baseline performance (ie, single-group, pre-/postintervention studies).

    Reviewers (DAC, CAA, and LAM) worked independently and in duplicate to screen each identified study for inclusion, first reviewing the title and abstract and then reviewing the full text if needed; the kappa indicating interrater reliability was greater than or equal to .70. All disagreements were resolved by consensus.

    Data Abstraction

    Two reviewers (DAC and LAM) used a standardized abstraction form to independently extract data from all included studies, resolving all disagreements by consensus. We extracted information about the participants, topic, resources used, outcomes, and potential conflicts of interest. We appraised study quality using the Newcastle-Ottawa Scale as modified for education [18,19], which evaluates sample selection and comparability, blinding of assessment, and attrition. We converted all quantitative results, including odds ratios (ORs) [20], to standardized mean differences (SMDs).
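    For the conversion of odds ratios to SMDs, one standard approach (the specific formula is not stated in the text, so this is shown only as an illustrative assumption) is the logistic-distribution approximation, which maps the log odds ratio onto an SMD. A minimal Python sketch:

```python
import math

def or_to_smd(odds_ratio: float) -> float:
    """Approximate a standardized mean difference from an odds ratio
    via the logistic-distribution conversion: SMD = ln(OR) * sqrt(3) / pi.
    Illustrative assumption only; not necessarily the method the review used."""
    return math.log(odds_ratio) * math.sqrt(3) / math.pi

# An odds ratio of 2.0 corresponds to a small-to-moderate SMD (~0.38).
print(round(or_to_smd(2.0), 2))
```

    Under this approximation, a null odds ratio of 1.0 maps to an SMD of 0, so converted and directly computed SMDs can be pooled on a common scale.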

    Data Synthesis

    We conducted a meta-analysis to pool SMDs whenever three or more studies shared conceptually aligned, between-intervention contrasts [20]. In accordance with our study protocol, we used random-effects meta-analysis because we anticipated pooling across different resources, with likely different effects. We planned to weight studies by the number of users, but most studies reporting clinical outcomes reported only the number of patients or hospitals. Thus, we weighted analyses of knowledge and skills outcomes by the number of users, and we weighted analyses of clinician behaviors and patient effects by the number of patients, with exceptions as noted in the text. We conducted sensitivity analyses limited to randomized trials, recent publications (ie, after 2007), and studies of physicians in practice or postgraduate trainees. We planned to check for publication bias using funnel plots, but the small number of studies precluded meaningful analysis. We estimated heterogeneity using I2.
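    The random-effects pooling and heterogeneity estimation described above can be sketched as follows (a minimal DerSimonian-Laird implementation for illustration; the review's actual software and its weighting by users or patients differ as described in the text):

```python
def random_effects_pool(smds, variances):
    """Pool standardized mean differences under a DerSimonian-Laird
    random-effects model. Returns (pooled SMD, I2 as a percentage).
    Illustrative sketch only; not the review's actual code."""
    w = [1.0 / v for v in variances]                          # inverse-variance weights
    fixed = sum(wi * d for wi, d in zip(w, smds)) / sum(w)    # fixed-effect estimate
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, smds))  # Cochran's Q
    df = len(smds) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0           # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]              # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_re, smds)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # heterogeneity, percent
    return pooled, i2

# Three hypothetical studies with heterogeneous effects and differing precision.
pooled, i2 = random_effects_pool([0.2, 0.5, 0.8], [0.05, 0.04, 0.06])
```

    When study effects disagree more than chance predicts, tau2 grows and the weights flatten toward equality, which is why random-effects pooling was the prespecified choice for combining different resources with likely different effects.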

    For studies that did not permit meta-analysis, we synthesized results using narrative methods, taking into account key differences in study design, study quality, intervention, and context.

Results


    We identified 10,811 potentially relevant studies: 10,799 studies in our literature search and 12 from our examination of previous reviews. From these, we included 25 comparative studies evaluating the impact of electronic knowledge resources (see Figure 1) [21-45].

    Study Characteristics

    Table 1 summarizes study characteristics and Table 2 provides detailed information about each study. Out of 25 studies, 20 (80%) investigated electronic knowledge resources in the context of patient care, while 5 (20%) took place in laboratory or test settings. Nearly all studies (22/25, 88%) included physicians in practice or in training. Other studies included nurse practitioners or mixed user groups. Common topics included general medicine (15/25, 60%), surgery (5/25, 20%), and pediatrics (5/25, 20%). All studies were published between 2003 and 2016 and were in English.

    Figure 1. Trial flowchart.
    Table 1. Summary of key study characteristics and quality.
    Table 2. Detailed information about each study.

    The electronic knowledge resources most commonly evaluated were UpToDate (6/25, 24%) and InfoRetriever (5/25, 20%). Several studies evaluated more than one resource. The 25 studies reported 29 distinct contrasts. Out of 29, 15 contrasts (52%) compared electronic knowledge resources with no intervention; 7 (24%) compared electronic knowledge resources with resources not meeting our definition of electronic knowledge resources, such as MEDLINE or a paper resource, hereafter collectively labeled other resources; and 7 (24%) compared one electronic knowledge resource against another (eg, Micromedex vs SkolarMD or two implementations of the same resource, such as presentation as a desktop vs mobile version). Across the 29 contrasts, we extracted 48 discrete outcomes, reflecting knowledge and skills (24/48, 50%), behaviors in practice (10/48, 21%), patient effects (10/48, 21%), attitudes (3/48, 6%), and costs (1/48, 2%). Selected contrasts and outcomes are reported in Figures 2 and 3; Multimedia Appendix 2 lists all contrasts and outcomes.

    Figure 2. Comparative usage of electronic knowledge resources versus no intervention. Knowledge outcome analyses are weighted by user, while behavior and patient effects analyses are weighted by patients or hospitals. “a” denotes a locally developed resource; “b” is the number of hospitals, not patients; “c” indicates no comparison group (ie, one-group, pre-/postintervention study). Abx Guide: Johns Hopkins Antibiotic Guide; Ang Soft: angina software; CEM: clinical evidence module; eAAP: Emergency Asthma Action Plan; Epoc: Epocrates; GRAIDS: Genetic Risk Assessment on the Internet with Decision Support; InfoRet: InfoRetriever; MD: practicing physicians; MOC: Maintenance of Certification; MS: medical students; NP: nurse practitioners; ns: not specified; PG: residents; PIER: Physicians’ Information and Education Resource; Rep Sup: Report Support; SCAMP: Standardized Clinical Assessment and Management Plans; UTD: UpToDate.
    Figure 3. Impact of electronic knowledge resources in comparison with other resources (Panel A) and alternate electronic knowledge resources (Panel B). All analyses are weighted by patients except as noted. “a” refers to analysis weighted by users; “b” means the comparison group (ie, study data) is the same for these contrasts; “c” denotes the comparison type “Mixed,” indicating a comparison with both electronic and nonelectronic knowledge resources; “d” denotes the comparison type “Any other,” indicating users could select any resource except the ones being compared; “e” denotes a locally developed resource. 5-min: 5-Minute Clinical Consult; AccessMed: AccessMedicine; ARUSC: Antibiotic Utilization and Surveillance-Control; Clin Evid: clinical evidence; Epoc: Epocrates; InfoRet: InfoRetriever; K: Knowledge; MD: practicing physicians; MMX: Micromedex; MS: medical students; NOS: not otherwise specified; NP: nurse practitioners; ns: not specified; PG: residents; Q: question; rec: recommendation; spec: specific; Taras: Tarascon Pharmacopeia; Trip: Turning Research Into Practice; UTD: UpToDate; Wiki: Wikipedia.

    Study Quality

    When reported, the number of enrolled users ranged from 3 to 15,148; 7 studies out of 25 (28%) did not report the number of users, and 4 (16%) did not report user demographics. A total of 11 (44%) of the 25 studies included two or more groups, of which 5 (45%) were randomized. Assessors were blinded to the study intervention in 9 (36%) of the 25 studies. The mean Newcastle-Ottawa Scale quality score (maximum 6 points) was 2.3 (SD 1.6). In 15 out of 25 studies (60%), outcomes were determined objectively (eg, based on patient records, computer logs, or test scores), including all studies that reported patient outcomes. The other 10 studies (40%) reported only clinician-reported measures (eg, “I found an answer”). Only 9 of 25 studies (36%) enrolled users that were considered representative of the larger community of potential participants. A total of 11 studies (44%) had a separate comparison group; among these, 9 (82%) drew the comparison group from the same community and 5 (45%) were randomized. A total of 16 out of 25 studies (64%) reported high participant follow-up. A total of 10 studies (40%) reported potential financial conflicts of interest (eg, industry grant, discounted or free product pricing, involvement of resource creators, or employment by industry). A total of 6 studies (24%) did not report funding sources (see Table 3).

    Synthesis: Comparisons With No Intervention

    A total of 15 studies out of 25 (60%) compared one or more electronic knowledge resources with no intervention, including comparisons of usual practice without versus with access to the resource, reporting a total of 22 outcomes [21,25-28,30,32-35,37,39-42,44]. Of these 15 studies, 9 (60%) reported potential conflicts of interest.

    Out of these 15 studies, 4 (27%) reported knowledge or skill outcomes, evaluating InfoRetriever, UpToDate, American College of Physicians (ACP) Physicians’ Information and Education Resource (PIER), and three local resources, alone or in varying combinations. The pooled SMD was 0.41 (95% CI –0.13 to 0.95; P=.14; see Figure 2, Panel A). Inconsistency was high, with individual SMDs ranging from –0.35 to 1.34 and an I2 of 89%. None of these studies were randomized and only 1 out of the 4 (25%) was published since 2007. Limiting this analysis to the 3 studies out of 4 (75%) without a potential conflict of interest yielded an SMD of 0.35 (95% CI –0.29 to 0.99; P=.29). Limiting the analysis to the 3 studies (75%) enrolling physicians in practice or postgraduate trainees revealed an SMD of 0.10 (95% CI –0.34 to 0.54; P=.65). Out of the 4 studies, 2 (50%) explored attitudes about information seeking and evidence-based medicine, with results showing improved, neutral, and worsened attitudes, depending on the attitude statement, after use of knowledge resources [21,34].

    Table 3. Quality appraisal of included studies.

    Out of the 15 studies, 8 (53%) reported behavior outcomes, such as appropriate therapy recommendations and test orders, evaluating combinations of Epocrates, Isabel, UpToDate, and four local resources [27,30,32,33,39,40,44]. The pooled SMD was 0.47 (95% CI 0.27 to 0.67; P<.001; see Figure 2, Panel B). Inconsistency was again high, with individual SMDs ranging from 0.01 to 1.67 and an I2 of 97%. Out of the 8 studies, 2 (25%) were randomized [27,32]. Limiting analyses to the 4 studies (50%) published since 2007 revealed similar results, with an SMD of 0.41 (95% CI 0.10 to 0.71; P=.01). Alternately, limiting to the 3 studies (38%) without a potential conflict of interest yielded an SMD of 0.94 (95% CI 0.02 to 1.86; P=.05). Lastly, limiting analysis to the 7 studies (88%) that included physicians in practice or postgraduate trainees produced an SMD of 0.49 (95% CI 0.27 to 0.70; P<.001).

    Out of the 15 studies, 7 (47%) reported patient effects, including complications, length of stay, optimal management, and mortality, evaluating UpToDate and five locally developed resources [27,32,33,37,40,42,44]. Pooling nonmortality outcomes across these 7 studies revealed an SMD of 0.19 (95% CI 0.07 to 0.32; P=.003; see Figure 2, Panel C). Inconsistency was again high, with individual SMDs ranging from 0 to 0.72 and an I2 of 81%. Out of these 7 studies, 2 (29%) were randomized [26,31]. Limiting analyses to the 4 studies out of 7 (57%) published since 2007 revealed a similar SMD of 0.20 (95% CI 0.05 to 0.35; P=.01). Limiting to the 4 studies out of 7 (57%) without potential conflicts of interest yielded an SMD of 0.31 (95% CI 0.01 to 0.61; P=.04). Focusing on the 6 studies out of 7 (86%) that enrolled physicians in practice or postgraduate trainees produced an SMD of 0.22 (95% CI 0.09 to 0.36; P=.001). The 2 studies out of 7 (29%) reporting mortality outcomes, both funded by UpToDate, compared hospitals that did versus did not have access to UpToDate. Out of these 2 studies, 1 (50%) found a very small but statistically significant association between the use of UpToDate and lower mortality (absolute risk difference –0.1%; N=3322 hospitals) [40]; the other found no statistically significant association (risk-adjusted z-score 0.18; N=5515 hospitals) [37].

    Out of the 15 studies, 1 (7%) objectively evaluated cost reductions associated with implementation of a local resource; this study found a statistically significant 49% reduction in the cost of care (95% CI 0.46 to 0.53) compared with preimplementation [44].

    Synthesis: Comparisons With Other Resources

    A total of 7 studies out of 25 (28%) compared electronic knowledge resources with other information resources that were provided instead of the knowledge resource and that did not meet our definition of electronic knowledge resources (see Figure 3, Panel A) [21-24,31,36,38]. Variation in comparisons and outcomes precluded meaningful meta-analysis. Out of the 7 studies, 2 (29%) found mixed results for the use of electronic knowledge resources on personal digital assistants (PDAs) compared with paper resources. In 1 crossover study (50%), residents given a PDA with electronic knowledge resources (eg, Epocrates and Tarascon Pharmacopeia) demonstrated improvements in self-reported patient management (SMD 0.38, 30 users, 295 patients), compared with resource access limited to print materials [30]. The second study, conducted by the creators of InfoRetriever, found essentially no difference in attitudes about evidence-based medicine when comparing use of a PDA preloaded with InfoRetriever versus an evidence-based medicine pocket card (SMD –0.04, 113 users) [21].

    Out of the 7 studies, 2 (29%) suggested that clinicians found answers to more questions, and more rapidly, when using electronic knowledge resources than when using journal-based resources. Out of these 2 studies, 1 crossover study (50%) compared general practitioners’ use of Turning Research Into Practice (Trip) and clinical evidence with their use of journal articles from the BMJ and found that these electronic knowledge resources were associated with more frequently finding answers (Trip vs the BMJ: SMD 0.80, 5 users, 219 patients; clinical evidence vs the BMJ: SMD 0.21, 5 users, 255 patients) [36]. Another study (1/2, 50%) reported a statistically significant association between the use of UpToDate and answering more questions (SMD 0.57, 70 users, 1305 patients) and finding answers more quickly (SMD 2.07), in comparison with clinicians using PubMed [38].

    Out of the 7 studies, 3 (43%) examined electronic knowledge resources in comparison with a user’s choice of any other information resources (eg, Google and textbooks) and reported mixed findings. In a randomized crossover study (1/3, 33%) authored by the founder of DynaMed, physicians using DynaMed reported that they found answers more often (SMD 0.30, 46 users, 698 patients) and that answers more often changed patient care (SMD 0.42), although finding answers took slightly, but not statistically significantly, longer (SMD –0.07) [24]. Another study (1/3, 33%) compared the use of InfoRetriever, DynaMed, Trip, and clinical evidence against a user’s choice of any other resources; the study found that the use of these electronic knowledge resources was not significantly associated with clinician-reported success in answering questions (SMD –0.03, 3 users, 92 patients) or changes in care (SMD 0.21, 3 users, 65 patients) [22]. In a third study (1/3, 33%), which is not represented in Figure 3, Panel A, because of insufficient extractable data, pediatricians were randomized to use an online pediatrics library or a resource of their choice and found no statistically significant difference in questions answered or changes in care [23].

    Synthesis: Comparisons Between Electronic Knowledge Resources

    The high inconsistency noted in the meta-analyses above may suggest substantial differences between knowledge resources in their implementation (eg, training, policies, and technical support to encourage or facilitate use) and design. Studies comparing different electronic knowledge resources, designs, or implementation strategies can help identify best practices. We identified 7 such studies out of 25 (28%; see Figure 3, Panel B) [23,26,29,36,41,43,45].

    Out of these 7 studies, 2 (29%) reported associations between different resource implementation strategies of the same knowledge resource and changes in care. In 1 study (50%), clinicians who were allowed to optionally use a local electronic knowledge resource more often followed the resource’s suggestion on antibiotic use compared with when they were provided such information without their request (SMD 1.28, 18,360 patients) [43]. The other study compared two subsections of InfoRetriever: one that employed user-entered patient data to provide patient-specific information and recommendations and the other containing general information resources, such as The 5-Minute Clinical Consult, Cochrane Reviews, Information Patient-Oriented Evidence that Matters (Info-POEMs), and guideline summaries. This crossover study determined that the patient-specific resources were associated with a slight but statistically significant improvement in clinician-reported changes in care (SMD 0.11, 26 users, 2474 patients) [26].

    Out of the 7 studies, 4 (57%) focused on head-to-head comparisons of different electronic knowledge resources. Out of these 4 studies, 1 crossover study (25%) found no statistically significant difference on a knowledge test for 18 medical students who had used Wikipedia, AccessMedicine, or UpToDate (UpToDate vs Wikipedia SMD 0.06; AccessMedicine vs UpToDate SMD 0.22; AccessMedicine vs Wikipedia SMD 0.37) [45]. Another randomized study (1/4, 25%) found no statistically significant differences between Micromedex and SkolarMD in the frequency of answering questions (SMD 0.51, 89 users, 289 patients) or clinician-reported changes in patient care (SMD 0.09) [29]. A randomized crossover study (1/4, 25%) found that clinicians could answer questions more often when using Trip than when using clinical evidence (SMD 0.60, 5 users, 292 patients) [36]. Finally, 1 study (25%) found a statistically significant difference in maintenance of certification exam scores between physicians using two electronic knowledge resources over an extended period; however, due to deliberately blinded reporting, it is not possible to know which resource (ie, PIER or UpToDate) was superior [41]. The effects of these resources in comparison with no intervention were reported earlier in this review.

Discussion


    Principal Findings

    We identified 25 studies that investigated the impact of electronic knowledge resources on patient and clinician outcomes and found results that are mixed and at times contradictory. Nevertheless, we found statistically significant associations between the use of electronic knowledge resources and improved clinician behaviors and patient effects. When compared with other resources, use of electronic knowledge resources was associated with increased success in answering clinical questions, with variable impact on speed. Comparisons of different implementation strategies of the same electronic knowledge resource suggest benefits from allowing clinicians to choose to access the resource, versus automated display of resource information, and from integrating patient-specific information. A total of 4 studies compared different commercial electronic knowledge resources, with variable results.

    Comparison With Other Reviews and Meta-Analyses

    Clinicians frequently face clinical questions [1,46], which they are taught and expected to answer using some form of knowledge resources. Previous reviews have focused on interventions to promote knowledge resource adoption [14] or addressed knowledge resources as only one of many information technology tools [12,47]. This review expands upon prior work by focusing specifically on electronic knowledge resources and quantitatively estimating their impact on clinical outcomes and point-of-care learning. Our finding of limited evidence regarding different approaches to electronic knowledge resource implementation strategies parallels the paucity of evidence found in a previous review of health information technology [48].

Limitations and Strengths


    As with all systematic reviews, our findings are constrained by the quality and quantity of published evidence. For example, only 6 studies reported patient effects and 5 were randomized. Inconsistency was high in all analyses. Additionally, lack of conceptual alignment precluded meta-analysis for comparisons of electronic knowledge resources with other resources or with different implementation strategies. Several studies allowed users access to multiple resources simultaneously, making interpretation difficult. Vague and incomplete reporting limited our ability to extract key information on study design, outcomes, contextual details (eg, setting and disease acuity), and resource design and implementation (eg, how participants accessed the resource, password requirements, or optimization for use on a mobile device) for several studies. A total of 10 studies presented potential conflicts of interest, which could bias results. However, sensitivity analyses limited to recent studies and studies without conflicts of interest generally yielded similar results. The small number of studies precluded meaningful evaluation of publication bias. We did not attempt to distinguish resources based on the developer's intended purpose (eg, education, decision support, or information) but instead focused on the resource's function and application (ie, decision making for a specific patient). Several studies are over a decade old, which limits their relevance to current resource implementations. We conducted our literature search in 2017, and studies published since that date were thus omitted from our analyses. This review has several strengths, including a comprehensive search of multiple databases by information professionals, duplicate review at all stages of screening and data extraction, and broad inclusion criteria encompassing a range of health professionals and topics.

Conclusions


    In this meta-analysis, use of electronic knowledge resources appeared to improve patient care and their continued use in clinical practice appears to be warranted. More specifically, these resources provide answers to clinician-initiated questions “just in time,” thus preserving clinicians' autonomy and workflow. This functionality contrasts with that of interruptive clinical decision-support systems, such as reminders and alerts, that have been associated with workflow disruption, alert fatigue, inappropriate recommendations, and provider dissatisfaction [49-51]. Use of electronic knowledge resources is also associated with enhanced durable learning (ie, improved performance on knowledge tests conducted without concurrent resource use). Clinicians may benefit from increased and more strategic use of electronic knowledge resources at various stages in training. Knowledge resources may be particularly important for practicing clinicians as part of their lifelong point-of-care learning activities [52]. The optimal promotion of durable learning may require resource features, such as spaced repetition and quizzing [53,54], that differ from those required for concurrent decision support (ie, maximal efficiency). Electronic knowledge resources offer flexibility allowing such features to be built in yet activated only for relevant learners and contexts.

    The impact of electronic knowledge resources, while generally favorable, varied widely across studies. Such differences likely arise from specifics of the topic, clinical context, clinician specialty, and clinician stage of training, in addition to the knowledge resource itself. It seems unlikely that any one resource will optimally address all information needs; rather, health care organizations will likely need to make multiple electronic knowledge resources available and integrate them effectively into clinician workflows. Suboptimal integration results in suboptimal outcomes, as was seen in one study in this review [43]. Information tools, such as easily accessible online portals (eg, infobuttons [17]), might further help clinicians select resources appropriate for their specific questions and contexts.

    Our review highlights several areas for improvement in the quality of research methods and reporting. For example, several studies failed to report the number of participants or participant demographics. Many studies did not use comparison groups, reported limited participant follow-up, or enrolled participants who were not representative of the larger community of potential participants. Also, 10 studies presented potential conflicts of interest (eg, funding from a resource vendor). Finally, the majority of studies lacked details on the design and implementation of the resources under investigation, and information was rarely reported regarding the cost—monetary and nonmonetary—of implementation, use, and maintenance. When planning future studies, researchers should consider and seek to mitigate these and other limitations.

    Many uncertainties remain regarding optimal design, implementation strategies, and use of electronic knowledge resources. Unfortunately, studies comparing different knowledge resources or making comparisons with no intervention have largely failed to produce generalizable insights in this regard. Additional research is needed to clarify what works, in what context (ie, question type, topic, and clinical setting), and for what outcome. We believe that head-to-head studies of different resources, or different implementation strategies of a given resource, can provide such evidence; however, such studies must be guided by conceptual models and theories (eg, models and theories of information science and translational informatics). Noncomparative studies examining fundamental questions about information seeking, human factors, and user experience will also be useful. Outcomes of costs, both monetary and nonmonetary, will complement outcomes of effectiveness in supporting evidence-based decisions. Attention to these issues will permit more effective design, implementation strategies, and integration into the clinical workflow, which in turn will optimize electronic knowledge resources' benefits to patient care.


    Use of electronic knowledge resources is associated with a positive impact on clinician behaviors and patient effects. Further research into resource design and implementation strategies is needed.


    GDF was funded by a National Library of Medicine grant (grant number: 1R01LM011416). The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the US Department of Defense, or the US Government.

    Conflicts of Interest

    In 2016, LAM received travel funds to deliver a lecture on evidence-based medicine for employees of Ebsco, the parent company of DynaMed; Ebsco did not have any involvement in the conduct of this study. We are unaware of any other conflicts of interest.

    Multimedia Appendix 1

    Supplemental search strategies.

    DOCX File, 23KB

    Multimedia Appendix 2

    Detailed listing of all contrasts and outcomes by study.

    DOCX File, 22KB


    1. Del Fiol G, Workman TE, Gorman PN. Clinical questions raised by clinicians at the point of care: A systematic review. JAMA Intern Med 2014 May;174(5):710-718. [CrossRef] [Medline]
    2. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: A focus group study. PLoS One 2013;8(11):e80318 [FREE Full text] [CrossRef] [Medline]
    3. Cook DA, Sorensen KJ, Wilkinson JM, Berger RA. Barriers and decisions when answering clinical questions at the point of care: A grounded theory study. JAMA Intern Med 2013 Nov 25;173(21):1962-1969. [CrossRef] [Medline]
    4. Straus SE, Glasziou P, Richardson WS, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 5th edition. New York, NY: Elsevier; 2019.
    5. Aakre CA, Pencille LJ, Sorensen KJ, Shellum JL, Del Fiol G, Maggio LA, et al. Electronic knowledge resources and point-of-care learning: A scoping review. Acad Med 2018 Nov;93(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 57th Annual Research in Medical Education Sessions):S60-S67. [CrossRef] [Medline]
    6. Ketchum AM, Saleh AA, Jeong K. Type of evidence behind point-of-care clinical information products: A bibliometric analysis. J Med Internet Res 2011 Feb 18;13(1):e21 [FREE Full text] [CrossRef] [Medline]
    7. Maggio LA, Cate OT, Moorhead LL, van Stiphout F, Kramer BM, Ter Braak E, et al. Characterizing physicians' information needs at the point of care. Perspect Med Educ 2014 Nov;3(5):332-342 [FREE Full text] [CrossRef] [Medline]
    8. Haynes RB, Holland J, Cotoi C, McKinlay RJ, Wilczynski NL, Walters LA, et al. McMaster PLUS: A cluster randomized clinical trial of an intervention to accelerate clinical use of evidence-based information from digital libraries. J Am Med Inform Assoc 2006;13(6):593-600 [FREE Full text] [CrossRef] [Medline]
    9. Murray H. More than 2 billion pairs of eyeballs: Why aren't you sharing medical knowledge on Wikipedia? BMJ Evid Based Med 2019 Jun;24(3):90-91. [CrossRef] [Medline]
    10. Barreau D, Bouton C, Renard V, Fournier J. Health sciences libraries' subscriptions to journals: Expectations of general practice departments and collection-based analysis. J Med Libr Assoc 2018 Apr;106(2):235-243 [FREE Full text] [CrossRef] [Medline]
    11. Pluye P, Grad RM, Dunikowski LG, Stephenson R. Impact of clinical information-retrieval technology on physicians: A literature review of quantitative, qualitative and mixed methods studies. Int J Med Inform 2005 Sep;74(9):745-768. [CrossRef] [Medline]
    12. Lobach D, Sanders GD, Bright TJ, Wong A, Dhurjati R, Bristow E, et al. Enabling health care decision making through clinical decision support and knowledge management. Evid Rep Technol Assess (Full Rep) 2012 Apr(203):1-784. [Medline]
    13. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, et al. Effect of clinical decision-support systems: A systematic review. Ann Intern Med 2012 Jul 03;157(1):29-43. [CrossRef] [Medline]
    14. Gagnon M, Légaré F, Labrecque M, Frémont P, Pluye P, Gagnon J, et al. Interventions for promoting information and communication technologies adoption in healthcare professionals. Cochrane Database Syst Rev 2009 Jan 21(1):CD006093 [FREE Full text] [CrossRef] [Medline]
    15. Clarke MA, Belden JL, Koopman RJ, Steege LM, Moore JL, Canfield SM, et al. Information needs and information-seeking behaviour analysis of primary care physicians and nurses: A literature review. Health Info Libr J 2013 Sep;30(3):178-190 [FREE Full text] [CrossRef] [Medline]
    16. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
    17. Cook DA, Teixeira MT, Heale BS, Cimino JJ, Del Fiol G. Context-sensitive decision support (infobuttons) in electronic health records: A systematic review. J Am Med Inform Assoc 2017 Mar 01;24(2):460-468 [FREE Full text] [CrossRef] [Medline]
    18. Wells GA, Shea B, O'Connell D, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. The Ottawa Hospital Research Institute   URL: [accessed 2019-05-09] [WebCite Cache]
    19. Cook DA, Reed DA. Appraising the quality of medical education research methods: The Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med 2015 Aug;90(8):1067-1076. [CrossRef] [Medline]
    20. Brockwell SE, Gordon IR. A comparison of statistical methods for meta-analysis. Stat Med 2001 Mar 30;20(6):825-840. [CrossRef] [Medline]
    21. Leung GM, Johnston JM, Tin KY, Wong IO, Ho LM, Lam WW, et al. Randomised controlled trial of clinical decision support tools to improve learning of evidence based medicine in medical students. BMJ 2003 Nov 08;327(7423):1090 [FREE Full text] [CrossRef] [Medline]
    22. Schwartz K, Northrup J, Israel N, Crowell K, Lauder N, Neale AV. Use of online evidence-based resources at the point of care. Fam Med 2003 Apr;35(4):251-256. [Medline]
    23. D'Alessandro DM, Kreiter CD, Peterson MW. An evaluation of information-seeking behaviors of general pediatricians. Pediatrics 2004 Jan;113(1 Pt 1):64-69. [CrossRef] [Medline]
    24. Alper BS, White DS, Ge B. Physicians answer more clinical questions and change clinical decisions more often with synthesized evidence: A randomized trial in primary care. Ann Fam Med 2005;3(6):507-513 [FREE Full text] [CrossRef] [Medline]
    25. Grad RM, Meng Y, Bartlett G, Dawes M, Pluye P, Boillat M, et al. Effect of a PDA-assisted evidence-based medicine course on knowledge of common clinical problems. Fam Med 2005;37(10):734-740. [Medline]
    26. Grad RM, Pluye P, Meng Y, Segal B, Tamblyn R. Assessing the impact of clinical information-retrieval technology in a family practice residency. J Eval Clin Pract 2005 Dec;11(6):576-586. [CrossRef] [Medline]
    27. Greiver M, Drummond N, White D, Weshler J, Moineddin R, North Toronto Primary Care Research Network (Nortren). Angina on the Palm: Randomized controlled pilot trial of Palm PDA software for referrals for cardiac testing. Can Fam Physician 2005 Mar;51:382-383 [FREE Full text] [Medline]
    28. Bochicchio GV, Smit PA, Moore R, Bochicchio K, Auwaerter P, Johnson SB, POC-IT Group. Pilot study of a Web-based antibiotic decision management guide. J Am Coll Surg 2006 Mar;202(3):459-467. [CrossRef] [Medline]
    29. Maviglia SM, Yoon CS, Bates DW, Kuperman G. KnowledgeLink: Impact of context-sensitive information retrieval on clinicians' information needs. J Am Med Inform Assoc 2006;13(1):67-73 [FREE Full text] [CrossRef] [Medline]
    30. Ramnarayan P, Winrow A, Coren M, Nanduri V, Buchdahl R, Jacobs B, et al. Diagnostic omission errors in acute paediatric practice: Impact of a reminder system on decision-making. BMC Med Inform Decis Mak 2006 Nov 06;6:37 [FREE Full text] [CrossRef] [Medline]
    31. Rudkin SE, Langdorf MI, Macias D, Oman JA, Kazzi AA. Personal digital assistants change management more often than paper texts and foster patient confidence. Eur J Emerg Med 2006 Apr;13(2):92-96. [CrossRef] [Medline]
    32. Emery J, Morris H, Goodchild R, Fanshawe T, Prevost AT, Bobrow M, et al. The GRAIDS Trial: A cluster randomised controlled trial of computer decision support for the management of familial cancer risk in primary care. Br J Cancer 2007 Aug 20;97(4):486-493 [FREE Full text] [CrossRef] [Medline]
    33. King WJ, Le Saux N, Sampson M, Gaboury I, Norris M, Moher D. Effect of point of care information on inpatient management of bronchiolitis. BMC Pediatr 2007 Jan 24;7:4 [FREE Full text] [CrossRef] [Medline]
    34. Magrabi F, Westbrook JI, Coiera EW. What factors are associated with the integration of evidence retrieval technology into routine general practice settings? Int J Med Inform 2007 Oct;76(10):701-709. [CrossRef] [Medline]
    35. Skeate RC, Wahi MM, Jessurun J, Connelly DP. Personal digital assistant-enabled report content knowledgebase results in more complete pathology reports and enhances resident learning. Hum Pathol 2007 Dec;38(12):1727-1735. [CrossRef] [Medline]
    36. Van Duppen D, Aertgeerts B, Hannes K, Neirinckx J, Seuntjens L, Goossens F, et al. Online on-the-spot searching increases use of evidence during consultations in family practice. Patient Educ Couns 2007 Sep;68(1):61-65. [CrossRef] [Medline]
    37. Bonis PA, Pickens GT, Rind DM, Foster DA. Association of a clinical knowledge support system with improved patient safety, reduced complications and shorter length of stay among Medicare beneficiaries in acute care hospitals in the United States. Int J Med Inform 2008 Nov;77(11):745-753. [CrossRef] [Medline]
    38. Hoogendam A, Stalenhoef AF, Robbé PF, Overbeke AJ. Answers to questions posed during daily patient care are more likely to be answered by UpToDate than PubMed. J Med Internet Res 2008 Oct 03;10(4):e29 [FREE Full text] [CrossRef] [Medline]
    39. Lyman JA, Conaway M, Lowenhar S. Formulary access using a PDA-based drug reference tool: Does it affect prescribing behavior? AMIA Annu Symp Proc 2008 Nov 06:1034. [Medline]
    40. Isaac T, Zheng J, Jha A. Use of UpToDate and outcomes in US hospitals. J Hosp Med 2012 Feb;7(2):85-90. [CrossRef] [Medline]
    41. Reed DA, West CP, Holmboe ES, Halvorsen AJ, Lipner RS, Jacobs C, et al. Relationship of electronic medical knowledge resource use and practice characteristics with Internal Medicine Maintenance of Certification Examination scores. J Gen Intern Med 2012 Aug;27(8):917-923 [FREE Full text] [CrossRef] [Medline]
    42. Kuhn L, Reeves K, Taylor Y, Tapp H, McWilliams A, Gunter A, et al. Planning for action: The impact of an asthma action plan decision support tool integrated into an electronic health record (EHR) at a large health care system. J Am Board Fam Med 2015;28(3):382-393 [FREE Full text] [CrossRef] [Medline]
    43. Chow AL, Ang A, Chow CZ, Ng TM, Teng C, Ling LM, et al. Implementation hurdles of an interactive, integrated, point-of-care computerised decision support system for hospital antibiotic prescription. Int J Antimicrob Agents 2016 Feb;47(2):132-139. [CrossRef] [Medline]
    44. Luther G, Miller PE, Mahan ST, Waters PM, Bae DS. Decreasing resource utilization using standardized clinical assessment and management plans (SCAMPs). J Pediatr Orthop 2016 Sep 15;39(4):169-174. [CrossRef] [Medline]
    45. Saparova D, Nolan NS. Evaluating the appropriateness of electronic information resources for learning. J Med Libr Assoc 2016 Jan;104(1):24-32 [FREE Full text] [CrossRef] [Medline]
    46. Ely JW, Burch RJ, Vinson DC. The information needs of family physicians: Case-specific clinical questions. J Fam Pract 1992 Sep;35(3):265-269. [Medline]
    47. Bright MA. The National Cancer Institute's Cancer Information Service: A premiere cancer information and education resource for the nation. J Cancer Educ 2007;22(1 Suppl):S2-S7. [CrossRef] [Medline]
    48. Jones SS, Rudin RS, Perry T, Shekelle PG. Health information technology: An updated systematic review with a focus on meaningful use. Ann Intern Med 2014 Jan 07;160(1):48-54. [CrossRef] [Medline]
    49. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A. Clinical decision support systems could be modified to reduce 'alert fatigue' while still minimizing the risk of litigation. Health Aff (Millwood) 2011 Dec;30(12):2310-2317. [CrossRef] [Medline]
    50. Wright A, Ai A, Ash J, Wiesen JF, Hickman TT, Aaron S, et al. Clinical decision support alert malfunctions: Analysis and empirically derived taxonomy. J Am Med Inform Assoc 2018 May 01;25(5):496-506 [FREE Full text] [CrossRef] [Medline]
    51. Wright A, Hickman TT, McEvoy D, Aaron S, Ai A, Andersen JM, et al. Analysis of clinical decision support system malfunctions: A case series and survey. J Am Med Inform Assoc 2016 Dec;23(6):1068-1076 [FREE Full text] [CrossRef] [Medline]
    52. Cook DA, Blachman MJ, Price DW, West CP, Berger RA, Wittich CM. Professional development perceptions and practices among US physicians: A cross-specialty national survey. Acad Med 2017 Dec;92(9):1335-1345. [CrossRef] [Medline]
    53. Larsen DP, Butler AC, Roediger HL. Repeated testing improves long-term retention relative to repeated study: A randomised controlled trial. Med Educ 2009 Dec;43(12):1174-1181. [CrossRef] [Medline]
    54. Larsen DP, Butler AC, Roediger HL. Comparative effects of test-enhanced learning and self-explanation on long-term retention. Med Educ 2013 Jul;47(7):674-682. [CrossRef] [Medline]


    ACP: American College of Physicians
    ARUSC: Antibiotic Utilization and Surveillance-Control
    COI: conflict of interest
    eAAP: Emergency Asthma Action Plan
    GRAIDS: Genetic Risk Assessment on the Internet with Decision Support
    Info-POEMs: Information Patient-Oriented Evidence that Matters
    KR: comparison between knowledge resources
    MIMS: Monthly Index of Medical Specialties
    NI: knowledge resource compared with no intervention
    OR: odds ratio
    ORes: knowledge resource compared with another resource
    PDA: personal digital assistant
    PIER: Physicians’ Information and Education Resource
    PLUS: Premium LiteratUre Service
    RCT: randomized controlled trial
    SCAMP: Standardized Clinical Assessment and Management Plans
    SMD: standardized mean difference
    Trip: Turning Research Into Practice

    Edited by G Eysenbach; submitted 05.01.19; peer-reviewed by C White-Williams, H Hah, S Peters, O Ntsweng; comments to author 27.04.19; revised version received 12.05.19; accepted 18.06.19; published 25.07.19

    ©Lauren A Maggio, Christopher A Aakre, Guilherme Del Fiol, Jane Shellum, David A Cook. Originally published in the Journal of Medical Internet Research, 25.07.2019.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.