JMIR Publications

    Original Paper

    Clinician Search Behaviors May Be Influenced by Search Engine Design

    1Centre for Health Informatics, Australian Institute of Health Innovation, University of New South Wales, Sydney, Australia

    2School of Computer Science and Engineering, University of New South Wales, Sydney, Australia

    Corresponding Author:

    Annie YS Lau, PhD

    Centre for Health Informatics

    Australian Institute of Health Innovation

    University of New South Wales

    Sydney, NSW 2052

    Australia

    Phone: 61 2 9385 8891

    Fax: 61 2 9385 8692

    Email:


    ABSTRACT

    Background: Searching the Web for documents using information retrieval systems plays an important part in clinicians’ practice of evidence-based medicine. While much research focuses on the design of methods to retrieve documents, there has been little examination of the way different search engine capabilities influence clinician search behaviors.

    Objectives: Previous studies have shown that use of task-based search engines allows for faster searches with no loss of decision accuracy compared with resource-based engines. We hypothesized that changes in search behaviors may explain these differences.

    Methods: In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) were randomized to use either a resource-based or a task-based version of a clinical information retrieval system to answer questions about 8 clinical scenarios in a controlled setting in a university computer laboratory. Clinicians using the resource-based system could select 1 of 6 resources, such as PubMed; clinicians using the task-based system could select 1 of 6 clinical tasks, such as diagnosis. Clinicians in both systems could reformulate search queries. System logs unobtrusively capturing clinicians’ interactions with the systems were coded and analyzed for clinicians’ search actions and query reformulation strategies.

    Results: The most frequent search action of clinicians using the resource-based system was to explore a new resource with the same query; that is, these clinicians exhibited a “breadth-first” search behavior. Of 1398 search actions, clinicians using the resource-based system conducted 401 (28.7%, 95% confidence interval [CI] 26.37-31.11) in this way. In contrast, the majority of clinicians using the task-based system exhibited a “depth-first” search behavior in which they reformulated query keywords while keeping to the same task profiles. Of 585 search actions conducted by clinicians using the task-based system, 379 (64.8%, 95% CI 60.83-68.55) were conducted in this way.

    Conclusions: This study provides evidence that different search engine designs are associated with different user search behaviors.

    (J Med Internet Res 2010;12(2):e25)

    doi:10.2196/jmir.1396

    KEYWORDS



    Introduction

    Searching for information on the Web to support decision making is now an important part of clinician practice [1]. While much research focuses on the design of retrieval algorithms to identify potentially relevant documents, there has been little examination of the way that different search engine capabilities influence search behavior. Yet, to develop information retrieval systems that actively support decision making, it is necessary to understand the complex process of how people search for and review information when making decisions [2] and to design search user interfaces appropriate for these needs.

    Recent studies of clinical search strategies have concentrated on methods of optimizing queries sent to information retrieval systems that enhance the performance of the retrieval. Hoogendam and colleagues conducted a prospective observational study of how physicians at a hospital used PubMed to search for information during their daily clinical activities [3]. They found that the likelihood of physicians viewing article abstracts returned from PubMed increased as the number of terms contained in a search query increased. Haase and colleagues investigated the optimal performance for different search engines in retrieving clinical practice guidelines by combining different search query terms [4]. Our own prior analysis of information searching by clinicians used a Bayesian belief revision framework to retrospectively model how documents might influence decisions during and after a search session [5]; the analysis demonstrated that clinicians can experience cognitive biases while searching for online information to answer clinical questions [6].

    Few studies have looked at how clinicians reformulate queries and select sources to retrieve information during a search session to answer clinical questions. In previous studies, we have shown that a task-based search engine design allows for faster clinical decision making (ie, “decision velocity”) compared with purely resource-based engines at no cost in correctness of answers [7]. Similar results with respect to search times have been noted by others for the use of topic-specific “infobuttons” [8]. In the current study, we sought to understand the basis for these performance variations, by testing whether differences in search engine interface design are associated with any differences in user search behaviors.


    Methods

    Participants and Study Design

    In all, 75 clinicians (44 doctors and 31 clinical nurse consultants) practicing in the state of New South Wales, Australia, were recruited to use an online information retrieval system to answer questions on 8 clinical scenarios within 80 minutes in a controlled setting in a university computer laboratory (Table 1) [9]. Participants had an average of 17 years of clinical experience, with the majority having rated their computer skills as good to excellent and having reported use of an online information retrieval system once per week or more.

    Participants were randomly allocated to use either a resource-based or a task-based version of an online information retrieval system to answer the 8 questions. All participants were given a brief written orientation tutorial regarding their allocated system. Questions were presented in random order. Each participant was asked to use the allocated system to locate documentary evidence to help answer each question. Participants were asked to work through the questions as they would in a real clinical setting and not spend more than 10 minutes on any one question.

    Table 1. Clinical questions presented to participants [9]
    View this table

    Resource-based System Versus Task-based System

    The search systems used by participants were essentially identical in that both systems allowed users to first select a profile (ie, search filter) to delimit their search and then to enter keywords to specify the focus of their search. The resource-based system first required clinicians to select a profile by specifying one of six online resources. These included PubMed, MIMS (a pharmaceutical database), Therapeutic Guidelines (an Australian synthesized evidence source focusing on guidelines for therapy), the Merck Manual, Harrison’s Principles of Internal Medicine, and HealthInsite (a government-funded consumer-oriented health database). Of the six resources, five presented evidence in a predigested, summarized form with references available for follow-up.

    The task-based system first required the clinicians to select a profile by selecting one of six clinical tasks: diagnosis, drug information, etiology, patient education, treatment, and other (Figure 1). Four keyword categories were available for both systems: disease, drug, symptom, and other. Clinicians could enter keywords under one or more of these categories. Quick Clinical (University of New South Wales, Sydney, Australia), the task-based information retrieval system, utilized meta-search filters to simultaneously search across a set of disparate information sources [10]. This task-based system has been demonstrated to be effective and efficient in searching and delivering information in various technical, laboratory, and longitudinal evaluation studies [9-14].

    Figure 1. Screenshot showing Quick Clinical, the task-based query user interface
    View this figure
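    The meta-search approach described above, in which one query tagged with a task profile is dispatched to several disparate sources at once, can be sketched as follows. This is an illustrative sketch only, not the Quick Clinical implementation: each source is stood in for by a plain function (a real system would issue source-specific requests, eg, to PubMed), and results are merged by simple concatenation.

    ```python
    # Illustrative meta-search sketch (not the Quick Clinical implementation):
    # dispatch one query to several sources in parallel and merge the results.
    from concurrent.futures import ThreadPoolExecutor

    def meta_search(sources, query):
        # Query every source concurrently; pool.map preserves source order.
        with ThreadPoolExecutor(max_workers=len(sources)) as pool:
            result_lists = pool.map(lambda source: source(query), sources)
        # Merge the per-source result lists into a single result list.
        return [doc for results in result_lists for doc in results]

    # Toy sources standing in for PubMed, the Merck Manual, etc.
    pubmed = lambda q: [f"PubMed: {q} (1)"]
    merck = lambda q: [f"Merck: {q} (1)"]
    print(meta_search([pubmed, merck], "asthma treatment"))
    ```
    
    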

    Coding of Search Actions

    System logs unobtrusively capturing participants’ interactions with the systems were coded and analyzed for their search actions and query reformulation strategies. For each clinical scenario question, participants were able to reformulate queries and conduct a sequence of searches as they explored information to assist in answering the question. We first coded these query reformulations by the change in profile selection (task or resource) between consecutive searches in a session as “new profile,” “same profile,” or “previously used profile.” We next coded the keyword changes as indicating a syntactic and/or a semantic reformulation [14]. Examples of syntactic reformulations include changes to capitalization, word order, the conjunctions used between words, word spacing, or the word form (ie, variants of the base form of the word) used in the query (Table 2). Semantic reformulations include adding, removing, or replacing keywords.

    Table 2. Examples of syntactic query reformulation
    View this table
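    The syntactic/semantic distinction above can be illustrated with a minimal classifier applied to two consecutive queries. This sketch is hypothetical and far simpler than the study's actual coding criteria: it treats a change of capitalization, word order, or spacing over the same words as syntactic, any change to the word set as semantic, and does not handle reformulations that are both at once.

    ```python
    # Hypothetical illustration of the coding scheme (not the study's
    # actual coding procedure): classify the change between two
    # consecutive queries in a search session.
    def classify_reformulation(prev, curr):
        if prev == curr:
            return "no change"
        # Normalize surface form: lowercase, split on whitespace, sort.
        prev_words = sorted(prev.lower().split())
        curr_words = sorted(curr.lower().split())
        # Same words in a different surface form (capitalization,
        # order, spacing) -> syntactic reformulation.
        if prev_words == curr_words:
            return "syntactic"
        # Keywords added, removed, or replaced -> semantic reformulation.
        return "semantic"

    print(classify_reformulation("asthma treatment", "Treatment   asthma"))  # syntactic
    print(classify_reformulation("asthma treatment", "asthma prednisone"))   # semantic
    ```
    
    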

    Quantitative Analyses

    Chi-square analyses and the test for difference between proportions were conducted to detect statistically significant differences in profile and query search actions between clinicians using the resource-based and task-based systems.
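    The test for difference between proportions used here can be sketched as a pooled two-sample z-test. The counts below are hypothetical, not the study's data, and the function is a textbook formulation rather than the authors' analysis script.

    ```python
    # Pooled two-sample z-test for a difference between proportions
    # (textbook formulation; counts below are hypothetical).
    from math import sqrt, erfc

    def two_proportion_z(x1, n1, x2, n2):
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_two_sided = erfc(abs(z) / sqrt(2))    # 2 * P(Z > |z|), standard normal
        return z, p_two_sided

    # Hypothetical counts: 120/400 actions of one type in one group
    # vs 60/400 in the other.
    z, p = two_proportion_z(120, 400, 60, 400)
    print(f"Z = {z:.2f}, P = {p:.2g}")
    ```
    
    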


    Results

    Of the 75 clinicians, 39 were randomly allocated to the resource-based system and 36 to the task-based system. Two resource-based scenarios were not completed, giving a total of 310 (ie, 39×8−2) search sessions, 1708 searches, and 1455 document accesses using the resource-based system. The task-based system generated 288 (ie, 36×8) search sessions, 873 searches, and 1136 document accesses.

    Next Action in a Search Sequence

    Chi-square analyses of the data presented in Table 3, Table 4, and Table 5 showed statistically significant differences in the next action in a search sequence between the resource-based and task-based systems. These significant differences included (1) selecting the next profile in a search sequence (χ²₂ = 103.45, P < .001) (Table 3), (2) reformulating keywords (χ²₃ = 59.37, P < .001) (Table 4), and (3) both selecting the next profile and reformulating keywords (χ²₁₁ = 165.33, P < .001) (Table 5).

    Table 3. Comparison of next profile actions between resource-based and task-based systems
    View this table
    Table 4. Comparison of next query reformulation actions between resource-based and task-based system users
    View this table

    The test for difference between proportions revealed that clinicians using the resource-based system were 19.5% more likely to select a new profile and apply no changes to keywords (Z = 11.43, P < .001), and 5.9% more likely to select a previously visited profile and apply no changes to keywords (Z = 5.80, P < .001) (Table 5). Clinicians using the task-based system were 7.8% more likely to keep the same profile in a sequence of search actions (Z = –5.28, P < .001), 7.5% more likely to keep the same profile and apply both syntactic and semantic changes to the query (Z = –4.69, P < .001), and 6.5% more likely to keep the same profile and apply semantic changes to the query (Z = –3.37, P < .001) (Table 5). Further, clinicians using the task-based system seldom accessed a previously visited profile (9.4%, 95% CI 7.29-12.04) (Table 3).

    Table 5. Comparison of profile and query reformulation actions between resource-based and task-based systems
    View this table

    Search Actions During a Session

    We examined search behaviors at the beginning, middle, and end of a search sequence. At the beginning of a search sequence, query reformulation was the most frequent choice for both systems (Table 6). In the middle of a session, clinicians using the resource-based system were 26.6% more likely to change profile only (Z = 10.21, P < .001) (Table 6), and clinicians using the task-based system were 20.7% more likely to reformulate query only (Z = –6.06, P < .001) (Table 6). At the end of a sequence, clinicians using the resource-based system were 26.7% more likely to change profile only (Z = 6.50, P < .001) (Table 6), and clinicians using the task-based system were 14.9% more likely to reformulate query only (Z = –2.75, P = .006) (Table 6).

    Table 6. Search action between resource-based and task-based systems during a session
    View this table

    Consecutive Search Actions

    Table 7 displays comparisons between the two systems of the frequencies of consecutive pairs of actions anywhere within a sequence. For clinicians using the resource-based system, the pair “change profile only” followed by “change profile only” was 18.6% more likely (Z = 13.88, P < .001) (Table 7). Among clinicians using the task-based system, the pair “change query only” followed by “change query only” was used 17.8% more frequently than by clinicians using the resource-based system (Z = –6.95, P < .001) (Table 7).

    Table 7. Consecutive search actions in a session between resource-based and task-based systems
    View this table

    Discussion

    Clinicians using the resource-based system appeared to favor a “breadth-first” search strategy, exploring different resources with the same keywords in the query before searching in a specific resource with query reformulations. Clinicians using the task-based system were provided with results from multiple resources in each search and so appeared to favor a “depth-first” search strategy, searching in the same task profile exhaustively with different keyword reformulations in the query before moving to other profiles.

    We have previously shown that changes in search engine design and interface were associated with changes in clinical decision velocity, number of search actions undertaken, and ultimate decision outcome [7]. To understand the basis for such differences, we have now looked at the type of actions undertaken by users of two different systems and the sequences of these actions. While it was the intention of the experiment to detect changes in search behavior, our present analysis extends the analytic framework of the original experiments and may thus suffer from being a post hoc explanation of the observed differences. This limitation may readily be addressed by further experiments specifically designed to test for changes in search strategy.

    Further study is needed to understand how clinicians assess the results of a search and formulate the next step in their strategy. We have discussed elsewhere that the process of searching can be thought of as a conversation [15] where individuals ask questions of knowledgeable agents (eg, information retrieval systems or people) to help find answers to their questions. Thinking of the interaction with a search engine as a conversation between a human with a question and a search engine with capabilities to help find an answer may help us understand the human behaviors observed in this study.

    According to Grice’s conversational maxims [15] (which were originally formulated to describe the “rules” for effective human conversations), an answer to a question may be inappropriate for a number of reasons. The respondent may be poorly qualified to answer the question (eg, the respondent may be an inappropriate, out of date, or otherwise misleading information source); may misunderstand the question (eg, the query may not be well expressed in terms understandable by the resource); or may reply with unhelpful or irrelevant information (eg, because of poor relevance metrics of the search algorithm). We can speculate that the search actions taken by clinicians are in response to judgments they make about the progress of their “conversation” with the information retrieval system.

    One can hypothesize that, when clinicians are faced with a choice of several resources with no clear indication of which is best, they scan multiple resources to gauge the “competence” of each before committing to a detailed conversation with the resource they feel is best qualified to help. In contrast, clinicians using a task-based system receive answers from multiple resources simultaneously and so should be able to quickly form a view of the overall capabilities of the group of resources being searched. Not faced with concerns about the competence of the system they are interacting with, clinicians can focus on improving the dialogue with the system, by finding different ways to ask the same question or by changing the question focus if there has been a “misunderstanding.” This could explain why users of task-based systems conduct fewer searches and consult fewer documents [7]; that is, these users may not need to credential the resources they are interacting with in the same way that users of resource-based systems appear to do.

    Overall, given the clear differences in the styles of user-system dialogue demonstrated in this study, and the impact of such behavior on the clinical utility of information retrieval systems, discovering ways of optimizing the dialogue between knowledge sources and users seems a productive line of further enquiry.

    Acknowledgments

    The authors thank Professor Johanna Westbrook, Dr Sophie Gosling, and Dr Nerida Creswick for their contributions in collecting the data used in this analysis. This research was supported by the Australian Research Council Discovery Grant DP0452359. The search engine used in this study was developed with support from the Australian Research Council SPIRT grant C00107730 and the NHMRC development grant 300591. None of the funding sources had any role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.

    Conflicts of Interest

    Quick Clinical (QC) was developed by researchers at the Centre for Health Informatics at the University of New South Wales, and the university and some of the authors could benefit from commercial exploitation of QC or its technologies.

    References

    1. Hersh WR. Evidence-based medicine and the Internet. ACP J Club 1996;125(1):A14-A16. [Medline]
    2. Spink A, Cole C. New Directions in Cognitive Information Retrieval. 1st edition. Dordrecht, the Netherlands: Springer; 2005.
    3. Hoogendam A, Stalenhoef AF, Robbe PF, Overbeke AJ. Analysis of queries sent to PubMed at the point of care: observation of search behaviour in a medical teaching hospital. BMC Med Inform Decis Mak 2008;8(1):42. [Medline] [CrossRef]
    4. Haase A, Follmann M, Skipka G, Kirchner H. Developing search strategies for clinical practice guidelines in SUMSearch and Google Scholar and assessing their retrieval performance. BMC Med Res Methodol 2007;7(1):28 [FREE Full text] [Medline] [CrossRef]
    5. Lau A, Coiera EW. A Bayesian model that predicts the impact of Web searching on decision-making. J Am Soc Inf Sci Technol 2006;57(7):873-880. [CrossRef]
    6. Lau AYS, Coiera EW. Do people experience cognitive biases while searching for information? J Am Med Inform Assoc 2007;14(5):599-608 [FREE Full text] [Medline] [CrossRef]
    7. Coiera E, Westbrook JI, Rogers K. Clinical decision velocity is increased when meta-search filters enhance an evidence retrieval system. J Am Med Inform Assoc 2008;15(5):638-646 [FREE Full text] [Medline] [CrossRef]
    8. Del Fiol G, Haug PJ, Cimino JJ, Narus SP, Norlin C, Mitchell JA. Effectiveness of topic-specific infobuttons: a randomized controlled trial. J Am Med Inform Assoc 2008;15(6):752-759 [FREE Full text] [Medline] [CrossRef]
    9. Westbrook JI, Coiera EW, Gosling AS. Do online information retrieval systems help experienced clinicians answer clinical questions? J Am Med Inform Assoc 2005;12(3):315-321 [FREE Full text] [Medline] [CrossRef]
    10. Coiera E, Walther M, Nguyen K, Lovell NH. Architecture for knowledge-based and federated search of online clinical evidence. J Med Internet Res 2005;7(5):e52 [FREE Full text] [Medline] [CrossRef]
    11. Lau AYS, Coiera E. How do clinicians search for and access biomedical literature to answer clinical questions? Stud Health Technol Inform 2007;129(Pt 1):152-156. [Medline]
    12. Magrabi F, Coiera EW, Westbrook JI, Gosling AS, Vickland V. General practitioners' use of online evidence during consultations. Int J Med Inform 2005 Jan;74(1):1-12. [Medline] [CrossRef]
    13. Magrabi F, Westbrook JI, Kidd MR, Day RO, Coiera E. Long-term patterns of online evidence retrieval use in general practice: a 12-month study. J Med Internet Res 2008;10(1):e6 [FREE Full text] [Medline] [CrossRef]
    14. Coiera E. Guide to Health Informatics. 2nd edition. London, United Kingdom: Hodder Arnold; 2003:66-80.
    15. Grice H. Logic and conversation. In: Cole P, Morgan JL, editors. Syntax and Semantics. Volume 3. New York, NY: Academic Press; 1974:41-58.


    Abbreviations

    IVF: in vitro fertilization
    SIDS: sudden infant death syndrome


    Edited by G Eysenbach; submitted 02.11.09; peer-reviewed by A Piga; comments to author 09.02.10; revised version received 12.02.10; accepted 17.03.10; published 30.06.10

    ©Annie YS Lau, Enrico Coiera, Tatjana Zrimec, Paul Compton. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 30.06.2010  

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.