
Published on 29.09.04 in Vol 6, No 3 (2004)

This paper is in the following e-collection/theme issue:

    Editorial

    Improving the Quality of Web Surveys: The Checklist for Reporting Results of Internet E-Surveys (CHERRIES)

    Corresponding Author:

    Gunther Eysenbach, MD, MPH

    Editor-in-Chief, JMIR
    Associate Professor, Department of Health Policy, Management and Evaluation
    Senior Scientist, Centre for Global eHealth Innovation

    University of Toronto

    University Health Network

    190 Elizabeth Street

    Toronto ON M5G 2C4

    Canada

    Phone: +1 416 340 4800 ext 6427

    Fax: +1 416 340 3595

    Email:


    Related Article:

    This is a corrected version. See correction statement: http://www.jmir.org/2012/1/e8

    ABSTRACT

    Analogous to checklists of recommendations such as the CONSORT statement (for randomized trials), or the QUORUM statement (for systematic reviews), which are designed to ensure the quality of reports in the medical literature, a checklist of recommendations for authors is being presented by the Journal of Medical Internet Research (JMIR) in an effort to ensure complete descriptions of Web-based surveys. Papers on Web-based surveys reported according to the CHERRIES statement will give readers a better understanding of the sample (self-)selection and its possible differences from a “representative” sample. It is hoped that author adherence to the checklist will increase the usefulness of such reports.

    (J Med Internet Res 2004;6(3):e34)

    doi:10.2196/jmir.6.3.e34



    Introduction

    The Internet is increasingly used for online surveys and Web-based research. In this issue of the Journal of Medical Internet Research we publish two methodological studies exploring the characteristics of Web-based surveys compared to mail-based surveys [1,2]. In previous issues we have published Web-based research such as a survey among physicians conducted on a Web site [3].

    As explained in an accompanying editorial [4] as well as in a previous review [5], such surveys can be subject to considerable bias. In particular, bias can result from 1) the non-representative nature of the Internet population and 2) the self-selection of participants (volunteer effect). Online surveys often have a very low response rate (if the number of visitors is used as the denominator). Thus, considerable debate ensues about the validity of online surveys. The editor and peer reviewers of this journal are frequently faced with the question of whether to accept for publication studies reporting results from Web surveys (or email surveys). There is no easy answer to this question. Often it “just depends”: on the reasons for the survey in the first place, its execution, and the authors' conclusions. Conclusions drawn from a convenience sample are limited and need to be qualified in the discussion section of a paper. On the other hand, we will not, as many other journals do, routinely reject reports of Web surveys, even surveys with very small response rates (which are typical of electronic surveys), but will decide on a case-by-case basis whether the conclusions drawn from a Web survey are valid and useful for readers. Web surveys may be of some use in generating hypotheses, which then need to be confirmed in a more controlled environment; or they may be used to pilot test a questionnaire or to conduct a Web-based experiment. Statistical methods such as propensity scores may be used to adjust results [4]. Again, it all depends on why and how the survey was done.

    Every biased sample is an unbiased sample of another target population, and it is sometimes just a question of defining for which subset of a population the conclusions drawn are assumed to be valid. For example, the polling results on the CNN Web site are certainly highly biased and not representative of the US population. But it is legitimate to assume that they are “representative” for visitors to the CNN Web site who choose to participate in the online survey.

    This illustrates the critical importance of carefully describing how and in what context the survey was done, how the sample that chose to reply is constituted, and how it might differ from a representative population-based sample. For example, it is very important to describe the content and nature of the Web site where the survey was posted in order to get an idea of the people who filled in the questionnaire (ie, to characterize the population of respondents). A survey on an anti-vaccination Web site run by concerned parents will have a different visitor structure than, for example, a vaccination clinic site. It is also important to describe in sufficient detail exactly how the questionnaire was administered. For example, was it mandatory for every visitor who wanted to enter the Web site to fill it in, or were any incentives offered? A mandatory survey is likely to reduce volunteer bias.

    Analogous to checklists of recommendations such as the CONSORT statement (for randomized trials), or the QUORUM statement (for systematic reviews), which are designed to ensure the quality of reports in the medical literature, a checklist of recommendations for authors is being presented by JMIR in an effort to ensure complete descriptions of e-survey methodology. Papers reported according to the CHERRIES statement will give peer reviewers and readers a better understanding of the sample selection and its possible differences from a “representative” sample.


    The CHERRIES Checklist

    We define an e-survey as an electronic questionnaire administered on the Internet or an Intranet. Although many of the CHERRIES items are also valid for surveys administered via e-mail, the checklist focuses on Web-based surveys.

    While most items on the checklist are self-explanatory, a few comments about the “response rate” are in order. In traditional surveys investigators usually report a response rate (the number of people who completed the questionnaire divided by the number of people presented with it) to allow some estimation of the degree of representativeness and bias. Surveys with response rates lower than 70% or so (an arbitrary cut-off point!) are usually viewed with skepticism.

    In online surveys, there is no single response rate. Rather, there are multiple potential methods for calculating a response rate, depending on what are chosen as the numerator and denominator. As there is no standard methodology, we suggest avoiding the term “response rate” and have defined how, at least in this journal, metrics such as what we call the view rate, participation rate, and completion rate should be calculated.
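    The three metrics can be sketched as simple ratios of unique-visitor counts. The following Python sketch assumes the definitions given in Table 1 (view rate as unique survey visitors over unique site visitors, participation rate as visitors who agreed to participate over survey visitors, and completion rate as completed over agreed); the counts used are hypothetical illustration values, not from any real survey.

```python
def rates(site_visitors, survey_visitors, agreed, completed):
    """Compute view, participation, and completion rates from unique-visitor counts."""
    view_rate = survey_visitors / site_visitors        # who saw the survey at all
    participation_rate = agreed / survey_visitors      # who agreed to participate
    completion_rate = completed / agreed               # who finished once started
    return view_rate, participation_rate, completion_rate

# Hypothetical counts for illustration only
v, p, c = rates(site_visitors=10_000, survey_visitors=500, agreed=200, completed=150)
print(f"view {v:.1%}, participation {p:.1%}, completion {c:.1%}")
# → view 5.0%, participation 40.0%, completion 75.0%
```

    Reporting all three separately, rather than a single “response rate”, makes explicit at which stage potential respondents dropped out.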

    A common concern with online surveys is that a single user may fill in the same questionnaire multiple times. Some users like to go back to the survey and experiment with the results of their modified entries. Multiple methods are available to prevent this or at least to minimize the chance of it happening (eg, cookies or log-file/IP address analysis).
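    A minimal sketch of the log-file/IP analysis mentioned above: flag submissions that share an IP address as candidate duplicates. The data structure (a list of dicts with an "ip" key) is a hypothetical illustration; in practice this check is combined with cookies and timestamps, since distinct users behind a shared IP (eg, NAT or a proxy) would also be flagged.

```python
from collections import Counter

def flag_duplicate_ips(submissions):
    """Return submissions whose IP address occurs more than once.

    A rough proxy for repeat entries: shared IPs can also be
    legitimate distinct users, so flagged entries need review
    rather than automatic deletion.
    """
    counts = Counter(s["ip"] for s in submissions)
    return [s for s in submissions if counts[s["ip"]] > 1]
```

    Cookie-based checks work the other way around: the server sets a unique token on first participation and rejects later submissions carrying the same token.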

    Investigators should also state whether the completion or internal consistency of certain (or all) items was enforced using JavaScript (ie, displaying an alert before the questionnaire can be submitted) or server-side techniques (ie, after submission, re-displaying the questionnaire and highlighting mandatory but unanswered items or items answered inconsistently).
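    The server-side variant can be sketched as a post-submission check that returns the items to highlight. This is a hypothetical illustration, not the approach of any particular survey package: the answer dictionary, mandatory-item list, and consistency-check structure are all assumed names for the sketch.

```python
def validate(answers, mandatory, consistency_checks=()):
    """Server-side check run after submission.

    answers: maps item id -> response ('' or None counts as unanswered).
    mandatory: item ids that must be answered.
    consistency_checks: (item_ids, predicate) pairs; the predicate
        receives the answers for item_ids and returns True if consistent.
    Returns the item ids to highlight when re-displaying the questionnaire.
    """
    problems = [item for item in mandatory if not answers.get(item)]
    for ids, consistent in consistency_checks:
        if not consistent(*(answers.get(i) for i in ids)):
            problems.extend(ids)
    return problems
```

    A JavaScript check before submission improves usability, but a server-side check like this remains necessary because client-side scripting can be disabled or bypassed.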

    The hope is that the CHERRIES checklist provides a useful starting point for investigators reporting results of Web surveys. The editor and peer reviewers of this journal ask authors to ensure that they report the methodology fully and according to the CHERRIES checklist before submitting manuscripts.

    Table 1. Checklist for Reporting Results of Internet E-Surveys (CHERRIES)

    References

    1. Ritter P, Lorig K, Laurent D, Matthews K. Internet versus mailed questionnaires: a randomized comparison. J Med Internet Res 2004 Sep 15;6(3):e29 [FREE Full text] [Medline] [CrossRef]
    2. Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, et al. Internet versus mailed questionnaires: a randomized comparison (2). J Med Internet Res 2004 Sep 24;6(3):e30 [FREE Full text] [Medline] [CrossRef]
    3. Potts HWW, Wyatt JC. Survey of doctors' experience of patients using the Internet. J Med Internet Res 2002 Mar 31;4(1):e5 [FREE Full text] [Medline] [CrossRef]
    4. Schonlau M. Will web surveys ever become part of mainstream research? J Med Internet Res 2004 Sep 23;6(3):e31 [FREE Full text] [Medline] [CrossRef]
    5. Eysenbach G, Wyatt J. Using the Internet for surveys and health research. J Med Internet Res 2002 Nov 22;4(2):e13 [FREE Full text] [Medline] [CrossRef]

    Edited by G. Eysenbach; submitted 28.09.04; peer-reviewed by M Schonlau; comments to author 28.09.04; revised version received 29.09.04; accepted 29.09.04; published 29.09.04

    © Gunther Eysenbach. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 29.9.2004. Except where otherwise noted, articles published in the Journal of Medical Internet Research are distributed under the terms of the Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited, including full bibliographic details and the URL (see "please cite as" above), and this statement is included.