
    Original Paper

    A Comparison of a Postal Survey and Mixed-Mode Survey Using a Questionnaire on Patients’ Experiences With Breast Care

    1Tranzo, Academic Research Centre for Health and Social Care, Tilburg University, Tilburg, Netherlands

    2Netherlands Institute for Health Services Research (NIVEL), Utrecht, Netherlands

    Corresponding Author:

    Marloes Zuidgeest, MSc

    Tranzo, Academic Research Centre for Health and Social Care

    Tilburg University

    PO Box 90153

    Tilburg, 5000 LE

    Netherlands

    Phone: 31 134663627

    Fax: 31 134663637

    Email:


    ABSTRACT

    Background: The Internet is increasingly considered to be an efficient medium for assessing the quality of health care seen from the patients’ perspective. Potential benefits of Internet surveys such as time efficiency, reduced effort, and lower costs should be balanced against potential weaknesses such as low response rates and accessibility for only a subset of potential participants. Combining an Internet questionnaire with a traditional paper follow-up questionnaire (mixed-mode survey) can possibly compensate for these weaknesses and provide an alternative to a postal survey.

    Objective: To examine whether there are differences between a mixed-mode survey and a postal survey in terms of respondent characteristics, response rate and time, quality of data, costs, and global ratings of health care or health care providers (general practitioner, hospital care in the diagnostic phase, surgeon, nurses, radiotherapy, chemotherapy, and hospital care in general).

    Methods: Differences between the two surveys were examined in a sample of breast care patients using the Consumer Quality Index Breast Care questionnaire. We selected 800 breast care patients from the reimbursement files of Dutch health insurance companies. We asked 400 patients to fill out the questionnaire online followed by a paper reminder (mixed-mode survey) and 400 patients, matched by age and gender, received the questionnaire by mail only (postal survey). Both groups received three reminders.

    Results: The respondents to the two surveys did not differ in age, gender, level of education, or self-reported physical and psychological health (all Ps > .05). In the postal survey, the questionnaires were returned 20 days earlier than in the mixed-mode survey (median 12 and 32 days, respectively; P < .001), whereas the response rate did not differ significantly (256/400, 64.0% versus 242/400, 60.5%, respectively; P = .30). The costs were lower for the mixed-mode survey (€2 per questionnaire). Moreover, there were fewer missing items (3.4% versus 4.4%, P = .002) and fewer invalid answers (3.2% versus 6.2%, P < .001) in the mixed-mode survey than in the postal survey. The answers of the two respondent groups on the global ratings did not differ. Within the mixed-mode survey, 52.9% (128/242) of the respondents filled out the questionnaire online. Respondents who filled out the questionnaire online were significantly younger (P < .001), were more often highly educated (P = .002), and reported better psychological health (P = .02) than respondents who filled out the paper questionnaire. Respondents to the paper questionnaire rated the nurses significantly more positively than respondents to the online questionnaire (score 9.2 versus 8.4, respectively; χ²₁ = 5.6).

    Conclusions: Mixed-mode surveys are an alternative method to postal surveys that yield comparable response rates and groups of respondents, at lower costs. Moreover, quality of health care was not rated differently by respondents to the mixed-mode or postal survey. Researchers should consider using mixed-mode surveys instead of postal surveys, especially when investigating younger or more highly educated populations.

    (J Med Internet Res 2011;13(3):e68)

    doi:10.2196/jmir.1241


    Introduction

    In the Netherlands, health care policy stresses regulated competition between health care providers [1]. Efforts are made to enhance the transparency of health care quality, to stimulate informed decision making among consumers, and to improve the performance of health care providers. Comparative information about the performance of health care providers is needed for consumers to make informed decisions. This comparative information can be gathered in different ways. One possibility is to ask a sample of patients about their actual experiences concerning quality of care provided by health care providers.

    Measuring the quality of care from the patients’ perspective has been standardized in the Netherlands since 2006, using a new instrument called the Consumer Quality Index (CQ-index or CQI) [2]. CQI questionnaires are usually self-administered paper questionnaires (eg, CQI Rheumatoid Arthritis [3], CQI Breast Care [4]). Individual structured interviews are conducted in cases where a self-administered paper questionnaire is not feasible because of respondents’ visual, physical, or cognitive limitations (CQI Care for the Disabled [5], CQI Long-Term Care [6]). Postal surveys (with multiple reminders) and interviews are relatively expensive and time consuming. It would therefore be interesting to know whether other data collection methods can be applied in this field.

    The Internet is increasingly considered to be an efficient medium for assessing quality of care from a patient’s perspective. In populations that already use the Internet, Internet surveys have been found to be a useful means of conducting research [7-9]. Efficiency gains are found in shorter response times and in field cost reductions of 50% to 80% [10-12]. In contrast to paper questionnaires, Internet questionnaires can contain interactive features that allow complex skip patterns to remain invisible to respondents, and the Internet allows validation of responses through instant feedback while respondents are still online [12,13]. Consequently, the quality of data collected with an Internet survey is higher. Some Internet surveys have shown promising response rates (up to 94% in Web forums) [10,12,14]. These extreme response rates in Web forums can be explained by a probable selection bias: those who participate in Web forums are most likely people who are familiar with and frequently use the Internet, which makes them more likely to respond to Internet questionnaires. Such high response rates have not been realized in other studies, where response rates ranged from 17% to 70% [15]. In CQI research, the response rate to paper questionnaires has varied from 20% to 79%, with an average of 55% [16]. One CQI study compared an Internet questionnaire with a paper questionnaire; the response rate to the Internet questionnaire (8%) was considerably lower than that to the paper questionnaire (35%) [17,18]. To increase the response rate, one can send a prenotification or reminders, give an incentive, or use short questionnaires. A salient questionnaire subject also increases the response rate [19].

    The potential of Internet surveys should, however, be balanced against an equally large weakness. The Netherlands has the largest percentage of households with Internet access in the European Union, but there are still 1.2 million Dutch people (7.3% of the population) with no Internet access at home and 0.5 million Dutch people (3.1% of the population) who do not use the Internet [20]. People who use the Internet are more affluent, better educated, more often male, and younger than people who do not use the Internet. Only part of the population can thus be reached through the Internet [10,11,21]. To compensate for this selection of people in an Internet survey, a combination of data collection methods can be used, such as combining an Internet questionnaire with a more traditional postal follow-up [19].

    It is known that the way questionnaires are administered affects respondents’ answers (so-called mode effects). For example, telephone respondents were found to be more likely than postal respondents to rate health care positively and their own health status negatively [22,23]. This finding is in line with a study in which telephone respondents provided more positive ratings than Web respondents [24]. Another example is that students who completed a Web-based questionnaire responded more favorably on several scales (such as college challenge and learning, education, and personal and social gains) than students who filled out a paper questionnaire [25]. It has been suggested that computer anxiety affects participants’ responses. Moreover, biases could occur in the way people perceive and process questions presented on screen versus on paper. A study that tested the difference in test–retest reliability and internal consistency between Internet and paper versions of the SF-36, however, found little or no evidence for mode effects [26]. Knowing that such mode effects exist, it is important to investigate whether the answers of respondents to a postal and a mixed-mode survey differ.

    To examine whether a mixed-mode survey can be an alternative to a postal survey, our research question was “What are the differences between a mixed-mode survey (Internet with paper follow-up) and a more traditional postal survey in terms of respondent characteristics, response rates and time, quality of data, costs, and mode effects?” The differences were examined within a sample of breast care patients who reported their experiences with health care using the CQI Breast Care questionnaire.


    Methods

    Sample

    Data were collected within a larger study assessing the usability of CQI Breast Care [27]. For the mixed-mode survey, 200 patients with a benign abnormality and 200 patients with breast cancer were selected from the reimbursement files of seven Dutch health insurance companies. Inclusion criteria were (1) being older than 18 years and (2) having received breast care in the last 24 months. We used the same procedure to select 3955 patients who received the questionnaire by mail only as part of another study. Of these 3955 patients, we selected 400 patients (200 with breast cancer and 200 with benign abnormalities) for the comparison of the two surveys. These 400 patients were not randomly selected, but were matched by age and gender to the respondents in the mixed-mode survey.

    Data Collection

    Patients received a letter from their health insurance company with the request to fill out a paper questionnaire (postal survey) or an Internet questionnaire with unique username and password (mixed-mode survey). A total of three reminders were sent and in both surveys nonrespondents received a paper version of the questionnaire in the third mail-shot. This data protocol was based on Dillman et al [28]. (See Figure 1 for detailed information on the mail-shots.) The data were collected in the Netherlands in the spring of 2008.

    Figure 1. Mail-shots sent to the patients.

    Questionnaire

    The CQI Breast Care contains items measuring the actual experiences of patients with breast examinations, surgery for breast cancer, other treatment, subsequent treatment, cooperation between health care providers, continuity of care, accessibility of care, and expertise of health care providers [4]. There are two versions of the CQ-index: one for patients with breast cancer (151 items) and one for patients with a benign abnormality (60 items). The questionnaire for patients with a benign abnormality is the same as the questionnaire for breast cancer, except that it does not contain questions about surgery and treatments. Both questionnaires have three scales in common, and the questionnaire for patients with breast cancer contains 11 extra scales. Cronbach alpha for these scales varied between 0.74 and 0.93. Example items are presented in Table 1. The questionnaires additionally contain items on respondents’ characteristics (eg, age, education, ethnicity, and patients’ self-assessed physical and psychological health) and global ratings of health care providers (general practitioner, hospital care in the diagnostic phase, surgeon, nurses, radiotherapy, chemotherapy, and hospital care in general). In the present study, we focused on the global ratings of the health care providers. These ratings ranged from 0 to 10, with a score of 0 indicating the worst possible health care or provider and a score of 10 indicating the best possible health care or provider. The respondents were asked to report their experiences in the last 24 months.

    Table 1. Scales in the Consumer Quality Index Breast Care, their reliability (Cronbach alpha for internal consistency), and example items

    Statistical Analyses

    Respondent Characteristics

    To check whether our matching procedure was successful, we compared the selected patients within the two surveys on age and gender. Respondents were compared concerning age, level of education, self-reported physical and psychological health (Mann-Whitney test), and gender (χ2 test).

    Response Rate and Time

    Response rates were calculated as the number of valid questionnaires received divided by the number of patients in the starting sample. The response time was calculated as the number of days between the first letter (January 31, 2008) and the return date of the valid questionnaire. For the mixed-mode survey, the number of days between sending the paper questionnaire (February 28, 2008) and receiving the valid paper questionnaire was also calculated. The closing date of the data collection was April 1, 2008. A chi-square test was used to examine the difference in response rates between the two surveys because response status is a dichotomous variable (respondent/nonrespondent). The differences in response time were determined using a Mann-Whitney test because response time is a continuous variable.
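As an illustration, the Pearson chi-square for this 2 × 2 comparison can be recomputed directly from the counts reported in the Results (256/400 postal versus 242/400 mixed-mode respondents). The sketch below is not the software the authors used; it relies only on the Python standard library, using the erfc identity for the P value at 1 degree of freedom.

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (chi2, p) with 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))  # survival function of chi-square, 1 df
    return chi2, p

# Respondents/nonrespondents: 256/144 (postal) versus 242/158 (mixed-mode)
chi2, p = chi2_2x2(256, 144, 242, 158)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")  # P rounds to .31, as reported
```

With Yates’ continuity correction the P value would be slightly larger; either way the difference is not significant at the .05 level.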

    Quality of Data

    The percentage of items that were skipped while they needed to be answered (missing items) and the percentage of the items that were answered while they needed to be skipped (invalid answers) were calculated. The percentages were compared between the two surveys using a Mann-Whitney test because these percentages are continuous variables.

    Total Costs

    Expenses considered in cost calculations included setup costs (document layout, programming and testing of the questionnaire for each survey, and mailing supplies), field costs (postage, technical support, and project management staff), and scanning data costs (data entry of paper questionnaires). The costs per valid questionnaire received were calculated by dividing the total costs by the number of valid questionnaires received.
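The calculation described above is a simple ratio of total costs to valid returns. The sketch below illustrates its structure only; the euro amounts are hypothetical, as the study's raw cost totals are not reported in the text.

```python
def cost_per_valid_questionnaire(setup_costs, field_costs, scanning_costs,
                                 n_valid_returned):
    """Total costs (setup + field + scanning/data entry) divided by the
    number of valid questionnaires received."""
    total = setup_costs + field_costs + scanning_costs
    return total / n_valid_returned

# Hypothetical cost figures for a survey with 256 valid returns
print(round(cost_per_valid_questionnaire(2000.0, 3800.0, 800.0, 256), 2))
```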

    Mode Effects

    We performed multilevel regression analyses to examine the mode effects. Multilevel regression analyses take into account the hierarchical structure of our data: individual patients (level 1) are nested within hospitals (level 2). The analyses were conducted using the MLwiN software package, version 2.02 (Centre for Multilevel Modelling, University of Bristol, Bristol, UK). Mode effects were examined by comparing the estimated mean scores on the seven global ratings (general practitioner, hospital care in the diagnostic phase, surgeon, nurses, radiotherapy, chemotherapy, and hospital care in general) using a chi-square test (P < .05 if χ² > 3.8 and P < .01 if χ² > 6.6). The mean scores were adjusted for the influence of age, education level, and self-reported health status of respondents.
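The cutoffs in this criterion are the familiar critical values of the chi-square distribution with 1 degree of freedom (3.84 for P = .05, 6.63 for P = .01). A quick standard-library check (a sketch, not the authors' software) confirms them via the identity between a 1-df chi-square variable and a squared standard normal:

```python
import math

def chi2_sf_1df(x):
    """Survival function P(X > x) for a chi-square variable with 1 df,
    using X = Z^2 with Z standard normal."""
    return math.erfc(math.sqrt(x / 2))

print(round(chi2_sf_1df(3.84), 3))  # critical value for P = .05
print(round(chi2_sf_1df(6.63), 3))  # critical value for P = .01
```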

    In addition, within the mixed-mode survey, we examined differences in respondent characteristics, response rates and times, and mode effects between respondents who filled out the Internet questionnaire and those who filled out the paper questionnaire.


    Results

    Respondent Characteristics

    Characteristics of the sample are presented in Table 2. Our matching procedure was successful since age and gender of the selected patients did not differ between the postal and mixed-mode surveys. Patients with benign abnormalities were younger than patients with breast cancer (P < .001).

    Table 2. Sample characteristics

    Table 3 shows that the characteristics of the respondents also did not differ between the postal and mixed-mode surveys.

    Table 3. Respondents’ age, gender, level of education, and self-reported physical and psychological health

    Within the mixed-mode survey, differences were found between those who filled out the Internet questionnaire and those who filled out the paper questionnaire. Internet respondents were younger, were more often highly educated, and reported better psychological health compared with respondents who filled out the paper questionnaire (Table 4). Also, both paper and Internet respondents with benign abnormalities were younger than their counterparts with breast cancer (Ps < .001; not in table).

    Table 4. Respondents’ characteristics within the mixed-mode survey: age, gender, level of education, and self-reported physical and psychological health

    Response Rates and Times

    The response rate did not differ between the two surveys and was 64.0% (256/400 patients) for the postal survey and 60.5% (242/400 patients) for the mixed-mode survey (P = .31; Table 5). While the response rates of patients with breast cancer and of patients with benign abnormalities did not differ in the postal survey (134/200, 67.0% versus 122/200, 61.0%, respectively; P = .21), the response rate was significantly higher for patients with breast cancer than for patients with benign abnormalities in the mixed-mode survey (132/200, 66.0% versus 110/200, 55.0%, respectively; P = .02).

    In the mixed-mode survey, 52.9% (128/242) of the respondents filled out the questionnaire online. The percentage of patients with benign abnormalities who filled out the questionnaire online was higher (64/110, 58%) than the percentage of patients with breast cancer (64/134, 49%). However, this difference was not significant (P = .13).

    Table 5. Response rates for each survey and for patients with breast cancer or benign abnormalities

    Figure 2 and Figure 3 show the cumulative percentage of questionnaires received by days after the first mail-shot. The vertical lines in the graphs represent the reminders that were sent. In the postal survey, questionnaires were returned 20 days earlier than in the mixed-mode survey (z = –3.59, P < .001). The median number of days that elapsed before the questionnaire was returned was 12 days (range 4–60 days) in the postal survey and 32 days (range 2–61 days) in the mixed-mode survey.

    In the mixed-mode survey, the paper questionnaires were sent in week 4 (second reminder). The median number of days that elapsed before these paper questionnaires were returned was 7 days (range 4–33 days). The median number of days that elapsed before the online questionnaires were filled out was 9 days (range 2–59 days). In other words, the longer response time in the mixed-mode survey was mainly caused by the group who did not respond via the Internet.

    Figure 2. Percentage of received questionnaires by days after first mail-shot for the postal and mixed-mode surveys.
    Figure 3. Percentage of received questionnaires by days after first mail-shot for the Internet and paper questionnaires within the mixed-mode survey.

    Quality of Data

    The mean percentage of missing items per question differed significantly between the two surveys (z = –3.08, P = .002): the mean percentage of missing items was lower in the mixed-mode survey than in the postal survey (5.04/150, 3.4% versus 6.60/150, 4.4%, respectively). In addition, the mean percentage of invalid answers was twice as high in the postal survey as in the mixed-mode survey (4.99/81, 6.2% versus 2.50/81, 3.2%, respectively; z = –3.68, P < .001).

    Costs

    The costs per valid questionnaire returned were higher in the postal survey than in the mixed-mode survey (€25.8 versus €23.9 per valid questionnaire returned, respectively). Compared with the postal survey, the mixed-mode survey’s variable costs were lower by 17% of the total costs, while its fixed costs were higher by 17% (Table 6).

    Table 6. Fixed and variable costs per valid questionnaire returned

    Mode Effects

    Table 7 presents the mean scores on the seven global ratings of different health care providers. These mean scores have been corrected for hospital, age, level of education, and self-reported health status. The scores are relatively high, ranging from 8.3 to 9.0. Respondents to the postal survey gave the radiotherapist a score of 9.0 and hospital care in general a score of 8.3. Respondents to the mixed-mode survey rated the general practitioner and chemotherapy care the highest (score 8.8) and gave hospital care in the diagnostic phase and hospital care in general a score of 8.4. We found no significant differences in global ratings between the two surveys.

    Table 7. Mean scores on global ratings of different health care providers (corrected for hospital, age, education, and self-reported health status) for respondents to the postal survey and mixed-mode survey

    Table 8 shows the differences in global ratings given by respondents to the paper and Internet questionnaires within the mixed-mode survey. The global rating of nurses differed significantly between these two groups: respondents filling out the paper questionnaire rated the nurses significantly more positively than respondents filling out the questionnaire online (score 9.2 versus 8.4, respectively; χ² > 3.8).

    Table 8. Mean scores on global ratings of different health care providers (corrected for hospital, age, education, and self-reported health status) for respondents to the postal or Internet questionnaire within the mixed-mode survey

    Discussion

    This study examined whether a mixed-mode survey (Internet questionnaire with paper follow-up) is an alternative to the more traditional postal survey. The results showed that combining an Internet questionnaire with a paper follow-up improved the quality of data and was less expensive than a postal survey. However, the time before questionnaires were received was longer in the mixed-mode survey. No differences between the mixed-mode survey and postal survey were found concerning respondent characteristics, response rates, and global ratings of different health care providers.

    The findings showed that the characteristics of the respondents were the same for the two surveys. This suggests that mixed-mode surveys attract the same population as postal surveys. In total, 53% (128/242) of respondents in the mixed-mode survey filled out the questionnaire online. Within the mixed-mode survey, Internet respondents were younger, were more often highly educated, and reported better psychological health than paper respondents. Younger people were probably more familiar with the Internet and more likely to have access to it than older people [11,21]. To overcome the possible exclusion of the elderly and the less highly educated, a mixed-mode survey should be chosen rather than an Internet-only survey [11,29].

    The response rate was relatively high for both surveys (over 60%). In other CQI surveys, response rates varied between 20% and 79% [16]. Perhaps the relatively high response rate is due to the subject under study, namely abnormality of the breast. The response rate among women referred for mammography in another study was comparably high, both for the Internet questionnaire (64%) and for the paper questionnaire (77%) [26]. Breast abnormality is a disease that has a huge impact on the emotional and physical quality of life of patients [30]. A review showed that salient questionnaire subjects yield higher response rates [19]. Our results confirm that finding: in the mixed-mode survey, the response rate for patients with breast cancer was higher than that for patients with benign abnormalities, even though the questionnaire for breast cancer was longer.

    The response time for the questionnaires to be returned was longer in the mixed-mode survey than in the postal survey. This effect was unexpected because using the Internet can reduce the time taken to return a questionnaire [10-12]. Both groups in the mixed-mode survey (paper and Internet respondents) responded relatively quickly (median 7 and 9 days, respectively), but respondents with no access to or interest in the Internet questionnaire responded only after 4 weeks, when the paper questionnaire was sent. The relatively quick response by postal respondents in the mixed-mode survey could be explained by the fact that they had already been informed about the study; use of prenotification has been shown to shorten response times [19,31]. Another way to reduce the return time is to send the paper questionnaire out earlier.

    Research has shown that an Internet survey results in more complete data compared with a postal survey [32]. This conclusion is confirmed in our study; the quality of data was higher in the mixed-mode survey than in the postal survey. One of the advantages of using the Internet for survey research is the technique of designing questionnaires so that complex skip patterns are invisible to respondents. As a consequence, the online questionnaire resulted in zero missing items and zero invalid answers (eg, answers to questions that had to be skipped). However, given the fact that some groups of people are underrepresented on the Internet (for instance, the elderly), conducting surveys through the Internet alone is not (yet) possible [11,21].

    One of the key potential advantages of using the Internet over paper questionnaires is cost reduction. This study showed that the costs per returned questionnaire were €2 lower in the mixed-mode survey than in the postal survey. In the present study, however, the information technology costs were relatively high for the mixed-mode survey. This was due to the need to program two applications: one for scanning the paper questionnaires and one for the Internet questionnaires. In the future, more costs can possibly be saved by using one and the same program for the different data collection methods within a mixed-mode survey. In addition, the variable costs per questionnaire were lower and the fixed costs per questionnaire were higher in the mixed-mode survey than in the postal survey. Fixed costs per questionnaire can be reduced by taking a larger sample, because the fixed activities are divided over the number of returned questionnaires. In other words, the larger the sample, the more money can be saved by using a mixed-mode survey.

    Our study was the first to examine so-called mode effects between a mixed-mode survey (Internet with paper follow-up) and a postal survey. We found no differences between the two surveys concerning the global ratings respondents gave to different health care providers. This is beneficial, because it implies that there is no bias in the scores as a function of the manner of data collection. Other studies did find mode effects between the answers of telephone and postal respondents [23], Internet and telephone respondents [24], and Internet and postal respondents [25,28,33]. One study investigated the differences between a postal and an Internet questionnaire in which a subset of the participants also filled out the alternative version (Internet and paper questionnaire, respectively). It found little or no evidence for a difference in test–retest reliability and internal consistency when the Internet and paper versions of the questionnaire were compared [26].

    We did not ask why respondents in the mixed-mode survey did not fill out the questionnaire online. In one study among nonrespondents to an Internet questionnaire, the nonrespondents indicated that they did not have a computer or access to the Internet. Other reasons were having no experience with the Internet or not trusting the Internet [31]. This corresponds with findings by other researchers, who showed that privacy concerns and computer anxiety are factors influencing response [19,28,33].

    Acknowledgments

    This study was financially supported by the Netherlands Organization for Health Research and Development (ZonMw).

    Conflicts of Interest

    None declared

    References

    1. Schut FT, Van de Ven WP. Rationing and competition in the Dutch health-care system. Health Econ 2005 Sep;14(Suppl 1):S59-S74. [CrossRef] [Medline]
    2. Delnoij DM, ten Asbroek G, Arah OA, de Koning JS, Stam P, Poll A, et al. Made in the USA: the import of American Consumer Assessment of Health Plan Surveys (CAHPS) into the Dutch social insurance system. Eur J Public Health 2006 Dec;16(6):652-659 [FREE Full text] [CrossRef] [Medline]
    3. Zuidgeest M, Sixma H, Rademakers J. Measuring patients' experiences with rheumatic care: the consumer quality index rheumatoid arthritis. Rheumatol Int 2009 Apr 16;30(2):159-167. [CrossRef] [Medline]
    4. Damman OC, Hendriks M, Sixma HJ. Towards more patient centred healthcare: a new Consumer Quality Index instrument to assess patients' experiences with breast care. Eur J Cancer 2009 Jun;45(9):1569-1577. [CrossRef] [Medline]
    5. Brandt H, Zuidgeest M, Sixma H. Development of CQ-index Care for the Handicapped: measuring the quality of care from a client perspective (in Dutch). Utrecht: NIVEL; 2007.   URL: http://www.nivel.nl/pdf/Pilot-ontwikkeling-CQ-index-Gehandicaptenzorg-2007.pdf [accessed 2010-12-06] [WebCite Cache]
    6. Wiegers TA, Stubbe JH, Triemstra M. Development of the CQ-Index Nursing Homes and Homecare: Quality of Life for Residents, Representatives and Clients. Utrecht: NIVEL; 2007.
    7. Couper MP. Web survey design and administration. Public Opin Q 2001;65(2):230-253. [Medline]
    8. Kaplowitz MD, Hadlock TD, Levine R. A comparison of web and mail survey response rates. Public Opin Q 2004;68(1):94-101.
    9. Sills SJ, Song CY. Innovations in survey research: an application of web-based surveys. Soc Sci Comput Rev 2002;20(1):22-30.
    10. Kiesler S, Sproull LS. Response effects in the electronic survey. Public Opin Q 1986;50(3):402-413.
    11. Kwak N, Radler R. A comparison between mail and web surveys: response pattern, respondent profile, and data quality. J Off Stat 2002;18(2):257-73.
    12. Schaefer DR, Dillman DA. Development of a standard e-mail methodology. Public Opin Q 1998;62(3):378-397.
    13. Schmidt CB. Casting the Net: surveying an Internet population. J Comput Mediat Commun 1997;3(1).
    14. Tse A. Comparing the response rate, response speed, and response quality of two methods of sending questionnaires: e-mail vs. mail. J Market Res Soc 1998;40(4):354-361.
    15. Leece P, Bhandari M, Sprague S, Swiontkowski MF, Schemitsch EH, Tornetta P, et al. Internet versus mailed questionnaires: a controlled comparison (2). J Med Internet Res 2004 Oct 29;6(4):e39 [FREE Full text] [CrossRef] [Medline]
    16. Zuidgeest M, Hendriks M, Spreeuwenberg J, Rademakers J. Usefulness of the CQ index: Report Volume 1: Methods of Data Collection for CQI Research (in Dutch). Utrecht: NIVEL; 2008.
    17. Boer de D, Hendriks M, Damman OC, Spreeuwenberg P, Rademakers J, Delnoij D, et al. Experiences of Insured With Care and Health insurer: CQ-Index Health and Health Insurance, Measurement 2007 (in Dutch). Utrecht: NIVEL; 2007.
    18. Slijkhuis R. Is Dillman Absolute? Research Into the Influence of Layout and Contact Process of CQI Surveys on the Quantitative Response, Qualitative Response and Cost of Data Collection (in Dutch) [Bachelor's thesis]. Enschede, Netherlands: University of Twente; 2008.
    19. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. Methods to increase response rates to postal questionnaires. Cochrane Database Syst Rev 2007(2):MR000008. [CrossRef] [Medline]
    20. Hoksbergen M. Centraal Bureau voor de Statistiek. 2008 Dec 17. 1.2 Million Dutch People Have no Internet Access at Home   URL: http://www.cbs.nl/nl-NL/menu/themas/vrije-tijd-cultuur/publicaties/artikelen/archief/2008/2008-2641-wm.htm [accessed 2011-05-11] [WebCite Cache]
    21. Madden M. Internet Penetration and Impact. Washington, DC: Pew Internet & American Life Project; 2009 Mar 24.   URL: http://www.pewinternet.org/~/media//Files/Reports/2006/PIP_Internet_Impact.pdf.pdf [accessed 2009-03-03] [WebCite Cache]
    22. van Campen C, Sixma H, Kerssens JJ, Peters L. Comparisons of the costs and quality of patient data collection by mail versus telephone versus in-person interviews. Eur J Public Health 1998:66-70.
    23. de Vries H, Elliott MN, Hepner KA, Keller SD, Hays RD. Equivalence of mail and telephone responses to the CAHPS Hospital Survey. Health Serv Res 2005 Dec;40(6 Pt 2):2120-2139. [CrossRef] [Medline]
    24. Christian LM, Dillman DA, Smyth JD. The Effects of Mode and Format on Answers to Scalar Questions in Telephone and Web Surveys. In: Lepkowski JM, Tucker C, Leeuw E, Japec L, Lavrakas P, Link M, et al, editors. Advances in Telephone Survey Methodology. Hoboken, NJ: Wiley-Interscience; 2007:250-275.
    25. Carini RM, Hayek JC, Kuh GD, Kennedy JM, Ouimet JA. College student responses to web and paper surveys: does mode matter? Res High Educ 2003;44(1):1-19 [FREE Full text] [WebCite Cache]
    26. Basnov M, Kongsved SM, Bech P, Hjollund NH. Reliability of short form-36 in an Internet- and a pen-and-paper version. Inform Health Soc Care 2009 Jan;34(1):53-58. [CrossRef] [Medline]
    27. Koopman L, Rademakers J. CQ-index Breast Care: Research Into the Discriminative Power: Quality of Care as Perceived by People With a Breast Abnormality From the Patients' Perspective (in Dutch). Utrecht: NIVEL; 2008.   URL: http://www.nivel.nl/pdf/Rapport-CQI-Mammacare.pdf [accessed 2010-12-06] [WebCite Cache]
    28. Dillman DA, Smyth JD, Christian LM. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley; 2009.
    29. Dickerson MD, Gentry JW. Characteristics of adopters and non-adopters of home computers. J Consum Res 1983 Sep;10(2):225-235.
    30. Slijkhuis R. The Internet: Valid or Flawed: Study Into the Validity of the Internet as a Mode of Data Collection in a Survey Among the General Dutch Public [Master's thesis]. Enschede, Netherlands: University of Twente; 2008.
    31. Ritter P, Lorig K, Laurent D, Matthews K. Internet versus mailed questionnaires: a randomized comparison. J Med Internet Res 2004 Sep 15;6(3):e29 [FREE Full text] [CrossRef] [Medline]
    32. Larsson J, Sandelin K, Forsberg C. Health-related quality of life and healthcare experiences in breast cancer patients in a study of Swedish women. Cancer Nurs 2010;33(2):164-170. [CrossRef] [Medline]
    33. Kongsved SM, Basnov M, Holm-Christensen K, Hjollund NH. Response rate and completeness of questionnaires: a randomized study of Internet versus paper-and-pencil versions. J Med Internet Res 2007;9(3):e25 [FREE Full text] [CrossRef] [Medline]

    Edited by G Eysenbach; submitted 24.03.09; peer-reviewed by NH Hjollund, P Hoonakker, D Dillman; comments to author 28.04.09; revised version received 20.04.10; accepted 20.08.10; published 27.09.11

    ©Marloes Zuidgeest, Michelle Hendriks, Laura Koopman, Peter Spreeuwenberg, Jany Rademakers. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.09.2011.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.