Published in Vol 25 (2023)

Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study


Original Paper

1Department of Biomedical Informatics, College of Medicine, The Ohio State University, Columbus, OH, United States

2The Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research (CATALYST), College of Medicine, The Ohio State University, Columbus, OH, United States

3Department of Family and Community Medicine, College of Medicine, The Ohio State University, Columbus, OH, United States

Corresponding Author:

Ann Scheck McAlearney, BAS, MS, ScD

The Center for the Advancement of Team Science, Analytics, and Systems Thinking in Health Services and Implementation Science Research (CATALYST)

College of Medicine

The Ohio State University

700 Ackerman Rd

Suite 4000

Columbus, OH, 43202

United States

Phone: 1 614 293 8973


Background:  Surveys of hospitalized patients are important for research and learning about unobservable medical issues (eg, mental health, quality of life, and symptoms), but there has been little work examining survey data quality in this population whose capacity to respond to survey items may differ from the general population.

Objective:  The aim of this study is to determine what factors drive response rates, survey drop-offs, and missing data in surveys of hospitalized patients.

Methods:  Cross-sectional surveys were distributed on an inpatient tablet to patients in a large, midwestern US hospital. Three versions were tested: 1 with 174 items and 2 with 111 items; one 111-item version had missing item reminders that prompted participants when they did not answer items. Response rate, drop-off rate (abandoning the survey before completion), and item missingness (skipping items) were examined to investigate data quality. Chi-square tests, Kaplan-Meier survival curves, and distribution charts were used to compare data quality among survey versions. Response duration was computed for each version.

Results: Overall, 2981 patients responded. Response rate did not differ between the 174- and 111-item versions (81.7% vs 83%, P=.53). Drop-off was significantly reduced when the survey was shortened (65.7% vs 20.2% of participants dropped off, P<.001). Approximately one-quarter of participants dropped off by item 120, with over half dropping off by item 158. The percentage of participants with missing data decreased substantially when missing item reminders were added (77.2% vs 31.7% of participants, P<.001). The mean percentage of items with missing data was reduced in the shorter survey (40.7% vs 20.3% of items missing); with missing item reminders, the percentage of items with missing data was further reduced (20.3% vs 11.7% of items missing). Across versions, for the median participant, each item added 24.6 seconds to a survey’s duration.

Conclusions:  Hospitalized patients may have a higher tolerance for longer surveys than the general population, but surveys given to hospitalized patients should have a maximum of 120 items to ensure high rates of completion. Missing item prompts should be used to reduce missing data. Future research should examine generalizability to nonhospitalized individuals.

J Med Internet Res 2023;25:e48236




Introduction
Surveys facilitate data collection on unobservable constructs such as symptoms, psychological disorders, and patient experiences. Surveys also enable the collection of important patient information such as health behaviors, and knowledge and understanding of conditions, which may be otherwise difficult to obtain. In the inpatient setting, surveys have been used to assess patient satisfaction [1], patient perceptions of communication [2], and patient willingness to engage in hand hygiene [3]. However, surveys often have low response rates (eg, 6%, 58%, and 32% [1-3]) and high rates of participant drop-off midsurvey [4]. This can lead to low statistical power, increased Type II error [5], and nonresponse bias, defined as “systematic and significant variation between responders and nonresponders” [6]. These biases can result in misleading conclusions [7] and affect the validity of survey findings [8]. High response rates and survey completion rates are therefore essential.

Response rate, defined as the proportion of individuals offered a survey who begin it, tends to be low in studies that survey patients. Some research has shown that response rate may be affected by survey length, with longer surveys having lower response rates than shorter surveys [9-11]. Other studies, however, have failed to show this effect [12,13]. Regardless, a high response rate does not guarantee that those who begin the survey provide complete, usable responses.

Drop-off is defined as a participant beginning a survey but abandoning it before the end. Item missingness refers to items a participant does not answer; it may result from drop-off, but can also include items the participant skipped intentionally or unintentionally. Reasons for drop-off and item missingness are multifaceted; as with response rate, one possible contributor is survey length and the associated response burden. For example, one study found that 10% of participants dropped off a survey almost immediately, with an additional 2% of participants lost for every 100 items [4]. Other studies have found similar results, showing that drop-off rates and item missingness are higher for longer surveys than for shorter surveys [14,15]. Further, poor survey design can also lead to missing data and drop-off [16,17]. At the same time, some survey design features that are common options for web-based surveys can reduce drop-off. For instance, prior work has shown that providing motivational reminder statements when participants fail to answer a survey question can reduce rates of missing data [18]. Additional research suggests that incorporating page breaks leads to higher survey completion rates than a scrolling design [19].

Prior Work

Because most of the work on survey response and completion rates has been done outside the hospital setting, little is known about how these findings apply to surveys of hospitalized patients. Hospitalized patients differ from the general population in ways that could either decrease or increase the response rate, drop-off, and item missingness. For instance, a decreased response rate and higher drop-off or item missingness may be expected as many hospitalized patients have low physical or cognitive capacity due to their illness, and this could impact their ability to conduct tasks such as fully completing a survey [20,21]. On the other hand, many hospitalized patients find themselves with free time [22] and in an environment that has limited opportunities for alternative activities that could compete with survey-taking, which could contribute to an increased response rate and lower drop-off or item missingness. While satisfaction surveys of hospitalized patients are a common subject of research, they are generally sent to patients post discharge [23], and thus do not provide insight into surveying patients while in the hospital. Of the few studies that have reported results of surveys conducted in the inpatient environment, patients’ family members or caregivers, rather than the patients themselves, have often been the subject of the survey [24,25]. It is therefore unknown how surveys of the hospitalized patient population can be optimized to increase response rate and reduce drop-off and response burden.

Research Questions

In this study, we sought to close this gap by examining survey response patterns and drop-off using a sample of over 3500 hospitalized patients to answer the following research questions: (1) Do response rates of hospitalized patients differ as a function of survey length? (2) What is the average survey (i) drop-off rate and (ii) rate of item missingness for hospitalized patients? (3) How does the trajectory of participant drop-off change over the course of a long survey (ie, what is the ideal length of a survey for this population)? (4) What electronic survey design features (eg, page breaks and missing item reminders) are associated with reduced item missingness and drop-off for this population? (5) What is the response burden in this population, in terms of duration (ie, how much time does it take hospitalized patients to complete a survey item)?


Methods
Participants were patients in a large, midwestern US hospital system composed of 6 noncancer hospitals. Data were collected in the context of a randomized controlled trial investigating the impact of an inpatient portal on patient experience [26]. Patients were eligible if they were aged 18 years or older, able to speak or read English, and not involuntarily confined or detained (further details of the sampling strategy are in McAlearney et al [26]).


Patients were provisioned Samsung tablets that provided access to the MyChart Bedside patient portal (Epic Systems). MyChart Bedside is an inpatient portal that allows hospitalized patients to conduct activities such as ordering meals, receiving health education, and taking surveys. Tablets were provisioned no sooner than 6 hours after admission and up to 10 days after admission. Patients were recruited in one of two ways: (1) via an embedded URL on the tablet, or (2) in person by a study team member. Patients were provided with a study overview including the goals of the study, the expectations of participation, and the risks and benefits associated with study participation. Informed consent was collected via the web with an electronic signature, or via written signature (based on patient preference or due to internet connectivity issues).

Description of the Survey

Survey Distribution

The survey was deployed via tablets, through Qualtrics. Up to 50 items appeared on each page of the survey in version A (described further below); versions B and C (described further below) had approximately 5-10 items per page. Participants were required to hit "next" to proceed through each page of the survey. If participants provided only a partial response, Qualtrics recorded their data through the last "next" button they hit. Participants could return to the survey at any time but had up to 2 weeks to complete their responses before Qualtrics closed their survey and stored their incomplete data. No items contained a forced response mechanism (ie, participants could skip any item and still move through the survey). A progress bar showing the percentage of survey completion appeared at the bottom of each page on all versions. At the end of the survey, there was 1 final "next" button, and then participants were shown a screen indicating that the survey was complete.

Survey Questions

Analysis of survey items and measures is not the focus of this study. However, the survey included a mixture of item types: primarily categorical response options (eg, income groups and Likert-type scales), as well as some mark-all-that-apply items and 3 free response questions. Version A also had one rank order item. See Table 1 for more details on survey topics and items. The survey comprised both validated measures (adapted to shorten the scale or revise wording for the patient population as needed) and measures that were internally developed. The purpose of this study was not to assess the reliability or validity of any individual measure in this population. To answer our research questions, items were treated individually rather than in the context of scales (but grouped into similar concepts, as delineated in Table 1, to provide context). As such, we did not compute reliability or assess validity.

Table 1. Description of items in each survey version.

About my care
  Version A, items 1-40 (40 items): Access to care, use of care, satisfaction with care, trust in provider, resilience
  Versions B and C, items 1-22 (22 items): Access to care, use of care, satisfaction with care, trust in provider, where participants obtained health information

About my health
  Version A, items 41-58 (18 items): Health-related self-efficacy and locus of control
  Versions B and C, items 23-40 (18 items): Health-related self-efficacy and locus of control

Technology in my life
  Version A, items 59-76 (18 items): Access to and use of internet and technology
  Versions B and C, items 41-46 (6 items): Access to and use of internet and technology

Using the internet
  Version A, items 77-90 (14 items): Willingness to use internet, using internet to search for health information
  Versions B and C, items 47-53 (7 items): Willingness to use internet, using internet to search for health information

Seeking health information
  Version A, items 91-92 (2 items): Where participants obtained health information
  Versions B and C: N/Aa

Using technology to manage my health
  Version A, items 93-123 (31 items): Willingness to exchange health information over the internet, and use of patient portals
  Versions B and C, items 54-70 (17 items): Willingness to exchange health information over the internet, and use of patient portals

About me
  Version A, items 124-174 (51 items): Demographics, health-related quality of life, health literacy and numeracy
  Versions B and C, items 71-111 (41 items): Demographics, health-related quality of life, health literacy and numeracy, resilience

aN/A: not applicable.

Survey Versions

As shown in Table 2, we examined 3 versions of the survey. Version A contained a maximum of 174 items, with a range of 173-174 depending on display logic for 1 item. Survey version B was reduced to a maximum of 111 items (ranging from 100 to 111 items, depending on display logic). Measures were in the same general order for survey versions A and B, but the number of items for each survey topic was reduced (Table 1). In version C, the items remained the same as in version B, but a reminder was added that prompted participants to complete missing items when they hit "next" to proceed to a new page. The missing item warning message was the default message developed by Qualtrics; specifically, it read "There are [n] unanswered questions on this page. Would you like to continue?" Participants could choose "continue without answering," which routed them to the next page of the survey, or "answer the questions," which took them back to the current page and highlighted the incomplete items. If no items on a page were incomplete, the alert did not appear.

Table 2. Survey versions tested in this studya.
Version A: 174 items. Purpose of change: N/Ac. Research questions (RQs)b tested: RQ1, RQ2, RQ3, RQ4, and RQ5.
Version B: reduced to 111 items. Purpose of change: to reduce participant burden. RQs tested: RQ1, RQ2, RQ3, RQ4, and RQ5.
Version C: added a prompt to notify participants when they had skipped an item. Purpose of change: to reduce missing data. RQs tested: RQ2, RQ3, RQ4, and RQ5.

aSurvey versions were sequential; for example, version C retained the changes made for versions A and B.

bRQ (RQ1, RQ2(1), RQ2(2), RQ3, RQ4, and RQ5): research questions.

cN/A: not applicable.

Survey Implementation

Upon study enrollment, a survey was activated on the tablet within MyChart Bedside and was available throughout the patient's hospital stay. As part of the randomized controlled trial, members of the research team visited each enrolled patient up to 3 times to request that they complete the survey. In addition, if participants went to the "Getting Started, Getting Involved" item on the menu tab, a landing page reminded them that they had not completed (or begun) their survey. Participants who began the survey were entered into a monthly raffle for a US $100 gift card.

Data Analysis

Demographics, including gender, race, age, length of stay, and Charlson Comorbidity Index (CCI), were pulled from the institution's Information Warehouse (IW) and linked to survey responses. Descriptive statistics were computed for demographics, response rate (defined as the proportion of participants offered the survey who completed at least 1 survey item), drop-off rate (defined as the proportion of respondents who did not respond to the last survey item), and item missingness (defined as the proportion of respondents who did not complete every item available to them). Missingness (ie, percentage of items not responded to) within each survey version was also examined via descriptive statistics. Chi-square tests were conducted to compare response rates, drop-off rates, and item missingness by survey version. Kaplan-Meier survival curves were computed for each survey version to assess the proportion of respondents who remained in the survey across the items. Distribution charts were developed to map response rates by item.
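To illustrate the chi-square comparisons, a 2x2 test with 1 degree of freedom can be computed directly. The sketch below is plain Python (not the Stata code used in the study) and reproduces the version A vs version B response-rate comparison using the counts reported in Table 4.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for a 2x2 table [[a, b], [c, d]].

    Returns (chi2, p). For 1 df, the upper-tail probability has the exact
    closed form P(X > x) = erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Response-rate comparison, versions A vs B (counts from Table 4):
# rows are survey versions; columns are responded vs did not respond.
resp_a, offered_a = 708, 867
resp_b, offered_b = 377, 454
chi2, p = chi_square_2x2(resp_a, offered_a - resp_a,
                         resp_b, offered_b - resp_b)
print(f"chi2(1) = {chi2:.2f}, P = {p:.2f}")  # matches the reported 0.39, P=.53
```

This is the uncorrected Pearson statistic; a continuity-corrected test would give a slightly different value.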

Duration of the survey response (in minutes) was computed, for each version, for participants who responded to the last survey item. Duration was operationalized as the time from opening the survey to ending the survey (hitting "submit" or timing out). This was inclusive of any breaks wherein the participant may have been interrupted or closed the survey and returned at a later date or time. This variable was highly positively skewed (a long right tail), likely due to such breaks; thus, medians, IQRs, and 25th percentiles are interpreted. Analyses were conducted in Stata (version 15; StataCorp) [27].

Ethical Considerations

The study was conducted in accordance with the Declaration of Helsinki and approved by the institutional review board of The Ohio State University (#2015B0272). Participants provided informed consent and HIPAA (Health Insurance Portability and Accountability Act) authorization.

Results
Demographics, Response Rate, and Completion Rates

Overall, 3578 patients were offered the survey, and 2981 completed at least 1 item. Demographics are in Table 3.

The overall response rate was 83.3% (n=2981). Response rate by survey version is shown in Table 4. Response rate did not differ significantly between versions A (174 items) and B (111 items; χ²₁=0.39; P=.53), addressing research question (RQ) 1 and showing that survey length was not associated with response rate in this sample.

The overall percentage of participants who dropped off before the end of the survey was 29.3% (n=873), addressing RQ2(1). The overall percentage of participants who had any missing data (including drop-off but excluding items subject to display logic) was 50.7% (n=1512), addressing RQ2(2). Table 4 provides more details on item missingness by survey version, indicating that the mean percentage of items with missing data ranged from 11.7% (11.4 items; version C) to 40.7% (70.4 items; version A).

Drop-off was substantially reduced from version A (174 items; 65.7%) to version B (111 items; 20.2%). The percentage of participants who had missing data on at least 1 item was high for versions A (621/708, 87.7%) and B (291/377, 77.2%), and decreased substantially for version C (600/1896, 31.7%). Together, these findings suggest that shorter surveys yield less drop-off (χ²₁=203.9, P<.001 comparing version A to B for responding to the last item), addressing RQ3. In addition, missing item reminders were shown to reduce item missingness (χ²₁=273.7, P<.001 comparing version B to C for responding to all items), addressing RQ4.

To investigate RQ3 further, we computed Kaplan-Meier survival curves for each version (Figure 1). The chart indicates that the survival curves for each version trend down slowly, indicating participants slowly drop off as the survey progresses. For version A, the longest survey, which had 174 items, 25% (n=217) of participants had dropped off by approximately item number 120, and more than half of the participants (n=434) had discontinued the survey by item 158.
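Because every respondent's drop-off point is observed (there is no censoring within a survey version), the Kaplan-Meier estimate underlying Figure 1 reduces to an empirical survival function: the proportion of respondents whose last answered item is at or beyond a given item number. A minimal sketch, using hypothetical drop-off data rather than the study data:

```python
def survival_by_item(last_item_answered, n_items):
    """Proportion of respondents still in the survey at each item number.

    With no censoring, the Kaplan-Meier estimate equals the empirical
    survival function S(k) = fraction of respondents whose last answered
    item is >= k, for k = 1..n_items.
    """
    n = len(last_item_answered)
    return [sum(1 for last in last_item_answered if last >= k) / n
            for k in range(1, n_items + 1)]

# Hypothetical drop-off points for a 10-item survey (illustrative only):
last_items = [10, 10, 10, 7, 10, 3, 10, 10, 9, 10]
s = survival_by_item(last_items, 10)
print(s[0], s[9])  # → 1.0 0.7 (everyone starts; 70% reach the last item)
```

The item number at which this curve crosses 0.75 corresponds to the "25% dropped off" threshold discussed above.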

To address RQ4 more fully, we also examined the proportion of responses to each item in each version of the survey in the context of design features such as page breaks (Figure 2). For all versions, there is generally a downward trend of responses to items as the survey progresses. Another common trend is that missing data appears to increase as participants progress within a page, although this effect appears less pronounced in version C (the version where missing item reminders were added). Similarly, for versions A and B, the first items after a page break appear to have higher response rates compared to other items. Relatedly, there are early increases in missing data in versions A and B that recover after page breaks. Related to item type, we noted a higher rate of missing data in the free-response items. We also noted a dip in responses to the question that asks about income, which did not have a “prefer not to respond” option; this effect was present in all survey versions. These trends suggest that both page breaks and reminders about unanswered items are important, particularly in an electronic survey where display cues may not be optimal to prompt participants to scroll for more items.

The proportion of items that were complete versus missing, averaged across respondents, differed by survey version. Version A had a large proportion of missing items, with a mean of 70.4 (40.7% of items), SD 52.8 (30.5%), items. The number and proportion of missing items decreased substantially in version B, with a mean of 19.9 (20.3% of items), SD 26.7 (27.2%), items, and there was an additional decrease in version C, with a mean of 11.4 (11.7% of items), SD 25.9 (26.4%), items.

Table 3. Demographic and clinical characteristics for patient enrollment admission, by survey version.
Variable: Total (N=2981) | Version A (N=708) | Version B (N=377) | Version C (N=1896)
Female, n (%): 1784 (59.9) | 407 (57.5) | 233 (61.8) | 1144 (60.3)
Race, n (%)
  Black: 527 (17.7) | 140 (19.8) | 84 (22) | 303 (16.0)
  White: 2334 (78.3) | 538 (76.0) | 280 (74.3) | 1516 (80.0)
  Other: 120 (4.0) | 30 (4.0) | 13 (4.0) | 77 (4.0)
Age (years), median (IQR): 46 (34-58) | 48 (35.5-59) | 46 (35-57) | 46 (33-58)
Charlson Comorbidity Index, median (IQR): 1 (0-3)a | 2 (0-3)a | 1 (0-3) | 1 (0-2)
Length of stay (days), median (IQR): 5 (3-9) | 6 (3-10) | 5 (3-9) | 5 (3-8)

aOne patient admission had no associated Charlson Comorbidity Index.

Table 4. Response rates, completion rates, and item missingness by survey versiona.
Result: A (174 items) | B (reduced to 111 items) | C (missing item reminders added) | Overall
Overall response rateb, % (n/n): 81.7 (708/867) | 83 (377/454) | 84 (1896/2257) | 83.3 (2981/3578)
Participants who dropped offc, % (n/n): 65.7 (465/708) | 20.2 (76/377) | 17.5 (332/1896) | 29.3 (873/2981)
Participants with item missingnessd, % (n/n): 87.7 (621/708) | 77.2 (291/377) | 31.7 (600/1896) | 50.7 (1512/2981)
Number of items with missing data, mean (SD): 70.4 (52.8) | 19.9 (26.7) | 11.4 (25.9) | 26.5 (42.2)
Percentage of items with missing data, mean (SD): 40.7 (30.5) | 20.3 (27.2) | 11.7 (26.4) | 19.7 (30.1)
Number of items with complete data, mean (SD): 102.6 (52.8) | 78.1 (26.7) | 86.6 (25.9) | 89.3 (35.2)
Percentage of items with complete data, mean (SD): 59.3 (30.5) | 79.7 (27.2) | 88.3 (26.4) | 80.4 (30.1)

aItems with display logic were excluded from this analysis.

bDefined as response to at least 1 survey item.

cParticipants who did not respond to the last survey item, divided by the number of participants who responded to at least 1 survey item.

dParticipants who did not respond to every survey item available to them, divided by the number of participants who responded to at least 1 survey item.

Figure 1. Survival curve showing participants’ drop-off point by survey version. The x-axis indicates the survey item number where participants dropped off the survey, and the y-axis indicates the proportion of participants in each version who still remained in the survey at a given item number. Time of failure=last item responded to. Note that this chart is not reflective of item missingness, that is, participants may have skipped some items throughout the survey before the point of total drop-off.
Figure 2. Distribution of item responses, by survey version. For questions with display logic, the denominator is participants who received that question, rather than all participants.


Duration for version A was the longest, with a median time to completion of 120.9 (IQR 39.8-1333.0) minutes. This corresponds to a median of 41.7 seconds per item. Versions B and C had median durations of 46.2 (IQR 22.7-1162.9), and 42.4 (IQR 24.1-361.0) minutes, respectively. This corresponds to a median of 26.0 seconds per item for version B and 22.8 seconds per item for version C. Overall, across the 3 versions, the median participant took 24.6 seconds per item, indicating that the median participant was able to complete 2-3 items per minute, addressing RQ5.
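The per-item figures above follow from simple arithmetic on the reported medians. A quick sanity check, assuming per-item time is approximated by the median duration divided by the maximum item count (this reproduces the version A figure; the study's per-item medians for versions B and C were likely computed at the participant level and differ slightly from this ratio):

```python
# Version A: median duration 120.9 minutes over a maximum of 174 items.
sec_per_item_a = 120.9 * 60 / 174
print(round(sec_per_item_a, 1))  # → 41.7 seconds per item

# Across versions, a median of 24.6 seconds per item implies the median
# participant completed roughly 2-3 items per minute.
items_per_minute = 60 / 24.6
print(round(items_per_minute, 1))  # → 2.4
```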

Discussion
Principal Results

The response rate was not related to survey length. However, survey length was associated with completion rate, such that a significantly higher proportion of participants dropped off before the end of the survey in the 174-item version than in the 111-item version. The survival curves indicate that an ideal survey length, one in which at least 75% of the participants are retained, may be around 120 items for hospitalized patients. A lower number of items should be used if researchers wish to retain a higher proportion of their sample. Our drop-off rate was higher than that of Hoerger [4], who found an initial drop-off of 10% plus an additional 2% per every 100 items, as we found a drop-off of about 15%-20% in our first 100 items. This difference may be due to the populations included in these studies, as Hoerger [4] focused on undergraduate students, who may have been more motivated to complete the survey (eg, as a component of an educational course). Further, it is possible that drop-off was more prevalent in our study due to factors related to our population, such as hospitalized patients possibly feeling too sick to finish the survey. Follow-up work should be done to better investigate this.

Additionally, we found that independent of drop-off, item missingness was prevalent until missing item prompts were added. Before including the prompts, item missingness seemed to occur frequently after the first item on each page, indicating that there may have been a visual issue as participants were not cued to scroll to see additional questions. Regardless of the missing item prompt, the income item was skipped about 20% of the time. This finding is similar to results from prior work showing that about 24% of participants tend to skip income survey questions [28].

Participants took a median of 120.9 minutes to complete the 174-item version of the survey, and 46.2 and 42.4 minutes to complete the two 111-item versions. An important consideration, however, is the context of these findings. Specifically, the duration variable was inclusive of any breaks a participant may have taken while completing their survey, whether for a few moments (eg, due to clinical care) or for hours (eg, if the patient's tablet needed to be charged at the nursing station). While these durations may be considered maximums, they may be realistic in practice for surveys deployed in inpatient environments where interruptions are frequent [29]. These durations are, however, well beyond the recommended optimal survey length of 10-13 minutes [30,31]. Given this, it is surprising that our drop-off was only modest. This suggests that hospitalized patients may have a higher tolerance for longer surveys due to factors such as the lack of other activities in the hospital environment that could compete for their attention. In addition, we found that the median participant was able to answer 2-3 survey questions per minute. Further work should be done to better understand these factors and their implications.

Based on our findings, we present best practices for survey design in Table 5. While these suggestions are developed to guide surveys of hospitalized patient populations, we also indicate the extent to which we believe each best practice can be generalized to other settings. We supplement these suggestions with references that provide additional evidence for each best practice.

Table 5. Best practices for survey design.
1. Limit survey length to approximately 120 items. Potential generalizability: hospitalized patients may be a more captive audience than most; needs further study to generalize outside the inpatient setting. Additional evidence: Hoerger [4].
2. Use reminder prompts to alert participants to items they have missed. Potential generalizability: likely generalizable. Additional evidence: Al Baghal and Lynn [18]; DeRouvray and Couper [32].
3. Have frequent page breaks such that participants do not need to scroll within a page. Potential generalizability: likely generalizable, grounded in research outside the inpatient setting. Additional evidence: Manfreda et al [33]; Nosek et al [34]; Peytchev et al [35]; Toepoel et al [36].
4. Give a "prefer not to say" option on income questions. Potential generalizability: likely generalizable, grounded in research outside the inpatient setting. Additional evidence: Shah et al [28].
5. Expect that participants can respond to about 2-3 multiple-choice items per minute. Potential generalizability: needs further study to generalize outside the inpatient setting. Additional evidence: SurveyMonkey [37].

Limitations and Future Research

This study has several limitations. First, additional work is needed to examine generalizability to outpatients and other settings. Second, this study used surveys that were completed on Samsung tablets. It is not clear how these findings generalize to paper surveys, or to surveys taken on a computer or mobile phone. Prior work has established equivalence in responses across tablet, mobile phone, and paper-based surveys [38], but others have found differing response rates [39] and durations [40] between web and mobile surveys. This should be tested in the current context. Further, this study used a survey that was completed by the participants themselves, yet particularly in a hospitalized patient context, family members or other caregivers may assist patients with survey completion. Future work should examine differences in response rate, drop-off, missing data, and duration of response in these situations. Last, several other factors (eg, demographic and clinical characteristics of patients, patients' attitudes toward surveys, the sequence of items in the survey, and participants' interest in the survey topic) may also affect response rates, survey drop-off, and missing data, but we were unable to examine these in this study.


Conclusions
We found that hospitalized patients tolerated longer surveys than is typically reported for the general population, with most participants completing at least 120 items. Participants tolerated a median survey duration of 121 minutes for the longest version. In addition, the inclusion of missing item prompts substantially reduced the amount of missing data. Overall, the median participant was able to complete 2-3 items per minute. These findings can inform the design of future surveys for use in the inpatient setting.


Acknowledgments
The authors wish to thank Seth Scarborough, affiliated with the authors’ organization at the time of this study, for his assistance with this project. They are also extremely grateful to all the participants in this study. MEG is now in the Department of Health Outcomes and Biomedical Informatics, University of Florida. This project was funded by grants from the Agency for Health Care Research and Quality (R01HS024091, R21HS024767, and P30HS024379). The funding agency was not involved with the study design, the collection, analysis, or interpretation of data, nor the writing of the paper.

Data Availability

The data sets generated and analyzed during this study are not publicly available due to patient privacy but are available from the corresponding author upon reasonable request.

Authors' Contributions

MEG contributed to conceptualization, formal analysis, methodology, visualization, writing the original draft, and reviewing and editing. LNS contributed to conceptualization, data curation, investigation, project administration, writing the original draft, and reviewing and editing. TRH contributed to conceptualization, funding acquisition, methodology, resources, visualization, and reviewing and editing. ASM contributed to conceptualization, funding acquisition, methodology, resources, supervision, visualization, and reviewing and editing.

Conflicts of Interest

None declared.

  1. Benninger CO. Increasing the response rate of the patient satisfaction survey of inpatients at National Naval Medical Center. Defense Technical Information Center. 1993. URL: [accessed 2023-08-01]
  2. Waxman MJ, Lozier K, Vasiljevic L, Novakofski K, Desemone J, O'Kane J, et al. Hospitalized patients' and family members' preferences for real-time, transparent access to their hospital records. Am J Manag Care. 2018;24(1):e17-e23. [FREE Full text] [Medline]
  3. Wu KS, Lee SSJ, Chen JK, Tsai HC, Li CH, Chao HL, et al. Hand hygiene among patients: attitudes, perceptions, and willingness to participate. Am J Infect Control. 2013;41(4):327-331. [CrossRef] [Medline]
  4. Hoerger M. Participant dropout as a function of survey length in internet-mediated university studies: implications for study design and voluntary participation in psychological research. Cyberpsychol Behav Soc Netw. 2010;13(6):697-700. [FREE Full text] [CrossRef] [Medline]
  5. Bartlett JE, Kotrlik JW, Higgins CC. Organizational research: determining appropriate sample size in survey research. Inf Technol Learn Perform J. 2001;19(1):43. [FREE Full text]
  6. Lewis EF, Hardy M, Snaith B. Estimating the effect of nonresponse bias in a survey of hospital organizations. Eval Health Prof. 2013;36(3):330-351. [CrossRef] [Medline]
  7. Guo Y, Kopec JA, Cibere J, Li LC, Goldsmith CH. Population survey features and response rates: a randomized experiment. Am J Public Health. 2016;106(8):1422-1426. [CrossRef] [Medline]
  8. Guadagnoli E, Cunningham S. The effects of nonresponse and late response on a survey of physician attitudes. Eval Health Prof. 2016;12(3):318-328. [CrossRef]
  9. Galesic M, Bosnjak M. Effects of questionnaire length on participation and indicators of response quality in a web survey. Public Opin Q. 2009;73(2):349-360. [CrossRef]
  10. Marcus B, Bosnjak M, Lindner S, Pilischenko S, Schütz A. Compensating for low topic interest and long surveys. Soc Sci Comput Rev. 2016;25(3):372-383. [CrossRef]
  11. Kato T, Miura T. The impact of questionnaire length on the accuracy rate of online surveys. J Market Anal. 2021;9(2):83-98. [CrossRef]
  12. Beebe TJ, Rey E, Ziegenfuss JY, Jenkins S, Lackore K, Talley NJ, et al. Shortening a survey and using alternative forms of prenotification: impact on response rate and quality. BMC Med Res Methodol. 2010;10:50. [FREE Full text] [CrossRef] [Medline]
  13. Grava-Gubins I, Scott S. Effects of various methodologic strategies: survey response rates among Canadian physicians and physicians-in-training. Can Fam Physician. 2008;54(10):1424-1430. [FREE Full text] [Medline]
  14. Ganassali S. The influence of the design of web survey questionnaires on the quality of responses. Surv Res Methods. 2008;2(1):21-32. [FREE Full text] [CrossRef]
  15. Liu M, Wronski L. Examining completion rates in web surveys via over 25,000 real-world surveys. Soc Sci Comput Rev. 2017;36(1):116-124. [CrossRef]
  16. Manfreda KL, Vehovar V. Survey design features influencing response rates in web surveys. International Conference on Improving Surveys Proceedings. URL: [accessed 2023-09-26]
  17. Dillman DA, Sinclair MD, Clark JR. Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opin Q. 1993;57:289-304. [CrossRef]
  18. Al Baghal T, Lynn P. Using motivational statements in web-instrument design to reduce item-missing rates in a mixed-mode context. Public Opin Q. 2015;79(2):568-579. [CrossRef]
  19. Tangmanee C, Nitruttinanon P. Web survey's completion rates: effects of forced responses, question display styles, and subjects' attitude. Int J Res Bus Soc Justice. 2019;8(1):20-29. [CrossRef]
  20. Lepping P, Stanly T, Turner J. Systematic review on the prevalence of lack of capacity in medical and psychiatric settings. Clin Med (Lond). 2015;15(4):337-343. [FREE Full text] [CrossRef] [Medline]
  21. Fullam F, VanGeest JB. Surveys of patient populations. In: Johnson TP, editor. Handbook of Health Survey Methods. Hoboken, NJ: Wiley; 2014:561-583.
  22. Chu ES, Hakkarinen D, Evig C, Page S, Keniston A, Dickinson M, et al. Underutilized time for health education of hospitalized patients. J Hosp Med. 2008;3(3):238-246. [CrossRef] [Medline]
  23. Wong ELY, Coulter A, Cheung AWL, Yam CHK, Yeoh EK, Griffiths SM. Patient experiences with public hospital care: first benchmark survey in Hong Kong. Hong Kong Med J. 2012;18(5):371-380. [FREE Full text] [Medline]
  24. Addington-Hall JM, O'Callaghan AC. A comparison of the quality of care provided to cancer patients in the UK in the last three months of life in in-patient hospices compared with hospitals, from the perspective of bereaved relatives: results from a survey using the VOICES questionnaire. Palliat Med. 2009;23(3):190-197. [CrossRef] [Medline]
  25. Manna J. Implementing the pediatric family satisfaction in the Intensive Care Unit (ICU) survey in a pediatric cardiac ICU. Am J Crit Care. 2021;30(3):230-236. [CrossRef] [Medline]
  26. McAlearney AS, Walker DM, Sieck CJ, Fareed N, MacEwan SR, Hefner JL, et al. Effect of in-person vs video training and access to all functions vs a limited subset of functions on portal use among inpatients: a randomized clinical trial. JAMA Netw Open. 2022;5(9):e2231321. [FREE Full text] [CrossRef] [Medline]
  27. Stata release 15. StataCorp. College Station, TX; 2017. URL: [accessed 2023-09-26]
  28. Shah S, Harris TJ, Rink E, DeWilde S, Victor CR, Cook DG. Do income questions and seeking consent to link medical records reduce survey response rates? a randomised controlled trial among older people. Br J Gen Pract. 2001;51(464):223-225. [FREE Full text] [Medline]
  29. Gawande A. Big med. The New Yorker. 2012. URL: [accessed 2023-08-01]
  30. Fan W, Yan Z. Factors affecting response rates of the web survey: a systematic review. Comput Hum Behav. 2010;26(2):132-139. [CrossRef]
  31. Revilla M, Ochoa C. Ideal and maximum length for a web survey. Int J Mark Res. 2017;59(5):557-565. [FREE Full text] [CrossRef]
  32. Derouvray C, Couper M. Designing a strategy for reducing “no opinion” responses in web-based surveys. Soc Sci Comput Rev. 2016;20(1):3-9. [CrossRef]
  33. Manfreda KL, Batagelj Z, Vehovar V. Design of web survey questionnaires: three basic experiments. J Comput Mediat Commun. 2002;7(3):JCMC731. [FREE Full text]
  34. Nosek BA, Sriram N, Umansky E. Presenting survey items one at a time compared to all at once decreases missing data without sacrificing validity in research with internet volunteers. PLoS One. 2012;7(5):e36771. [FREE Full text] [CrossRef] [Medline]
  35. Peytchev A, Couper M, McCabe S, Crawford S. Web survey design: paging versus scrolling. Public Opin Q. 2006;70(4):596-607. [CrossRef]
  36. Toepoel V, Das M, Van Soest A. Design of web questionnaires: the effects of the number of items per screen. Field Methods. Apr 06, 2009;21(2):200-213. [CrossRef]
  37. How much time are respondents willing to spend on your survey? SurveyMonkey. 2023. URL: [accessed 2023-08-01]
  38. Brodey BB, Gonzalez NL, Elkin KA, Sasiela WJ, Brodey IS. Assessing the equivalence of paper, mobile phone, and tablet survey responses at a community mental health center using equivalent halves of a 'gold-standard' depression item bank. JMIR Ment Health. 2017;4(3):e36. [FREE Full text] [CrossRef] [Medline]
  39. de Bruijne M, Wijnant A. Comparing survey results obtained via mobile devices and computers: an experiment with a mobile web survey on a heterogeneous group of mobile devices versus a computer-assisted web survey. Soc Sci Comput Rev. 2013;31(4):482-504. [CrossRef]
  40. Buskirk TD, Andrus CH. Making mobile browser surveys smarter. Field Methods. 2014;26(4):322-342. [CrossRef]

CCI: Charlson Comorbidity Index
HIPAA: Health Insurance Portability and Accountability Act
IW: information warehouse
RQ: research question

Edited by T Leung; submitted 16.04.23; peer-reviewed by Y Duan, T Johnson; comments to author 18.07.23; revised version received 24.08.23; accepted 31.08.23; published 01.11.23.


©Megan E Gregory, Lindsey N Sova, Timothy R Huerta, Ann Scheck McAlearney. Originally published in the Journal of Medical Internet Research, 01.11.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.