Differences in Mode Preferences, Response Rates, and Mode Effect Between Automated Email and Phone Survey Systems for Patients of Primary Care Practices: Cross-Sectional Study

Background: A growing number of health care practices are adopting software systems that link with their existing electronic medical records to generate outgoing phone calls, emails, or text notifications to patients for appointment reminders or practice updates. While practices are adopting this software technology for service notifications to patients, its use for collection of patient-reported measures is still nascent. Objective: This study assessed the mode preferences, response rates, and mode effect for a practice-based automated patient survey delivered by phone or email to patients of primary care practices. Methods: This cross-sectional study analyzed responses and respondent demographics for a short, fully automated, telephone or email patient survey sent to individuals within 72 hours of a visit to their regular primary care practice. Each survey consisted of 5 questions drawn from a larger study’s patient survey that all respondents completed in the waiting room at the time of their visit. Automated patient survey responses were linked to self-reported sociodemographic information provided on the waiting room survey, including age, sex, reported income, and health status. Results: A total of 871 patients from 87 primary care practices in British Columbia, Ontario, and Nova Scotia, Canada, agreed to the automated patient survey, and 470 of these patients (54.0%) completed all 5 questions on the automated survey. Email administration of the follow-up survey was preferred over phone-based administration, except among patients aged 75 years and older (P<.001). Overall, response rates for those who selected an emailed survey (369/606, 60.9%) were higher (P<.001) than for those who selected the phone survey (101/265, 38.1%). This held true irrespective of age, sex, or chronic disease status of individuals.
Response rates were also higher for email (range 57.4% [58/101] to 66.3% [108/163]) compared with phone surveys (range 36% [23/64] to 43% [10/23]) for all income groups except the lowest income quintile, which had similar response rates (email: 29/63, 46%; phone: 23/50, 46%) for both modes. We observed moderate (range 64.6% [62/96] to 78.8% [282/358]) agreement between waiting room survey responses and those obtained in the follow-up automated survey. However, overall agreement in responses was poor (range 45.3% [43/95] to 46.2% [43/93]) for 2 questions relating to care coordination. Conclusions: An automated practice-based patient experience survey achieved significantly different response rates between phone and email, with email response rates rising as income group rose. Potential mode effects for the different survey modalities may limit multimodal survey approaches. An automated minimal-burden patient survey could facilitate the integration of patient-reported outcomes into care planning and service organization, supporting the move of our primary care practices toward a more responsive, patient-centered, continual learning system. However, practices must be attentive to furthering inequities in health care by underrepresenting the experience of certain groups in decision making.

Johnston et al. J Med Internet Res 2021;23(1):e21240. https://www.jmir.org/2021/1/e21240


Introduction
The development of an information infrastructure to support a learning health system in primary care has advanced significantly with the application of advanced analytics to data from electronic medical records and routinely collected administrative data [1]. However, in Canada, most primary care is delivered in small community-based practices and, unlike the United Kingdom, there is no national or provincial infrastructure to measure and report patient experience data for primary care. Such data collection remains logistically challenging and relatively expensive for smaller practices [2]. While waiting room surveys often provide good response rates, they are costly, burdensome to practices, introduce a sampling bias toward older and more complex patients, and are limited to patients who physically attend a practice [1,3].
A growing number [4] of health care practices are adopting software systems [5] that link with their existing electronic medical records to generate outgoing phone calls, emails, or text notifications to patients for appointment reminders or practice updates. While practices are adopting this software technology for service notifications to patients, it is not clear whether such an approach would be acceptable for surveying a practice's patients on experience or outcome measures selected by the practice to advance its quality improvement efforts. The data on response rates for electronic surveys in primary care are rudimentary compared with those for hospital surveys [4], but response rates of 20% to 30% [6,7] have recently been found for emailed surveys linked to primary care practices [6-8]. The objective of this study is to assess the mode preferences, response rates, and mode effect for a practice-based automated patient survey using phone and email modalities with patients of primary care practices.

Study Sample
This cross-sectional study analyzed mode preferences, response rates, and respondent demographics for a short, fully automated, telephone or email patient survey sent to consenting individuals who had recently attended their regular primary care practice. Within our larger study, Transforming Community-Based Primary Health Care Delivery through Comprehensive Performance Measurement and Reporting (TRANSFORMATION), patients from 87 primary care practices in British Columbia, Ontario, and Nova Scotia, Canada, were asked to complete a waiting room survey between 2014 and 2016. The automated patient survey system was tested on a convenience sample of those participants who consented to receiving an additional postvisit survey by email or phone. Eligible patients had to speak English or French and have a valid telephone number or email address. Patients were asked to specify their preferred contact modality, phone or email, and to provide their name and contact information to an on-site research team member.
The contact information and unique identifying number for consenting patients were entered manually by survey administrators and uploaded to a cloud-based server via a software console. Upon receipt of the information, the administering information technology company collaborator, Cliniconex [9], programmed the appropriate survey mode and language (English or French) and randomly assigned the order of 5 survey questions. Once the survey was administered by Cliniconex, all contact information was deleted, and only the unique identifying number was retained on the server.

Survey Administration
Participants received an automated phone or email survey within 72 hours of visiting the practice. A phone survey response was recorded as completed only if the patient could be reached at the phone number on file, accepted the call, and completed all 5 survey questions. The phone survey was initially attempted twice, and then registered as incomplete if no answer was received. Partway through the study, the number of attempts was increased (4 call attempts) to facilitate higher response rates. For those who chose an email survey mode, an email was sent once containing the introduction and a web link to the survey. An email survey was recorded as complete if all 5 survey questions were answered.
Each survey consisted of 5 questions drawn from the TRANSFORMATION study's waiting room patient survey [10]. The questions were selected to relate to patients' experience with primary care and/or their practice. Two question prompts were modified from their original form in the paper waiting room survey to reflect the timing of the survey administration. When administered in the waiting room, questions 1 and 2 were prompted with "After seeing the family doctor or nurse today..."; on the automated patient survey, patients were prompted with "At your last visit with your family doctor or nurse practitioner...." See Multimedia Appendix 1 for the wording of the survey questions in the paper waiting room survey and the postvisit automated survey. Phone survey responses were stored in a secure password-protected site on a secure server. Email responses were sent directly to a hospital-based server and managed using Research Electronic Data Capture tools [11].
The unique identifying numbers were used to link automated patient survey responses to self-reported sociodemographic information on the paper waiting room survey, completed during the participant's visit to their practice, including age, sex, reported income, and health status.

Data Analysis
To detect any response bias inherent in using an automated email or phone survey system, we used Pearson chi-square tests to compare the sociodemographic profile of those who completed the automated patient survey (responders) with those who did not complete the automated patient survey (nonresponders). The comparison group of nonresponders contained those who either participated in the paper waiting room survey but refused the automated survey or agreed to the automated survey but did not complete all 5 questions. We also conducted Wilcoxon rank-sum tests on the paper waiting room survey responses, comparing differences in mean responses between those who completed the automated patient survey and those who did not. We conducted chi-square tests to compare automated patient survey mode preference (email or phone) and response rates both across and between patient sociodemographics. A Cochran-Armitage test for trends was also used to examine variation in mode preference by age and income. All analyses were performed using SAS software version 9.4 (SAS Institute Inc).
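The chi-square comparison of response rates by mode can be sanity-checked against the counts reported in the Results (369/606 email vs 101/265 phone completions). The sketch below is a pure-Python Pearson chi-square on that 2×2 table; it is an illustrative re-computation only, since the study itself ran its analyses in SAS 9.4.

```python
# Illustrative re-check of the email-vs-phone response-rate comparison,
# using the counts reported in the Results (369/606 email, 101/265 phone).
# Pure-Python Pearson chi-square; the study's own analyses used SAS 9.4.

def pearson_chi2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: email, phone; columns: completed, not completed
table = [[369, 606 - 369],
         [101, 265 - 101]]
chi2 = pearson_chi2(table)
# df = (2-1)*(2-1) = 1; the critical value at P=.001 is 10.83,
# so a statistic near 38.5 is consistent with the reported P<.001
print(f"chi2 = {chi2:.1f}")
```

The statistic comes out far above the df=1 critical value for P=.001, matching the significance reported in the abstract.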
The primary outcome measure, response rate, was pooled across all practices because we were interested in differences across the dependent variables of age and attributed socioeconomic status rather than regional variation.
To identify a potential mode effect, secondary analyses explored responses for each question across the 3 survey modes, email and phone (automated patient survey) and paper (waiting room). Test-retest analysis was undertaken, comparing each patient's responses from the waiting room survey to their responses to the corresponding automated survey question. The percentage of concordant and discordant responses were determined by comparing waiting room derived responses with those from subsequent survey data. Weighted kappas were calculated to compare this concordance in survey responses by survey mode. Mean responses were also compared (using Wilcoxon signed-rank test) across responses to the corresponding questions from the paper waiting room survey and the automated patient survey (total and by mode).
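The weighted kappa used in the test-retest analysis can be sketched in pure Python. The response scale, category labels, and weighting scheme below are assumptions for illustration (a 5-point ordinal scale with linear disagreement weights); the paper does not specify which weighting was used, and the paired responses shown are hypothetical.

```python
# Illustrative weighted kappa for paired ordinal ratings, e.g., a patient's
# waiting-room response vs their follow-up automated-survey response.
# ASSUMPTIONS: 5-point ordinal scale and linear disagreement weights;
# the study does not report its exact weighting scheme.

def weighted_kappa(ratings_a, ratings_b, categories, weights="linear"):
    """Weighted kappa: 1 - (observed weighted disagreement /
    chance-expected weighted disagreement)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # Observed joint proportions
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[idx[a]][idx[b]] += 1 / n
    # Marginal proportions (chance expectation assumes independence)
    pa = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    pb = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    power = 1 if weights == "linear" else 2  # "quadratic" squares the weights
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (abs(i - j) / (k - 1)) ** power  # disagreement weight
            num += w * obs[i][j]
            den += w * pa[i] * pb[j]
    return 1 - num / den

# Hypothetical paired responses on a 5-point scale (1 = poor ... 5 = excellent)
waiting_room = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]
follow_up    = [4, 4, 3, 3, 5, 2, 4, 4, 3, 5]
print(round(weighted_kappa(waiting_room, follow_up, [1, 2, 3, 4, 5]), 2))  # → 0.61
```

Identical rating lists yield a kappa of 1.0, and systematic reversals yield negative values, which is the behavior the concordance analysis relies on.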

Response Bias
Of those who agreed to the automated patient survey, 69.6% (606/871) of participants chose to receive the survey by email rather than by telephone. This group represented 45.2% (871/1929) of the participants who initially consented to completing a paper waiting room survey (Table 1). Respondents to the automated patient survey tended to be older, were more likely to be women, had higher income, and reported a larger number of chronic conditions than those not completing the survey. There was no significant difference in paper waiting room survey responses between those who completed the automated patient survey and those who did not (Table 2).

Response Rates
In this sample, email administration of the follow-up survey was preferred over phone-based administration, except among patients aged 75 years and older (Table 3). Among those who answered the automated patient survey, 97.1% (470/484) completed all 5 questions. Thus, response rates include only those who answered all 5 questions. Overall, response rates for those who selected an emailed survey (369/606, 60.9%) were higher than for those who selected the phone survey (101/265, 38.1%). This held true irrespective of the age, sex, or chronic disease status of individuals. Response rates were also higher for email compared with phone surveys for all income groups except the lowest income quintile, which had similar response rates for both modes. There was variation in response rates within the email mode, with higher responses among more affluent individuals.

Mode Effect
We observed moderate agreement between waiting room survey responses and those obtained in the follow-up automated survey (see Multimedia Appendix 2). However, overall agreement in responses was poor for 2 questions relating to care coordination.
Among phone respondents, agreement in responses was generally poor, and phone responders were particularly critical with respect to care coordination (Table 4). Agreement between waiting room responses and subsequent email survey regarding interpersonal aspects of care was moderate and poor for items relating to care coordination.

Principal Findings
We successfully deployed an automated multimodal practice-based patient survey in 87 primary care practices. Overall, patients preferred the email survey mode; however, this preference was modified by age group and socioeconomic status. Completion rates for the email mode were higher than those reported for most automated health care surveys [8], while response rates in the total sample were comparable to previous reports [6]. However, it is unclear whether the lower consent rate (45.2%) from the total patient sample reflects lack of acceptability of an automated low-burden survey or survey fatigue among participants who had already completed a long waiting room survey. Despite this, the relatively high completion rate for the short email survey suggests this is a feasible and acceptable approach to collecting patient-reported data.
Our results show that the lowest income group had the lowest preference for the email mode and the lowest response rate for the email survey while having the highest response rate for the phone survey. Our finding that email responders were more likely to be female and of higher income echoes the pattern of a recent practice-based single-site email survey in Ontario [6]. A move to use email surveys to collect patient experience data would need to carefully monitor underrepresentation of the lowest income groups so as not to exacerbate inequities in health care. The survey software, as it is currently used for appointment reminders, is usually deployed after linking with the electronic medical record to use patient contact information, so it is possible for automated surveys to track information such as approximate income based on postal code and to oversample a population found to be underrepresented in responses.
Opportunities to match surveys to reported language preferences and the capacity to reach people by phone or email who do not frequently attend a practice or have a stable home address raises the potential for an automated survey to be particularly valuable in understanding the experience of some of the most vulnerable members of a practice population. However, there are still inequities in access to the internet, with lower income individuals and people living in rural areas having lower access [12][13][14]. Text messaging might be preferable to phone for some patients and increase the reach across sociodemographic groups.
The low concordance rate of responses on questions of care coordination between paper and automated survey, especially the phone survey, raises important questions about a mode effect and/or the role of true anonymity in responding to questions about one's health care provider or practice in a waiting room compared with online or automated phone response. It is also possible that the paper survey questions on care coordination sensitized participants to the issue, and after their visit, they were more aware of breakdowns in optimal care, accounting for their more negative responses with the automated survey following their practice visit. Additionally, the care coordination questions had negative phrasing, which may have been more confusing for phone responders.
Cost-effectiveness was not the focus of this study. However, at roughly two-thirds the completion rate of email, a phone survey would cost about one and a half times as much per completed response. The cost of deploying a tailored automated patient outreach message and linked survey from the software company we collaborated with includes a 1-time practice start-up fee of CAN $500 (US $390) and an annual per-provider fee of CAN $600 (US $468). For an average practice of 4 providers and 5200 patients, an email survey would cost about 25 cents (US 20 cents) per survey if each patient were sent a message and survey twice per year, or less than 15 cents (US 12 cents) if most patients were sent a survey 4 times per year. Higher response rates make the email mode more cost-effective since automated systems frequently charge per survey sent. For quality improvement data collection, practices would not need to seek prior consent to contact patients. However, efforts to enhance patient buy-in and achieve higher response rates would be key to the cost-effectiveness of this approach. As practices seek better ways to engage patients and collect patient-reported experience measures and patient-reported outcome measures, it is essential to be sensitive to the response burden on patients and to promote a culture in which patients understand the purpose of surveys and feel their insights and time are valued [15]. This may help build a partnership with patients in practice-based surveying as a way to give patients more influence in the system and their care.
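The per-survey cost figures above can be reproduced with a back-of-envelope calculation. The sketch below assumes, as the text implies, that the annual per-provider fee is the only recurring cost and that the one-time start-up fee is excluded.

```python
# Back-of-envelope check of the per-survey cost figures quoted above.
# ASSUMPTION: only the annual per-provider fee recurs; the one-time
# CAN $500 start-up fee is excluded, as the text implies.

providers = 4
patients = 5200
annual_fee_per_provider = 600  # CAN$

annual_cost = providers * annual_fee_per_provider  # CAN$2400 per year

for surveys_per_patient_per_year in (2, 4):
    surveys_sent = patients * surveys_per_patient_per_year
    cost_per_survey = annual_cost / surveys_sent
    print(f"{surveys_per_patient_per_year}x/year: "
          f"CAN${cost_per_survey:.2f} per survey sent")
# 2x/year -> ~CAN$0.23 ("about 25 cents"); 4x/year -> ~CAN$0.12 ("less than 15 cents")
```

Both results agree with the rounded figures quoted in the paragraph above.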
The capacity of this proposed system to link collection of patient-reported measures with clinical services, such as appointment reminders or preventive care reminders, could improve the response rates received on general surveys of patient experiences, improving quality and reducing costs [2]. Such an approach would have the benefit of being able to deploy surveys to all patients or ones with prespecified criteria (eg, people who just attended the practice, have not attended in over a year, have a recent hospital discharge). Such a survey could be linked with data automatically extracted from electronic medical records or a registry developed by providers, offering an even greater opportunity to understand patient experiences and outcomes. Additionally, an automated system can spread the burden of response across a wide and/or randomly selected segment of a practice's patient population, asking different questions to different patients on an ongoing or rolling basis, enhancing reach and reducing cost compared with traditional waiting room surveys.
Increasingly, electronic medical records are being used to collect patient-reported outcome measures that are entered directly into the patient's chart. This approach offers the benefit of supporting a patient's immediate care. However, it creates a burden for the provider or practice to review, in a timely manner, data automatically added to a patient's chart. Keeping an automated patient survey function distinct from clinical care may be attractive to providers and practices that need to manage their workflow and already feel overburdened with data and data requests.
As a survey method, an automated patient survey offers some attractive features. Response rates and sample bias can be easily calculated for parameters such as age, gender, or income as estimated by postal code, without adding to the patient burden of providing this information. Based on continually updated information on completed surveys, ongoing distribution (sampling) parameters can be set to minimize or account for any bias that may arise. Automated surveys can be deployed at regular intervals determined by the practice and would not burden practice staff, providers, or even patients during a visit, thus avoiding interruptions or additional work.
As more practices are collecting email addresses from their patients and patients expect email communication options, an automated patient engagement system with an embedded survey is feasible. Practices already using this or a similar technology to serve patients through outreach reminders may be more willing to participate in data collection initiatives that use this same infrastructure for quality improvement or research.

Limitations
There are some limitations to consider in interpreting the findings of this study. Initial recruitment into the TRANSFORMATION study was through a convenience sample of patients from primary care practices across British Columbia, Ontario, and Nova Scotia. As such, patients recruited into the study may not be representative of patients across Canada, potentially limiting generalizability. Additionally, the potential for selection bias is further compounded by the relatively low overall response rates among participants of the automated patient survey, who were recruited from the initial convenience sample of patients enrolled in the larger study.

Conclusions
An automated practice-based patient experience survey achieved significantly different response rates between phone and email, with email response rates rising as income group rose. The higher response rates of the email surveys make a phone approach less cost-effective. However, care must be taken not to further inequities in health care by underrepresenting the experience of certain groups in decision making. Further, potential mode effects for the different survey modalities may limit multimodal survey approaches. An automated communication system will become even more valuable as the stock of high-quality and validated instruments to measure patient-reported outcomes grows over the next decade [16]. An automated system that enables targeted outreach surveys with minimal burden on patients and providers could facilitate the integration of patient-reported outcomes into care planning and service organization, supporting the move of our primary care practices toward a more responsive, patient-centered, continual learning system.