
Published on 11.01.21 in Vol 23, No 1 (2021): January

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/21240, first published Jun 08, 2020.


    Original Paper

    Differences in Mode Preferences, Response Rates, and Mode Effect Between Automated Email and Phone Survey Systems for Patients of Primary Care Practices: Cross-Sectional Study

    1Department of Family Medicine, University of Ottawa, Ottawa, ON, Canada

    2Department of Family Medicine, Institut du Savoir, Hôpital Montfort, University of Ottawa, Ottawa, ON, Canada

    3Centre for Health Services and Policy Research, University of British Columbia, Vancouver, BC, Canada

    4Department of Family Medicine, Dalhousie University, Halifax, NS, Canada

    Corresponding Author:

    Sharon Johnston, LLM, MD

    Department of Family Medicine

    University of Ottawa

    43 Bruyere St

    Ottawa, ON K1N 5C8

    Canada

    Phone: 1 613 562 6262 ext 2931

    Email: sjohnston@bruyere.org


    ABSTRACT

    Background: A growing number of health care practices are adopting software systems that link with their existing electronic medical records to generate outgoing phone calls, emails, or text notifications to patients for appointment reminders or practice updates. While practices are adopting this software technology for service notifications to patients, its use for collection of patient-reported measures is still nascent.

    Objective: This study assessed the mode preferences, response rates, and mode effect of a practice-based automated patient survey using phone and email modalities with patients of primary care practices.

    Methods: This cross-sectional study analyzed responses and respondent demographics for a short, fully automated, telephone or email patient survey sent to individuals within 72 hours of a visit to their regular primary care practice. Each survey consisted of 5 questions drawn from a larger study’s patient survey that all respondents completed in the waiting room at the time of their visit. Automated patient survey responses were linked to self-reported sociodemographic information provided on the waiting room survey including age, sex, reported income, and health status.

    Results: A total of 871 patients from 87 primary care practices in British Columbia, Ontario, and Nova Scotia, Canada, agreed to the automated patient survey and 470 patients (54.0%) completed all 5 questions on the automated survey. Email administration of the follow-up survey was preferred over phone-based administration, except among patients aged 75 years and older (P<.001). Overall, response rates for those who selected an emailed survey (369/606, 60.9%) were higher (P<.001) than those who selected the phone survey (101/265, 38.1%). This held true irrespective of age, sex, or chronic disease status of individuals. Response rates were also higher for email (range 57.4% [58/101] to 66.3% [108/163]) compared with phone surveys (range 36% [23/64] to 43% [10/23]) for all income groups except the lowest income quintile, which had similar response rates (email: 29/63, 46%; phone: 23/50, 46%) for phone and email modes. We observed moderate (range 64.6% [62/96] to 78.8% [282/358]) agreement between waiting room survey responses and those obtained in the follow-up automated survey. However, overall agreement in responses was poor (range 45.3% [43/95] to 46.2% [43/93]) for 2 questions relating to care coordination.

    Conclusions: An automated practice-based patient experience survey achieved significantly different response rates between phone and email modes, with email response rates increasing as income group rose. Potential mode effects for the different survey modalities may limit multimodal survey approaches. An automated, minimal-burden patient survey could facilitate the integration of patient-reported outcomes into care planning and service organization, supporting the move of our primary care practices toward a more responsive, patient-centered, continual learning system. However, practices must be careful not to further inequities in health care by underrepresenting the experience of certain groups in decision making, given the differing reach of survey modes.

    J Med Internet Res 2021;23(1):e21240

    doi:10.2196/21240




    Introduction

    The development of an information infrastructure to support a learning health system in primary care has advanced significantly with advanced analytics applied to data from electronic medical records and routinely collected administrative data [1]. However, in Canada, most primary care is delivered in small community-based practices and, unlike in the United Kingdom, there is no national or provincial infrastructure to measure and report patient experience data for primary care. Such data collection remains logistically challenging and relatively expensive for smaller practices [2]. While waiting room surveys often provide good response rates, they are costly, burdensome to practices, introduce a sampling bias toward older and more complex patients, and are limited to patients who physically attend a practice [1,3].

    A growing number [4] of health care practices are adopting software systems [5] that link with their existing electronic medical records to generate outgoing phone calls, emails, or text notifications to patients for appointment reminders or practice updates. While practices are adopting this software technology for service notifications to patients, it is not clear whether such an approach would be acceptable for surveying a practice’s patients on experience or outcome measures selected by the practice to advance its quality improvement efforts. The data on response rates for electronic surveys in primary care are rudimentary compared with those for hospital surveys [4], but response rates of 20% to 30% [6,7] have recently been found for emailed surveys linked to primary care practices [6-8]. The objective of this study was to assess the mode preferences, response rates, and mode effect of a practice-based automated patient survey using phone and email modalities with patients of primary care practices.


    Methods

    Study Sample

    This cross-sectional study analyzed mode preferences, response rates, and respondent demographics for a short, fully automated, telephone or email patient survey to consenting individuals who had recently attended their regular primary care practice. Within our larger study, Transforming Community-Based Primary Health Care Delivery through Comprehensive Performance Measurement and Reporting (TRANSFORMATION), patients from 87 primary care practices in British Columbia, Ontario, and Nova Scotia, Canada, were asked to complete a waiting room survey between 2014 and 2016. The automated patient survey system was tested on a convenience sample of those participants who consented to receiving an additional postvisit survey by email or phone. Eligible patients had to speak English or French and have a valid telephone number or email address. Patients were asked to specify their preferred contact modality, phone or email, and provide their name and contact information to an on-site research team member.

    The contact information and unique identifying number for consenting patients was entered manually by survey administrators and uploaded to a cloud-based server via a software console. Upon receipt of the information, the administering information technology company collaborator, Cliniconex [9], programmed the appropriate survey mode and language (English or French) and randomly assigned the order of 5 survey questions. Once the survey was administered by Cliniconex, all contact information was deleted, and only the unique identifying number was retained on the server.

    Survey Administration

    Participants received an automated phone or email survey within 72 hours of visiting the practice. A phone survey response was recorded as completed only if the patient could be reached at the phone number on file, accepted the call, and completed all 5 survey questions. The phone survey was initially attempted twice and then registered as incomplete if no answer was received. Partway through the study, the maximum number of attempts was increased to 4 calls to improve response rates. For those who chose the email survey mode, a single email was sent containing the introduction and a web link to the survey. An email survey was recorded as complete if all 5 survey questions were answered.
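    The completion rules described above can be sketched as simple logic. The snippet below is a hypothetical illustration of the stated policy only, not the actual Cliniconex implementation, which is not public:

```python
# Hypothetical sketch of the phone survey completion rules described above:
# a response counts as complete only if the call is answered and all 5
# questions are answered; unanswered calls are retried up to a maximum
# number of attempts (initially 2, later raised to 4 in the study).

def administer_phone_survey(try_call, max_attempts=4, n_questions=5):
    """try_call() returns a list of answers if the call was accepted, else None."""
    for _ in range(max_attempts):
        answers = try_call()
        if answers is not None:
            # Call answered: complete only if every question was answered.
            return len(answers) == n_questions
    return False  # no answer after max_attempts -> registered as incomplete

# Example: call answered on the third attempt with all 5 questions answered.
attempts = iter([None, None, ["strongly agree"] * 5])
print(administer_phone_survey(lambda: next(attempts)))  # -> True
```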

    Each survey consisted of 5 questions drawn from the TRANSFORMATION study’s waiting room patient survey [10]. The questions were selected to relate to patients’ experience with primary care and/or their practice. Two question prompts were modified from their original form in the paper waiting room survey to reflect the timing of the survey administration. When administered in the waiting room, questions 1 and 2 were prompted with “After seeing the family doctor or nurse today...”; on the automated patient survey, patients were prompted with “At your last visit with your family doctor or nurse practitioner....” See Multimedia Appendix 1 for the wording of the survey questions in the paper waiting room survey and the postvisit automated survey. Phone survey responses were stored in a secure password-protected site on a secure server. Email responses were sent directly to a hospital-based server and managed using Research Electronic Data Capture tools [11].

    The unique identifying numbers were used to link automated patient survey responses to self-reported sociodemographic information on the paper waiting room survey, completed during the participant’s visit to their practice, including age, sex, reported income, and health status.

    Data Analysis

    To detect any response bias inherent in using an automated email or phone survey system, we used Pearson chi-square tests to compare the sociodemographic profile of those who completed the automated patient survey (responders) with those who did not complete the automated patient survey (nonresponders). The comparison group of nonresponders contained those who either participated in the paper waiting room survey but refused the automated survey or agreed to the automated survey but did not complete all 5 questions. We also conducted Wilcoxon rank-sum tests on the paper waiting room survey responses, comparing differences in mean responses between those who completed the automated patient survey and those who did not. We conducted chi-square tests to compare automated patient survey mode preference (email or phone) and response rates both across and between patient sociodemographics. A Cochran-Armitage test for trends was also used to examine variation in mode preference by age and income. All analyses were performed using SAS software version 9.4 (SAS Institute Inc).
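    As an illustration, the chi-square comparison of completion rates by survey mode can be reproduced from the counts reported in the Results (369/606 for email vs 101/265 for phone). The snippet below uses scipy in place of the SAS software used in the study and assumes a standard 2×2 Pearson chi-square test:

```python
from scipy.stats import chi2_contingency

# 2x2 table of completed vs not-completed surveys by chosen mode,
# using the counts reported in the Results section.
table = [
    [369, 606 - 369],  # email: completed, not completed
    [101, 265 - 101],  # phone: completed, not completed
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, P={p:.2g}")  # P<.001, consistent with the paper
```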

    The primary outcome measure, response rate, was pooled across all practices, as we were interested in differences across the dependent variables of age and attributed socioeconomic status rather than regional variations.

    To identify a potential mode effect, secondary analyses explored responses to each question across the 3 survey modes: email and phone (automated patient survey) and paper (waiting room survey). Test-retest analysis was undertaken, comparing each patient’s responses on the waiting room survey with their responses to the corresponding automated survey questions. The percentages of concordant and discordant responses were determined by comparing waiting room responses with those from the subsequent automated survey. Weighted kappas were calculated to compare this concordance in survey responses by survey mode. Mean responses were also compared (using the Wilcoxon signed-rank test) across the corresponding questions from the paper waiting room survey and the automated patient survey (total and by mode).
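    The weighted kappa used for the test-retest comparison can be computed as below. This is a generic sketch with hypothetical ratings, not the study data, and linear weights are assumed because the paper does not state the weighting scheme:

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories, weights="linear"):
    """Weighted Cohen kappa for two sets of ordinal ratings coded 0..n_categories-1."""
    # Observed agreement matrix (as proportions).
    obs = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # Expected matrix under independence (outer product of the marginals).
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Disagreement weights: |i - j| (linear) or (i - j)^2 (quadratic).
    i, j = np.indices((n_categories, n_categories))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()

# Hypothetical 4-point responses (eg, waiting room vs automated survey).
waiting_room = [3, 2, 3, 1, 0, 2, 3, 1]
automated = [3, 2, 2, 1, 0, 1, 3, 2]
print(round(weighted_kappa(waiting_room, automated, 4), 2))
```

For perfect agreement the function returns 1.0, and it approaches 0 when agreement is no better than chance, matching the usual interpretation of kappa.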

    This study was approved by the behavioral research ethics boards at Fraser Health (RHREB 2015-017), University of British Columbia (H13-01237), Ottawa Health Science Network (20140485-01H), Bruyère Continuing Care (M16-14-029), and the Nova Scotia Health Authority (CDHA-RS/2015-150).


    Results

    Response Bias

    Of those who agreed to the automated patient survey, 69.6% (606/871) of participants chose to receive the survey by email rather than by telephone. This group represented 45.2% (871/1929) of the participants who initially consented to completing a paper waiting room survey (Table 1). Of those who agreed to the survey, 55.6% (484/871) responded and 97.1% (470/484) completed all 5 questions (24.4% [470/1929] of those who completed the paper waiting room survey and 54.0% [470/871] of those who agreed to the automated patient survey). Respondents to the automated patient survey tended to be older, were more likely to be women, had higher incomes, and reported a larger number of chronic conditions than those who did not complete the survey. There was no significant difference in paper waiting room survey responses between those who completed the automated patient survey and those who did not (Table 2).

    Table 1. Comparison of those who completed the automated patient survey to those who did not.
    Table 2. Comparison of waiting room survey responses between those who completed the automated patient survey and those who did not.

    Response Rates

    In this sample, email administration of the follow-up survey was preferred over phone-based administration, except among patients aged 75 years and older (Table 3). Among those who answered the automated patient survey, 97.1% (470/484) completed all 5 questions; thus, response rates include only those who answered all 5 questions. Overall, response rates for those who selected an emailed survey (369/606, 60.9%) were higher than for those who selected the phone survey (101/265, 38.1%). This held true irrespective of the age, sex, or chronic disease status of individuals. Response rates were also higher for email compared with phone surveys for all income groups except the lowest income quintile, which had similar response rates for phone and email modes. There was variation in response rates within the email mode, with higher response rates among more affluent individuals.

    Table 3. Mode preference and response rates by subgroups.

    Mode Effect

    We observed moderate agreement between waiting room survey responses and those obtained in the follow-up automated survey (see Multimedia Appendix 2). However, overall agreement in responses was poor for 2 questions relating to care coordination. Among phone respondents, agreement in responses was generally poor, and phone responders were particularly critical with respect to care coordination (Table 4). Agreement between waiting room responses and subsequent email survey regarding interpersonal aspects of care was moderate and poor for items relating to care coordination.

    Table 4. Comparison of responses to paper waiting room surveys and automated surveys among those who completed the automated patient survey.

    Discussion

    Principal Findings

    We successfully deployed an automated multimodal practice-based patient survey in 87 primary care practices. Overall, patients preferred the email survey mode; however, this preference was modified by age group and socioeconomic status. Completion rates for the email mode were higher than those reported for most automated health care surveys [8], while response rates in the total sample were comparable to previous reports [6]. However, it is unclear whether the lower consent rate (45.2%) from the total patient sample reflects a lack of acceptability of an automated low-burden survey or survey fatigue among participants who had already completed a long waiting room survey. Despite this, the relatively high completion rate of the short email survey suggests that this is a feasible and acceptable approach to collecting patient-reported data.

    Our results show that the lowest income group had the lowest preference for the email mode and the lowest response rate for the email survey, while having the highest response rate for the phone survey. Our finding that email responders were more likely to be female and of higher income echoes the pattern of a recent practice-based single-site email survey in Ontario [6]. A move to use email surveys to collect patient experience data would need to carefully monitor underrepresentation of the lowest income groups so as not to exacerbate inequities in health care. Because the survey software, as currently used for appointment reminders, is typically deployed after linking with the electronic medical record to access patient contact information, automated surveys could track information such as approximate income based on postal code and oversample populations found to be underrepresented in responses.

    Opportunities to match surveys to reported language preferences and the capacity to reach people by phone or email who do not frequently attend a practice or have a stable home address raise the potential for an automated survey to be particularly valuable in understanding the experience of some of the most vulnerable members of a practice population. However, there are still inequities in access to the internet, with lower income individuals and people living in rural areas having lower access [12-14]. Text messaging might be preferable to phone for some patients and could increase reach across sociodemographic groups.

    The low concordance rate of responses on questions of care coordination between paper and automated survey, especially the phone survey, raises important questions about a mode effect and/or the role of true anonymity in responding to questions about one’s health care provider or practice in a waiting room compared with online or automated phone response. It is also possible that the paper survey questions on care coordination sensitized participants to the issue, and after their visit, they were more aware of breakdowns in optimal care, accounting for their more negative responses with the automated survey following their practice visit. Additionally, the care coordination questions had negative phrasing, which may have been more confusing for phone responders.

    Cost-effectiveness was not the focus of this study. However, at two-thirds the completion rate compared with email, a phone survey would cost one and a half times as much. The cost of deploying a tailored automated patient outreach message and linked survey from the software company we collaborated with includes a 1-time practice start-up fee of $500 CAN (US $390) and an annual per-provider fee of $600 CAN (US $468). For an average practice of 4 providers and 5200 patients, an email survey would cost about 25 cents (US 20 cents) if each patient were sent a message and survey twice per year or less than 15 cents (US 12 cents) if most patients were sent a survey 4 times per year. Higher response rates make the approach more cost-effective for the email mode since automated systems frequently charge per survey sent. For quality improvement data collection, practices would not need to seek prior consent to contact patients. However, efforts to enhance patient buy-in and achieve higher response rates would be key to the cost-effectiveness of this approach. As practices seek better ways to engage patients and collect patient-reported experience measures and patient-reported outcome measures, it is essential to be sensitive to the response burden on patients and promote a culture in which patients understand the purpose of surveys and feel their insights and time are valued [15]. This may help build a partnership with patients in practice-based surveying as a way to give patients more influence in the system and their care.
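    The per-survey cost figures above can be reproduced with simple arithmetic. The sketch below amortizes only the annual per-provider fee and excludes the 1-time start-up fee, which is one plausible reading of the calculation in the text:

```python
# Back-of-envelope reproduction of the cost estimate from the text (CAD).
providers = 4
patients = 5200
annual_fee = 600 * providers  # per-provider annual fee -> $2400/year
startup_fee = 500             # 1-time fee, excluded from the per-survey cost here

for surveys_per_patient_per_year in (2, 4):
    surveys_sent = patients * surveys_per_patient_per_year
    cost_per_survey = annual_fee / surveys_sent
    print(f"{surveys_per_patient_per_year} surveys/patient/year: "
          f"${cost_per_survey:.2f} per survey sent")
# 2/year -> about $0.23 ("about 25 cents"); 4/year -> about $0.12 ("less than 15 cents")
```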

    The capacity of this proposed system to link collection of patient-reported measures with clinical services, such as appointment reminders or preventive care reminders, could improve the response rates received on general surveys of patient experiences, improving quality and reducing costs [2]. Such an approach would have the benefit of being able to deploy surveys to all patients or to those meeting prespecified criteria (eg, people who just attended the practice, have not attended in over a year, or have a recent hospital discharge). Such a survey could be linked with data automatically extracted from electronic medical records or a registry developed by providers, offering an even greater opportunity to understand patient experiences and outcomes. Additionally, an automated system can spread the burden of response across a wide and/or randomly selected segment of a practice’s patient population, asking different questions of different patients on an ongoing or rolling basis, enhancing reach and reducing cost compared with traditional waiting room surveys.

    Increasingly, electronic medical records are being used to collect patient-reported outcome measures that are entered directly into the patient’s chart. This approach offers the benefit of supporting the patient’s immediate care. However, it creates a burden for the provider or practice to review, in a timely manner, data automatically added to a patient chart. Keeping an automated patient survey function distinct from clinical care may be attractive to providers and practices that need to manage their workflow and already feel overburdened with data and data requests.

    As a survey method, an automated patient survey offers some attractive features. Response rates and sample bias can be easily calculated for parameters such as age, gender, or income as estimated by postal code, without adding to the patient burden of providing this information. Based on continually updated information on completed surveys, ongoing distribution (sampling) parameters can be set to minimize or account for any bias that may arise. Automated surveys can be deployed at regular intervals determined by the practice and would not burden practice staff, providers, or patients during a visit, thus avoiding interruptions or additional work.

    As more practices are collecting email addresses from their patients and patients expect email communication options, an automated patient engagement system with an embedded survey is feasible. Practices already using this or a similar technology to serve patients through outreach reminders may be more willing to participate in data collection initiatives that use this same infrastructure for quality improvement or research.

    Limitations

    There are some limitations to consider in interpreting the findings of this study. Initial recruitment into the TRANSFORMATION study was through a convenience sample of patients from primary care practices across British Columbia, Ontario, and Nova Scotia. As such, patients who were recruited into the study may not be representative of patients generally across Canada, potentially limiting generalizability. Additionally, potential for selection bias is further compounded by relatively low overall response rates by participants of the automated patient survey, who were recruited from the initial convenience sample of patients enrolled into the larger study.

    Conclusions

    An automated practice-based patient experience survey achieved significantly different response rates between phone and email modes, with email response rates increasing as income group rose. The higher response rates of the email surveys make a phone approach less cost-effective. However, care must be taken not to further inequities in health care by underrepresenting the experience of certain groups in decision making. Further, potential mode effects for the different survey modalities may limit multimodal survey approaches.

    An automated communication system will become even more valuable as the stock of high-quality and validated instruments to measure patient-reported outcomes grows over the next decade [16]. An automated system that enables targeted outreach surveys with minimal burden on patients and providers could facilitate the integration of patient-reported outcomes into care planning and service organization, supporting the move of our primary care practices toward a more responsive, patient-centered, continual learning system.

    Acknowledgments

    The authors thank Stephanie Blackman, Martha Foley, and Jonathan Beaumier for their help in writing the manuscript.

    Authors' Contributions

    SJ conceived the study, oversaw the implementation and analyses, and wrote the manuscript. WH helped conceive the study, oversaw the implementation, and contributed to the writing of the manuscript. SW contributed to the analyses and reviewed and approved the final manuscript. FB contributed to the analyses and reviewed and approved the final manuscript. SP led the analyses and reviewed and approved the final manuscript.

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    Survey questions and response options for the automated patient surveys and paper waiting room surveys.

    DOCX File , 16 KB

    Multimedia Appendix 2

    Concordance of automated patient survey responses compared with paper waiting room survey responses.

    DOCX File , 17 KB

    References

    1. Green ME, Hogg W, Savage C, Johnston S, Russell G, Jaakkimainen RL, et al. Assessing methods for measurement of clinical outcomes and quality of care in primary care practices. BMC Health Serv Res 2012;12:214 [FREE Full text] [CrossRef] [Medline]
    2. Peters M, Crocker H, Jenkinson C, Doll H, Fitzpatrick R. The routine collection of patient-reported outcome measures (PROMs) for long-term conditions in primary care: a cohort survey. BMJ Open 2014 Feb 21;4(2):e003968 [FREE Full text] [CrossRef] [Medline]
    3. Hogg W, Johnston S, Russell G, Dahrouge S, Gyorfi-Dyke E, Kristjanssonn E. Conducting waiting room surveys in practice-based primary care research: a user's guide. Can Fam Physician 2010 Dec;56(12):1375-1376 [FREE Full text] [Medline]
    4. Khanbhai M, Flott K, Darzi A, Mayer E. Evaluating digital maturity and patient acceptability of real-time patient experience feedback systems: systematic review. J Med Internet Res 2019 Jan 14;21(1):e9076. [CrossRef] [Medline]
    5. McLean SM, Booth A, Gee M, Salway S, Cobb M, Bhanbhro S, et al. Appointment reminder systems are effective but not optimal: results of a systematic review and evidence synthesis employing realist principles. Patient Prefer Adherence 2016;10:479-499 [FREE Full text] [CrossRef] [Medline]
    6. Slater M, Kiran T. Measuring the patient experience in primary care: comparing e-mail and waiting room survey delivery in a family health team. Can Fam Physician 2016 Dec;62(12):e740-e748 [FREE Full text] [Medline]
    7. Poppelwell E, Esplin J, Doust E, Swansson J. Evaluation of the primary care patient experience survey tool. New Zealand Ministry of Health. 2018 Apr 18.   URL: https://www.hqsc.govt.nz/assets/Health-Quality-Evaluation/PES/MoH-PES-report-18April2018_2.pdf [accessed 2020-12-02]
    8. Falconi M, Johnston S, Hogg W. A scoping review to explore the suitability of interactive voice response to conduct automated performance measurement of the patient’s experience in primary care. Prim Health Care Res Dev 2015 Aug 5;17(3):209-225. [CrossRef] [Medline]
    9. Cliniconex.   URL: http://cliniconex.com/ [accessed 2020-12-02]
    10. Wong S, Burge F, Johnston S, Hogg W, Haggerty J. The TRANSFORMATION primary health care patient experiences survey in French and English: a technical report. UBC Centre for Health Services and Policy Research. 2019.   URL: http://chspr.sites.olt.ubc.ca/files/2019/05/TRANSFORMATION-Pt-Exp-Survey-2019.pdf [accessed 2020-12-02]
    11. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009 Apr;42(2):377-381 [FREE Full text] [CrossRef] [Medline]
    12. CIRA: Canadian Internet Registration Authority.   URL: https://cira.ca/factbook/2014/the-canadian-internet.html [accessed 2020-12-02]
    13. Ramirez V, Johnson E, Gonzalez C, Ramirez V, Rubino B, Rossetti G. Assessing the use of mobile health technology by patients: an observational study in primary care clinics. JMIR Mhealth Uhealth 2016;4(2):e41 [FREE Full text] [CrossRef] [Medline]
    14. Allen M. Consumption of culture by older Canadians on the internet. Statistics Canada. 2013.   URL: http://www.statcan.gc.ca/pub/75-006-x/2013001/article/11768-eng.htm [accessed 2020-12-02]
    15. Primary care patient experience survey: support guide. Health Quality Ontario. 2015 Apr.   URL: http://www.hqontario.ca/Portals/0/documents/qi/primary-care/primary-care-patient-experience-survey-support-guide-en.pdf [accessed 2020-12-02]
    16. Cella D, Riley W, Stone A, Rothrock N, Reeve B, Yount S, PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005-2008. J Clin Epidemiol 2010 Nov;63(11):1179-1194 [FREE Full text] [CrossRef] [Medline]


    Abbreviations

    TRANSFORMATION: Transforming Community-Based Primary Health Care Delivery Through Comprehensive Performance Measurement and Reporting


    Edited by G Eysenbach; submitted 08.06.20; peer-reviewed by H van Marwijk, T Ungar; comments to author 25.08.20; revised version received 23.09.20; accepted 28.10.20; published 11.01.21

    ©Sharon Johnston, William Hogg, Sabrina T Wong, Fred Burge, Sandra Peterson. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.01.2021.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.