This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
The task complexity involved in connecting to telehealth video visits may disproportionately impact health care access in populations already experiencing inequities. Human intermediaries can be a strategy for addressing health care access disparities by acting as
We conducted a cognitive load theory–informed pilot intervention involving
Early in the COVID-19 pandemic, a telehealth helping session was offered by phone to patients at an FQHC. Graduate students led the sessions, which involved conducting a telehealth video test run or helping with patient portal log-in. They systematically recorded their recruitment efforts, intervention observations, and daily reflection notes. Following the intervention, we asked the intervention participants to participate in an interview, and we asked all patients who had telehealth visits during, or in the 4 weeks before or after, the intervention period to complete a survey. Electronic health records were reviewed to assess changes in telehealth visit format. Descriptive and inferential statistical analyses of the recruitment records, electronic health record data, and surveys were performed. Through integrative analysis, we developed process-related themes and recommendations for future equity-focused telehealth interventions.
Of the 239 eligible patients, 34 (14.2%) completed the intervention and 3 (1.2%) completed subsequent interviews. The intervention participants who completed the survey (n=15) had lower education and less technological experience than the nonintervention survey participants (n=113). We identified 3 helping strategies for cognitive load reduction:
Although a limited number of people participated in the intervention, it may have reached individuals more likely to need technology assistance. We postulate that the significant differences between intervention and nonintervention participants were rooted in baseline differences in the groups’ education level, technology experience, and technology use frequency; however, small sample sizes limit conclusions. The barriers encountered during the intervention suggest that patients at FQHC may require both improved access to web-based technologies and human intermediary support to make telehealth video visits feasible. Future large, randomized, equity-focused studies should investigate blended strategies to facilitate video visit access.
The unprecedented uptake of telehealth services during the COVID-19 pandemic [
For patients to successfully conduct a telehealth video visit, they must have access to up-to-date devices and broadband internet [
Equitable telehealth access requires reducing the cognitive load introduced by the complex tasks involved in conducting telehealth video visits. Cognitive load is the effort and mental resources required to complete a task [
Previous research has demonstrated that psychological stress can tax one’s cognitive load and emotional resources [
The COVID-19 pandemic has spurred an increased uptake of intermediaries who serve as “digital navigators” for telehealth technologies [
We extend prior telehealth and intermediary work [
We conducted a cognitive load theory–informed pilot intervention involving
The study’s participating FQHC offered telephone and video telehealth visits. However, the FQHC’s recent addition of multiple technologies introduced various novel, challenging tasks and processes for phone and video telehealth sessions. For instance, the FQHC’s new electronic health record (EHR) vendor did not offer video visit capabilities at the beginning of the pandemic. To address this limitation, the FQHC adopted 2 telehealth platforms that supported video visits but lacked EHR integration. Notably, one vendor continues to offer full video visit capabilities only on iPhone Operating System devices, which tend to be adopted by higher-income populations [
In addition, the patients were asked to complete the telehealth consent form through their patient portal before their first telehealth visit. However, the FQHC had limited patient portal uptake owing to the transition to a new EHR system just 5 months before the COVID-19 pandemic: of the 9333 patients who had an appointment from October 2019 to March 13, 2020, 2311 (24.77%) had enrolled, 1134 (12.15%) had declined, and 418 (4.48%) had used the patient portal. Therefore, the FQHC recommended that in addition to the video test run, portal setup and telehealth consent completion be included in the intervention design, as they were both important tasks for telehealth video visits and novel to most patients at the FQHC.
Given the multiple patient-facing technologies involved in a telehealth video visit, the intermediary intervention focused on assisting patients with three tasks: (1) logging into the patient portal, (2) signing a telehealth consent form in the patient portal, and (3) testing telehealth video visits using a platform that was not integrated into the patient portal of FQHC.
We conducted a pilot study to develop and test the feasibility of an equity-focused telehealth intermediary intervention. Pilot studies are exploratory, intended to inform the development of larger-scale randomized studies, and do not strive for a sample size large enough for inferential statistics [
Notably, our equity-focused design adds to current intervention study approaches. Intervention designs often perpetuate the unjust allocation of resources and widen health disparities because of selection bias that favors the dominant population [
The different forms of memory referenced in the cognitive load literature [
As illustrated in
Theoretical foundations of intervention.
We designed the intervention to be a 10-30 minute
Helpers were paid graduate student research assistants who were proficient in English or Spanish and trained as FQHC volunteers. Resources to help prepare helpers for the helping sessions included (1) a 1-hour orientation session, (2) a walkthrough of the intervention, (3) an intervention script (
Design of intervention.
As the right side (green rounded boxes) of
In addition, we posited that the intervention would impact participants’ visit experiences as follows: (1) reducing cognitive load by improving the performance of tasks related to video visits; (2) changing the visit modality from phone to video; (3) decreasing technical problems during video visits; and (4) improving satisfaction with video visits, including more willingness to recommend video visits to others.
Participants were patients from an FQHC in Metropolitan Detroit who spoke either Spanish or English and were aged over 18 years. The helpers attempted to contact all patients scheduled for a phone or telehealth video visit during a 2-week period in August 2020. We established a limited time period because of the urgency of providing an alternative to face-to-face visits, while also recognizing that students would have more limited availability once their fall semester began. The FQHC partner did not provide advance notice to the patients before offering helping sessions. However, helpers called from a phone account that displayed the organization’s name on the call display, and the introductory script named the FQHC and the university as sponsors of the intervention. If helpers could not reach a patient by phone, a brief message was left as a voicemail or with the person who answered the call. Before the helping sessions, the intervention participants were asked to provide their oral consent and were mailed or emailed a copy of the informed consent documents.
This study was reviewed and approved by the University of Michigan’s Institutional Review Board (#HUM00182442 and #HUM00152878).
The multiple methods that we used for data collection are listed in
Methods for data collection.
| Methods | Description | Completed by | Evaluation measures |
| --- | --- | --- | --- |
Intervention recruitment tracking | Microsoft Excel was used to track attempts to contact patients for the helping session. Information recorded included date and time of message and the results from a phone call (ie, voicemail, in-person message, no answer, busy, request for helper to call back, or refused to participate) | Helpers | Process evaluation |
Structured observations | Completed during and after a helping session. Contained structured questions on visit modality and the locations of patients’ issues and open-ended questions on communication with patients and provided guidance | Helpers | Process evaluation |
Reflection notes | Open-ended questions were completed at the end of each intervention day. These prompted helpers about their experiences, interactions and communication with patients, and their emergent techniques for completing intervention activities to reduce patient cognitive load | Helpers | Process evaluation |
Telehealth experience survey | The survey was available in English and Spanish and contained validated measures on perceived difficulty as a measure of cognitive load, perceived usefulness, perceived ease of use, confidence in using telehealth, and satisfaction with visit. Designed to compare intervention and nonintervention participants | Intervention and nonintervention patient participants | Preliminary impact on intervention participant and visit experience |
Telehealth experience survey: intervention-specific questions | A set of 4 questions were added at the end of the posttelehealth survey which asked helpees about their experiences with the technology helpers | Intervention patient participants | Preliminary impact on intervention participant |
Semistructured interview experiences with patients who had helping sessions | Completed after a telehealth visit and asked patients about personal experiences with tasks, processes, and supporting technologies and their experiences with the helping sessions. Interviews were conducted in English or Spanish | Intervention patient participants | Process evaluation and preliminary impact on intervention participant |
EHRa data | Analyzed EHRs using manual chart review to assess whether the modality for the telehealth visit changed after the helping session and reasons provided for the modality change and to extract information on prior telehealth experience among intervention participants. Determined the number of portal users in providing details on the sociotechnical context | Research analyst | Preliminary impact on visit experience |
aEHR: electronic health record.
To assess the preliminary impact and reach of the helping sessions, we sent a survey link via an EHR-based text and portal “campaign” to all adult patients at the FQHC who (1) spoke English or Spanish, (2) had consented to receive texts from the FQHC or had a patient portal, and (3) had completed a telehealth visit (phone or video) during, or in the 4 weeks before or after, the helping intervention pilot period. Accordingly, we contacted all eligible patients.
The survey contained validated measures on perceived usefulness [
Survey intervention participants were asked if they could be contacted for follow-up interviews. Those who indicated interest and intervention participants who did not complete the survey were contacted by phone or text for a 60-minute semistructured interview about their experiences with telehealth and the helping session. All interviews were recorded and transcribed, and Spanish transcripts were translated into English.
In addition, the EHR charts were manually reviewed to assess patients’ previous experiences with telephone or video visits and whether the final format of the scheduled telehealth visit had changed following the helping session. As applicable, we extracted the “reason” why patients opted for a phone visit from a drop-down list within the EHR visit note template accessed by all providers.
Three team members (GM, JJG, and LKB) analyzed survey data using descriptive statistics and inferential statistical tests, including the Wilcoxon rank-sum test, Pearson chi-square test, and Fisher exact test, using R software (R Foundation for Statistical Computing). Excel was used to categorize the data from the structured observations and chart reviews. One researcher (MA) used NVivo (QSR International) to code the data from the daily reflection notes, using both inductive and deductive coding (using codes informed by the cognitive load reduction literature) [
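For readers unfamiliar with the inferential tests named above, the Pearson chi-square test of independence for a 2×2 cross-tabulation (eg, intervention group by education category) can be computed directly. The sketch below is illustrative only: it uses hypothetical counts and is written in Python rather than the R software used in the study.

```python
# Minimal sketch (not the authors' R code) of the Pearson chi-square test of
# independence for a 2x2 contingency table, using only the standard library.
# The counts below are hypothetical and stand in for a "group x education"
# cross-tabulation of the kind analyzed in the survey data.

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]].

    Uses the shortcut form of sum((observed - expected)^2 / expected)
    that holds for 2x2 tables: N(ad - bc)^2 / (row and column totals).
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: rows = no intervention / intervention,
# columns = at least high school / less than high school
stat = chi_square_2x2(91, 15, 8, 7)
print(round(stat, 2))  # prints 9.34; above 3.84, so significant at alpha=.05 (df=1)
```

When any expected cell count is small (as with only 15 intervention participants here), the Fisher exact test named in the Methods is the usual fallback because the chi-square approximation becomes unreliable.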
To inform future large-scale equity-focused interventions, we present the results on demographics for assessing recruitment and reach, in-depth reporting of our qualitative results for process delivery, and comparator data for evaluating the preliminary impact of the intervention [
Demographics of intervention participants (N=34).
| Demographics | Female (n=24), n (%) | Male (n=8), n (%) | Preferred not to answer or missing (n=2), n (%) |
| --- | --- | --- | --- |
| Age group (years) |  |  |  |
| 18-25 | 2 (8) | 0 (0) | 0 (0) |
| 26-35 | 1 (4) | 2 (25) | 0 (0) |
| 36-45 | 10 (42) | 1 (13) | 0 (0) |
| 46-55 | 7 (29) | 3 (38) | 0 (0) |
| 56-65 | 4 (17) | 1 (13) | 0 (0) |
| 66-75 | 0 (0) | 1 (13) | 0 (0) |
| >76 | 0 (0) | 0 (0) | 0 (0) |
| Preferred not to answer or missing | 0 (0) | 0 (0) | 2 (100) |
| Race |  |  |  |
| Black | 12 (50) | 4 (50) | 0 (0) |
| White | 5 (21) | 3 (38) | 1 (50) |
| Other | 7 (29) | 0 (0) | 1 (50) |
| Preferred not to answer or missing | 0 (0) | 1 (13) | 0 (0) |
| Ethnicity |  |  |  |
| Hispanic | 8 (33) | 1 (13) | 1 (50) |
| Non-Hispanic | 16 (66) | 6 (75) | 1 (50) |
| Preferred not to answer or missing | 0 (0) | 1 (13) | 0 (0) |
| Preferred language |  |  |  |
| English | 17 (71) | 7 (88) | 2 (100) |
| Spanish | 7 (29) | 1 (13) | 0 (0) |
Demographic information and basic technological experience for survey participants (N=128).
| Variable | Overall (N=128a) | No intervention (n=113a) | Intervention (n=15a) | P valueb |
| --- | --- | --- | --- | --- |
| Age group (years) |  |  |  | .90 |
| 18-25, n (%) | 14 (11.6) | 12 (11.2) | 2 (14.3) |  |
| 26-35, n (%) | 18 (14.9) | 17 (15.9) | 1 (7.1) |  |
| 36-45, n (%) | 34 (28.1) | 29 (27.1) | 5 (35.7) |  |
| 46-55, n (%) | 26 (21.5) | 22 (20.5) | 4 (28.6) |  |
| 56-65, n (%) | 20 (16.5) | 19 (17.8) | 1 (7.1) |  |
| 66-75, n (%) | 9 (7.4) | 8 (7.5) | 1 (7.1) |  |
| Preferred not to answer or missing, n | 7 | 6 | 1 |  |
| Education |  |  |  | .006 |
| ≥High school, n (%) | 99 (81.8) | 91 (85.8) | 8 (53.3) |  |
| <High school, n (%) | 22 (18.2) | 15 (14.2) | 7 (46.7) |  |
| Preferred not to answer or missing, n | 7 | 7 | 0 |  |
| Spanish-speaking survey participant, n (%) | 23 (18) | 18 (15.9) | 5 (33.3) | .14 |
| Technology experience |  |  |  | .005 |
| Years, mean (SD) | 14.1 (8.6) | 14.9 (8.4) | 8.4 (8.4) |  |
| Preferred not to answer or missing, n | 17 | 16 | 1 |  |
| Frequency of technology use |  |  |  | .008 |
| Daily, n (%) | 103 (85.2) | 95 (88.8) | 8 (57.1) |  |
| Several days a week, n (%) | 8 (6.6) | 5 (4.7) | 3 (21.4) |  |
| Every few weeks to never, n (%) | 10 (8.3) | 7 (6.5) | 3 (21.4) |  |
| Preferred not to answer or missing, n | 7 | 6 | 1 |  |
|  |  |  |  | .27 |
| Never, n (%) | 102 (83.6) | 91 (85) | 11 (73.3) |  |
| Not never, n (%) | 20 (16.4) | 16 (14.9) | 4 (26.7) |  |
| Preferred not to answer or missing, n | 6 | 6 | 0 |  |
|  |  |  |  | .03 |
| >1, n (%) | 81 (66.9) | 74 (69.8) | 7 (46.7) |  |
| 1, n (%) | 39 (32.2) | 32 (30.2) | 7 (46.7) |  |
| 0, n (%) | 1 (0.8) | 0 (0) | 1 (6.7) |  |
| Preferred not to answer or missing, n | 7 | 7 | 0 |  |
|  |  |  |  | .54 |
| No, n (%) | 69 (60.5) | 61 (62.2) | 8 (53.3) |  |
| Yes, n (%) | 45 (39.5) | 38 (38.8) | 7 (46.7) |  |
| Preferred not to answer or missing, n | 14 | 15 | 0 |  |
|  |  |  |  | .26 |
| No, n (%) | 48 (40) | 40 (38.1) | 8 (53.3) |  |
| Yes, n (%) | 72 (60) | 65 (61.9) | 7 (46.7) |  |
| Preferred not to answer or missing, n | 8 | 8 | 0 |  |
|  |  |  |  | .13 |
| No, n (%) | 1 (0.8) | 0 (0) | 1 (6.7) |  |
| Yes, n (%) | 118 (99.1) | 104 (100) | 14 (93.3) |  |
| Preferred not to answer or missing, n | 9 | 9 | 0 |  |
|  |  |  |  | .23 |
| No, n (%) | 68 (59.1) | 57 (57) | 11 (73.3) |  |
| Yes, n (%) | 47 (40.9) | 43 (43) | 4 (26.7) |  |
| Preferred not to answer or missing, n | 13 | 13 | 0 |  |
|  |  |  |  | .44 |
| No, n (%) | 97 (85.1) | 86 (86) | 11 (78.6) |  |
| Yes, n (%) | 17 (14.9) | 14 (14) | 3 (21.4) |  |
| Preferred not to answer or missing, n | 14 | 13 | 1 |  |
|  |  |  |  | .99 |
| No, n (%) | 111 (97.4) | 98 (97) | 13 (100) |  |
| Yes, n (%) | 3 (2.6) | 3 (3) | 0 (0) |  |
| Preferred not to answer or missing, n | 14 | 12 | 2 |  |
aPearson chi-square test; Wilcoxon rank-sum test; Fisher exact test.
bBonferroni correction not used. Sample size discrepancy considered within tests.
As detailed in
The 3 participants who completed the intervention, survey, and interview had a mean age of 43 years and identified as a Black non-Hispanic man, a White Hispanic woman, and a White non-Hispanic man.
As shown in
We found that intervention recruitment was more successful when we could reach people directly over the phone versus via voicemail (
Recruitment for the intervention.
Feasibility of recruitment for intervention: timing and tracking of calls.
|  | Participated in intervention (n=34), n (%) | Did not participate in intervention (n=205), n (%) |
| --- | --- | --- |
| 0 (same day) | 12 (35.3) | 35 (17.1) |
| 1 | 14 (41.2) | 80 (39) |
| 2 to 3 | 3 (8.8) | 31 (15.1) |
| 4 to 6 | 5 (14.7)a | 25 (12.2) |
| More than 6 days | 0 (0) | 34 (16.6) |
| Immediately participated in helping session | 16 (47.1) | N/Ab |
| Gave an immediate response not to participate | N/A | 43 (21) |
| Requested a call back | 15 (44.1) | 34 (16.6) |
| Message was left with a person | 2 (5.9) | 8 (3.9) |
| Voicemail | 0 (0) | 77 (37.6) |
| No answer | 1 (2.9) | 29 (14.1) |
| Missing | 0 (0) | 14 (6.8) |
aOne visit was canceled after completing the helping session.
bN/A: not applicable.
Recruitment for the telehealth experience survey and interviews.
The helping sessions ranged from 5 to 65 minutes, with an average of 34 minutes. Of the 8 sessions that lasted 50 minutes or longer, 7 resulted in successful completion of all 3 tasks.
During the helping session, 11 participants attempted to sign the telehealth consent in the patient portal, 5 of whom were successful (
During the helping session, 24 intervention participants attempted a telehealth video test run (
Tasks attempted and completed by participants during helping session (N=34).
| Task | Number of participants, n (%) | Successful on task, n (%) | Not successful on task, n (%) |
| --- | --- | --- | --- |
| Patient portal log-in |  |  |  |
| Attempted to resolve patient portal issue | 23 (68) | 15 (65)a | 8 (35)a |
| Did not attempt to resolve a patient portal issue | 11 (32) | N/Ab | N/A |
| Telehealth consent |  |  |  |
| Attempted to sign telehealth consent | 11 (32) | 5 (45)c | 6 (55)c |
| Did not attempt to sign telehealth consent | 23 (68) | N/A | N/A |
| Telehealth video test run |  |  |  |
| Attempted telehealth video test run | 24 (71) | 16 (67)d | 8 (33)d |
| Did not attempt telehealth video test run | 10 (29) | N/A | N/A |
an=23.
bN/A: not applicable.
cn=11.
dn=24.
Elements within the 3 tasks introduced distinct language, novel processes, and context-switching between platforms, which could possibly influence task completion.
In addition,
They kept telling me their email handle was email.com instead of gmail.com. I found out it was Gmail when I asked who provided the email service she uses.
He entered his email and confirmed it multiple times, but he was not able to see an email from [FQHC] in his email. He wasn’t very familiar with the Gmail app that was on his phone.
The sessions that focused on completing the task of resetting their password and logging into the patient portal often involved the introduction of a new set of elements and novel processes for signing telehealth consent in the patient portal (
The providers had different preferences for using the 2 telehealth video platforms, both of which lacked patient portal integration. One of the telehealth video platforms offers a simplified process involving a single step of clicking on a link sent via a text message. However, during the telehealth video test runs, the intervention participants had to switch context between different areas of their phones because they were required to navigate between the telehealth platform and device settings for sound and video adjustments (
Number of elements and platform switches required for patient portal log-in tasks. 1The verbs used for each element illustrate the variety of actions required for each of the three tasks; 2number of boxes in the element row; 3number of rows of shaded boxes in the platform; and 4number of arrows; each arrow indicates a platform switch; dual arrows indicate 2 possible pathways.
Number of elements and platform switches required for telehealth consent tasks. 1The verbs used for each element illustrate the variety of actions required for each of the three tasks; 2number of boxes in the element row; 3number of rows of shaded boxes in the platform; and 4number of arrows; each arrow indicates a platform switch; dual arrows indicate 2 possible pathways.
Number of elements and platform switches required for telehealth tasks. 1The verbs used for each element illustrate the variety of actions required for each of the three tasks; 2number of boxes in the element row; 3number of rows of shaded boxes in the platform; and 4number of arrows; each arrow indicates a platform switch; dual arrows indicate 2 possible pathways.
Helpers had a limited ability to minimize external factors that could impact a participant’s cognitive load. Although helpers encouraged intervention participants to find a quiet physical environment for the sessions, this was not always possible because of background noise from others in the household. In addition, household members may have been at home during the early days of the COVID-19 pandemic, which could have potentially introduced distractions during telehealth video test runs. A helper noted how external factors impacted the session:
[The participant was] unable to focus attention on the task at hand. I had to repeat questions and processes multiple times, patient repeatedly spoke to other people.
The intervention participants experienced internet access disruptions during the helping sessions, which may have increased during high-demand times of day. In addition, they needed to use their data plans for the telehealth video test runs. Most intervention participants used mobile phones as their only devices. Thus, the required tasks involved using a small screen to switch between email, SMS text messages, the patient portal, and the telehealth platform. Often, these mobile devices were older and no longer had current technical documentation available for troubleshooting. In at least one case, the screen was damaged. One helper noted these varied issues in the structured observation form:
[The] phone was cracked and hard to use. Was trying to get a new phone to be able to do visits better. Doesn’t have Wi-Fi so was planning on using data for the [video visit] call...Call disconnected 2-3 times. Had bad service.
As the intervention progressed, helpers demonstrated that their ability to assist patients did not come from knowing every technical solution detailed in a wiki or manual. Rather, they developed tactics for navigating complex and uncertain technical pathways. Themes based on analyses of helper-completed forms and reflection notes demonstrated how the 4 helping session activities expanded into the techniques of
The step-by-step guidance offered during the helping sessions supported configuration and learning. During an interview, an intervention participant noted the importance of this approach when they said:
[The helper’s] step by step [guidance on] how to do the link...how to fill out the questionnaire...and where to go and where to send the link and everything, and how the process went...and little by little she explained it to me.
Helpers demonstrated using a stepwise approach during configuration and setup by intentionally pausing so that they could confirm with intervention participants that they were both at the same stage. One helper also demonstrated how confirmation of each step helped determine why something may not be working for the intervention participant:
I walked him through the steps again to connect to [the video platform] and checked in after each step. For example, did he receive the link via text?...click on the link?...Eventually we figured out the blockage was that his phone said “another app is accessing the camera and or microphone.”...[We] disconnect[ed] the phone call and see if that allowed him to get onto [the video platform] and it did.
A “step-by-step” approach was also helpful in guiding participants through context switches between platforms and device configurations. One helper illustrated that storing information from a previous step while the participant moved to the next step could be helpful when switching between platforms:
I told him that if he wanted help remembering the security code to get into his account, he could tell it to me, and I could repeat it back to him when he went to plug the number in for verification. This proved to be helpful as he forgot the number when we left the text message.
Given the warm accompaniment design of the intervention, helpers paid particular attention to building rapport with the intervention participants. Helpers noted that many intervention participants were interested in obtaining more information about their upcoming telehealth visits, whereas others demonstrated negative affect. For example, helpers described intervention participants as “interested but somber” or initially “hesitant,” “cautious,” or “distrusting.” In such situations, building rapport was even more important.
Helpers observed that rapport was often built during impromptu moments, which allowed for more natural conversation, shared cultural connections, and a shift to the participant’s priorities. For example, when a helper “flubbed the lines” with a participant, they shared “a good laugh,” and from that point on the session “felt more conversational.” During another session that began in English, the participant picked up cues that the helper was also Spanish-speaking, and they switched to speaking “
The shared emotions of the intervention participants and helpers completing novel tasks illustrated another source for building rapport. In the initial days of the intervention, helpers wrote about being “frustrated” and felt like they were “stabbing in the dark” when trying to obtain the signed telehealth consent form. However, helpers began noting how the intervention participants reciprocated patience and the subsequent sense of accomplishment in working together through this shared problem. Helpers saw that the helping session had not only “built [participants’] confidence in knowing what to expect” but that the sessions were also “a confidence builder for me [the helper].” Helpers also shared moments of happiness when completing a task with participants who initially doubted their abilities. This was demonstrated by a helper’s reflection note:
Finally SUCCESS!! We were both so happy.
During a follow-up interview, an intervention participant demonstrated a helper’s success in building rapport when they noted that it was “...as if she was a friend...she was calming [and] I always felt comfortable.”
Helpers’ efforts at “being on the same page” were both a literal and a figurative endeavor: seeing the same view on the screen as patients and developing a shared understanding with them. The importance of “being on the same page” became evident when helpers discovered that they had a different view of the patient portal than the intervention participants. After encountering this issue early in the intervention, helpers got on the same page by learning to switch to a mobile phone view.
In addition, “being on the same page” involved confirming the terms and words used during the helping sessions. A helper related how they “talk[ed] back [to the patient to]...make sure we meant the same thing.” Conversely, unfamiliar technical terms could disrupt “being on the same page.” The interviews offered an example: a participant who was confused (“what?”) by questions about using “the patient portal” was able to answer once asked whether they used “the online web site...for test results and upcoming visits.” Notably, encountering novel terms is a potential source of complexity that can increase the cognitive load [
Postintervention data collection focused on gathering feedback about the helping session through surveys and interviews. As detailed in
Our limited survey responses and interviews with intervention participants affected our ability to assess the preliminary impact of the intervention on the participants. However, given that pilot studies are exploratory, we present the preliminary findings below and in
Evaluation of the preliminary impact of the intervention (Telehealth Experience Survey Results; N=128).
| Measure | Overall (n=61) | No intervention (n=54) | Intervention (n=7) | P valuea |
| --- | --- | --- | --- | --- |
| Value, mean (SD) | 4.4 (0.8) | 4.5 (0.7) | 3.6 (1.3) | .02 |
| Preferred not to answer, n | 1 | 1 | 0 |  |
| Value, mean (SD) | 3.8 (1.1) | 3.8 (1.1) | 3.3 (1.1) | .24 |
| Preferred not to answer, n | 3 | 3 | 0 |  |
| Value, mean (SD) | 92.6 (20.4) | 94.1 (16.0) | 79.6 (44.5) | .27 |
| Preferred not to answer, n | 12 | 10 | 2 |  |
| Value, mean (SD) | 85.2 (26.0) | 87.1 (23.2) | 73.0 (40.6) | .35 |
| Preferred not to answer, n | 15 | 14 | 1 |  |
| Value, mean (SD) | 91.3 (21.6) | 92.4 (17.9) | 83.3 (40.8) | .86 |
| Preferred not to answer, n | 14 | 13 | 1 |  |
| Value, mean (SD) | 90.0 (24.3) | 89.3 (25.5) | 96.0 (8.9) | .66 |
| Preferred not to answer, n | 13 | 11 | 2 |  |
| Value, mean (SD) | 88.6 (21.8) | 90.3 (18.5) | 75.7 (39.1) | .26 |
| Preferred not to answer, n | 10 | 9 | 1 |  |
| Value, mean (SD) | 4.4 (0.8) | 4.5 (0.7) | 3.7 (1.3) | .047 |
| Preferred not to answer, n | 0 | 0 | 0 |  |
| Value, mean (SD) | 3.7 (1.2) | 3.8 (1.2) | 3.0 (1.2) | .07 |
| Preferred not to answer, n | 3 | 3 | 0 |  |
| Value, mean (SD) | 4.3 (0.9) | 4.3 (0.9) | 4.6 (0.5) | .30 |
| Preferred not to answer, n | 61 | 54 | 7 |  |
| Value, mean (SD) | 4.2 (0.9) | 4.2 (0.9) | 4.4 (0.5) | .67 |
| Preferred not to answer, n | 62 | 54 | 8 |  |
| Value, mean (SD) | 4.4 (1.0) | 4.5 (0.9) | 3.3 (1.2) | .002 |
| Preferred not to answer, n | 70 | 61 | 9 |  |
| Value, mean (SD) | 4.5 (0.6) | 4.6 (0.6) | 4.3 (0.8) | .27 |
aWilcoxon rank-sum test.
In terms of cognitive load and learning,
From the 6 survey responses about the helper enhancing
On average, participants indicated high certainty of being able to connect to video visits in a range of scenarios, although scores were lower on the hypothesized new device (
As for our measure related to cognitive load when performing tasks (
Of the 34 telehealth visits, before the helping session 74% (n=25) were scheduled as video visits and 26% (n=9) were scheduled as phone visits. The percentage of visit modalities recorded after the telehealth visit was 53% (18/34) via video and 47% (16/34) via phone. Overall, the EHR notes had the following reasons for the 16 phone visits: no video available at the time of the visit (n=6), technical difficulties (n=4), patient refusal or preference (n=4), and no reason given (n=2).
Following the helping session, 38% (13/34) of the intervention participants’ visit modalities changed. Of the 9 intervention participants initially scheduled for a phone visit, 3 (33%) switched to video for their telehealth visit; among these 3 participants, 2 (67%) had prior phone visits, 1 (33%) had a prior video visit, and 1 (33%) had a successful video visit test run during a helping session. Of the 25 intervention participants initially scheduled for a video visit, 10 (40%) ultimately had a phone visit; of these, 5 had prior phone visits, 1 had a prior video visit, and 7 had conducted a successful telehealth video test run during the helping session. For those who had successful video test runs but still switched to phone, the EHR notes attributed the change to video being unavailable (n=2), technical difficulties (n=2), patient refusal or preference (n=1), or no reason given (n=2). One individual did not have a device to support video visits at the time of the session, but the helper successfully walked the participant through the process to prepare them for when their new device arrived.
The remaining 62% (21/34) of patients did not experience a change in visit modality: 15 had video telehealth visits and 6 had phone telehealth visits. Of the 15 video visits, 6 participants had prior phone visits and 6 had prior video visits. Furthermore, 6 participants who had a telehealth video visit had completed a video visit test run during a helping session, of whom 4 had never had a prior video visit.
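As a quick consistency check on the modality counts above, the scheduled-to-final transitions can be tallied; the counts below are taken directly from the text, and the tally itself is simple arithmetic:

```python
# Scheduled -> final telehealth visit modality transitions for the 34
# intervention participants, as reported in the text.
transitions = {
    ("video", "video"): 15,  # stayed video
    ("video", "phone"): 10,  # switched video -> phone
    ("phone", "video"): 3,   # switched phone -> video
    ("phone", "phone"): 6,   # stayed phone
}

total = sum(transitions.values())                                   # 34 visits
scheduled_video = sum(v for (s, _), v in transitions.items() if s == "video")
final_video = sum(v for (_, f), v in transitions.items() if f == "video")
changed = sum(v for (s, f), v in transitions.items() if s != f)

print(total, scheduled_video, final_video, changed)                 # 34 25 18 13
print(f"changed: {changed}/{total} = {changed/total:.0%}")          # changed: 13/34 = 38%
```

The tally reproduces the reported figures: 25 visits scheduled as video, 18 ultimately conducted by video, and 13 of 34 (38%) changing modality.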
Although the difference was not statistically significant, survey participants overall reported more problems during video visits (9/66, 14%) than during phone visits (3/72, 4%). Most of the video visit problems reported in the survey occurred during the visit and concerned audiovisual quality.
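The study's own inferential method for this comparison is not specified in this excerpt; as an illustration, the video-versus-phone problem rates (9/66 vs 3/72) can be compared with a Fisher exact test, which, consistent with the text, does not reach significance:

```python
# Reported problems by visit modality (counts from the text):
# video: 9 of 66 visits; phone: 3 of 72 visits.
# Fisher exact test chosen here for illustration only; it is not
# necessarily the test the study used.
from scipy.stats import fisher_exact

table = [[9, 66 - 9],   # video: problems, no problems
         [3, 72 - 3]]   # phone: problems, no problems

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio={odds_ratio:.2f}, P={p_value:.3f}")
```

The sample odds ratio is about 3.6 in favor of more video visit problems, but the 2-sided P value stays above .05, matching the nonsignificant relationship reported.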
As for satisfaction, intervention participants reported lower satisfaction with their video visit experience than nonintervention participants did.
Our pilot study evaluated the reach, delivery process, and preliminary impact of a telehealth intervention for preparing patients at an FQHC for an upcoming telehealth visit. Despite the limited sample size, our study reached people who were likely to benefit from the intervention. Of the 3 tasks, most helping sessions focused on logging into the patient portal and conducting a telehealth test run, approximately two-thirds of which resulted in successful task completion. We found less success in helping sessions that involved signing the telehealth consent form; however, this may have been because intervention participants had previously signed the form, which was therefore no longer visible.
Helpers’ emergent techniques in targeting the cognitive load experienced by intervention participants included:
A key success of our equity-focused study was that we reached people with less technological experience, who are often underrepresented in technological intervention studies [
We attempted to contact every patient scheduled for a telehealth visit within 2 weeks. Although our calls appeared to come from the FQHC, a weakness of our approach may have been the lack of prior notice that helpers would be calling. Thus, an additional step is to make patients aware of the intervention before reaching out. However, steps should be in place so that every patient is informed about the call and clinicians do not unknowingly introduce disparities by predetermining who is an ideal candidate [
Although recent reviews have evaluated different recruitment modalities (eg, face-to-face, email, and phone) [
Our comparison of intervention and nonintervention participants demonstrated that we successfully recruited people who would likely benefit from helping sessions. Representation is an ongoing issue in health informatics research, and a systematic review from 2011 on consumer health informatics studies found that participation samples are predominantly White [
Our 2-week helping session intervention connected with over half (124/239, 51.9%) of the FQHC patients by phone. A California study that offered a 2-week telehealth intermediary intervention in an urban safety-net system reached 67.8% (202/298) of participants [
A notable design difference between the 2 studies, which may explain this gap, was the stated session length: patients in our study were told that sessions would take 10-30 minutes, versus 5-10 minutes in the California study [
Equity-focused telehealth studies have measured telehealth uptake disparities in large academic health care systems that have their telehealth and patient portal partially [
During the helping sessions, intervention participants often experienced challenges when performing complex tasks and processes. The challenges helpers noted parallel prior research on cognitive load reduction: working memory can be taxed when needing to hold verbal text (such as PINs) [
Given that many intervention participants used cell phones during the session, completing tasks on devices with small screens may have heightened cognitive load [
It has been well-established that increased cognitive load makes it difficult to learn new tasks and processes [
In our study, we found that intervention participants experienced barriers that are more common among socioeconomically disadvantaged people, including poor internet connections [
The impact of negative emotions when interacting with complex patient-facing technologies has broader implications, as patients may already be experiencing fear and worry from the news they may receive during their health care visit. Preliminary research suggests that negative emotions may detract from learning and decision-making and, as such, increase cognitive load [
Although 15 patients had successful telehealth test runs during the helping session, analyses of the final telehealth visit modality revealed that more people switched from video to phone than vice versa. In some cases, technical problems and the unavailability of video influenced the visit modality, even when a video visit test run had been completed successfully. Provider preferences may also have shaped visit modality to an unknown degree, as previous patient portal reviews note that providers’ endorsement and perceptions of who uses technology can influence patients’ technology adoption [
Findings related to the impact of the intervention showed that intervention participants found telehealth software more difficult to learn and were less satisfied with their video visit experience than nonintervention participants. There were no significant differences in the difficulty of using technology for telehealth tasks or in self-efficacy. As this was an observational study without randomization, we postulate that these differences were rooted in the baseline differences between the groups regarding education, technology experience, and technology use frequency.
We found that 3 tasks were possibly too many for a single session, and the particular task of signing telehealth consent may have introduced unique elements that did not facilitate learning. Before launching a helping session intervention, we recommend using multiple types of devices and platforms to map out the elements and context switches involved in each proposed task. This process can help to structure helping sessions by identifying (1) the elements used for future applications, and thus when to focus on learning, and (2) the nonrecurring elements that tax working memory, and thus, when to introduce helping techniques to target cognitive load reduction.
The challenges encountered during the intervention suggest that making telehealth video visits feasible for patients at FQHC may require both human intermediary support and improved access to web-based technologies. The recently proposed digital inclusion-informed efforts provide a conceptual framework for designing blended strategies for future randomized equity-focused telehealth interventions [
Although we did not have a direct measure of cognitive load, we applied subjective measures of difficulty, which have previously shown a strong relationship with cognitive load [
This equity-focused pilot study on preparing patients at FQHC for an upcoming video telehealth visit builds on literature regarding telehealth access promotion strategies by targeting cognitive load through the
Helping documents for intervention participants.
Script for helping sessions.
Wiki document for helpers.
Structured forms used by helpers.
Tracking of decline to participate.
Examples of 3 helping strategies.
Summary statistics for phone versus video telehealth visits.
EHR: electronic health record
FQHC: Federally Qualified Health Center
The research team extends their thanks to all the patients who participated in our study, especially during the early, uncertain times of the COVID-19 pandemic. SG and SP were the community partners in the study. TCV, LB, and TD designed this study. TCV oversaw the selection and development of the data collection instruments. AB, AW, EA, GM, IW, SG, and VK delivered the helping sessions. AW conducted interviews and verified the transcripts. GM, JJ-G, and LKB conducted the quantitative analyses. MGA conducted qualitative and mixed analyses. MGA and TCV drafted the manuscript, and all other authors reviewed and revised the manuscript for critical content. The study was funded by the NSF RAPID COVID-19 Award #2031662.
None declared.