Published in Vol 24, No 7 (2022): July

Implementation of Web-Based Psychosocial Interventions for Adults With Acquired Brain Injury and Their Caregivers: Systematic Review



Corresponding Author:

Melissa Miao, BAppSc(Hons)

University of Technology Sydney

PO Box 123 Broadway, Ultimo

Sydney, 2007


Phone: 61 2 9514 1448


Background: More than 135 million people worldwide live with acquired brain injury (ABI) and its many psychosocial sequelae. This growing global burden necessitates scalable rehabilitation services. Despite demonstrated potential to increase the accessibility and scalability of psychosocial supports, digital health interventions are challenging to implement and sustain. The Nonadoption, Abandonment, Scale-Up, Spread, and Sustainability (NASSS) framework can offer developers and researchers a comprehensive overview of considerations to implement, scale, and sustain digital health interventions.

Objective: This systematic review identified published, peer-reviewed primary evidence of implementation outcomes, strategies, and factors for web-based psychosocial interventions targeting either adults with ABI or their formal or informal caregivers; evaluated and summarized this evidence; synthesized qualitative and quantitative implementation data according to the NASSS framework; and provided recommendations for future implementation. Results were compared with 3 hypotheses which state that complexity (dynamic, unpredictable, and poorly characterized factors) in most or all NASSS domains increases likelihood of implementation failure; success is achievable, but difficult with many complicated domains (containing multiple interacting factors); and simplicity (straightforward, predictable, and few factors) in most or all domains increases the likelihood of success.

Methods: From a comprehensive search of MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus, speechBITE, and neuroBITE, we reviewed primary implementation evidence from January 2008 to June 2020. For web-based psychosocial interventions delivered via standard desktop computer, mobile phone, tablet, television, and virtual reality devices to adults with ABI or their formal or informal caregivers, we extracted intervention characteristics, stakeholder involvement, implementation scope and outcomes, study design and quality, and implementation data. Implementation data were both narratively synthesized and descriptively quantified across all 7 domains (condition, technology, value proposition, adopters, organization, wider system, and their interaction over time) and all subdomains of the NASSS framework. Study quality and risk of bias were assessed using the 2018 Mixed Methods Appraisal Tool.

Results: We identified 60 peer-reviewed studies from 12 countries, including 5723 adults with ABI, 1920 carers, and 50 health care staff. The findings aligned with all 3 hypotheses.

Conclusions: Although studies were of low methodological quality and insufficient number to statistically test relationships, the results appeared consistent with recommendations to reduce complexity as much as possible to facilitate implementation. Although studies excluded individuals with a range of comorbidities and sociocultural challenges, such simplification of NASSS domain 1 may have been necessary to advance intervention value propositions (domain 3). However, to create equitable digital health solutions that can be successfully implemented in real-world settings, it is recommended that developers involve people with ABI, their close others, and health care staff in addressing complexities in domains 2 to 7 from the earliest intervention design stages.

Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42020186387

International Registered Report Identifier (IRRID): RR2-10.1177/20552076211035988

J Med Internet Res 2022;24(7):e38100




More than 135 million people worldwide live with acquired brain injuries (ABIs), such as stroke and traumatic brain injury (TBI) [1]. The number of people with ABI is projected to grow [2], increasing global need for rehabilitation services [1,2], including supports to manage the complex and ongoing psychosocial impact of ABI on relationships [3,4], mental health [5,6], and employment [7,8]. For these rehabilitation services to be provided at scale, they must be effectively integrated into health care systems [1].

Longstanding challenges in the implementation of evidence-based care have led to the emergence of implementation science research [9]. This includes a specific focus on digital health implementation [10,11]. Despite demonstrated potential to increase the accessibility and scalability of psychosocial supports [12,13], digital health interventions are challenging to implement and sustain [11,14,15]. Current evidence indicates that digital health implementation challenges are predominantly organizational, systemic, and sociotechnical in nature, including interrelated challenges of resources, workflows, interoperability, and legislation [10,15,16]. Therefore, understanding and addressing these challenges require a comprehensive, complexity-based approach, in which the complex, adaptive nature of health care systems, actors, and technologies, as well as the interactions between them, are recognized [17-19].

From this complexity paradigm [18-20], the Nonadoption, Abandonment, Scale-up, Spread, and Sustainability (NASSS) framework [17] offers developers, practitioners, and researchers a comprehensive synthesis of considerations to implement, scale, and sustain digital health interventions, to ensure that critically important systemic and organizational factors are not overlooked. The NASSS framework includes 7 domains of digital health implementation: condition, technology, value proposition, adopters, organization, wider system, and their interaction over time [17]. Each domain includes multiple subdomains, with published definitions of how each specific subdomain can be made simple, complicated, or complex [17]. A complexity paradigm has not yet been adopted for digital interventions targeting ABI, despite both the prevalence of this condition [1] and the value of a condition-specific focus from both theoretical [20] and stakeholder perspectives [21].

To date, the NASSS framework has been used to narratively synthesize digital health implementation findings from informal care [22], mixed home care [23], and video consultations [24] in various populations. However, it has not yet been used to underpin deductive extraction and analysis of qualitative data [25] or quantitative analyses in relation to current hypotheses concerning the potential role of complexity in implementation success [20]. Digital health implementation reviews to date have also relied on generic implementation frameworks [26-28], despite their poor fit to digital health [29]. There is therefore a need to examine existing implementation evidence specific to digital health, ABI, and its psychosocial sequelae, and to do so using a comprehensive framework that acknowledges the complexity of implementing, scaling, and sustaining digital health interventions in real-world settings, if we are to enable these interventions to succeed at a scale that can reach and support current and future global needs.


Based on a previously published protocol [30], the aims of this review were as follows:

1. Identify, evaluate, and summarize the strength and nature of implementation evidence for web-based psychosocial interventions targeting either people with ABI or their caregivers or both.

2. Synthesize qualitative and quantitative implementation data according to the NASSS framework.

3. Provide recommendations for future implementation based on this synthesis.

A subsequently introduced aim was as follows:

4. Compare findings with 3 hypotheses concerning the NASSS framework [20], which state:

  • Hypothesis 1: “If most or all of the domains can be classified as simple, an intervention is likely to be easy to implement and to be achieved on time and within budget”;
  • Hypothesis 2: “If many domains are classified as complicated, the intervention will be achievable but difficult, and likely to exceed its timescale and budget”;
  • Hypothesis 3: “If multiple domains are complex, the chances of the intervention succeeding at all are limited.”

Review Registration and Protocol

This systematic review was prospectively registered in PROSPERO (International Prospective Register of Systematic Reviews; CRD42020186387) [31]. A published protocol [30], including the search strategy and selection criteria, was developed a priori. Subsequent protocol adjustments, with rationales, are reported in accordance with PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines [32].

Search Strategy and Selection Criteria

A comprehensive search of 7 databases (MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus, speechBITE, and neuroBITE) was conducted in mid-June 2020 as per the published protocol [30]. The original Population, Intervention, Comparison, Outcome, Study design–based search encompassed multiple neurological conditions (search strategy is available in Multimedia Appendix 1) and returned 17,545 results (refer to Figure 1 for PRISMA flow diagram). After removing duplicates, a total of 9512 titles and abstracts were independently screened using Covidence (Veritas Health Innovation) software [33] by 2 authors (MM and either MB, RR, EP, or DD), applying exclusion criteria in hierarchical order (as listed in Figure 1). There was 96.4% (9170/9512) agreement at the title and abstract level (ie, 3.6% disagreement, or 342/9512 conflicts, including agreed exclusions for conflicting reasons). Conflicts were resolved through consensus discussion by at least 3 authors. A total of 609 records were screened at the full-text level. Due to the high yield of full texts, a pragmatic protocol adjustment was required. Therefore, all full texts were screened by the first author (MM), and a second author (RR, EP, DD, or MB) independently screened 25.1% (153/609) rather than 100% of full texts. There was 82.4% (126/153) agreement in this quarter of full texts (ie, 17.6% disagreement, or 27/153 conflicts, including agreed exclusions for conflicting reasons). The conflicts were discussed by at least 3 authors and resolved by consensus. The team agreed that the reliability of the screening process was adequate for the first author to proceed independently.
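The agreement percentages above follow directly from the reported conflict counts. As a minimal sketch (the helper name is ours, not the authors'), the arithmetic is:

```python
# The screening agreement rates reported above, reproduced as simple arithmetic.
# Record and conflict counts are taken from the text; the helper name is ours.
def agreement(agreed: int, total: int) -> float:
    """Percentage agreement between independent screeners, to 1 decimal place."""
    return round(100 * agreed / total, 1)

# Title and abstract level: 342 conflicts among 9512 deduplicated records
title_abstract = agreement(9512 - 342, 9512)  # 96.4
# Full-text level: 27 conflicts in the dual-screened quarter (153/609 records)
full_text = agreement(153 - 27, 153)          # 82.4
```

Note that conflicts include agreed exclusions made for conflicting reasons, so the percentages measure agreement on both the decision and its rationale.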

Given the high yield of full texts, additional records were not sought as originally planned in the protocol [30]. Instead, additional criteria were selected to increase the review’s clinical relevance to our own implementation of web-based psychosocial interventions delivered via standard desktop computers and smart devices to adults with ABI and their communication partners [34]. These narrowed exclusion criteria were introduced in the following hierarchical order (Figure 1):

  1. Less than 50% of the web-based intervention was delivered remotely; that is, web-based interventions accessed in a laboratory or clinic were excluded. For example, although Connor et al [34] examined a web-based brain training game, the study was excluded because it focused on in-person delivery on-site, accompanied by face-to-face treatment by a speech-language pathologist.
  2. Less than 100% of the intervention was psychosocial in nature; that is, the intervention provided cognitive, behavioral, educational, communicational, or supportive care to the person with the condition, their caregivers, or both. Therefore, interventions with physical rehabilitation (eg, exercise programs or physical therapy) or health informatics (eg, symptom monitoring, interprofessional communication, or care planning) components were excluded.
  3. Less than 100% of participants were diagnosed with a neurological condition or were the caregiver of such a person, or the results for participants meeting this criterion could not be extracted.
  4. The intervention required bespoke or highly specialized hardware beyond standard desktop computer, television, mobile phone, tablet, or virtual reality devices.
  5. The record was a study protocol.
  6. Less than 100% of intervention recipients were people with ABI or their caregivers, with <75% of the population having had a stroke or TBI or the caregiver of someone with these conditions.
  7. Participants were aged <16 years.

The refinement in focus from neurological conditions in criterion 3 to the condition of ABI in criterion 6 aimed to reduce complexity in the first domain of the NASSS framework [20] by “scaling back on the kinds of illness or condition for which the technology is claimed to be useful.” It was also introduced to reflect stakeholders’ prioritization of the condition of ABI compared with other NASSS domains [21].
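Because the criteria were applied in hierarchical order, each excluded record is counted against only the first criterion it fails. A minimal sketch of this screening logic (field names and the example predicates are hypothetical illustrations, not the review's actual data structure):

```python
# Minimal sketch of hierarchical exclusion screening: a record is excluded at
# the FIRST criterion it fails, so exclusion counts are mutually exclusive.
# Field names and the example predicates are hypothetical illustrations.
def first_exclusion(record, criteria):
    """Return the label of the first failed criterion, or None if included."""
    for label, fails in criteria:
        if fails(record):
            return label
    return None

criteria = [
    ("<50% of intervention delivered remotely", lambda r: r["remote_fraction"] < 0.5),
    ("<100% of intervention psychosocial", lambda r: r["psychosocial_fraction"] < 1.0),
    ("participants aged <16 years", lambda r: r["min_age"] < 16),
]

clinic_based = {"remote_fraction": 0.0, "psychosocial_fraction": 1.0, "min_age": 18}
print(first_exclusion(clinic_based, criteria))  # excluded at the first criterion
```

A clinic-based study thus never reaches the later criteria, mirroring how a record such as Connor et al would be excluded at criterion 1 regardless of its other characteristics.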

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 flow diagram. ABI: acquired brain injury; TBI: traumatic brain injury.


In total, 60 records were included for extraction against the NASSS framework [17]. Due to the high yield requiring in-depth application of the NASSS framework, a pragmatic deviation was required. Therefore, a second author (RR, MB, or DD) checked 25% (15/60) of extractions by the first author (MM) rather than independently extracting all full texts. Additional details were added or changes made in 1.18% (26/2205) of fields, confirmed via written consensus between the 2 rating authors and a third rater if necessary. To ensure consistency, the first author (MM) used a standardized extraction form (Multimedia Appendix 2 [17,36-38]), which included embedded logic via REDCap (Research Electronic Data Capture; the REDCap Consortium) [39], and the published definitions of (1) complex, complicated, or simple for each subdomain of the NASSS framework [17]; (2) each implementation outcome [36]; and (3) each question in the critical appraisal tool [37]. Text was extracted verbatim, with minimal paraphrasing as required for context.

The data extraction form drafted in our study protocol [30] was updated (Multimedia Appendix 2) to reflect the refined exclusion criteria. New extraction items were also added from a published taxonomy of digital health intervention features [38] to both consistently capture the diversity of interventions and technologies and incorporate implementation considerations identified by stakeholders in a concurrent study [21]. These included the order in which intervention contents were presented and the potential benefit of peer interaction. As described in our protocol [30], study quality, including sampling and nonresponse bias as applicable, was assessed across various study designs using the Mixed Methods Appraisal Tool (MMAT) [37]. Extraction was completed successively via REDCap, ensuring blinding to any emerging patterns until data from all 60 records had been extracted.


The high yield of this review enabled the quantification of complexity using descriptive statistics. The numbers of complex, complicated, and simple subdomains and domains, as well as those containing no information, were each subtotaled. Subdomains were classified according to their published definitions [17] as part of the extraction process (Multimedia Appendix 2). As no domain can be simpler than its constituent parts, each domain for each record was operationally classified according to the most complex subdomain present within that domain (Table 1). Implementation success or failure for each study was defined as whether the authors succeeded in achieving their specific implementation aims. Can’t tell was selected (Multimedia Appendix 2) for ambiguous implementation outcomes, such as inconclusive or insufficiently reported implementation results or conflicting implementation and effectiveness results. Such records were excluded, resulting in 75% (45/60) of the records being included in the descriptive analysis of complexity.

In accordance with our protocol [30], all quantitative results were analyzed in REDCap and Microsoft Excel (Microsoft Corporation) using descriptive statistics, and all qualitative results were narratively synthesized according to the NASSS framework [17].

Table 1. Operational classification of domain complexity according to subdomain complexity.

Complex. General definition [17]: “Dynamic, unpredictable, not easily disaggregated into constituent components.” Subdomain complexity: complex only; complex and complicated; or complex, complicated, and simple.

Complicated. General definition [17]: “Multiple interacting components or issues.” Subdomain complexity: complicated only, or complicated and simple.

Simple. General definition [17]: “Straightforward, predictable, few components.” Subdomain complexity: simple only.
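The operational rule that a domain takes the rating of its most complex subdomain can be sketched as a simple maximum over an ordered scale (function and label names are ours, not the authors'; subdomains with no information are treated as carrying no evidence):

```python
# Sketch of the operational rule in Table 1: a domain is classified by its most
# complex rated subdomain, using the ordering simple < complicated < complex.
# Subdomains rated "no information" carry no evidence, so a domain with no rated
# subdomains is itself "no information". Names are ours, not the authors'.
ORDER = {"simple": 0, "complicated": 1, "complex": 2}

def classify_domain(subdomain_ratings):
    """Classify one NASSS domain from the ratings of its subdomains."""
    rated = [s for s in subdomain_ratings if s in ORDER]
    if not rated:
        return "no information"
    return max(rated, key=ORDER.get)
```

For example, a domain whose subdomains were rated simple, complicated, and complex is classified as complex, consistent with the first row of Table 1.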

Note Regarding Style

Due to the high number of citations per descriptor, only essential in-text citations are provided. An appended Microsoft Excel spreadsheet containing all bibliographic and extracted data is provided as Multimedia Appendix 3.

Implementation Evidence

More than two-thirds (41/60, 68%) of the reviewed studies were published in 2016 or later (Figure 2). Studies originated from 12 countries (Figure 3), with 3% (2/60) involving international collaboration [40,41]. Most studies were conducted in the United States (21/60, 35%), Australia (16/60, 27%), and Canada (6/60, 10%).

Figure 2. Publication year of included studies.
Figure 3. Country of origin of included studies.

Overall, 13% (8/60) of studies used an implementation framework or theory. A generic framework was used in less than half (3/8, 38%) of these studies; Pitt et al [42,43] reported that their intervention had been developed according to a guide for complex intervention development that was not specific to digital health, while Rietdijk et al [44] referred to a generic framework for feasibility to inform study design. Almost twice as many (5/8, 63%) referred to a framework specific to the development of digital interventions, including web-based education [45], web-based programs [46], and user-centered design [47-49].

The most frequently used study design was mixed methods (22/60, 37%), followed by quantitative descriptive (17/60, 28%), quantitative nonrandomized trials (11/60, 18%), quantitative randomized controlled trials (RCTs; 8/60, 13%), and qualitative research (2/60, 3%). The MMAT definition for mixed methods inherently requires mixed methods studies to meet all 5 quality criteria. However, given that the reviewed studies using both qualitative and quantitative methods were typically of poor quality (refer to the Study Quality section), a deviation was made in which the quality criteria for mixed methods were used to describe how these studies fell short of the definition, rather than immediately classifying all these studies as Other (Multimedia Appendix 2).

Among hybrid effectiveness implementation studies [50], Type 2 hybrids were the most common (22/60, 37%), followed by Type 1 (17/60, 28%), and Type 3 (4/60, 7%) [51-54]. Of the 60 studies reviewed, 3 (5%) were qualitative studies of implementation [42,55,56], and 14 (23%) used other nonhybrid designs. According to definitions by Proctor et al [36], the most frequently collected implementation measure was feasibility (37/60, 62%), followed by adherence or fidelity (27/60, 45%), satisfaction (22/60, 37%), usability (19/60, 32%), and acceptability (17/60, 28%). Appropriateness (9/60, 15%), cost-effectiveness (2/60, 3%), and other measures (5/60, 8%) were less frequently observed.

In terms of implementation outcomes, most studies (37/60, 62%) achieved the investigators’ specific implementation aims, while a minority (8/60, 13%) failed to do so (Multimedia Appendix 3). There were ambiguous or unsubstantiated implementation outcomes in a quarter (15/60, 25%) of the studies reviewed.

Study Quality

Overall, study quality, as assessed using the MMAT, was low. A quarter of the reviewed studies (15/60, 25%) passed all 5 questions in the MMAT, 22% (13/60) passed 4 questions, and 20% (12/60) passed 3 questions. Overall, 12% (7/60) of studies passed 2 questions and 3% (2/60) passed 1 question. Due to a lack of justification for the use of mixed methods and an absence of rigor in qualitative methods, 18% (11/60) of studies did not pass any of the 5 questions in the MMAT. There were clear research questions in 62% (37/60) of included studies, and 65% (39/60) appeared to have clear alignment between implementation aims and outcome measures. Finally, it was noted that studies often excluded individuals based on a range of comorbidities and sociocultural factors typically associated with ABI and focused on the chronic stage of recovery (refer to the Domain 1: Condition section). This presents a possible sampling bias, with some studies acknowledging limited generalizability. The higher proportion of implementation successes than failures may also indicate potential publication bias.

Stakeholder Involvement

Formal input was obtained from people with ABI (33/60, 55%), clinicians (15/60, 25%), and caregivers (14/60, 23%) in some of the reviewed studies. However, this was overwhelmingly obtained during intervention evaluation (40/42, 95%) rather than development (16/42, 38%) stages. Rietdijk et al [44] informally incorporated qualitative feedback from the users of a previous iteration of the intervention into the intervention design, in addition to formally obtaining evaluation feedback. None of the reviewed studies (0/60, 0%) involved stakeholders in coproducing research, some did not involve stakeholders at all (13/60, 22%), and a minority (5/60, 8%) provided no information.


The most common intervention type was web-based education (10/60, 17%), including an intervention that combined education with cognitive rehabilitation [57]. Other interventions for which implementation data were available included cognitive exercises and games (8/60, 13%), Communication Partner Training (4/60, 7%), and Cognitive Behavioral Therapy (3/60, 5%). Interventions classified as Other (Multimedia Appendix 2; 35/60, 58%) included aphasia groups [42,43,58], emotional regulation training [54,59], and metacognitive rehabilitation [60,61]. Interventions were usually completely (26/60, 43%) or partly (21/60, 35%) individualized to the recipient’s specific needs and preferences. This individualization typically occurred via the clinician (35/47, 74%) or user-selected preference (30/47, 64%). Individualization also or instead occurred through automation, such as embedded logic or artificial intelligence (15/47, 32%). This was in contrast to generic interventions that were not modified according to individual needs and preferences (7/60, 12%). Some studies provided no information about the degree of individualization (6/60, 10%). Most studies included human interaction, primarily with the clinician (39/60, 65%); however, some studies provided opportunities for peer interaction among people with ABI (14/60, 23%) and among caregivers (4/60, 7%). Some interventions involved no interaction (12/60, 20%) or only interaction with artificial intelligence (9/60, 15%).

The NASSS Framework

Domain 1: Condition

This review examined implementation evidence for interventions targeting adults with stroke (37/60, 62%), TBI (24/60, 40%), and aphasia of unspecified origin (4/60, 7%), or the formal (eg, clinicians and support workers) and informal (eg, family and partners) caregivers of this population. The psychosocial conditions under treatment included cognitive impairments, social communication difficulties, and language impairments among people with ABI and depression and caregiver burden among carers.

In each study, the nature of ABI and the psychosocial condition under treatment was almost always (56/60, 93%) complicated because it was “not fully characterized or understood” [17], yet not quite complex because participants were usually recruited in the chronic stages of injury to control for spontaneous recovery. In the remaining 7% (4/60) of studies, the condition shifted into the “unpredictable or high risk” definition of complex when investigators documented that participants experienced multiple neurological events [46,62,63] and responded unexpectedly to intervention during the chronic stage of injury [62,64].

Comorbidities and sociocultural factors were often (34/60, 57%) simple in studies because investigators set them as exclusion criteria. For example, investigators typically excluded participants with sensory or physical disabilities (including hemiparesis secondary to stroke), which would affect device use; people with intellectual disabilities; people with photosensitive epilepsy and other neurological conditions; people with mental illnesses or substance dependency; and people without carer support, device and computer proficiency, or internet access. These exclusions were acknowledged by some as a sampling bias. Among the 40% (24/60) of studies that considered or managed these comorbidities as complicated and complex, example considerations and learnings are provided in Textbox 1.


Vision and hearing impairment [43]

  • Opportunity to perform training task with researcher. Multimodal training and intervention sessions, including both auditory and visual components. All training and intervention materials used at least 14-point font, white space, and clear images. Headset microphones were used to allow users to control volume, potentially reducing the impact of background noise [43].

Cognition, memory, language, and attention [43,49,63,65,66]

  • Use of desktop shortcuts to enter videoconferencing software and aphasia-friendly training materials with significant use of white space [43].
  • Explicit categorization; repetition of important units of information; use of plain language and text made suitable for Australian grade-5 reading age; and following health education guidelines for people with dysphasia, such as font size and number of words per screen [46].
  • Uniformity in screen design regarding backgrounds, colors, and layout. Use of accessibility and usability guidelines. Use of simple interaction methods, such as mouse clicks on big buttons, to facilitate the possibility of using the same interface for touchscreen devices [49].

Psychomotor, fine motor, and mobility [43,46,63]

  • Stroke survivors suggested that the program can be developed to take into account physical ability limitations or restrictions that were due to other comorbidities [46].
  • Ensuring that software or processes did not require quick responses. Appropriate positioning of required equipment in initial training session to ensure access [43].
  • Participants with hemiparesis were able to make required responses with their other hand [62].
Textbox 1. Examples with citations from the reviewed studies of ways to accommodate common comorbidities.
Domain 2: Technology

Implementation evidence was included from web-based interventions on a range of devices including desktop computers (35/60, 58%), tablets (7/60, 12%), desktop computers and tablets (6/60, 10%), or desktop computers and mobile devices (tablet and smartphone; 6/60, 10%). Few interventions were both tablet and smartphone apps (2/60, 3%) or a solely smartphone app (2/60, 3%). Only 3% (2/60) of studies provided no information. Half of the reviewed interventions were delivered via telehealth videoconferencing (30/60, 50%). Interventions also used a combination of text (25/60, 42%), images (19/60, 32%), audio (13/60, 22%), and video (12/60, 20%); interactive games (15/60, 25%); virtual reality (2/60, 3%); productivity tools (eg, calendar and note-taking application; 4/60, 7%); and electronic communication systems, such as instant messaging (7/60, 12%), forums or message boards (5/60, 8%), and email (11/60, 18%). Reflecting the dominance of videoconferencing, interventions were frequently clinician-led (27/60, 45%). However, almost as many (23/60, 38%) interventions were completely automated or self-guided, reflecting the substantial proportion of cognitive exercises and games represented. Other interventions (10/60, 17%) were partly automated and partly telehealth.

The technology was often (37/60, 62%) a simple off-the-shelf or preinstalled solution, including hardware provision and software installation by the researchers. Similarly, the technology supply model was often off-the-shelf or software requiring minimal customization (29/60, 48%). Approximately one-third of technologies were complicated in that they were not yet fully developed or interoperable (20/60, 33%), requiring significant customization or bespoke solutions (22/60, 37%). Some studies did not provide information on supply models (9/60, 15%) or the complexity of the technology (3/60, 5%).

Although some technologies required no support or only simple instructions for use (9/60, 15%), most technologies (42/60, 70%) required detailed initial training and ongoing troubleshooting support. Data were mostly self-entered (39/60, 65%), but also entered by the clinician (26/60, 43%) or automatically (18/60, 30%). A minority of studies (5/60, 8%) provided no information. Most often (29/60, 48%), these data were complicated in that they only “partially and indirectly measured changes in the condition” [17]. In some studies (9/60, 15%), the connection between data and the condition was simple, “directly and transparently” measuring change [17]. In other studies (8/60, 13%), it was complex, in that the link between data generated and changes in the condition was “unpredictable or contested” [17]. Almost a quarter of the studies (14/60, 23%) provided no information.

Domain 3: Value Proposition

All but 3 studies [64,67,68] (57/60, 95%) included some value proposition for their intervention, including a supply-side case (48/60, 80%). However, almost all supply-side cases (43/48, 90%) were underdeveloped [17]. Value propositions were more likely to be simple in relation to demand-side value to end users (Figure 4). Solana et al [49] uniquely demonstrated simplicity in both demand-side desirability to end users and a formally calculated supply-side economic benefit.

Figure 4. Quantification of complexity reported in each of the domains and subdomains of the Nonadoption, Abandonment, Scale-Up, Spread, and Sustainability (NASSS) framework in the included studies, showing (1) frequent controlling of comorbidities and sociocultural factors in domain 1; (2) contentious links between clinical changes in the condition and knowledge captured by the technology in domain 2; (3) particularly complex demands of patients and clinicians to use the interventions in domain 4; and (4) limited data in domains 5 to 7, documenting the challenge of workflow changes, a need for implementation work, and some complexity in relation to technology and regulations.
Domain 4: Adopters

Adopters included in this review included 5723 adults with ABI (5424, 95% intervention recipients and 299, 5% controls), 1920 formal and informal caregivers (1729, 90% recipients and 191, 10% controls), and 50 staff (4, 8% healthy recipients, such as volunteers, who trialed the intervention; 13, 26% administrative staff in 1 study [49]; and at least 33, 66% clinicians delivering interventions, as Anderson et al [69] did not specify how many clinicians were consulted). Reviewed interventions typically targeted only the person with ABI (44/60, 73%); however, some included both an informal carer and the person with ABI (7/60, 12%), informal carers only (6/60, 10%), or either formal or informal carers together with a person with ABI (2/60, 3%) [70,71]. None of the interventions targeted only formal caregivers (ie, clinicians or support workers), but Lee et al [72] (1/60, 2%) targeted both speech-language pathology students and people with ABI.

Of the 60 studies, 5 (8%) [44,45,47,67,68] reported complex requirements of clinicians in terms of expanded or altered responsibilities and scope of practice. These included new implementation responsibilities to provide constant remote monitoring and troubleshooting [45,67,68] or overall resistance to the concept of telehealth, with its perceived limitations in nonverbal communication and rapport building compared with face-to-face delivery [44,47]. Complicated involvement (36/60, 60%) required new training, skills, and personnel. Most obviously, synchronous telehealth interventions required clinicians to learn to deliver care via the internet rather than face-to-face, including videoconferencing, instant messaging, and web-based avatars, and to at least be available for technical troubleshooting. Clinicians typically delivered therapy remotely from their own home or a private office or clinic space. In 3 studies (3/60, 5%), clinician involvement was not required [12,72,73], and 16 studies (16/60, 27%) provided no information.

Expectations of people with ABI were typically (53/60, 88%) high. Most were complicated (39/60, 65%), with minimum expectations to log on, enter data, and converse via the internet. Almost a quarter (14/60, 23%) of the reviewed studies described more complex requirements, such as reflective goal setting and adjustment in response to self-monitored progress. Only 1 intervention [55] simply targeted carers without involving individuals with ABI.

Although demands on carers fell short of complex, carers were often (23/60, 38%) assumed or required to be available whenever needed, with requirements ranging from the provision of on-demand technical support to the person with ABI to participation in more intensive programs targeting the dyad or the carer themselves. A minority of studies (5/60, 8%) reported that caregiver input was not required. However, more than half of the reviewed studies (32/60, 53%) provided no information about carer involvement.

Domain 5: Organization

All except 2 (58/60, 97%) studies [72,74] examined remote delivery to homes. A few had additional availability in community health (3/60, 5%), hospitals (4/60, 7%), or university or workplace settings (3/60, 5%). Implementation scope varied from single (20/60, 33%) and multiple sites (16/60, 27%) to state-wide (9/60, 15%), national (13/60, 22%), and—rarely—international (2/60, 3%) implementation.

The organizational role in implementation was mentioned in multiple studies (38/60, 63%). However, these data were sparse (Figure 4). The sole study [69] to specify an organization’s overall capacity to innovate described a progression in service delivery from face-to-face to a hybrid telephone, self-directed, and clinic-based service, to videoconference delivery using existing facilities.

A total of 6 studies (6/60, 10%) referred to organizational readiness for technological change. Readiness was made simple by the use of existing infrastructure [69,75], staff experience using technology with the clinical population [41], and organizational support for the shift [76]. However, readiness was complicated by workflow changes [63] and became complex in a public health system [77], where investigators anticipated challenges “relating to treatment space, technology availability or concerns about data protection,” but found the latter especially problematic due to internal regulations regarding approved information systems.

Of the reviewed studies, 15% (9/60) described an organization’s adoption and funding decision, which was rarely (1/9, 11%) simple [69], and most commonly (8/9, 89%) complicated, usually by partnerships with multiple organizations. Workflow changes were described in a quarter of the reviewed studies (15/60, 25%). None of the workflow changes were simple, and 40% (6/15) of studies reporting workflow change described them as complex [41,45,52,63,68,74] due to new demands on space, time, and skill (refer to Domain 4: Adopters section).

Domain 6: Wider System

Investigators mentioned the wider system of implementation in some studies (24/60, 40%), but data were again limited (Figure 4). The wider system was mostly mentioned in relation to the position of professional bodies (13/60, 22%), followed by comments on sociocultural factors (12/60, 20%) such as internet access and technological acceptance.

In addition, a minority of studies (5/60, 8%) commented on the regulatory context. Although one study made passing note of potential barriers to billing telehealth [57], the regulatory context was primarily (4/5, 80%) described in relation to security. Data security was typically (3/4, 75%) complex [44,63,77], with health data considerations in relation to both bespoke and off-the-shelf platforms such as FaceTime (Apple Inc) and Skype (Microsoft Corporation). The only study in which this subdomain was simple used a legally compliant platform [49].

The only explicit mention of the political context was in relation to the widespread use of telemedicine by the US Department of Veterans Affairs [57]. Studies from several countries acknowledged government funding sources, but no other political information was available beyond state and country names.

Domain 7: Embedding and Adaptation Over Time

A minority (4/60, 7%) of studies [41,47,69,78] included data on the seventh domain. Although 3 studies [41,47,78] described a strong scope for adapting and coevolving the intervention using end user input, no information on organizational resilience was available. Conversely, although Anderson et al [69] did not provide information on adapting the intervention, theirs was the sole study to document organizational resilience in managing unforeseen complications. These included increasing bandwidth allocation when it proved insufficient in the organizational network and dispatching a research assistant to resolve ad hoc technical issues on-site.


A descriptive comparison of the records reporting successful implementation (37/45, 82%) and those reporting failures (8/45, 18%) appears consistent with the following 3 hypotheses by Greenhalgh et al [20].

Hypothesis 1: If Most or All of the Domains Can Be Classified as Simple, an Intervention Is Likely to Be Easy to Implement and to Be Achieved on Time and Within Budget

If most or all were to be mathematically defined as 4 to 7 out of 7 domains, none of the reviewed studies met this definition. Therefore, it was not possible to confirm that an intervention with most or all simple domains was likely to succeed. Indeed, it was rare (4/60, 7%) to experience simplicity in >1 domain.

However, the mean number of simple domains and subdomains was higher for successes (mean 0.6 for domains and 3.3 for subdomains) than for failures (mean 0.1 for domains and 2.1 for subdomains), aligning with the possibility that more simple domains may increase the likelihood of implementation success.

Among failures, 88% (7/8) of the studies had no simple domains at all. There was only a single study (1/8, 13%) containing 1 simple domain. Among successes, 49% (18/37) had no simple domains, 43% (16/37) had 1 simple domain, 5% (2/37) had 2 simple domains, and 3% (1/37) had 3 simple domains. This also suggests that, even if it were true that an intervention with most or all simple domains is likely to succeed, the threshold for success may be much lower, and may necessarily be so, given that simplicity was so rare.

Hypothesis 2: If Many Domains Are Classified as Complicated, the Intervention Will Be Achievable but Difficult and Likely to Exceed Its Timescale and Budget

If many domains were defined as 3 to 7 (out of 7) domains, the hypothesis that implementation is still achievable with this many complicated domains was supported: studies with as many as 6 (out of 7) complicated domains and 10 (out of 21) complicated subdomains still achieved implementation success. The hypothesized difficulty may be reflected in the finding that the degree of complication was similar for successes and failures at both the domain (mean 3.6 for successes and 3.4 for failures) and subdomain (mean 6.5 for successes and 6.6 for failures) levels.

Hypothesis 3: If Multiple Domains Are Complex, the Chances of the Intervention Succeeding at All Are Limited

The mean number of complex domains was indeed higher among failures (mean 1.25 at both the domain and subdomain levels) than among successes (mean 0.8 at both the domain and subdomain levels). This appears to be consistent with the notion that more complex domains may hinder implementation.
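The descriptive comparison underlying the 3 hypotheses above (mean counts of simple, complicated, and complex NASSS domains among successes vs failures) can be sketched as a short script. The study records below are purely illustrative placeholders, not the review's extracted data; only the counting-and-averaging logic reflects the analysis described.

```python
# Illustrative sketch of the descriptive complexity comparison:
# each study is coded with the number of NASSS domains (out of 7)
# rated simple, complicated, or complex, and group means are then
# compared between implementation successes and failures.
from statistics import mean

# (success?, n_simple, n_complicated, n_complex) per hypothetical study
studies = [
    (True, 1, 4, 0),
    (True, 0, 3, 1),
    (True, 1, 3, 1),
    (False, 0, 3, 2),
    (False, 0, 4, 1),
]

def group_means(records, success):
    """Mean (simple, complicated, complex) domain counts for one outcome group."""
    rows = [r for r in records if r[0] == success]
    return tuple(round(mean(r[i] for r in rows), 2) for i in (1, 2, 3))

print("successes (simple, complicated, complex):", group_means(studies, True))
print("failures  (simple, complicated, complex):", group_means(studies, False))
```

With the toy data above, successes average more simple domains and fewer complex domains than failures, mirroring the direction (though not the values) of the pattern reported in this review.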

Principal Findings

In this review, we identified and appraised 60 published, peer-reviewed primary studies of implementation outcomes, strategies, or factors for web-based psychosocial interventions targeting either people with ABI, their formal or informal caregivers, or both. We narratively synthesized and quantified implementation data according to the NASSS framework to provide recommendations for future implementation. Recorded limitations of the evidence were the exclusion of individuals with a range of comorbidities and psychosocial challenges; poor methodological quality, particularly in the use of mixed methods; inconsistent use of implementation terminology; lack of theoretical underpinning; and limited data describing organizational, systemic, and long-term considerations. Our quantification of complexity across more than a decade of implementation evidence was consistent with all 3 of the following hypotheses: (1) simplicity facilitates implementation success; (2) successful implementation is more difficult, but still possible with more complicated domains; and (3) complexity makes implementation challenging and prone to failure [20]. These results align with recommendations to reduce complexity as much and in as many domains as possible to facilitate successful implementation [20].

Complexity and Implementation

A complexity paradigm posits that digital health implementation has many domains of complexity and that they interact. This review is the first to quantify the specific domains in which complexity has occurred and identify potential targets to improve implementation. As seen in Figure 4, most complexity in the implementation of web-based interventions for people with ABI and their caregivers was reported in the intervention demands of people with ABI and their clinicians (domain 4); the ability of the technology to measure, convey, and enable responses to health data (domain 2; Figure 4); the changes introduced to clinical workflows (domain 5); and the need to manage health data regulations (domain 6). While confirming previous findings [10,15,26,27], a complexity paradigm provides new insight that these complexities may be counterbalanced by simplifying other domains and subdomains to enable implementation success. This included simplifying domain 1 (the condition) by excluding certain comorbidities and sociocultural factors and simplifying domain 2 (the technology) by selecting off-the-shelf products.

The Importance of Intervention Value Propositions

In particular, a complexity paradigm highlights the relative simplicity observed in domain 3 (the value proposition) and the potential role of this simplicity in overall implementation success. Our results corroborate other digital health implementation reviews in finding that, despite their critical importance to sustainability [14], economic cases and business models for digital interventions were rarely articulated [22-24]. Financial viability remains a key challenge in digital health implementation [11], as the development and implementation of digital health interventions incur both up-front and ongoing costs. The low number of studies examining implementation beyond initially controlled studies may reflect overall sustainability challenges [11] in financially progressing past initial product development and testing [14]. Investigators seeking to improve the communication of this supply-side value proposition (domain 3) may benefit from both collaboration with stakeholders to identify and articulate an intervention’s economic value and interdisciplinary knowledge in health economics, business, and marketing. In the absence of such an economic case, a complexity perspective highlights that demonstrations of demand-side value, including measures of participant satisfaction, acceptability, and feasibility, become especially important to initially establish. The reviewed evidence primarily contributed to the value proposition in this subdomain, thus simplifying domain 3.

Stakeholder Inclusion

The paradox is that demand-side value propositions were effectively undermined in the reviewed evidence by reductions in scope. Given the simplification of conditions (domain 1) is not possible in real-world settings, research participant exclusions based on specific comorbidities and sociocultural factors threaten the external validity of interventions [79] and reduce their potential reach due to an unrepresentative population. The exclusion of individuals with comorbidities can in fact disqualify the overwhelming majority of, and sometimes almost all, individuals with a target condition [79-81]. Such exclusions are therefore considered an increasingly untenable practice even in efficacy research, particularly given global population aging and increasing multimorbidity [80]. Continuation of this practice effectively creates a mismatch between the available evidence and clinical populations with ABI [81], the majority of which present with comorbidities [81] and psychosocial challenges [6-8,82]. Currently, this leaves clinicians and researchers ill-equipped to adequately understand how web-based psychosocial interventions may or may not be implemented with a real-world population.

Our finding of the simplification of domain 1 (the condition) corroborates reports in other systematic reviews of an unaddressed digital divide and a noticeable lack of data pertaining to individuals with comorbidities in digital health implementation research more broadly [26,83]. In a recent review of digital mental health implementation [83], the most established category of digital psychosocial support [84], Barnett et al [83] discovered an absence of evidence pertaining to individuals with comorbidities and limited data on ethnic minorities. Therefore, researchers or practitioners seeking implementation data pertaining to the excluded conditions will need to turn to implementation studies in which such conditions are the primary, rather than comorbid, condition. For example, a recent systematic review of digital health implementation for individuals with psychosis or bipolar disorder [27] identified that the complexity of digital health interventions can be challenging for people with psychiatric symptoms. Given that our review identified similarly complex demands of interventions for individuals with a primary diagnosis of ABI (domain 4; Figure 4), comorbid psychosis and ABI diagnoses could plausibly create further complex interactions between domains 1 (the condition) and 4 (the adopters) that were not identified in the reviewed evidence.

The underrepresentation of clinical complexities also raises ethical questions of equity [26,83] in the development of interventions for people with ABI. If no interventions are designed or tested with these excluded populations, there is a risk of perpetuating and exacerbating the disadvantages experienced by individuals already at greater risk of digital exclusion [26,83]. In the words of one of our research collaborators, Mrs Erin Elizabeth Hill, who has living experience of ABI:

If your aim is to help people with ABI, then you can’t exclude a whole group of us, when there are more of us that have these conditions than don’t. You can’t say, “Oh, we considered ‘some’ of you.” You need to be as inclusive as possible.

Similar reviews in other populations have concluded that there is an urgent need for data pertaining to marginalized groups in digital health implementation research [26,83]. Sampling and publication biases across a body of reviewed evidence risk reducing the visibility of individuals who can be “forgotten when taking the findings of this review into consideration” [26]. Therefore, it may be helpful for researchers and clinicians to be mindful of this gap at the study design stage [79,85], not only to facilitate real-world implementation, but also to ensure that future digital health intervention design, implementation, and research does not create and perpetuate inequities [26,83].

Investigators who made proactive efforts to maximize inclusion in the reviewed studies provided valuable data about how real-world implementation may or may not be achieved. In a study that originally excluded people with ABI who had a history of falls [61], researchers recognized during recruitment that such a history was common in the population that may benefit from the intervention. They subsequently adjusted their screening criteria, based on expert panel advice, to focus on self-awareness rather than fall history, enabling previously excluded people with ABI to participate. In another study that accommodated medical complexity [63], investigators documented how readmission for subsequent strokes influenced a person with ABI’s adherence to a tablet-based intervention. Although the participant eventually discontinued use, implementation data concerning dropouts and reasons for discontinuation contribute an important real-world understanding of implementation. Moreover, they enabled people with ABI who wished to participate in research and receive interventions to do so to the extent that they were able. This illustrates how, in addition to empirical questions of external validity and ethical questions of equity, population exclusions may imply a divergence between the priorities and realities of researchers and stakeholders [21], thus requiring a priori effort if it is to be overcome.

Stakeholder Collaboration

Facilitating implementation despite the real-world complexity of ABI (domain 1) presents researchers with the challenge of simplifying as many of the remaining domains (domains 2-7) as possible. Direct collaboration with stakeholders may be key to this endeavor, over and above inclusion in participant samples. In this review, population disparities were magnified by the consultation of the already unrepresentative study populations during intervention evaluation rather than development, with no studies coproduced with stakeholders. However, it is frequently recommended that stakeholders are engaged from the outset of research, rather than only in intervention evaluation [27,85-87]. To support similar efforts, we have previously published [21,34] methodological guidance on how to leverage the NASSS framework to facilitate implementation input from people with ABI, their clinicians and close others.

The need for stakeholder collaboration is further supported by our finding that the most complexity was reported in the interventional and technological requirements of people with ABI and clinicians in domain 4 (Figure 4). Simplifying an intervention’s demands of people with ABI presents the largest current target to reduce complexity, with stakeholder collaboration providing opportunity to identify ways to accommodate comorbidities (domain 1; Textbox 1), co-design (domain 4), and continue to streamline (domain 7) interventions. Additionally, digital health interventions introduce complex new demands on clinicians (domain 4; Figure 4), including many tasks specific to digital health, such as remote monitoring, intervention adjustment, and providing and receiving ongoing technical support. They also introduce new space, equipment, and privacy requirements for telehealth sessions, with subsequent challenges at the organizational level (domain 5). These are critical considerations given the importance of workflow in the success or failure of digital health implementation [15]. In particular, the role of carers was underreported in this review, and only a single study [49] included the input of administrative staff who may be required to support or implement these functions, suggesting that these stakeholder groups are less visible and consulted. Additional implementation work and the staff who will likely undertake such work (domain 5; Figure 4) can also be easily overlooked due to both limited data at the organizational level (domain 5) and the potential for investigators to absorb implementation work in the context of a research study. Therefore, our findings highlight the importance of recognizing health care staff adopters as stakeholders.
Again, these collaborations will require significant resources and funding [88], indicating a need for researchers to upskill in the communication of value propositions to funders (domain 3) and for funders to create and invest in the resource-intensive process of stakeholder collaboration.

Theoretical Frameworks

Finally, it is recommended that investigators use implementation frameworks, and specifically digital health implementation frameworks such as user-centered design or the NASSS framework, to underpin implementation research. User-centered design may be especially pertinent given a need to simplify domain 4 (Figure 4). The NASSS framework may also facilitate consideration across multiple domains. Our results aligned with the NASSS framework’s wide-ranging inclusion of considerations in digital health implementation, with the reviewed evidence revealing challenges in all 7 domains. Our findings were consistent with other reviews applying the NASSS framework [22-24] and those using generic implementation frameworks [26-29] in revealing a general lack of implementation evidence at the organizational level and beyond (domains 5-7; Figure 4). This may reflect the concurrent finding of limited theoretical underpinning in the reviewed studies. The limited data on organizational, systemic, and long-term aspects of implementation in this and other reviews reveal a significant gap in research evidence to date, which should be addressed to support the ecological validity and implementation of future interventions.

Study Strengths and Limitations

Reviews to date have relied on narrative syntheses when examining digital health implementation [28] and complexity [22-24]. This review was the first to both deductively analyze [25] the evidence in relation to each of the 7 domains of a complexity-based framework and quantify complexity across more than a decade of digital health implementation data against published hypotheses [20]. This investigation has enabled new understanding of the complex interrelationships between domains.

This registered review followed a published protocol [30], with deviations and rationales that are reported transparently. All extracted data, search strategies, and extraction forms are transparently appended, and results are reported according to PRISMA 2020 guidelines. To further increase replicability, data were consistently extracted and appraised using published definitions [36], taxonomies [38], and tools [37]. Although other reviews have not included information on stakeholder involvement [26] and used generic implementation frameworks [26-28], our search and synthesis were both informed by stakeholder input [21] and theoretically underpinned by an implementation framework specific to digital health [17]. Unlike reviews that focused more exclusively on scale-up [24] or implementation strategies [11], this review included and classified a wide range of data on implementation factors, outcomes, and strategies. The inclusiveness of our search thus allowed previously unreviewed implementation data to be included for the first time.

The high yield and detailed extraction from 21 theoretical subdomains presented substantial feasibility challenges, requiring pragmatic protocol deviations. Extraction was challenging due to the heterogeneity and inconsistent reporting of interventions and implementation outcomes. For instance, Pierce and Steiner [89] collectively reported satisfaction, acceptability, usability, and feasibility measures as usability, and Anderson et al [69] measured satisfaction using feasibility and acceptability measures. Therefore, standardized definitions [36] were required in the extraction form (Multimedia Appendix 2). Implementation evidence also has limited alignment with the clinical paradigm of PRISMA guidelines. Given our need for an innovative, theory-based meta-synthesis and quantification of complexity, an implementation-specific review checklist with a complexity paradigm may need to be developed as scientific understanding of both implementation and complexity grows.

As our review examined prepandemic evidence published up to June 2020, it will be possible in future to compare COVID-related and postpandemic evidence with our findings. Given that the global COVID-19 pandemic has brought domain 6 to the fore, a future update of our review that allows sufficient time for implementation effort and publication from these periods may offer a unique opportunity to meta-synthesize and directly compare data from 2 distinct epochs, allowing scrutiny of the impact of national and international shifts in this domain.

A key limitation of the reviewed evidence was that the overall quality of reviewed studies was low, with challenges for external validity due to study populations that were unrepresentative of the real-world population with ABI. The resulting potential for sampling bias may reflect pressures upon researchers to establish a demand-side value proposition in domain 3. The high proportion of successes may also indicate possible publication bias, which may be exacerbated by the same pressure. Another limitation was that the English-speaking research team was restricted to English publications. Finally, although the results of this descriptive analysis appear consistent with all 3 hypotheses by Greenhalgh et al [20], definitive relationships could not yet be established because:

  1. The sample size of this study was not adequately powered to detect an association between complexity and implementation success or failure. In particular, the limited number of failures available for review may reflect (1) the controlled nature of clinical trials, which inherently aim to minimize complexity and (2) potential publication bias toward reporting implementation success. The intensive process of deductively coding 21 subdomains of complexity in a highly heterogeneous evidence base also reduced the feasibility of obtaining a sufficiently large sample size.
  2. Although some studies reported information concerning all 7 domains of complexity, none of the studies reported data from all subdomains. Information about complexity was thus incomplete for all records.
  3. In the absence of a formal definition [11], implementation success or failure of each study was defined as whether the authors succeeded in achieving their specific implementation aims, which varied in scope and type (eg, usability, cost-effectiveness, and satisfaction are distinct constructs). Studies generally also did not report planned or actual budgets and timescales as described in the hypotheses by Greenhalgh et al [20].

Given these limitations, the eventual alignment between a flexible definition of success and all 3 hypotheses [20] was unexpected. It may suggest some correlation between the various implementation measures. For example, a highly appropriate intervention may also be more acceptable to recipients, receive high satisfaction ratings, and experience increased adherence and fidelity from motivated actors. This, in turn, may improve its feasibility and cost-effectiveness. Alternatively, or perhaps concurrently, the concept of complexity may be informative for a range of implementation constructs, with further research needed to understand these relationships. For example, a multiarm randomized controlled trial (RCT) comparing different levels of complexity might assist our understanding of complexity within implementation science [90]. Parallel reviews of digital health implementation for other health conditions and intervention types (eg, health informatics) may also be informative; we have transparently supplied our search terms (Multimedia Appendix 1) and extraction template (Multimedia Appendix 2) to support this endeavor.


Conclusions

This study is the first attempt to deductively analyze and quantify complexity from more than a decade of primary digital health implementation evidence. Results were consistent with recommendations to facilitate implementation success by reducing complexity in as many domains as possible. To date, simplifications appear to have been made in the first domain of the NASSS framework (the condition) to advance the value proposition of interventions (domain 3). However, this may hinder the development of equitable, real-world solutions for which implementation data and end user support are currently needed. It is recommended that intervention developers collaborate with stakeholders, including people with ABI, their close others, and clinicians, from the earliest design stages, rather than only at end-evaluation, to target real-world complexities in domains 2 to 7. Recommended future research includes parallel reviews for other populations and intervention types, multiarm RCTs to test the role of complexity in digital health implementation, and reviews of evidence obtained during or after the COVID-19 pandemic to understand the impact of the wider context of digital health implementation (domain 6).


Acknowledgments

MM would like to thank Mrs Erin Elizabeth Hill for her expert advice based on her living experience of acquired brain injury, Dr Mikaela Jorgensen for her advice on statistical analysis, and Ms Rebecca Dale for her advice on search strategy. MM is supported by an Australian National Health and Medical Research Council Postgraduate (PhD) Scholarship Grant (GNT1191284) and Australian Research Training Program Scholarship. LT is supported by an Australian National Health and Medical Research Council Senior Research Fellowship. RR is supported by funding from icare NSW. The Australian National Health and Medical Research Council and icare NSW were not involved in the conception, design, or conduct of the review.

Data Availability

The search strategy is available as Multimedia Appendix 1, data extraction form is included as Multimedia Appendix 2, and data extracted from included studies and used for analyses are provided as Multimedia Appendix 3.

Authors' Contributions

MM, EP, LT, RR, and MB conceptualized the study. All authors contributed to study design. MM, MB, RR, EP, and DD independently screened records at both the abstract and full-text levels. MM completed extraction, and MB, DD, and RR checked extraction. All authors participated in consensus discussions. MM completed the analysis and drafted the manuscript. All authors critically revised the manuscript and approved the final version for submission.

Conflicts of Interest

LT is the director of speechBITE. EP and MB are board members of speechBITE.

Multimedia Appendix 1

Search strategy.

PDF File (Adobe PDF File), 240 KB

Multimedia Appendix 2

REDCap (Research Electronic Data Capture) extraction form.

PDF File (Adobe PDF File), 106 KB

Multimedia Appendix 3

Extracted data and analysis of complexity.

XLSX File (Microsoft Excel File), 508 KB

  1. Cieza A, Causey K, Kamenov K, Hanson SW, Chatterji S, Vos T. Global estimates of the need for rehabilitation based on the Global Burden of Disease study 2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet 2020 Dec 19;396(10267):2006-2017 [FREE Full text] [CrossRef] [Medline]
  2. GBD 2016 Neurology Collaborators. Global, regional, and national burden of neurological disorders, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol 2019 May;18(5):459-480 [FREE Full text] [CrossRef] [Medline]
  3. Finch E, French A, Ou RJ, Fleming J. Participation in communication activities following traumatic brain injury: a time use diary study. Brain Inj 2016 Jun;30(7):883-890. [CrossRef] [Medline]
  4. Hilari K, Northcott S. “Struggling to stay connected”: comparing the social relationships of healthy older people and people with stroke and aphasia. Aphasiology 2016 Aug;31(6):674-687. [CrossRef]
  5. Mitchell AJ, Sheth B, Gill J, Yadegarfar M, Stubbs B, Yadegarfar M, et al. Prevalence and predictors of post-stroke mood disorders: a meta-analysis and meta-regression of depression, anxiety and adjustment disorder. Gen Hosp Psychiatry 2017 Jul;47:48-60. [CrossRef] [Medline]
  6. Prisnie JC, Sajobi TT, Wang M, Patten SB, Fiest KM, Bulloch AG, et al. Effects of depression and anxiety on quality of life in five common neurological disorders. Gen Hosp Psychiatry 2018 May;52:58-63. [CrossRef] [Medline]
  7. Douglas JM, Bracy CA, Snow PC. Return to work and social communication ability following severe traumatic brain injury. J Speech Lang Hear Res 2016 Jun 01;59(3):511-520. [CrossRef] [Medline]
  8. Langhammer B, Sunnerhagen KS, Sällström S, Becker F, Stanghelle JK. Return to work after specialized rehabilitation-an explorative longitudinal study in a cohort of severely disabled persons with stroke in seven countries. Brain Behav 2018 Jul;8(8):e01055 [FREE Full text] [CrossRef] [Medline]
  9. Buchan H. Gaps between best evidence and practice: causes for concern. Med J Aust 2004 Mar 15;180(S6):S48-S49. [CrossRef] [Medline]
  10. Ahmed B, Dannhauser T, Philip N. A systematic review of reviews to identify key research opportunities within the field of eHealth implementation. J Telemed Telecare 2019 Jun;25(5):276-285. [CrossRef] [Medline]
  11. Varsi C, Solberg Nes L, Kristjansdottir OB, Kelders SM, Stenberg U, Zangi HA, et al. Implementation strategies to enhance the implementation of eHealth programs for patients with chronic illnesses: realist systematic review. J Med Internet Res 2019 Sep;21(9):e14255 [FREE Full text] [CrossRef] [Medline]
  12. Munsell M, De Oliveira E, Saxena S, Godlove J, Kiran S. Closing the digital divide in speech, language, and cognitive therapy: cohort study of the factors associated with technology usage for rehabilitation. J Med Internet Res 2020 Feb;22(2):e16286 [FREE Full text] [CrossRef] [Medline]
  13. Titov N, Dear BF, Nielssen O, Wootton B, Kayrouz R, Karin E, et al. User characteristics and outcomes from a national digital mental health service: an observational study of registrants of the Australian MindSpot Clinic. Lancet Digit Health 2020 Nov;2(11):e582-e593 [FREE Full text] [CrossRef] [Medline]
  14. Christie HL, Martin JL, Connor J, Tange HJ, Verhey FR, de Vugt ME, et al. eHealth interventions to support caregivers of people with dementia may be proven effective, but are they implementation-ready? Internet Interv 2019 Dec;18:100260 [FREE Full text] [CrossRef] [Medline]
  15. Granja C, Janssen W, Johansen MA. Factors determining the success and failure of eHealth interventions: systematic review of the literature. J Med Internet Res 2018 May;20(5):e10235 [FREE Full text] [CrossRef] [Medline]
  16. Papoutsi C, Wherton J, Shaw S, Morrison C, Greenhalgh T. Putting the social back into sociotechnical: case studies of co-design in digital health. J Am Med Inform Assoc 2021 Feb;28(2):284-293 [FREE Full text] [CrossRef] [Medline]
  17. Greenhalgh T, Wherton J, Papoutsi C, Lynch J, Hughes G, A'Court C, et al. Beyond adoption: a new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of health and care technologies. J Med Internet Res 2017 Nov;19(11):e367 [FREE Full text] [CrossRef] [Medline]
  18. Cohn S, Clinch M, Bunn C, Stronge P. Entangled complexity: why complex interventions are just not complicated enough. J Health Serv Res Policy 2013 Jan;18(1):40-43. [CrossRef] [Medline]
  19. Papoutsi C, Shaw J, Paparini S, Shaw S. We need to talk about complexity in health research: findings from a focused ethnography. Qual Health Res 2021 Jan;31(2):338-348 [FREE Full text] [CrossRef] [Medline]
  20. Greenhalgh T, Abimbola S. The NASSS framework - a synthesis of multiple theories of technology implementation. Stud Health Technol Inform 2019 Jul;263:193-204. [CrossRef] [Medline]
  21. Miao M, Power E, Rietdijk R, Debono D, Brunner M, Salomon A, et al. Coproducing knowledge of the implementation of complex digital health interventions for adults with acquired brain injury and their communication partners: protocol for a mixed methods study. JMIR Res Protoc 2022 Jan;11(1):e35080 [FREE Full text] [CrossRef] [Medline]
  22. Bastoni S, Wrede C, da Silva MC, Sanderman R, Gaggioli A, Braakman-Jansen A, et al. Factors influencing implementation of eHealth technologies to support informal dementia care: umbrella review. JMIR Aging 2021 Oct;4(4):e30841 [FREE Full text] [CrossRef] [Medline]
  23. Renyi M, Lindwedel-Reime U, Blattert L, Teuteberg F, Kunze C. Collaboration applications for mixed home care - a systematic review of evaluations and outcomes. Int J Technol Assess Health Care 2020 Aug;36(4):395-403 [FREE Full text] [CrossRef] [Medline]
  24. James HM, Papoutsi C, Wherton J, Greenhalgh T, Shaw SE. Spread, scale-up, and sustainability of video consulting in health care: systematic review and synthesis guided by the NASSS framework. J Med Internet Res 2021 Jan;23(1):e23775 [FREE Full text] [CrossRef] [Medline]
  25. Kyngäs H, Kaakinen P. Deductive content analysis. In: Kyngäs H, Mikkonen K, Kääriäinen M, editors. The Application of Content Analysis in Nursing Science Research. Cham, Switzerland: Springer International Publishing; 2020:23-30.
  26. Appleton R, Williams J, Vera San Juan N, Needle JJ, Schlief M, Jordan H, et al. Implementation, adoption, and perceptions of telemental health during the COVID-19 pandemic: systematic review. J Med Internet Res 2021 Dec;23(12):e31746 [FREE Full text] [CrossRef] [Medline]
  27. Aref-Adib G, McCloud T, Ross J, O'Hanlon P, Appleton V, Rowe S, et al. Factors affecting implementation of digital health interventions for people with psychosis or bipolar disorder, and their family and friends: a systematic review. Lancet Psychiatry 2019 Mar;6(3):257-266 [FREE Full text] [CrossRef] [Medline]
  28. Ross J, Stevenson F, Lau R, Murray E. Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implement Sci 2016 Oct;11(1):146 [FREE Full text] [CrossRef] [Medline]
  29. Christie HL, Bartels SL, Boots LM, Tange HJ, Verhey FR, de Vugt ME. A systematic review on the implementation of eHealth interventions for informal caregivers of people with dementia. Internet Interv 2018 Sep;13:51-59 [FREE Full text] [CrossRef] [Medline]
  30. Miao M, Power E, Rietdijk R, Brunner M, Togher L. Implementation of online psychosocial interventions for people with neurological conditions and their caregivers: a systematic review protocol. Digit Health 2021 Sep;7:20552076211035988 [FREE Full text] [CrossRef] [Medline]
  31. Miao M, Power E, Rietdijk R, Brunner M, Togher L. Implementation of online psychosocial interventions for people with neurological conditions and their caregivers: a systematic review protocol. PROSPERO International prospective register of systematic reviews. 2020 Jul.   URL: [accessed 2020-09-24]
  32. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med 2021 Mar;18(3):e1003583 [FREE Full text] [CrossRef] [Medline]
  33. Covidence. Melbourne, Australia: Veritas Health Innovation; 2022.   URL: [accessed 2020-05-14]
  34. Miao M, Power E, Rietdijk R, Brunner M, Debono D, Togher L. A Web-based service delivery model for communication training after brain injury: protocol for a mixed methods, prospective, hybrid type 2 implementation-effectiveness study. JMIR Res Protoc 2021 Dec;10(12):e31995 [FREE Full text] [CrossRef] [Medline]
  35. Connor BB, Shaw CA. Case study series using brain-training games to treat attention and memory following brain injury. J Pain Manag 2016;9(3):217-226 [FREE Full text]
  36. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health 2011 Mar;38(2):65-76 [FREE Full text] [CrossRef] [Medline]
  37. Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Improving the content validity of the mixed methods appraisal tool: a modified e-Delphi study. J Clin Epidemiol 2019 Jul;111:49-59.e1 [FREE Full text] [CrossRef] [Medline]
  38. Bewick BM, Ondersma SJ, Høybye MT, Blakstad O, Blankers M, Brendryen H, et al. Key intervention characteristics in e-Health: steps towards standardized communication. Int J Behav Med 2017 Apr;24(5):659-664 [FREE Full text] [CrossRef] [Medline]
  39. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform 2009 Apr;42(2):377-381 [FREE Full text] [CrossRef] [Medline]
  40. Cikajlo I, Cizman Staba U, Vrhovac S, Larkin F, Roddy M. A cloud-based virtual reality app for a novel telemindfulness service: rationale, design and feasibility evaluation. JMIR Res Protoc 2017 Jun;6(6):e108 [FREE Full text] [CrossRef] [Medline]
  41. Steele RD, Baird A, McCall D, Haynes L. Combining teletherapy and on-line language exercises in the treatment of chronic aphasia: an outcome study. Int J Telerehabil 2014 Jan;6(2):3-20 [FREE Full text] [CrossRef] [Medline]
  42. Pitt R, Hill AJ, Theodoros D, Russell T. “I definitely think it’s a feasible and worthwhile option”: perspectives of speech-language pathologists providing online aphasia group therapy. Aphasiology 2018 Jun;32(9):1031-1053 [FREE Full text] [CrossRef]
  43. Pitt R, Theodoros D, Hill AJ, Russell T. The development and feasibility of an online aphasia group intervention and networking program - TeleGAIN. Int J Speech Lang Pathol 2019 Sep;21(1):23-36 [FREE Full text] [CrossRef] [Medline]
  44. Rietdijk R, Power E, Brunner M, Togher L. A single case experimental design study on improving social communication skills after traumatic brain injury using communication partner telehealth training. Brain Inj 2019;33(1):94-104. [CrossRef] [Medline]
  45. Oh H, Park J, Seo W. Development of an evidence-based intervention program for patients with mild cognitive impairment after stroke: a home-based, online cognitive rehabilitation program. Top Geriatr Rehabil 2017 Apr;33(2):140-151 [FREE Full text] [CrossRef]
  46. Denham AM, Guillaumier A, McCrabb S, Turner A, Baker AL, Spratt NJ, et al. Development of an online secondary prevention programme for stroke survivors: prevent 2nd stroke. BMJ Innov 2019 Mar;5(1):35-42 [FREE Full text] [CrossRef]
  47. Simic T, Leonard C, Laird L, Cupit J, Höbler F, Rochon E. A usability study of Internet-based therapy for naming deficits in aphasia. Am J Speech Lang Pathol 2016 Nov;25(4):642-653 [FREE Full text] [CrossRef] [Medline]
  48. Caunca MR, Simonetto M, Hartley G, Wright CB, Czaja SJ. Design and usability testing of the stroke caregiver support system: a mobile-friendly website to reduce stroke caregiver burden. Rehabil Nurs 2020 May;45(3):166-177 [FREE Full text] [CrossRef] [Medline]
  49. Solana J, Cáceres C, García-Molina A, Opisso E, Roig T, Tormos JM, et al. Improving brain injury cognitive rehabilitation by personalized telerehabilitation services: Guttmann neuropersonal trainer. IEEE J Biomed Health Inform 2015 Jan;19(1):124-131 [FREE Full text] [CrossRef] [Medline]
  50. Curran GM, Bauer M, Mittman B, Pyne JM, Stetler C. Effectiveness-implementation hybrid designs. Med Care 2012 Mar;50(3):217-226 [FREE Full text] [CrossRef] [Medline]
  51. Ng EM, Polatajko HJ, Marziali E, Hunt A, Dawson DR. Telerehabilitation for addressing executive dysfunction after traumatic brain injury. Brain Inj 2013 May;27(5):548-564 [FREE Full text] [CrossRef] [Medline]
  52. Raina KD, Morse JQ, Chisholm D, Leibold ML, Shen J, Whyte E. Feasibility of a cognitive behavioral intervention to manage fatigue in individuals with traumatic brain injury: a pilot study. J Head Trauma Rehabil 2016 Sep;31(5):E41-E49 [FREE Full text] [CrossRef] [Medline]
  53. Sharma B, Tomaszczyk JC, Dawson D, Turner GR, Colella B, Green RE. Feasibility of online self-administered cognitive training in moderate-severe brain injury. Disabil Rehabil 2017 Jul;39(14):1380-1390 [FREE Full text] [CrossRef] [Medline]
  54. Tsaousides T, D'Antonio E, Varbanova V, Spielman L. Delivering group treatment via videoconference to individuals with traumatic brain injury: a feasibility study. Neuropsychol Rehabil 2014 May;24(5):784-803 [FREE Full text] [CrossRef] [Medline]
  55. Damianakis T, Tough A, Marziali E, Dawson DR. Therapy online: a Web-based video support group for family caregivers of survivors with traumatic brain injury. J Head Trauma Rehabil 2016 Jul;31(4):E12-E20 [FREE Full text] [CrossRef] [Medline]
  56. Withiel TD, Sharp VL, Wong D, Ponsford JL, Warren N, Stolwyk RJ. Understanding the experience of compensatory and restorative memory rehabilitation: a qualitative study of stroke survivors. Neuropsychol Rehabil 2020 Apr;30(3):503-522. [CrossRef] [Medline]
  57. Riegler L, Wade SL, Narad M, Sarver L. POWER-A telehealth rehabilitation program for veterans: feasibility and preliminary efficacy. Perspect ASHA SIGs 2017 Jan;2(18):3-14 [FREE Full text] [CrossRef]
  58. Walker JP, Price K, Watson J. Promoting social connections in a synchronous telepractice, aphasia communication group. Perspect ASHA SIGs 2018 Jan;3(18):32-42 [FREE Full text] [CrossRef]
  59. Tsaousides T, Spielman L, Kajankova M, Guetta G, Gordon W, Dams-O'Connor K. Improving emotion regulation following Web-based group intervention for individuals with traumatic brain injury. J Head Trauma Rehabil 2017 Sep;32(5):354-365 [FREE Full text] [CrossRef] [Medline]
  60. Beit Yosef A, Jacobs JM, Shenkar S, Shames J, Schwartz I, Doryon Y, et al. Activity performance, participation, and quality of life among adults in the chronic stage after acquired brain injury-the feasibility of an occupation-based telerehabilitation intervention. Front Neurol 2019 Dec;10:1247 [FREE Full text] [CrossRef] [Medline]
  61. Kringle EA, Setiawan IM, Golias K, Parmanto B, Skidmore ER. Feasibility of an iterative rehabilitation intervention for stroke delivered remotely using mobile health technology. Disabil Rehabil Assist Technol 2019 Jun;15(8):908-916 [FREE Full text] [CrossRef] [Medline]
  62. Peers PV, Astle DE, Duncan J, Murphy FC, Hampshire A, Das T, et al. Dissociable effects of attention vs working memory training on cognitive performance and everyday functioning following fronto-parietal strokes. Neuropsychol Rehabil 2020 Jul;30(6):1092-1114 [FREE Full text] [CrossRef] [Medline]
  63. Pugliese M, Ramsay T, Shamloul R, Mallet K, Zakutney L, Corbett D, et al. RecoverNow: a mobile tablet-based therapy platform for early stroke rehabilitation. PLoS One 2019 Jan;14(1):e0210725 [FREE Full text] [CrossRef] [Medline]
  64. Withiel TD, Wong D, Ponsford JL, Cadilhac DA, Stolwyk RJ. Feasibility and effectiveness of computerised cognitive training for memory dysfunction following stroke: a series of single case studies. Neuropsychol Rehabil 2020 Jun;30(5):829-852. [CrossRef] [Medline]
  65. Getz H, Snider S, Brennan D, Friedman R. Successful remote delivery of a treatment for phonological alexia via telerehab. Neuropsychol Rehabil 2016 Aug;26(4):584-609 [FREE Full text] [CrossRef] [Medline]
  66. Topolovec-Vranic J, Cullen N, Michalak A, Ouchterlony D, Bhalerao S, Masanic C, et al. Evaluation of an online cognitive behavioural therapy program by patients with traumatic brain injury and depression. Brain Inj 2010;24(5):762-772. [CrossRef] [Medline]
  67. Zannino GD, Zabberoni S, Murolo R, Caltagirone C, Carlesimo GA. Picture and spoken word presentation in repetition training for anomia: does stimulus order matter? Evidence obtained from 12 individuals with chronic aphasia using a computer-based telemedicine approach. Aphasiology 2020;34(3):275-299 [FREE Full text] [CrossRef]
  68. Wentink MM, Meesters J, Berger MA, de Kloet AJ, Stevens E, Band GP, et al. Adherence of stroke patients with an online brain training program: the role of health professionals' support. Top Stroke Rehabil 2018 Jul;25(5):359-365. [CrossRef] [Medline]
  69. Anderson J, Godwin KM, Petersen NJ, Willson P, Kent TA. A pilot test of videoconferencing to improve access to a stroke risk-reduction programme for Veterans. J Telemed Telecare 2013 May;19(3):153-159 [FREE Full text] [CrossRef] [Medline]
  70. Rietdijk R, Power E, Attard M, Heard R, Togher L. Improved conversation outcomes after social communication skills training for people with traumatic brain injury and their communication partners: a clinical trial investigating in-person and telehealth delivery. J Speech Lang Hear Res 2020 Feb;63(2):615-632 [FREE Full text] [CrossRef]
  71. Rietdijk R, Power E, Attard M, Togher L. Acceptability of telehealth-delivered rehabilitation: experiences and perspectives of people with traumatic brain injury and their carers. J Telemed Telecare 2022 Feb;28(2):122-134. [CrossRef] [Medline]
  72. Lee JJ, Finch E, Rose T. Exploring the outcomes and perceptions of people with aphasia who conversed with speech pathology students via telepractice: a pilot study. Speech Lang Hear 2019 Dec;23(2):110-120 [FREE Full text] [CrossRef]
  73. McLaughlin KA, Glang A, Beaver SV, Gau JM, Keen S. Web-based training in family advocacy. J Head Trauma Rehabil 2013;28(5):341-348 [FREE Full text] [CrossRef] [Medline]
  74. Sander AM, Clark AN, Atchison TB, Rueda M. A Web-based videoconferencing approach to training caregivers in rural areas to compensate for problems related to traumatic brain injury. J Head Trauma Rehabil 2009;24(4):248-261. [CrossRef] [Medline]
  75. McDonald S, Trimmer E, Newby J, Grant S, Gertler P, Simpson GK. Providing on-line support to families of people with brain injury and challenging behaviour: a feasibility study. Neuropsychol Rehabil 2021 Apr;31(3):392-413. [CrossRef] [Medline]
  76. Quaco C. Using a telehealth service delivery approach to working with an undergraduate student with a traumatic brain injury: a case study. Work 2017 Sep 14;58(1):17-21. [CrossRef] [Medline]
  77. Woolf C, Caute A, Haigh Z, Galliers J, Wilson S, Kessie A, et al. A comparison of remote therapy, face to face therapy and an attention control intervention for people with aphasia: a quasi-randomised controlled feasibility study. Clin Rehabil 2016 Apr;30(4):359-373. [CrossRef] [Medline]
  78. Hill AJ, Breslin HM. Refining an asynchronous telerehabilitation platform for speech-language pathology: engaging end-users in the process. Front Hum Neurosci 2016;10:640 [FREE Full text] [CrossRef] [Medline]
  79. He Z, Tang X, Yang X, Guo Y, George TJ, Charness N, et al. Clinical trial generalizability assessment in the big data era: a review. Clin Transl Sci 2020 Jul;13(4):675-684 [FREE Full text] [CrossRef] [Medline]
  80. He J, Morales DR, Guthrie B. Exclusion rates in randomized controlled trials of treatments for physical conditions: a systematic review. Trials 2020 Feb 26;21(1):228 [FREE Full text] [CrossRef] [Medline]
  81. Jackson HM, Troeung L, Martini A. Prevalence, patterns, and predictors of multimorbidity in adults with acquired brain injury at admission to staged community-based rehabilitation. Arch Rehabil Res Clin Transl 2020 Dec;2(4):100089 [FREE Full text] [CrossRef] [Medline]
  82. Young JT, Hughes N. Traumatic brain injury and homelessness: from prevalence to prevention. Lancet Public Health 2020 Jan;5(1):e4-e5 [FREE Full text] [CrossRef] [Medline]
  83. Barnett P, Goulding L, Casetta C, Jordan H, Sheridan-Rains L, Steare T, et al. Implementation of telemental health services before COVID-19: rapid umbrella review of systematic reviews. J Med Internet Res 2021 Jul 20;23(7):e26492 [FREE Full text] [CrossRef] [Medline]
  84. Andersson G. Internet interventions: past, present and future. Internet Interv 2018 Jun;12:181-188 [FREE Full text] [CrossRef] [Medline]
  85. Landes SJ, McBain SA, Curran GM. Reprint of: an introduction to effectiveness-implementation hybrid designs. Psychiatry Res 2020 Jan;283:112630 [FREE Full text] [CrossRef] [Medline]
  86. Statement on consumer and community involvement in health and medical research. National Health and Medical Research Council. 2016 Sep.   URL: https:/​/www.​​about-us/​publications/​statement-consumer-and-community-involvement-health-and-medical-research [accessed 2021-10-24]
  87. UNESCO Recommendation on Open Science. United Nations Educational, Scientific and Cultural Organization. 2021.   URL: [accessed 2021-12-23]
  88. Mulvale G, Moll S, Miatello A, Robert G, Larkin M, Palmer VJ, et al. Codesigning health and other public services with vulnerable and disadvantaged populations: insights from an international collaboration. Health Expect 2019 Jun;22(3):284-297 [FREE Full text] [CrossRef] [Medline]
  89. Pierce LL, Steiner V. Usage and design evaluation by family caregivers of a stroke intervention web site. J Neurosci Nurs 2013 Oct;45(5):254-261 [FREE Full text] [CrossRef] [Medline]
  90. Wolfenden L, Foy R, Presseau J, Grimshaw JM, Ivers NM, Powell BJ, et al. Designing and undertaking randomised implementation trials: guide for researchers. BMJ 2021 Jan 18;372:m3721 [FREE Full text] [CrossRef] [Medline]

ABI: acquired brain injury
MMAT: Mixed Methods Appraisal Tool
NASSS: Nonadoption, Abandonment, Scale-Up, Spread, and Sustainability
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PROSPERO: International Prospective Register of Systematic Reviews
RCT: randomized controlled trial
REDCap: Research Electronic Data Capture
TBI: traumatic brain injury

Edited by R Kukafka; submitted 19.03.22; peer-reviewed by A Bamgboje-Ayodele, A Miatello; comments to author 05.05.22; revised version received 16.05.22; accepted 15.06.22; published 26.07.22


©Melissa Miao, Rachael Rietdijk, Melissa Brunner, Deborah Debono, Leanne Togher, Emma Power. Originally published in the Journal of Medical Internet Research (, 26.07.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on, as well as this copyright and license information must be included.