Published in Vol 25 (2023)

Providing Human Support for the Use of Digital Mental Health Interventions: Systematic Meta-review



Center for Evidence-Based Mentoring, University of Massachusetts Boston, Boston, MA, United States

Corresponding Author:

Jean E Rhodes, PhD

Center for Evidence-Based Mentoring

University of Massachusetts Boston

100 Morrissey Boulevard

Boston, MA, 02125

United States

Phone: 1 6172876368


Background: Digital mental health interventions (DMHIs) have been increasingly deployed to bridge gaps in mental health care, particularly given their promising efficacy. Nevertheless, attrition among DMHI users remains high. In response, human support has been studied as a means of improving retention in DMHIs and their outcomes. Although a growing number of studies and meta-analyses have investigated the effects of human support for DMHIs on mental health outcomes, systematic empirical evidence of its effectiveness across mental health domains remains scant.

Objective: We aimed to summarize the results of meta-analyses of human support versus no support for DMHI use across various outcome domains, participant samples, and support providers.

Methods: We conducted a systematic meta-review of meta-analyses comparing the effects of human support with those of no support for DMHI use, with the goal of qualitatively summarizing data across various outcome domains, participant samples, and support providers. We searched the MEDLINE, PubMed, and PsycINFO electronic databases. Articles were included if the study had a quantitative meta-analysis study design; the intervention targeted mental health symptoms and was delivered via a technology platform (excluding person-delivered interventions mediated through telehealth, text messages, or social media); the outcome variables included mental health symptoms such as anxiety, depression, stress, posttraumatic stress disorder symptoms, or a combination of these symptoms; and the study quantitatively compared outcomes when human support was provided with outcomes when no or minimal human support was provided.

Results: The results of 31 meta-analyses (505 unique primary studies) were analyzed. The meta-analyses reported 45 effect sizes; almost half (n=22, 48%) of them showed that human-supported DMHIs were significantly more effective than unsupported DMHIs. A total of 9% (4/45) of effect sizes showed that unsupported DMHIs were significantly more effective. No clear patterns of results emerged regarding the efficacy of human support for the outcomes assessed (including anxiety, depression, posttraumatic stress disorder, stress, and multiple outcomes). Human-supported DMHIs may be more effective than unsupported DMHIs for individuals with elevated mental health symptoms. There were no clear results regarding the type of training for those providing support.

Conclusions: Our findings highlight the potential of human support in improving the effects of DMHIs. Specifically, evidence emerged for stronger effects of human support for individuals with greater symptom severity. There was considerable heterogeneity across meta-analyses in the level of detail regarding the nature of the interventions, population served, and support delivered, making it difficult to draw strong conclusions regarding the circumstances under which human support is most effective. Future research should report detailed descriptions of sample and intervention characteristics and specify the mechanisms through which human support is expected to enhance DMHI outcomes.

J Med Internet Res 2023;25:e42864




Over the past 2 decades, a growing number of digital mental health interventions (DMHIs) have leveraged technology to address common mental health concerns, including anxiety, depression, obsessive-compulsive disorder, and suicidal ideation [1-3]. Research consistently supports the efficacy of several DMHIs [4-6], especially those that use cognitive behavioral therapy principles and include methods to cope with stress, such as journaling or tracking thoughts, feelings, and behaviors [1,7]. DMHIs can include mental health mobile apps (MHAs) and computer-based interventions [8-10], which deliver on-demand support ranging from behavioral strategies (eg, self-monitoring) to more complex therapeutic approaches (eg, cognitive behavioral therapy) [8-13]. Clinician-delivered interventions, such as an hour of psychotherapy or a dose of medication, are costly and consumable (ie, once delivered to 1 client, they cannot be used to treat another), whereas DMHIs are nonconsumable, in that they can be delivered with high fidelity multiple times [14,15]. Human-supported DMHIs have the potential to offer a cost-effective, sustainable way of scaling access to high-quality interventions.

In addition to their scalability and affordability, DMHIs often include dynamic features such as games, animation, and badging [16-19]; provide data collection features to evaluate efforts and present in-time data dashboards; and can readily incorporate new advances in research and practice [20,21]. DMHIs can also reduce stigma and provide a sense of privacy that typical therapeutic practices may not be able to offer, especially for underserved populations. Given these and other benefits, DMHIs may be able to dramatically extend the reach of evidence-based care and reduce the global burden of mental health impairment.

Despite this promise, the capacity of DMHIs to bridge treatment gaps remains limited [22]. The marketplace for DMHIs remains inefficient, with the 2 most popular MHAs accounting for 90% of overall MHA use [23]. Moreover, although there have been growing efforts to increase the standards and rigor of the field [24], most studies have focused on feasibility and acceptability [25-27], with fewer than 5% of DMHIs empirically validated [28,29]. Even when DMHIs have been shown to be effective in rigorous trials, the potential to reproduce these results in real-world settings has been restricted by the overall lack of engagement and sharp attrition rates [20,30,31]. Overall, the clinical use of DMHIs has been disappointing given the low rates of uptake and engagement [32], with over 50% of DMHIs showing little to no monthly engagement [33].

Added Support for DMHIs

The most common solution to attrition and low engagement is to provide users with personalized feedback [19] and human support designed to personalize DMHIs through supportive text messages, phone calls, personalized feedback, monitoring, and troubleshooting [34-37]. One popular model of human support is supportive accountability [35], in which a supportive guide or coach, perceived as trustworthy, kind, and competent, provides encouragement and holds the user accountable for completing an intervention. This can increase motivation, takes less time than providing direct service, and can be done via both synchronous and asynchronous channels. There is growing evidence that human support of this nature can improve users’ engagement with technology-delivered interventions [34] as well as intervention outcomes [7,36]. For example, a meta-analysis of 66 unique experimental comparisons showed that when DMHI use was supplemented by synchronous or asynchronous support, the effects were roughly double those of unsupported DMHI use [7].

Despite these promising trends, research on the role of human support in DMHIs is relatively new, and important questions remain unresolved. The effectiveness of human support may vary across populations and the issues that DMHIs are designed to address. Some reviews of the effectiveness of coaching in DMHIs have focused on mood disorders including anxiety disorders and depression [38,39], whereas others have not specified diagnosis or symptom level as inclusion criteria. The effect of human support may also vary based on the support provider. Some studies rely on support by a clinician or therapist [40-42], whereas others deploy paraprofessional support providers such as research or clinical staff, technicians, or e-coaches [7,43-46]. Finally, results may vary on the basis of meta-analysis quality [47].

Altogether, these remaining questions have implications for whether and under what circumstances human support should be deployed. Given the heterogeneity of approaches and the potential costs and benefits of adding a coaching component to DMHI, a systematic review of the role of human support is required [39]. Whereas meta-analyses allow for quantitative comparisons exploring specific research questions, meta-reviews (systematic reviews of empirical meta-analyses on a given topic) can synthesize the findings across several meta-analyses to create a comprehensive depiction of the current state of the field and determine the empirical quality of the evidence from these meta-analyses. Given the inconsistencies in research testing human support for DMHI use, a meta-review can reveal study design variations and limitations, allowing researchers to evaluate such inconsistencies across the literature [48-50]. Although there have been some meta-reviews investigating the effects of DMHIs on mental health [8,51], only 1 scoping review has examined the role of human support in DMHI use [39]. The scoping review by Bernstein et al [39] included both quantitative and qualitative findings, was limited to cognitive behavioral approaches, and focused only on DMHIs delivered via MHAs. Of the 64 studies included, only 7 (11%) included quantitative comparisons of supported versus unsupported approaches. Of these, fewer than half (3/7, 43%) showed positive effects of human support, and the review reported mixed findings overall. The authors concluded that the field of support for DMHI use remains insufficient for drawing strong conclusions and highlighted the need for additional evaluations.

This Study

A systematic meta-review of meta-analyses was conducted comparing human support with no support on DMHI outcomes. The goal of the review was to provide a more exhaustive representation of the effects of human support by summarizing the effects of human support on DMHIs (including MHAs and internet-based interventions) across various treatment outcomes, participant samples, and types of support providers and to evaluate the quality of evidence available.

Literature Search

A literature search was conducted to identify meta-analyses that investigated the use of human support for DMHIs on mental health outcomes. The search was restricted to meta-analyses that were available in English and included a comparison of mental health outcomes when human support was provided versus when no support was provided.

Search Strategy

We searched the MEDLINE, PubMed, and PsycINFO electronic databases for relevant articles using key terms related to DMHIs, with filters for meta-analyses, availability in English (based on the primary researchers’ language fluency), and publication year since 2011. The MEDLINE and PubMed searches were completed on August 30, 2021, and the PsycINFO search was completed on September 6, 2021. To make the review as comprehensive as possible, the reference list of an unpublished meta-analysis on technology-delivered interventions was also searched for relevant articles. The full search terms were as follows: (“digital” OR “mHealth” OR “eHealth” OR “web-based” OR “internet-based” OR “mobile phone” OR “smartphone” OR “internet interventions” OR “apps” OR “artificial intelligence” OR “technology-delivered intervention” OR “mobile mental health intervention” OR “digital mental health intervention” OR “internet-delivered”) AND (“mental health” OR “depression” OR “depressive symptoms” OR “depressive disorders” OR “anxiety” OR “affective symptoms” OR “anxiety disorders” OR “mood disorders” OR “stress” OR “PTSD” OR “suicidal ideation” OR “psychological distress”). The term “virtual” was intentionally excluded from the search terms, despite its growing popularity in the literature since the start of the COVID-19 pandemic, because, in our reading of the literature, it is typically used to describe synchronous, person-delivered interventions that happen to be delivered via technology. In this paper, we focused on technology-delivered interventions in which the core mental health skills are delivered through the digital platform itself (eg, via reading, didactics, games, or tasks) rather than by another human on a digital platform.

Exclusion and Inclusion Criteria

In the initial search, article abstracts were screened against the inclusion criteria. Articles were included if they met the following criteria: (1) the study had a quantitative meta-analysis study design; (2) the intervention targeted mental health symptoms and was delivered via a technology platform (excluding person-delivered interventions mediated through telehealth, text messages, or social media); (3) the outcome variables included mental health symptoms such as anxiety, depression, stress, posttraumatic stress disorder (PTSD) symptoms, or a combination of these symptoms; and (4) the study quantitatively compared outcomes when human support was provided with outcomes when no or minimal human support (excluding solely engagement reminders) was provided. Dissertations were included if they met the inclusion criteria. Two authors (SA and MJ) independently filtered and selected the meta-analyses based on these criteria. Duplicates across search sources were removed, and the full texts of the remaining studies were screened against the inclusion criteria. One study that met the criteria was excluded in the last phase, as a more recently updated meta-analysis on the topic accounted for all the relevant primary studies. During manuscript preparation, the authors became aware of an additional meta-analysis that met the inclusion criteria [52]; this study was included in the final list of articles. See Figure 1 for the selection of meta-analyses.

Figure 1. Selection of meta-analyses.

Coding, Data Extraction, and Synthesis Strategy

The final list of included meta-analyses was cross-checked by 3 authors (SA, MJ, and AE). The following data were extracted from each of the included meta-analyses: (1) authors and year of publication of the meta-analysis; (2) number of studies included in the quantitative synthesis (meta-analysis); (3) design of the studies included in the meta-analysis; (4) participant populations of studies included in the meta-analysis; (5) type of DMHI examined in the meta-analysis; (6) main mental health outcome variables examined in the meta-analysis; (7) the meta-analysis’s definition of support and nonsupport; (8) effect sizes and CIs for all levels of support quantitatively compared in the meta-analysis for mental health outcomes; and (9) P values for the difference between levels of support for mental health outcomes. Data extraction of the included studies was conducted in duplicate by the same 3 authors. Disagreements or questions were resolved between the 3 authors when needed, with questions and concerns being brought to the corresponding author (JR). To increase the integrity of our study, the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [53] guidelines were followed. As the included studies were meta-analyses, risk of bias was not assessed. All authors contributed to the comparisons of effect sizes by outcome domain to examine the efficacy of human-supported DMHIs; no quantitative analyses were conducted in this meta-review.

Quality Assessment

Quality assessment was conducted for each meta-analysis using the Assessment of Multiple Systematic Reviews (AMSTAR) 2, as provided in the study by Shea et al [54]. AMSTAR allows researchers to rate the quality of systematic reviews and meta-analyses by indicating if the authors of the article completed a specific set of tasks, such as performing study selection in duplicate and describing the characteristics of the included studies in sufficient detail. Three authors (SA, MJ, and AE) jointly evaluated the quality of the included studies until 95% agreement was reached, after which the remaining studies were evaluated independently. Items with No, Partial Yes, and Yes options were given scores of 0, 1, and 2, respectively, and items with No and Yes options were given scores of 0 and 2, respectively. A percentage score was calculated for each study. For this meta-review, meta-analyses that satisfied at least 70% of the eligible AMSTAR 2 items were considered higher quality meta-analyses, whereas those with 50% to 69% completion were considered medium quality, and those with less than 50% were considered low quality. This method was informed by a previous meta-review [55].
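The scoring scheme described above reduces to a few lines of arithmetic. The sketch below illustrates it; the function names and item encoding are ours for illustration, not from the authors' materials, but the point values (No=0, Partial Yes=1, Yes=2) and the 70%/50% quality bands follow the text.

```python
# Illustrative sketch of the AMSTAR 2 scoring described in the text:
# three-option items score No=0, Partial Yes=1, Yes=2; two-option items
# score No=0, Yes=2. The percentage is earned points over the maximum
# (2 points per eligible item), banded at >=70% high, 50-69% medium, <50% low.

SCORE_3 = {"No": 0, "Partial Yes": 1, "Yes": 2}  # items with a Partial Yes option
SCORE_2 = {"No": 0, "Yes": 2}                    # items with only No/Yes options

def amstar_percentage(ratings):
    """ratings: list of (answer, has_partial_option) for each eligible item."""
    earned = sum((SCORE_3 if has_partial else SCORE_2)[answer]
                 for answer, has_partial in ratings)
    maximum = 2 * len(ratings)  # every item is worth at most 2 points
    return 100 * earned / maximum

def quality_band(pct):
    if pct >= 70:
        return "high"
    if pct >= 50:
        return "medium"
    return "low"

# Example with 4 eligible items: earned = 2 + 1 + 0 + 2 = 5 of 8 -> 62.5%
ratings = [("Yes", True), ("Partial Yes", True), ("No", False), ("Yes", False)]
pct = amstar_percentage(ratings)   # 62.5 -> "medium" quality
```

Under this scheme, the 25% to 81% range reported in the Results maps directly onto the low/medium/high bands.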

Overlap Analysis

An overlap analysis was conducted to determine the percentage of primary studies that overlapped across the meta-analyses. Three authors (SA, MJ, and AE) independently extracted the reference citations of all primary studies included in each meta-analysis. The corrected covered area (CCA) [56] was estimated to determine the degree of overlap of the primary studies in the included meta-analyses. The CCA was calculated with a script that applies the formula provided by those authors to the citation matrix [56]. Following these guidelines, CCA scores from 0% to 5% were considered to have slight overlap, those from 6% to 10% were considered to have moderate overlap, those from 11% to 15% were considered to have high overlap, and those greater than 15% were considered to have very high overlap. The results of the calculation showed a CCA value of 0.030 (3%), indicating slight overlap across meta-analyses. We suspect that the heterogeneity of the topics of the meta-analyses led to primary studies not meeting the inclusion criteria for multiple meta-analyses.
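The authors' script is not shown, but the CCA formula of Pieper et al [56] is simple: CCA = (N − r) / (rc − r), where N is the total number of study inclusions across reviews, r the number of unique primary studies (rows of the citation matrix), and c the number of reviews (columns). A minimal sketch, with an illustrative toy matrix of our own:

```python
# Sketch of the corrected covered area (CCA) calculation from Pieper et al [56].
# The citation matrix has one row per unique primary study and one column per
# meta-analysis; an entry is 1 if that study is included in that meta-analysis.

def corrected_covered_area(citation_matrix):
    """CCA = (N - r) / (r*c - r): N = total inclusions, r = unique studies,
    c = number of meta-analyses."""
    r = len(citation_matrix)          # unique primary studies
    c = len(citation_matrix[0])       # meta-analyses
    n = sum(sum(row) for row in citation_matrix)  # total inclusions
    return (n - r) / (r * c - r)

# Toy example: 4 unique studies across 3 meta-analyses, one study shared twice.
matrix = [
    [1, 1, 0],   # study included in two meta-analyses
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
cca = corrected_covered_area(matrix)  # (5 - 4) / (12 - 4) = 0.125, ie, 12.5%
```

At 12.5%, the toy matrix would count as high overlap under the bands above, whereas the 3% observed in this meta-review falls in the slight-overlap band.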

Description of Included Studies

The initial search identified 557 studies, of which 420 were excluded: most (n=320) on the basis of title and abstract screening and the remainder (n=100) as duplicates. The full texts of the remaining 137 studies were obtained and reviewed, and a further 107 studies were excluded. In total, 31 meta-analyses on the effectiveness of DMHIs on mental health outcomes comparing human support versus no support, comprising 505 unique primary studies, were included in this meta-review. The PRISMA flowchart for the inclusion of studies in the meta-review is shown in Figure 1. Characteristics (number of studies, study design, participant population, DMHI type, outcomes, and quality assessment) of the included meta-analyses are shown in Table 1. See Multimedia Appendix 1 [3,34,47,54] for a list of studies that initially appeared to meet the inclusion criteria but were later excluded. Given that the focus of this meta-review was to examine the effects of human support, Table 2 includes the definitions of human support by study. The meta-analyses varied considerably in terms of the amount of detail provided. Some studies included varying levels of human support (eg, full support vs some support vs no support), whereas others only compared supported with unsupported cases.

Table 1. Meta-analyses included for final analyses.
| Study | Studies, n | Study design | Participant population | DMHI^a type | Outcome variables | Quality assessment |
| --- | --- | --- | --- | --- | --- | --- |
| Carolan et al [57], 2017 | 21 | RCTs^b | Employed participants aged ≥18 years; targeted populations (psychological) and universal population | Occupational digital mental health interventions | Multiple problems (stress, depression, and psychological distress) | Low |
| Cheng et al [47], 2020 | 14 | RCTs | People with HIV or AIDS and with clinical or subclinical depression | Technology-delivered psychotherapeutic interventions | Depression | Low |
| Conley et al [58], 2016 | 48 | Mixed (reports, RCT, or quasi-experimental control design) | Higher education students; universal prevention or indicated prevention | Technological mental health prevention programs | Mental health–related outcomes | Low |
| Cowpertwait and Clarke [59], 2013 | 18 | RCTs | Depressed adults | Web-based psychological interventions | Depression | Low |
| Domhardt et al [4], 2019 | 34 | RCTs | Adults with specific phobia, social anxiety disorder, panic disorder, agoraphobia, or generalized anxiety disorder at baseline | Internet- and mobile-based interventions for anxiety | Anxiety | Medium |
| Firth et al [60], 2017 | 18 | RCTs | No restrictions based on diagnosis or any clinical or demographic traits | Smartphone-delivered mental health interventions | Depression | Low |
| Fu et al [61], 2020 | 22 | RCTs | Individuals with mental health problems in low-income and middle-income countries | Digital psychological interventions | Mental health issues | Medium |
| Grist et al [62], 2019 | 34 | RCTs | Youth with anxiety or depression | Technology-delivered interventions for depression and anxiety | Multiple problems (anxiety or depression symptoms) | Low |
| Harrer et al [63], 2018 | 48 | RCTs | University students | Internet-delivered psychological interventions | Depression and anxiety | Medium |
| Heber et al [5], 2017 | 23 | RCTs | Adult participants who experienced stress | Web- and computer-based stress management interventions | Multiple problems (stress, depression, and anxiety) | Medium |
| Kampmann et al [64], 2016 | 37 | RCTs | Adults who meet diagnostic criteria for social anxiety disorder | iCBT^c | Anxiety (social anxiety) | Medium |
| Kuester et al [65], 2016 | 20 | RCTs | Adults with clinical or subclinical PTSD^d | Internet-based interventions: CBT^e and expressive writing; guided vs unguided comparisons only done with internet-based CBT | PTSD | Low |
| Leung et al [52], 2022 | 13 | RCTs | Participants aged 16-64 years with clinical or subthreshold mental health symptoms | Digital intervention targeting mental health | Multiple mental health problems | Medium |
| Li et al [66], 2014 | 8 | RCTs | No limitations on the participants’ age or the significance of the depression symptoms | Game-based digital interventions for depression | Depression | Medium |
| Linardon et al [7], 2019 | 66 | RCTs | All ages | App-supported smartphone interventions for mental health problems | Depression and anxiety (generalized anxiety) | Low |
| Mehta et al [46], 2019 | 25 | RCTs | People with chronic health conditions | iCBT | Depression and anxiety | Low |
| Pang et al [41], 2021 | 18 | RCTs | Adults with depression diagnosed by a physician or by any well-validated depression scale | Web-based self-management interventions for depression | Depression | Medium |
| Păsărelu et al [38], 2017 | 19 | RCTs | Adult participants (aged ≥18 years) with either symptoms or a primary diagnosis of anxiety or unipolar depression | Transdiagnostic or tailored interventions, based on a CBT protocol; delivered on the web, via the internet (both self-help and clinician-delivered) | Anxiety | Low |
| Phillips et al [44], 2019 | 34 | RCTs | Adults with any mental health condition in an employee population for any occupation | Occupational e–mental health interventions (information and communication technology based) | Stress, depression, and anxiety | Medium |
| Richards and Richardson [40], 2012 | 23 | RCTs | Adults with depression (self-report or diagnosis), established using valid and reliable measures, who may also have had comorbidity | Computer-based psychological treatments for depression | Depression | Low |
| Sherifali et al [45], 2018 | 13 | Mixed (RCTs or controlled clinical trials) | Informal caregivers aged ≥18 years currently providing caregiving support to adults aged ≥18 years living in the community with at least 1 chronic condition | Internet-based interventions | Depression and anxiety | Medium |
| Sijbrandij et al [67], 2016 | 12 | RCTs | Individuals with subclinical or clinical PTSD | iCBT for PTSD | PTSD and depression | Low |
| Simmonds-Buckley et al [68], 2020 | 24 | RCTs | Adults aged ≥18 years with depression or anxiety | Web-based or smartphone app intervention | Depression and multiple problems (anxiety and stress) | Medium |
| Spijkerman et al [69], 2016 | 15 | RCTs | Adults aged ≥18 years | Web-based MBIs^f | Stress, depression, and anxiety | Low |
| Stratton et al [70], 2017 | 23 | RCTs and pre- or posttrials | Working-age adults in current paid employment | eHealth-based intervention | Multiple problems (depression, anxiety, and stress outcomes) | Low |
| Sztein et al [6], 2018 | 14 | RCTs | Adults aged ≥18 years with depression | iCBT | Depression | Medium |
| Thompson et al [42], 2021 | 25 | RCTs | Any adult population (aged ≥18 years); see the article for participant details | Internet-based acceptance and commitment therapy | Depression and anxiety | High |
| Twomey et al [71], 2020 | 12 | RCTs | Adults with elevated depressive symptoms | Individually tailored computer-assisted CBT program for depression | Depression | Medium |
| Versluis et al [72], 2016 | 27 | RCT, pre- or postdesign | Clinical and healthy samples | Ecological momentary interventions | Multiple problems (anxiety, depression, and perceived stress) | Low |
| Victorson et al [73], 2020 | 43 | RCTs | Clinical and healthy nonclinical populations; average participant age 40.5 years | Technology-enabled mindfulness-based programs | Anxiety, depression, and stress | Low |
| Wright et al [43], 2019 | 40 | RCTs | Participants with depression (clinically diagnosed or diagnosed by standardized assessments) | Computer-assisted CBT | Depression | Low |

aDMHI: digital mental health intervention.

bRCT: randomized controlled trial.

ciCBT: internet-delivered cognitive behavioral therapy.

dPTSD: posttraumatic stress disorder.

eCBT: cognitive behavioral therapy.

fMBI: mindfulness-based intervention.

Table 2. Description of human support by meta-analysis.
| Study | Description of human support | Description of no human support |
| --- | --- | --- |
| Carolan et al [57], 2017 | Studies varied in who provided support: 70% of the studies described the support as coming from a therapist or coach, 20% had a coordinator or member of staff providing support, and 10% described support as a clinical psychologist | Self-guided DMHI^a |
| Cheng et al [47], 2020 | Professional support | Self-guided DMHI |
| Conley et al [58], 2016 | Participants received prompts, reminders, feedback, or guidance through emails, and some personal monitoring of the intervention | Self-administered DMHIs, in which assistance was provided only for assessment purposes or to offer a brief introduction to the technology |
| Cowpertwait and Clarke [59], 2013 | Human-supported | Self-guided DMHI |
| Domhardt et al [4], 2019 | Continuous therapeutic support | Self-help DMHI, with therapist contact for assessment (if at all) |
| Firth et al [60], 2017 | Involved “in-person” (ie, human) feedback | No in-person feedback |
| Fu et al [61], 2020 | Presence of guidance | Absence of guidance |
| Grist et al [62], 2019 | Supported: minimal contact therapy (“active involvement of therapist, help in applying specific therapeutic techniques, >90 min of time”); some support: predominantly unguided defined as predominantly self-administered (“giving initial therapeutic rationale, direction on how to use the program and periodic check-ins, <90 min of time”) | Purely unguided defined as purely self-administered (“therapist contact for assessment at most”) |
| Harrer et al [63], 2018 | Individual feedback | Unguided DMHI |
| Heber et al [5], 2017 | Guided with regular written feedback | Unguided with no support or only technical support |
| Kampmann et al [64], 2016 | Guided internet-delivered cognitive behavioral therapy | Unguided internet-delivered cognitive behavioral therapy |
| Kuester et al [65], 2016 | Therapeutic support from a therapist (“in remote contact with the client and provided therapeutic feedback messages”) | No therapeutic support (“programs that were either fully automated, provided only nontherapeutic moderation such as the supervision of forum posts or solely technical assistance”) |
| Leung et al [52], 2022 | Nonclinician (eg, peers, research assistants, or other lay persons) or clinician (ie, psychiatrists, psychologists, therapists, social workers, graduate students in a mental health–related field, or students completing clinical practicum training) | Unguided |
| Li et al [66], 2014 | Therapist involved (minimal contact therapy and therapy administrated) | No therapist involved (self-administered and predominately self-help) |
| Linardon et al [7], 2019 | Studies that offered professional guidance (eg, regular supportive text messages, phone calls, or personalized feedback from therapists or research staff) | Studies that did not offer professional guidance |
| Mehta et al [46], 2019 | Therapist-guided (“usually involve weekly contact with a web-based therapist or guide, either through asynchronous web-based messaging or by telephone”) | Self-guided DMHI (“participants do not have regular contact with a therapist”) |
| Pang et al [41], 2021 | Therapist guidance group (“group communicating with the therapist”); virtual health indicator guidance group (“group communicating with the virtual health care provider”) | No therapist guidance group (“group not communicating with the therapist”) |
| Păsărelu et al [38], 2017 | Experienced clinical psychologists and supervised students | Self-guided DMHI |
| Phillips et al [44], 2019 | Studies with guidance provided different types of human support (eg, regular calls by a clinical study officer, feedback from a clinical psychologist on home assignments, regular guidance from trained e-coaches, peer group discussions, and virtual class meetings) | Without guidance |
| Richards and Richardson [40], 2012 | Therapist-supported studies included a clinician who offered postsession feedback and support or a clinician-delivered intervention | Completely self-administered |
| Sherifali et al [45], 2018 | Internet-based information or education plus professional psychosocial support | Internet-based information or education only |
| Sijbrandij et al [67], 2016 | Therapist-assisted (email, telephone calls, in-person support) | Self-help |
| Simmonds-Buckley et al [68], 2020 | Predominantly therapist delivered | Self-administered DMHI |
| Spijkerman et al [69], 2016 | Therapist guidance | Without therapist guidance |
| Stratton et al [70], 2017 | Feedback provided, rather than just technical support | Self-help |
| Sztein et al [6], 2018 | Clinician was in some way involved in the dissemination of information to the study participants, whether through discussion forums, email, telephone, etc | Self-guided |
| Thompson et al [42], 2021 | Therapist-guided (included phone calls, personalized written messages and feedback, tailored emails, face-to-face meetings, and automated text messages or emails) | Not guided (although may have included automated text messages or emails) |
| Twomey et al [71], 2020 | Clinician or technician guidance | Without guidance |
| Versluis et al [72], 2016 | DMHI was included in a “treatment package” and was supported by a mental health professional; the “treatment package” could include the DMHI and therapy or DMHI and continued feedback (on homework assignments or messages to improve adherence) | Stand-alone DMHI |
| Victorson et al [73], 2020 | Human-supported | Without human support |
| Wright et al [43], 2019 | Clinician assisted or technician assisted | Unsupported DMHI |

aDMHI: digital mental health intervention.

Methodological Quality of the Meta-analyses

On average, the meta-analyses satisfied 49% of all AMSTAR 2 items, with individual scores ranging from 25% to 81%. Based on the AMSTAR 2 quality assessment scale, 17 studies were rated as low quality, 13 as medium quality, and 1 as high quality. See Multimedia Appendix 2 [4-7,38,40-45,47,57-73] for the percentage achieved by each included meta-analysis.

Summary of All Effect Sizes

Of the 45 effect sizes reported, almost half (n=22, 48%) showed that human-supported interventions were significantly more effective than unsupported interventions, whereas only 9% (4/45) showed that unsupported interventions were significantly more effective (Table 3). A further 28% (13/45) of effect sizes favored supported interventions, but these differences were not significant.
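When comparing the entries in Table 3, note that the meta-analyses report a mix of standardized mean difference metrics (SMD, Cohen d, and Hedges g). These are closely related: Hedges g is Cohen d with a small-sample bias correction. A minimal sketch of that correction (the sample sizes below are hypothetical, not drawn from any included study):

```python
# Hedges g is Cohen d multiplied by the small-sample correction factor
# J ≈ 1 - 3 / (4*df - 1), with df = n1 + n2 - 2 for a two-group comparison.

def hedges_g(cohen_d, n1, n2):
    """Convert a two-group Cohen d into Hedges g via the usual J correction."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return cohen_d * j

# Hypothetical example: d = 0.54 with 30 participants per arm
g = hedges_g(0.54, 30, 30)  # ≈ 0.533: the correction is small at n ≈ 60
```

At the sample sizes typical of the included trials, g and d are nearly interchangeable, so the direction and rough magnitude of the effects in Table 3 can be compared across metrics even though exact values should not be pooled naively.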

Table 3. Effect sizes from included studies by outcome domain.

Meta-analysis | Effect size type | Human-supported, effect size (CI) | Not supported, effect size (CI) | Significant difference | Direction

Anxiety
Victorson et al [73], 2020 | Effect size differences^a | Mean −0.07 (SD 0.88)^b | Mean −0.14 (SD 0.47) | No | No difference; not significant
Sherifali et al [45], 2018 | SMD^c | −0.36 (−0.66 to −0.07) | −0.42 (−0.65 to −0.19) | Yes | Unsupported significantly more effective
Phillips et al [44], 2019 | Hedges g | 0.48 (0.16 to 0.80) | 0.26 (0.10 to 0.41) | Yes | Guided significantly more effective
Thompson et al [42], 2021 | Hedges g | 0.28 (0.18 to 0.38) | 0.16 (−0.10 to 0.42) | No | Guided slightly more effective; not significant
Kampmann et al [64], 2016 | Hedges g | 0.87 (0.72 to 1.02) | 0.78 (0.50 to 1.05) | No | Guided slightly more effective; not significant
Kampmann et al [64], 2016 | Hedges g | 0.47 (0.15 to 0.78) | 0.19 (−0.08 to 0.46) | No | Guided slightly more effective; not significant
Linardon et al [7], 2019 | Hedges g | 0.53 (0.36 to 0.70) | 0.21 (0.12 to 0.30) | Yes | Guided significantly more effective
Spijkerman et al [69], 2016 | Hedges g | 0.26 (0.02 to 0.50) | 0.19 (−0.06 to 0.43) | No | Guided slightly more effective; not significant
Harrer et al [63], 2018 | Hedges g | 0.27 (0.02 to 0.52) | 0.25 (0.02 to 0.49) | No | Guided slightly more effective; not significant
Domhardt et al [4], 2019 | SMD | −0.39 (−0.59 to −0.18) | — | Yes | Guided significantly more effective
Păsărelu et al [38], 2017 | Hedges g | 0.87 (0.48 to 1.26; clinical psychologist); 0.76 (0.56 to 0.96; supervised students) | 0.54 | Yes | Guided significantly more effective
Mehta et al [46], 2019 | Cohen d | 0.54 (SE 0.08) | Mean 0.57 (SE 0.12) | No | Unsupported slightly more effective; not significant

Depression
Victorson et al [73], 2020 | Effect size differences | Mean −0.12 (SD 0.93) | Mean −0.46 (SD 0.79) | No | No difference; not significant
Richards and Richardson [40], 2012 | Cohen d | 0.78 (−0.92 to −0.64) | 0.36 (−0.61 to −0.10) | Yes | Guided significantly more effective
Sherifali et al [45], 2018 | SMD | −0.34 (−0.63 to −0.05) | −0.31 (−0.50 to −0.11) | Yes | Guided significantly more effective
Phillips et al [44], 2019 | Hedges g | 0.48 (0.33 to 0.63) | 0.23 (0.08 to 0.37) | Yes | Guided significantly more effective
Thompson et al [42], 2021 | Hedges g | 0.45 (0.34 to 0.56) | 0.14 (−0.022 to 0.29) | Yes | Guided significantly more effective
Firth et al [60], 2017 | Hedges g | 0.137 (−0.08 to 0.35) | 0.465 (0.30 to 0.63) | Yes | Unsupported significantly more effective
Li et al [66], 2014 | Cohen d | −0.44 (−0.73 to −0.15) | −0.54 (−0.86 to −0.21) | Yes | Unsupported significantly more effective
Linardon et al [7], 2019 | Hedges g | 0.48 (0.34 to 0.62) | 0.23 (0.15 to 0.31) | Yes | Guided significantly more effective
Cheng et al [47], 2020 | Cohen d | 0.22 | 0.27 | No | Unsupported slightly more effective; not significant
Cowpertwait and Clarke [59], 2013 | Hedges g | 0.48 (0.39 to 0.57) | 0.32 (0.23 to 0.41) | Yes | Guided significantly more effective
Spijkerman et al [69], 2016 | Hedges g | 0.29 (0.06 to 0.53) | 0.29 (0.03 to 0.55) | No | No difference; not significant
Sijbrandij et al [67], 2016 | Hedges g | 0.66 (0.36 to 0.96) | 0.55 (0.12 to 0.98) | No | Guided slightly more effective; not significant
Harrer et al [63], 2018 | Hedges g | 0.28 (−0.02 to 0.57) | 0.15 (0.06 to 0.25) | No | Guided slightly more effective; not significant
Simmonds-Buckley et al [68], 2020 | Hedges g | 0.61 (predominantly therapist delivered); 0.39 (0.16 to 0.62; minimal contact) | 0.30 (0.15 to 0.45) | No | Guided slightly more effective; not significant
Mehta et al [46], 2019 | Cohen d | 0.64 (SE 0.15) | Mean 0.45 (SE 0.18) | Yes | Guided significantly more effective
Twomey et al [71], 2020 | Hedges g | 0.57 (0.36 to 0.78) | 0.47 (0.32 to 0.62) | No | Guided slightly more effective; not significant
Wright et al [43], 2019 | Hedges g | 0.673 (0.546 to 0.801) | 0.239 (0.115 to 0.364) | Yes | Guided significantly more effective
Pang et al [41], 2021 | Hedges g | −0.60 (−0.81 to −0.38; therapist); −0.27 (−0.58 to 0.05; web-based health care provider) | −0.17 (−0.40 to 0.06) | Yes | Guided significantly more effective
Sztein et al [6], 2018 | SMD | 0.73 (0.58 to 0.87) | 0.79 (0.55 to 1.03) | No | No statistically significant difference

PTSD^d
Kuester et al [65], 2016 | Hedges g | 0.8 (0.62 to 0.98) | 0.54 (0.22 to 0.86) | No | Guided slightly more effective; not significant
Sijbrandij et al [67], 2016 | Hedges g | 0.89 (0.70 to …) | … (… to 0.78) | Yes | Guided significantly more effective

Stress
Spijkerman et al [69], 2016 | Hedges g | 0.89 (0.65 to 1.12) | 0.19 (−0.01 to 0.38) | Yes | Guided significantly more effective
Victorson et al [73], 2020 | Effect size differences | Mean −0.20 (SD 0.49) | Mean −1.63 (SD 1.8) | Yes | Unsupported significantly more effective
Phillips et al [44], 2019 | Hedges g | 0.76 (0.44 to 1.08) | 0.38 (0.19 to 0.56) | Yes | Guided significantly more effective

Multiple
Conley et al [58], 2016 | Hedges g | 0.55 (0.37 to 0.72) | 0.28 (0.14 to 0.40) | Yes | Guided significantly more effective
Heber et al [5], 2017 | Cohen d | 0.64 (0.50 to 0.79) | 0.33 (0.20 to 0.46) | Yes | Guided significantly more effective
Grist et al [62], 2019 | Hedges g | 0.87 (0.68 to 1.06; minimal contact therapy); 0.81 (−0.68 to −2.31; predominantly self-help) | 0.24 (0.10 to 0.38) | Yes | Guided significantly more effective
Simmonds-Buckley et al [68], 2020 | Hedges g | 0.60 (0.36 to 0.83; minimal therapist contact); 0.47 (0.11 to 0.83; predominantly self-help) | 0.23 (0.09 to 0.36) | Yes | Guided significantly more effective
Carolan et al [57], 2017 | Hedges g | 0.39 (0.18 to 0.61) | 0.34 (0.16 to 0.53) | No | Guided slightly more effective; not significant
Stratton et al [70], 2017 | Hedges g | 0.27 | 0.22 | No | Guided slightly more effective; not significant
Versluis et al [72], 2016 | Hedges g | 0.73 (0.57 to 0.88; mental health provider); 0.38 (0.11 to 0.64; DMHI + care as usual) | 0.45 (0.22 to 0.69) | Yes | Guided significantly more effective
Fu et al [61], 2020 | Hedges g | 0.61 (0.43 to 0.78) | 0.60 (0.35 to 0.86) | No | Guided slightly more effective; not significant
Leung et al [52], 2022^e | Hedges g | −0.17 (−0.23 to 0.11) | — | Yes | Guided significantly more effective

^a Victorson et al [73] reported differences in effect sizes for supported versus unsupported interventions.

^b Not available.

^c SMD: standardized mean difference.

^d PTSD: posttraumatic stress disorder.

^e Posttreatment SMD effect size overall comparison.
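For reference, the standardized mean difference metrics reported in Table 3 (Cohen d, SMD) and their small-sample-corrected form, Hedges g, can be computed from group summary statistics. The sketch below is a textbook-style illustration with made-up inputs and a normal-approximation CI, not the pooling procedure used by any included meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges g (small-sample-corrected SMD) with an approximate 95% CI."""
    df = n1 + n2 - 2
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled           # Cohen d (uncorrected SMD)
    j = 1 - 3 / (4 * df - 1)           # Hedges small-sample correction factor
    g = j * d
    # Normal-approximation standard error and 95% CI for g
    se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2)))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Made-up example: symptom scores where lower is better, so a negative g
# favors the first (supported) group.
g, ci = hedges_g(m1=12.0, sd1=5.0, n1=50, m2=15.0, sd2=5.5, n2=50)
```

A CI that excludes 0 corresponds to the "Yes" rows in Table 3; intervals spanning 0, as in many of the "not significant" rows, do not.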

Outcome Domains

Table 4 reports, by study characteristic, the number of effect sizes favoring supported interventions, the number favoring unsupported interventions, and the number showing no significant difference between supported and unsupported interventions. No patterns emerged regarding the effects of human support across outcome domains.

In particular, 26% (12/45) of effect sizes represented anxiety symptoms. Of these, 4 suggested that supported DMHIs resulted in significantly lower anxiety symptoms compared with unsupported DMHIs [4,7,38,44]. Only 1 meta-analysis found that unsupported interventions had significantly higher effects [45]. Among the meta-analyses, 5 effect sizes used clinical samples (with diagnosed or elevated clinical symptoms). Three of those indicated significant effects for human-supported DMHIs, and 2 effect sizes revealed null results comparing supported and unsupported interventions. When examining the studies that used clinical samples, those that found supported DMHIs more effective than unsupported DMHIs included individuals with “any mental health condition” [44], anxiety disorders [4], and anxiety or unipolar depression [38]. Interestingly, the study that examined DMHIs for individuals with social anxiety found no significant differences based on supported or unsupported DMHIs using 2 effect sizes [64].

A total of 19 meta-analyses examined the effect sizes of DMHIs on depression symptoms. Nine of those meta-analyses suggested that supported DMHIs result in significantly lower depression symptoms compared with unsupported DMHIs [7,40-46,59]. Two meta-analyses found that unsupported DMHIs were significantly more effective than supported DMHIs [60,66]. When focusing exclusively on studies of individuals with elevated symptoms of depression [6,40,41,43,71], supported DMHIs were more effective in reducing depressive symptoms than unsupported DMHIs (with 3 studies showing significant findings and 2 failing to find significant differences).

Two meta-analyses measured the effect sizes of DMHIs for the treatment of PTSD symptoms. One meta-analysis suggested that supported DMHIs result in significantly lower PTSD symptoms compared with unsupported DMHIs [67]. The other meta-analysis did not find statistically significant effects of human support [65]. Both studies included samples of individuals with elevated PTSD symptoms.

Three meta-analyses examined the effect sizes of DMHIs on stress. Two suggested that supported DMHIs result in significantly lower stress compared with unsupported DMHIs [44,69], whereas 1 found the opposite effect [73]. Only 1 meta-analysis [44] included a clinically elevated sample, which focused on individuals with “any mental health condition.” The other 2 studies included unselected samples of participants.

Finally, 9 meta-analyses examined the effect sizes of DMHIs on multiple mental health problems. Six of those meta-analyses suggested that supported DMHIs result in significantly lower mental health symptoms compared with unsupported DMHIs [5,52,58,62,68,72]. No meta-analyses found stronger effects of unsupported DMHIs for multiple mental health symptoms.

Table 4. Number of effect sizes showing effects (N=45).

Category | Total effect sizes reported | Human-supported significantly greater effects, n (%) | No significant difference, n (%) | Unsupported significantly greater effects, n (%)

Outcome domains
Anxiety | 12 | 4 (33) | 7 (58) | 1 (8)
Depression | 19 | 9 (47) | 8 (42) | 2 (10)
PTSD^a | 2 | 1 (50) | 1 (50) | 0 (0)
Stress | 3 | 2 (66) | 0 (0) | 1 (33)
Multiple | 9 | 6 (66) | 3 (33) | 0 (0)

Sample characteristics
Clinical or subclinical | 21 | 12 (57) | 9 (42) | 0 (0)
  Anxiety disorders | 1 | 1 (100) | 0 (0) | 0 (0)
  Social anxiety | 2 | 0 (0) | 2 (100) | 0 (0)
  Depression | 6 | 3 (50) | 3 (50) | 0 (0)
  Anxiety disorders or depression | 4 | 3 (75) | 1 (25) | 0 (0)
  PTSD | 3 | 1 (33) | 2 (66) | 0 (0)
  Unrestricted mental health conditions | 5 | 4 (80) | 1 (20) | 0 (0)
Unrestricted samples | 24 | 10 (41) | 10 (41) | 4 (16)

Quality of RCT^b
High | 2 | 1 (50) | 1 (50) | 0 (0)
Medium | 19 | 9 (47) | 8 (42) | 2 (10)
Low | 24 | 12 (50) | 10 (41) | 2 (8)

Type of human support
Clinically trained^c | 17 | 8 (47) | 8 (47) | 1 (5)
Mixed^d | 9 | 7 (77) | 2 (22) | 0 (0)
Unclear^e | 19 | 7 (36) | 9 (47) | 3 (15)

^a PTSD: posttraumatic stress disorder.

^b RCT: randomized controlled trial.

^c Clinically trained is defined as a therapist, clinical psychologist, or clinical psychology trainee.

^d Mixed support providers included both clinically trained individuals and individuals who did not have clinical training providing support for DMHIs.

^e Unclear means that the authors did not provide information about the type of support provider in the meta-analysis.

Sample Characteristics

When examining effect sizes in randomized controlled trials (RCTs) that included participants with clinical or subclinical levels of symptoms, there were more significant effect sizes showing that human support increases intervention efficacy compared with no significant differences based on support. When examining unrestricted samples, the results were more mixed (Table 4).

Clinical or subclinical samples were further broken down by condition. One effect size reported that human-supported DMHIs were more effective than unsupported DMHIs for individuals with a variety of anxiety disorders, whereas 2 effect sizes found no significant effects of human support when the samples only included individuals with social anxiety specifically. Six effect sizes were reported for samples with depression; effect sizes were split between those favoring supported DMHIs and those that did not find significant differences between supported and unsupported DMHIs. Four effect sizes were reported for samples with anxiety or depression; most demonstrated that human-supported DMHIs were more effective than unsupported DMHIs. The results suggested a different pattern for individuals with PTSD, with most (2 out of 3) effect sizes suggesting no significant differences between supported and unsupported DMHIs. Finally, 4 of 5 effect sizes suggested that human-supported DMHIs were more effective than unsupported DMHIs among samples with mixed mental health conditions.

Quality of RCTs

Across high-, medium-, and low-quality RCTs, the percentage of effect sizes showing positive effects versus no effects was similar (Table 4). When only considering those studies with high- or medium-quality AMSTAR 2 ratings, the results seem to be split between effect sizes in favor of human-supported DMHIs and those revealing no significant differences between supported and unsupported DMHIs.

Support Provider Characteristics

Four of the studies included in this meta-review examined whether the supportive person’s training was related to DMHI effectiveness [4,38,40,52]. Three studies found no significant differences in effect sizes between experienced individuals providing support (eg, licensed clinicians) and individuals with less experience (eg, students or nonclinicians) [4,38,52]. One study [40] found significant differences between therapist-supported DMHIs and administrative staff members providing support for DMHIs, with therapist-supported DMHIs yielding higher effect sizes.

Of the meta-analyses that only included support from clinically trained individuals (eg, therapists, clinical psychologists, and clinical psychology trainees), approximately half of the effect sizes favored human support and approximately half found no significant differences between supported and unsupported DMHIs; 1 meta-analysis found that unsupported DMHIs were more effective. In contrast, among meta-analyses whose primary studies included a mix of support providers (both clinically trained individuals and individuals without clinical training), effect sizes were more likely to favor human support. Among the meta-analyses that did not define the type of support provider, the effect sizes were mixed in terms of the efficacy of human-supported DMHIs (Table 4).

Principal Findings

A systematic meta-review of meta-analyses was conducted comparing the effects of human support for DMHIs with those of no support on mental health symptoms. The effects of human support were examined across treatment outcomes, participant samples, and types of support providers. Results from 31 meta-analyses representing 505 unique primary studies were reported. Almost half (22/45, 48%) of the effect sizes revealed that supported interventions had significantly stronger effects than unsupported interventions; only 9% (4/45) showed significantly stronger effects for unsupported interventions. No clear pattern of results emerged by outcome domain. Evidence for human-supported DMHIs was split for depression and PTSD symptoms; for anxiety symptoms, the evidence suggested largely no significant differences between human-supported and unsupported DMHIs. However, when multiple outcomes were assessed, human support for DMHIs appeared to be more effective than no support. Given the variable number of studies across outcomes and the discrepant results, it would be premature to draw firm conclusions regarding the relative importance of human support for DMHIs across different outcome domains. Similarly, no clear pattern of results emerged for sample characteristics, with effect sizes largely split between those that did and did not show the efficacy of added human support. The same was true regarding the quality of the meta-analyses.

Moreover, we did not find a clear pattern of results when comparing highly trained support providers (eg, clinicians) with paraprofessional-level support, suggesting that DMHIs do not need to be supported by individuals with extensive mental health training. This is promising for models of increasing access to mental health services and has implications for task-shifting mental health care as well as for therapeutic mentoring [74]. Unfortunately, 19 of the 45 effect sizes were from meta-analyses that did not define the training or background of the individuals providing support, greatly limiting our ability to draw strong conclusions about the role of background and training of support providers on the efficacy of human support. Although no clear patterns emerged in the outcome domain, sample characteristics, or provider background, we highlight a few promising trends that can guide future research and practice. Among DMHIs that target individuals with elevated mental health symptoms and specific mental health symptoms (depression, anxiety, and PTSD), human support appears to lead to stronger effects when compared with unsupported DMHIs. Future studies should explore this association.

Among the meta-analyses that included unrestricted samples (eg, open to adults), the results for human support were more mixed. Our review suggests that human support may play an important role in helping individuals with specific challenges engage with DMHIs that may be the most effective. Future research will need to further specify the conditions under which human support is most effective, disentangling the mechanisms through which it has its effects. Support may provide the structure and incentives to help individuals engage with DMHIs such that they are more effective. In addition, support may also provide a quasi-therapeutic alliance that increases motivation. Along these lines, Mohr et al [35] set forth a range of testable hypotheses pertaining to client motivation, alliance, and communications media, each of which should be more explicitly defined, tested, and manualized. Similarly, there is a need to specify the type of human support provided, as it can vary and may include postsession feedback, regular calls, feedback on assignments, regular supportive text messages, asynchronous web-based messaging, personalized feedback, tailored emails, or even face-to-face meetings [75].


Several limitations should be considered when interpreting these findings. First, our study was limited by the available meta-analytic literature; this meta-review may have missed primary studies that compared the effects of supported versus unsupported DMHIs but were not included in any of the meta-analyses. Second, our search was limited to studies published in English and may have excluded some studies otherwise meeting the inclusion criteria. Third, meta-reviews are constrained by the limitations of the primary studies summarized in the included meta-analyses, meaning that the original limitations of the primary research are not revisited when the main findings are summarized. Furthermore, our meta-review used a thematic synthesis of the findings from the included meta-analyses, which can be vulnerable to subjective interpretation [76]. In addition, this meta-review does not specify for whom human-supported DMHIs are most effective, as there are insufficient meta-analyses to draw firm conclusions by sample or diagnosis. Future research is necessary to investigate how background characteristics (eg, demographics and symptom severity) interact with the types of human support. It will also be important to continue to explore interactions between DMHI approaches (ie, MHAs and internet browser–based interventions) and human support, as some types of DMHIs may need extra support.

This meta-review was limited by the variable quality of evidence from the included meta-analyses. Of note, only 1 meta-analysis included in this review achieved a high-quality rating on the AMSTAR 2 guide, and most meta-analyses were rated as low quality. Overall, these ratings indicate that the meta-analyses included in this study demonstrated weakness in the core domains of experimental research methodology, hindering our confidence in drawing strong conclusions [54]. Moreover, the insufficient reporting of specific intervention characteristics in the included meta-analyses prevents us from providing discrete recommendations for future intervention protocols. One of the most important omissions from many meta-analyses was the description of individuals providing support. It should be noted that 19 of the 45 effect sizes that were reported in the meta-analyses did not provide this information with sufficient clarity for coding. Thus, our lack of specificity in this review reflects the current state of intervention reporting in the field. Without such details, it will be difficult to advance and improve the specificity with which human support procedures can improve DMHI outcomes.

Comparison With Prior Work

To our knowledge, the review by Bernstein et al [39] is the only prior scoping review examining the role of human support in DMHI use. Our study expanded on the work by Bernstein et al [39] by reviewing a more exhaustive set of DMHIs (rather than just MHAs with cognitive behavioral approaches) while focusing on higher-order quantitative comparisons of human support levels for DMHI use. However, similar to Bernstein et al [39], this meta-review was unable to draw strong conclusions based on the outcome of interest. Across the studies in our meta-review, there did not appear to be a strong pattern of results when examining effect sizes across mental health outcomes (anxiety, depression, PTSD, stress, or multiple outcomes), sample characteristics, or meta-analysis quality. Understanding how the type of support may interact with diagnoses or challenges is critical. Our review found that supported DMHIs targeting anxiety symptoms among individuals with clinically elevated symptoms may be more effective than unsupported DMHIs targeting similar outcomes. However, we noted an important caveat: the findings did not hold in the meta-analysis focusing on individuals with social anxiety disorder [64]. Recent data from a human-supported, web-based anxiety program suggested that human support may negatively interact with social anxiety symptoms; some individuals enrolled in a web-based anxiety intervention reported that they did not want to talk on the phone with an intervention coach, citing their anxiety about speaking to strangers [76]. Some individuals cited this as a reason for dropping out of the intervention altogether. Given the mixed results in our study, we recommend that future research more closely examine how specific mental health challenges and interventions may benefit from (or be hindered by) specific models of human support.

This similarity notwithstanding, our meta-review provides insights for leveraging DMHIs. Bernstein et al [39] characterized their overall results on human support for DMHI outcomes as mainly ambiguous. However, our study found that nearly half of the meta-analyses in this sample indicated significantly better outcomes for human-supported interventions, and that effects may be stronger among samples with clinical elevations. Less than 10% of the effect sizes showed stronger effects of unsupported DMHIs.

Although additional research is needed, our results highlight the important role that human support plays across various types of interventions, suggesting promise for reducing the global burden of mental health challenges and the lack of access to adequate care. They also suggest that positive effects of human-supported DMHIs are not limited to clinically severe cases, which offers promise for considering how these interventions may be useful in prevention settings as well.

Recommendations for Future Research

The meta-analyses included in our meta-review varied considerably in terms of what was reported about the human support provided in the individual studies, making it challenging to see clear patterns in the results. To that end, we strongly recommend that future reports on meta-analyses and RCTs provide more detailed information. Our recommendations are similar to those made by Bernstein et al [39]. However, we suggest additional guidelines for reporting that do not exclusively focus on human support.

First, similar to Bernstein et al [39], we strongly recommend additional information about the training of individuals providing support (eg, therapists, graduate students, and paid research assistants). Although some meta-analyses made clear that they included studies that focused on only one type of supportive person (eg, clinician-supported interventions), most meta-analyses did not specify the training of the guidance or human support provider. Information about the type of training received by support providers is crucial, and future work should focus on including specific information about training and supervision (see the study by Werntz et al [76] for an example).

Second, in line with the study by Bernstein et al [39], researchers need to clearly define what the support providers are doing during the intervention. The studies included in these meta-analyses reported various types of supportive behaviors (both between and within meta-analyses). Although some meta-analyses included lists of the types of support behaviors, including writing emails to participants and texting to provide support, most did not provide that level of detail. We suspect that the kind of behavior expected from support providers largely influences the effectiveness of support. Until there is greater specificity, it will be difficult for the field to advance science-backed support guidelines.

Third, information about the DMHIs themselves needs to be included in the reports. In our meta-review, there was diversity in the types of interventions (mindfulness programs, cognitive behavioral programs, and cognitive bias modification) delivered as well as in the delivery approach (smartphone, CD-ROM, and internet). Interventions also varied widely in terms of recommended program length and the types of behaviors required by the user. We hypothesize that different types of support are needed for different types of programs; thus, additional information about the interventions needs to be more transparent for future investigations.

Finally, we recommend that researchers describe the mechanism through which they believe the coach will be most useful for the DMHI. For example, the supportive accountability model [35] posits that human support increases adherence to a DMHI, thereby increasing the efficacy of the intervention. However, other models of human support may combine supportive accountability with supervised practice of skills to transfer to the user’s real world [76], thereby increasing efficacy. As noted by Leung et al [52], there is considerable heterogeneity across studies in models of support. In the studies they analyzed, Leung et al [52] found that although most used a supportive accountability model, at least 1 study included support providers that focused on sharing their own experiences of recovery [77], suggesting a very different hypothesized mechanism for how human support may enhance DMHI efficacy. Bernstein et al [39] described a similar concept of testing hypothesized targets of coaching interventions. Understanding how human support increases the efficacy of interventions is a crucial next step in this field to fully leverage the potential of DMHIs.


Mental health challenges and their associated impairments remain widespread and burdensome, particularly among individuals from culturally disadvantaged populations [78]. DMHIs offer promise of access to high-fidelity evidence-based interventions, and human support allows DMHI users to benefit from assistance and accountability. The findings of this meta-review suggest that human-supported DMHIs are a promising way to improve the impact of DMHIs on a range of mental health symptoms, and the human support does not have to come from highly trained mental health professionals. The combination of paraprofessional coaching and evidence-based DMHIs could bridge some of the current gaps in global mental health care. Future research will allow for an understanding of how models of human support can be matched to individuals’ backgrounds and types of DMHIs.


This work was supported by a grant from the National Institute of Mental Health (grant 1R41MH126795-01A1) awarded to Rhodes.

Data Availability

Data are available upon request to the first author (AW).

Conflicts of Interest

None declared.

Multimedia Appendix 1

Articles excluded at the full-text level.

DOCX File , 39 KB

Multimedia Appendix 2

Assessment of Multiple Systematic Reviews 2 percentages achieved for each meta-analysis included.

DOCX File , 17 KB

  1. Liverpool S, Mota CP, Sales CM, Čuš A, Carletto S, Hancheva C, et al. Engaging children and young people in digital mental health interventions: systematic review of modes of delivery, facilitators, and barriers. J Med Internet Res 2020 Jun 23;22(6):e16317 [FREE Full text] [CrossRef] [Medline]
  2. West R, Michie S. A Guide to Development and Evaluation of Digital Behaviour Change Interventions in Healthcare. Sutton, UK: Silverback Publishing; 2016.
  3. Rauschenberg C, Schick A, Hirjak D, Seidler A, Paetzold I, Apfelbacher C, et al. Evidence synthesis of digital interventions to mitigate the negative impact of the COVID-19 pandemic on public mental health: rapid meta-review. J Med Internet Res 2021 Mar 10;23(3):e23365 [FREE Full text] [CrossRef] [Medline]
  4. Domhardt M, Geßlein H, von Rezori RE, Baumeister H. Internet- and mobile-based interventions for anxiety disorders: a meta-analytic review of intervention components. Depress Anxiety 2019 Mar;36(3):213-224. [CrossRef] [Medline]
  5. Heber E, Ebert DD, Lehr D, Cuijpers P, Berking M, Nobis S, et al. The benefit of web- and computer-based interventions for stress: a systematic review and meta-analysis. J Med Internet Res 2017 Feb 17;19(2):e32 [FREE Full text] [CrossRef] [Medline]
  6. Sztein DM, Koransky CE, Fegan L, Himelhoch S. Efficacy of cognitive behavioural therapy delivered over the Internet for depressive symptoms: a systematic review and meta-analysis. J Telemed Telecare 2018 Sep;24(8):527-539. [CrossRef] [Medline]
  7. Linardon J, Cuijpers P, Carlbring P, Messer M, Fuller-Tyszkiewicz M. The efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials. World Psychiatry 2019 Oct;18(3):325-336 [FREE Full text] [CrossRef] [Medline]
  8. Lecomte T, Potvin S, Corbière M, Guay S, Samson C, Cloutier B, et al. Mobile apps for mental health issues: meta-review of meta-analyses. JMIR Mhealth Uhealth 2020 May 29;8(5):e17458 [FREE Full text] [CrossRef] [Medline]
  9. Sort A, Khazaal Y. Six tips on how to bring epic wins to health care. Front Psychiatry 2017 Nov 30;8:264 [FREE Full text] [CrossRef] [Medline]
  10. Torous J, Hsin H. Empowering the digital therapeutic relationship: virtual clinics for digital health interventions. NPJ Digit Med 2018 May 16;1:16 [FREE Full text] [CrossRef] [Medline]
  11. Torous J, Jän Myrick K, Rauseo-Ricupero N, Firth J. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health 2020 Mar 26;7(3):e18848 [FREE Full text] [CrossRef] [Medline]
  12. Harwood TM, L'Abate L. Self-Help in Mental Health: A Critical Review. New York, NY, USA: Springer; 2010.
  13. Kazdin AE, Blase SL. Rebooting psychotherapy research and practice to reduce the burden of mental illness. Perspect Psychol Sci 2011 Jan;6(1):21-37. [CrossRef] [Medline]
  14. Muñoz RF. Using evidence-based internet interventions to reduce health disparities worldwide. J Med Internet Res 2010 Dec 17;12(5):e60 [FREE Full text] [CrossRef] [Medline]
  15. Muñoz RF, Pineda BS, Barrera AZ, Bunge E, Leykin Y. Digital tools for prevention and treatment of depression: lessons from the Institute for International Internet Interventions for Health. Clin Salud 2021 Mar;32(1):37-40. [CrossRef]
  16. Epstein DS, Zemski A, Enticott J, Barton C. Tabletop board game elements and gamification interventions for health behavior change: realist review and proposal of a game design framework. JMIR Serious Games 2021 Mar 31;9(1):e23302 [FREE Full text] [CrossRef] [Medline]
  17. Lister C, West JH, Cannon B, Sax T, Brodegard D. Just a fad? Gamification in health and fitness apps. JMIR Serious Games 2014 Aug 04;2(2):e9 [FREE Full text] [CrossRef] [Medline]
  18. Qu C, Sas C, Daudén Roquet C, Doherty G. Functionality of top-rated mobile apps for depression: systematic search and evaluation. JMIR Ment Health 2020 Jan 24;7(1):e15321 [FREE Full text] [CrossRef] [Medline]
  19. Saleem M, Kühne L, De Santis KK, Christianson L, Brand T, Busse H. Understanding engagement strategies in digital interventions for mental health promotion: scoping review. JMIR Ment Health 2021 Dec 20;8(12):e30000 [FREE Full text] [CrossRef] [Medline]
  20. Bakker D, Kazantzis N, Rickwood D, Rickard N. Mental health smartphone apps: review and evidence-based recommendations for future developments. JMIR Ment Health 2016 Mar 01;3(1):e7 [FREE Full text] [CrossRef] [Medline]
  21. Nicholas J, Larsen ME, Proudfoot J, Christensen H. Mobile apps for bipolar disorder: a systematic review of features and content quality. J Med Internet Res 2015 Aug 17;17(8):e198 [FREE Full text] [CrossRef] [Medline]
  22. Schueller SM, Torous J. Scaling evidence-based treatments through digital mental health. Am Psychol 2020 Nov;75(8):1093-1104 [FREE Full text] [CrossRef] [Medline]
  23. Wasil AR, Gillespie S, Patel R, Petre A, Venturo-Conerly KE, Shingleton RM, et al. Reassessing evidence-based content in popular smartphone apps for depression and anxiety: developing and applying user-adjusted analyses. J Consult Clin Psychol 2020 Nov;88(11):983-993. [CrossRef] [Medline]
  24. Kuhn E, Kanuri N, Hoffman JE, Garvert DW, Ruzek JI, Taylor CB. A randomized controlled trial of a smartphone app for posttraumatic stress disorder symptoms. J Consult Clin Psychol 2017 Mar;85(3):267-273. [CrossRef] [Medline]
  25. Berry N, Lobban F, Emsley R, Bucci S. Acceptability of interventions delivered online and through mobile phones for people who experience severe mental health problems: a systematic review. J Med Internet Res 2016 May 31;18(5):e121 [FREE Full text] [CrossRef] [Medline]
  26. Leigh S, Flatt S. App-based psychological interventions: friend or foe? Evid Based Ment Health 2015 Nov;18(4):97-99. [CrossRef] [Medline]
  27. Naslund JA, Marsch LA, McHugo GJ, Bartels SJ. Emerging mHealth and eHealth interventions for serious mental illness: a review of the literature. J Ment Health 2015;24(5):321-332 [FREE Full text] [CrossRef] [Medline]
  28. Neary M, Schueller SM. State of the field of mental health apps. Cogn Behav Pract 2018 Nov;25(4):531-537 [FREE Full text] [CrossRef] [Medline]
  29. Sucala M, Cuijpers P, Muench F, Cardoș R, Soflau R, Dobrean A, et al. Anxiety: there is an app for that. A systematic review of anxiety apps. Depress Anxiety 2017 Jun;34(6):518-525. [CrossRef] [Medline]
  30. Lattie EG, Schueller SM, Sargent E, Stiles-Shields C, Tomasino KN, Corden ME, et al. Uptake and usage of IntelliCare: a publicly available suite of mental health and well-being apps. Internet Interv 2016 May;4(2):152-158 [FREE Full text] [CrossRef] [Medline]
  31. Stiles-Shields C, Montague E, Lattie EG, Kwasny MJ, Mohr DC. What might get in the way: barriers to the use of apps for depression. Digit Health 2017 Jun 8;3:2055207617713827 [FREE Full text] [CrossRef] [Medline]
  32. Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res 2019 Sep 25;21(9):e14567 [FREE Full text] [CrossRef] [Medline]
  33. Wasil AR, Gillespie S, Shingleton R, Wilks CR, Weisz JR. Examining the reach of smartphone apps for depression and anxiety. Am J Psychiatry 2020 May 01;177(5):464-465. [CrossRef] [Medline]
  34. Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res 2021 Mar 24;23(3):e24387 [FREE Full text] [CrossRef] [Medline]
  35. Mohr DC, Cuijpers P, Lehman K. Supportive accountability: a model for providing human support to enhance adherence to eHealth interventions. J Med Internet Res 2011 Mar 10;13(1):e30 [FREE Full text] [CrossRef] [Medline]
  36. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn Behav Ther 2009;38(4):196-205. [CrossRef] [Medline]
  37. Perini S, Titov N, Andrews G. Clinician-assisted internet-based treatment is effective for depression: randomized controlled trial. Aust N Z J Psychiatry 2009 Jun;43(6):571-578. [CrossRef] [Medline]
  38. Păsărelu CR, Andersson G, Bergman Nordgren L, Dobrean A. Internet-delivered transdiagnostic and tailored cognitive behavioral therapy for anxiety and depression: a systematic review and meta-analysis of randomized controlled trials. Cogn Behav Ther 2017 Jan;46(1):1-28. [CrossRef] [Medline]
  39. Bernstein EE, Weingarden H, Wolfe EC, Hall MD, Snorrason I, Wilhelm S. Human support in app-based cognitive behavioral therapies for emotional disorders: scoping review. J Med Internet Res 2022 Apr 08;24(4):e33307 [FREE Full text] [CrossRef] [Medline]
  40. Richards D, Richardson T. Computer-based psychological treatments for depression: a systematic review and meta-analysis. Clin Psychol Rev 2012 Jun;32(4):329-342. [CrossRef] [Medline]
  41. Pang Y, Zhang X, Gao R, Xu L, Shen M, Shi H, et al. Efficacy of web-based self-management interventions for depressive symptoms: a meta-analysis of randomized controlled trials. BMC Psychiatry 2021 Aug 11;21(1):398 [FREE Full text] [CrossRef] [Medline]
  42. Thompson EM, Destree L, Albertella L, Fontenelle LF. Internet-based acceptance and commitment therapy: a transdiagnostic systematic review and meta-analysis for mental health outcomes. Behav Ther 2021 Mar;52(2):492-507. [CrossRef] [Medline]
  43. Wright JH, Owen JJ, Richards D, Eells TD, Richardson T, Brown GK, et al. Computer-assisted cognitive-behavior therapy for depression: a systematic review and meta-analysis. J Clin Psychiatry 2019 Mar 19;80(2):18r12188 [FREE Full text] [CrossRef] [Medline]
  44. Phillips EA, Gordeev VS, Schreyögg J. Effectiveness of occupational e-mental health interventions: a systematic review and meta-analysis of randomized controlled trials. Scand J Work Environ Health 2019 Nov 01;45(6):560-576 [FREE Full text] [CrossRef] [Medline]
  45. Sherifali D, Ali MU, Ploeg J, Markle-Reid M, Valaitis R, Bartholomew A, et al. Impact of Internet-based interventions on caregiver mental health: systematic review and meta-analysis. J Med Internet Res 2018 Jul 03;20(7):e10668 [FREE Full text] [CrossRef] [Medline]
  46. Mehta S, Peynenburg VA, Hadjistavropoulos HD. Internet-delivered cognitive behaviour therapy for chronic health conditions: a systematic review and meta-analysis. J Behav Med 2019 Apr;42(2):169-187. [CrossRef] [Medline]
  47. Cheng LJ, Kumar PA, Wong SN, Lau Y. Technology-delivered psychotherapeutic interventions in improving depressive symptoms among people with HIV/AIDS: a systematic review and meta-analysis of randomised controlled trials. AIDS Behav 2020 Jun;24(6):1663-1675. [CrossRef] [Medline]
  48. Barbui C, Purgato M, Abdulmalik J, Acarturk C, Eaton J, Gastaldon C, et al. Efficacy of psychosocial interventions for mental health outcomes in low-income and middle-income countries: an umbrella review. Lancet Psychiatry 2020 Feb;7(2):162-172. [CrossRef] [Medline]
  49. Dragioti E, Solmi M, Favaro A, Fusar-Poli P, Dazzan P, Thompson T, et al. Association of antidepressant use with adverse health outcomes: a systematic umbrella review. JAMA Psychiatry 2019 Dec 01;76(12):1241-1255 [FREE Full text] [CrossRef] [Medline]
  50. Fusar-Poli P, Radua J. Ten simple rules for conducting umbrella reviews. Evid Based Ment Health 2018 Aug;21(3):95-100. [CrossRef] [Medline]
  51. Goldberg SB, Lam SU, Simonsson O, Torous J, Sun S. Mobile phone-based interventions for mental health: a systematic meta-review of 14 meta-analyses of randomized controlled trials. PLOS Digit Health 2022;1(1):e0000002 [FREE Full text] [CrossRef] [Medline]
  52. Leung C, Pei J, Hudec K, Shams F, Munthali R, Vigo D. The effects of nonclinician guidance on effectiveness and process outcomes in digital mental health interventions: systematic review and meta-analysis. J Med Internet Res 2022 Jun 15;24(6):e36004 [FREE Full text] [CrossRef] [Medline]
  53. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ 2021 Mar 29;372:n71 [FREE Full text] [CrossRef] [Medline]
  54. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017 Sep 21;358:j4008 [FREE Full text] [CrossRef] [Medline]
  55. Hennessy EA, Johnson BT, Acabchuk RL, McCloskey K, Stewart-James J. Self-regulation mechanisms in health behavior change: a systematic meta-review of meta-analyses, 2006-2017. Health Psychol Rev 2020 Mar;14(1):6-42 [FREE Full text] [CrossRef] [Medline]
  56. Pieper D, Antoine SL, Mathes T, Neugebauer EA, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol 2014 Apr;67(4):368-375. [CrossRef] [Medline]
  57. Carolan S, Harris PR, Cavanagh K. Improving employee well-being and effectiveness: systematic review and meta-analysis of web-based psychological interventions delivered in the workplace. J Med Internet Res 2017 Jul 26;19(7):e271 [FREE Full text] [CrossRef] [Medline]
  58. Conley CS, Durlak JA, Shapiro JB, Kirsch AC, Zahniser E. A meta-analysis of the impact of universal and indicated preventive technology-delivered interventions for higher education students. Prev Sci 2016 Aug;17(6):659-678. [CrossRef] [Medline]
  59. Cowpertwait L, Clarke D. Effectiveness of web-based psychological interventions for depression: a meta-analysis. Int J Ment Health Addict 2013 Jan 18;11(2):247-268. [CrossRef]
  60. Firth J, Torous J, Nicholas J, Carney R, Pratap A, Rosenbaum S, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017 Oct;16(3):287-298 [FREE Full text] [CrossRef] [Medline]
  61. Fu Z, Burger H, Arjadi R, Bockting CL. Effectiveness of digital psychological interventions for mental health problems in low-income and middle-income countries: a systematic review and meta-analysis. Lancet Psychiatry 2020 Oct;7(10):851-864 [FREE Full text] [CrossRef] [Medline]
  62. Grist R, Croker A, Denne M, Stallard P. Technology delivered interventions for depression and anxiety in children and adolescents: a systematic review and meta-analysis. Clin Child Fam Psychol Rev 2019 Jun;22(2):147-171 [FREE Full text] [CrossRef] [Medline]
  63. Harrer M, Adam SH, Baumeister H, Cuijpers P, Karyotaki E, Auerbach RP, et al. Internet interventions for mental health in university students: a systematic review and meta-analysis. Int J Methods Psychiatr Res 2019 Jun;28(2):e1759 [FREE Full text] [CrossRef] [Medline]
  64. Kampmann IL, Emmelkamp PM, Morina N. Meta-analysis of technology-assisted interventions for social anxiety disorder. J Anxiety Disord 2016 Aug;42:71-84. [CrossRef] [Medline]
  65. Kuester A, Niemeyer H, Knaevelsrud C. Internet-based interventions for posttraumatic stress: a meta-analysis of randomized controlled trials. Clin Psychol Rev 2016 Feb;43:1-16. [CrossRef] [Medline]
  66. Li J, Theng YL, Foo S. Game-based digital interventions for depression therapy: a systematic review and meta-analysis. Cyberpsychol Behav Soc Netw 2014 Aug;17(8):519-527 [FREE Full text] [CrossRef] [Medline]
  67. Sijbrandij M, Kunovski I, Cuijpers P. Effectiveness of internet-delivered cognitive behavioral therapy for posttraumatic stress disorder: a systematic review and meta-analysis. Depress Anxiety 2016 Sep;33(9):783-791. [CrossRef] [Medline]
  68. Simmonds-Buckley M, Bennion MR, Kellett S, Millings A, Hardy GE, Moore RK. Acceptability and effectiveness of NHS-recommended e-therapies for depression, anxiety, and stress: meta-analysis. J Med Internet Res 2020 Oct 28;22(10):e17049 [FREE Full text] [CrossRef] [Medline]
  69. Spijkerman MP, Pots WT, Bohlmeijer ET. Effectiveness of online mindfulness-based interventions in improving mental health: a review and meta-analysis of randomised controlled trials. Clin Psychol Rev 2016 Apr;45:102-114 [FREE Full text] [CrossRef] [Medline]
  70. Stratton E, Lampit A, Choi I, Calvo RA, Harvey SB, Glozier N. Effectiveness of eHealth interventions for reducing mental health conditions in employees: a systematic review and meta-analysis. PLoS One 2017 Dec 21;12(12):e0189904 [FREE Full text] [CrossRef] [Medline]
  71. Twomey C, O'Reilly G, Bültmann O, Meyer B. Effectiveness of a tailored, integrative internet intervention (deprexis) for depression: updated meta-analysis. PLoS One 2020 Jan 30;15(1):e0228100 [FREE Full text] [CrossRef] [Medline]
  72. Versluis A, Verkuil B, Spinhoven P, van der Ploeg MM, Brosschot JF. Changing mental health and positive psychological well-being using ecological momentary interventions: a systematic review and meta-analysis. J Med Internet Res 2016 Jun 27;18(6):e152 [FREE Full text] [CrossRef] [Medline]
  73. Victorson DE, Sauer CM, Wolters L, Maletich C, Lukoff K, Sufrin N. Meta-analysis of technology-enabled mindfulness-based programs for negative affect and mindful awareness. Mindfulness (N Y) 2020 Apr 27;11(8):1884-1899. [CrossRef]
  74. McQuillin SD, Hagler MA, Werntz A, Rhodes JE. Paraprofessional youth mentoring: a framework for integrating youth mentoring with helping institutions and professions. Am J Community Psychol 2022 Mar;69(1-2):201-220. [CrossRef] [Medline]
  75. Meyer A, Wisniewski H, Torous J. Coaching to support mental health apps: exploratory narrative review. JMIR Hum Factors 2022 Mar 08;9(1):e28301 [FREE Full text] [CrossRef] [Medline]
  76. Werntz A, Silverman AL, Behan H, Patel SK, Beltzer M, Boukhechba MO, et al. Lessons learned: providing supportive accountability in an online anxiety intervention. Behav Ther 2022 May;53(3):492-507. [CrossRef] [Medline]
  77. Possemato K, Johnson EM, Emery JB, Wade M, Acosta MC, Marsch LA, et al. A pilot study comparing peer supported web-based CBT to self-managed web CBT for primary care veterans with PTSD and hazardous alcohol use. Psychiatr Rehabil J 2019 Sep;42(3):305-313 [FREE Full text] [CrossRef] [Medline]
  78. McKnight-Eily LR, Okoro CA, Strine TW, Verlenden J, Hollis ND, Njai R, et al. Racial and ethnic disparities in the prevalence of stress and worry, mental health conditions, and increased substance use among adults during the COVID-19 pandemic - United States, April and May 2020. MMWR Morb Mortal Wkly Rep 2021 Feb 05;70(5):162-166 [FREE Full text] [CrossRef] [Medline]

AMSTAR: Assessment of Multiple Systematic Reviews
CCA: corrected covered area
DMHI: digital mental health intervention
MHA: mental health mobile app
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
PTSD: posttraumatic stress disorder
RCT: randomized controlled trial

Edited by G Eysenbach; submitted 21.09.22; peer-reviewed by A Bauer, L Xu; comments to author 04.11.22; revised version received 23.11.22; accepted 11.01.23; published 06.02.23


©Alexandra Werntz, Selen Amado, Megyn Jasman, Ariel Ervin, Jean E Rhodes. Originally published in the Journal of Medical Internet Research, 06.02.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.