This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
Digital cognitive behavioral therapy (CBT) interventions can effectively prevent and treat depression and anxiety, but engagement with these programs is often low. Although extensive research has evaluated program use as a proxy for engagement, the extent to which users acquire knowledge and enact skills from these programs has been largely overlooked.
This study aimed to investigate how skill enactment and knowledge acquisition have been measured, evaluate postintervention changes in skill enactment and knowledge acquisition, examine whether mental health outcomes are associated with skill enactment or knowledge acquisition, and evaluate predictors of skill enactment and knowledge acquisition.
PubMed, PsycINFO, and Cochrane CENTRAL were searched for randomized controlled trials (RCTs) published between January 2000 and July 2022. We included RCTs comparing digital CBT with any comparison group in adolescents or adults (aged ≥12 years) for anxiety or depression. Eligible studies reported quantitative measures of skill enactment or knowledge acquisition. The methodological quality of the studies was assessed using the Joanna Briggs Institute Critical Appraisal Checklist for RCTs. Narrative synthesis was used to address the review questions.
In total, 43 papers were included, of which 29 (67%) reported a skill enactment measure and 15 (35%) reported a knowledge acquisition measure. Skill enactment was typically operationalized as the frequency of enacting skills using the completion of in-program activities (ie, formal skill enactment; 13/29, 45%) and intervention-specific (9/29, 31%) or standardized (8/29, 28%) questionnaires. Knowledge measures included tests of CBT knowledge (6/15, 40%) or mental health literacy (5/15, 33%) and self-report questionnaires (6/15, 40%). In total, 17 studies evaluated postintervention changes in skill enactment or knowledge acquisition, and findings were mostly significant for skill enactment (6/8, 75% of the studies), CBT knowledge (6/6, 100%), and mental health literacy (4/5, 80%). Of the 12 studies that evaluated the association between skill enactment and postintervention mental health outcomes, most reported ≥1 significant positive finding on standardized questionnaires (4/4, 100%), formal skill enactment indicators (5/7, 71%), or intervention-specific questionnaires (1/1, 100%). None of the 4 studies that evaluated the association between knowledge acquisition and primary mental health outcomes reported significant results. A total of 13 studies investigated predictors of skill enactment; only type of guidance and improvements in psychological variables were associated with increased skill enactment in ≥2 analyses. Predictors of knowledge acquisition were evaluated in 2 studies.
Digital CBT for depression and anxiety can improve skill enactment and knowledge acquisition. However, only skill enactment appears to be associated with mental health outcomes, which may depend on the type of measure examined. Additional research is needed to understand what types and levels of skill enactment and knowledge acquisition are most relevant for outcomes and identify predictors of these constructs.
PROSPERO CRD42021275270; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=275270
Depression and anxiety are common mental disorders [
Despite the promise of DMHIs, limited or low engagement with these programs is common. A systematic review of self-guided interventions for depression (most of which were based on CBT) found that >80% of research trial participants failed to complete all intervention modules, and only approximately 40% completed half of all modules [
However, challenges remain regarding how to define and measure engagement with DMHIs [
Recognizing the need to improve how engagement with DMHIs is measured and reported, recent recommendations have proposed the use of multiple approaches [
The importance of skill enactment for positive outcomes has also been instantiated in several frameworks of engagement with face-to-face and digital health interventions [
A factor that has been suggested to be important for both skill enactment and mental health outcomes is the acquisition of knowledge about mental health and awareness of strategies to address symptoms (hereafter referred to as knowledge acquisition) [
Although there is robust evidence for the effectiveness of digital CBT and the logical and theoretical importance of knowledge acquisition and skill enactment in achieving these outcomes, little is known about whether users acquire knowledge or enact skills from these interventions. Consequently, it is unclear whether skill enactment and knowledge acquisition are important for mental health outcomes or which factors influence skill enactment and knowledge acquisition. Although several reviews have investigated the predictors and outcomes of engagement or adherence [
This systematic review builds on previous reviews by investigating skill enactment as a component of engagement with digital CBT interventions for depression and anxiety. Although we focused on skill enactment, the review adhered to a broader definition of engagement that includes initial uptake, ongoing use of a program, and enactment of skills in everyday life [
This review was reported according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [
We searched the Cochrane CENTRAL, PubMed, and PsycINFO databases for peer-reviewed articles published between January 1, 2000, and July 26, 2022. The start date was selected to coincide with the publication of RCTs related to the first digital interventions for depression and anxiety [
Inclusion and exclusion criteria were developed using the participants, intervention, comparator, outcome, and study design framework [
Studies targeting adolescent or adult samples with a mean age of ≥12 years were eligible for inclusion. Articles were excluded if the mean age of the sample was <12 years or if the study primarily targeted people with a physical health condition. Primary studies were not required to screen participants for the presence of elevated anxiety or depressive symptoms to be eligible for inclusion given evidence that prevention programs can result in symptom improvements among people who do not have clinical levels of depression or anxiety [
Articles were eligible for inclusion if they tested a stand-alone CBT intervention delivered via a digital platform that was designed to reduce or prevent symptoms of depression or anxiety. Interventions were classified as CBT if they were described as such by the study authors and included cognitive restructuring as a core component. Interventions could be delivered with or without guidance. Articles were excluded if the intervention (1) was not CBT (eg, behavioral activation, acceptance and commitment therapy, or cognitive behavioral stress management), (2) did not include depression or anxiety as the main intervention target (eg, a program that primarily focused on chronic pain but also included a depression outcome), (3) used technology but was delivered at a clinic or in the laboratory (ie, not a distal intervention), (4) was delivered as part of stepped care or as adjunctive therapy, or (5) was primarily delivered by a health professional in person or via videoconference or email (eg, the main intervention was in person, but an SMS text message component was included).
Eligible interventions could be compared with an active (eg, other intervention or attention placebo) or inactive (eg, waitlist or no-intervention) control group. Uncontrolled studies were excluded.
Articles were included if they reported a quantitative measure of skill enactment or knowledge acquisition. The selection of eligible measures for this review was based on definitions provided within existing engagement frameworks [
Regarding knowledge acquisition, articles were eligible for inclusion if they reported a measure of (1) actual learning via a knowledge test or (2) self-reported learning. For this review, measures of general mental health literacy (ie, measures targeting awareness of the causes, epidemiology, symptoms, diagnosis, and treatment of mental disorders [
Primary and secondary reports of RCTs were eligible for inclusion. Cluster and factorial designs were eligible for inclusion, as were pilot and feasibility trials. Non-RCTs and observational studies (eg, cross-sectional, cohort, and case-control designs) were excluded.
The search results were uploaded to EndNote (version 20; Clarivate Analytics), and duplicate records were removed automatically and through hand searching. Study selection was completed in 3 stages. At stage 1, titles and abstracts were screened by 1 of 2 reviewers (HJ or AT) and discussed with the last author (LF) to exclude irrelevant records. A third reviewer (GF) screened 10% of the abstracts to confirm that the review criteria had been applied consistently (percentage of agreement=99.6%; Cohen κ=0.97). At stage 2, full-text articles were uploaded to the Covidence systematic review software (Veritas Health Innovation) and coded by the first author (HJ) according to whether the studies (1) evaluated a CBT-based program and (2) reported a measure of skill enactment or knowledge acquisition (“yes,” “no,” or “unclear”). Relevant construct definitions and inclusion criteria were piloted with 50 full-text articles and refined through discussion within the review team before completion of this stage. A second reviewer screened 10% of the full-text articles to check for coder bias at stage 2, which did not result in the inclusion of any additional manuscripts (percentage of agreement=98.4%; Cohen κ=0.90). At stage 3, all articles coded as “yes” or “unclear” were referred to a third stage for double screening by 2 reviewers (HJ and GF) against all inclusion criteria. Discrepancies were resolved through discussion, and a third author (LF) was consulted if an agreement could not be reached.
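The agreement statistics reported above (percentage of agreement and Cohen κ) follow from a 2×2 cross-tabulation of the two reviewers' include/exclude decisions. A minimal sketch of that computation is below; the counts passed in are purely illustrative, not the review's actual screening data:

```python
def screening_agreement(n_both_incl, n_both_excl, n_only_a, n_only_b):
    """Percent agreement and Cohen's kappa for two reviewers'
    include/exclude decisions, given the four cells of a 2x2 table."""
    n = n_both_incl + n_both_excl + n_only_a + n_only_b
    # Observed agreement: proportion of records where both reviewers agree
    p_observed = (n_both_incl + n_both_excl) / n
    # Marginal probability that each reviewer votes "include"
    p_a_incl = (n_both_incl + n_only_a) / n
    p_b_incl = (n_both_incl + n_only_b) / n
    # Chance agreement: both include, or both exclude, by chance
    p_chance = p_a_incl * p_b_incl + (1 - p_a_incl) * (1 - p_b_incl)
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return p_observed, kappa

# Illustrative counts only (hypothetical 2000-record screening sample)
agree, kappa = screening_agreement(40, 1950, 5, 5)
```

Because κ discounts chance agreement, a sample with very few eligible records (as in abstract screening) can show near-perfect percent agreement while κ remains somewhat lower, which is why both statistics are reported.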
A data extraction form was developed in Microsoft Excel (Microsoft Corp) and pilot-tested on 5 papers. Data extraction was completed by the first author (HJ), and accuracy was confirmed by a second author (GF). The key data elements extracted included skill enactment and knowledge acquisition data to address the objectives of the review (described in the
The presence of guidance was categorized as guided (support related to treatment content was provided), unguided (no support related to treatment content provided), or supported (support was provided by educational staff for interventions delivered in educational settings, but this support was not related to treatment content). We included a
The methodological quality of the included studies was independently assessed by 2 reviewers (HJ and LF) using the Joanna Briggs Institute Critical Appraisal Checklist for RCTs [
The studies were not pooled for meta-analysis because of methodological diversity related to the measurement tools used to evaluate skill enactment and knowledge acquisition, as well as the use of different statistical approaches to determine the associations among skill enactment, knowledge acquisition, and mental health outcomes. Instead, descriptive narrative synthesis was conducted to address the review questions per our PROSPERO registration (CRD42021275270). All eligible measures of skill enactment and knowledge acquisition were recorded and tabulated, with tables ordered by symptoms or disorder targeted. In addition, we recorded any data on change in skill enactment or knowledge acquisition outcomes at postintervention measurement or follow-up (eg, between- or within-group comparisons or assessments of self-reported skill practice or learning), any analysis of the association between skill enactment or knowledge acquisition and primary or secondary mental health outcomes (eg, via correlation, regression, or mediation analyses), and any investigation of potential predictors of skill enactment or knowledge acquisition (eg, via regression or between-group analyses).
Definitions provided in the literature on CBT skill use [
Significant and nonsignificant findings were also summarized to examine the association between mental health outcomes and skill enactment or knowledge acquisition (review question 3) and investigate predictors of skill enactment or knowledge acquisition (review question 4). Predictor variables could include participant characteristics (eg, age and sex), disease-specific effects (eg, baseline symptom levels), intervention characteristics (eg, presence of guidance), and components of intervention engagement (eg, module completion). Previous systematic reviews of predictors of adherence or engagement [
A total of 27,822 records were retrieved from the database and forward citation searches. After removing duplicates, 20,281 titles and abstracts were screened. Full texts were retrieved for 781 articles, and 43 (5.5%) papers [
Flow diagram of study screening and inclusion. CBT: cognitive behavioral therapy; RCT: randomized controlled trial.
Study characteristics and outcomes of the review are reported in
A total of 10,078 participants were included in the 42 studies (ie, the number analyzed). Among the 41 studies that reported the number of participants by group assignment, 5881 participants received a stand-alone digital CBT intervention. The sample sizes ranged from 43 to 1236 (mean 239.95, SD 278.91; median 149.50). Most studies (30/42, 71%) recruited participants from the community, although 14% (6/42) recruited from educational settings, 10% (4/42) recruited from health care settings, and 5% (2/42) recruited from occupational settings. Most studies (36/42, 86%) were conducted with adults, although 10% (4/42) included adolescents, and 5% (2/42) included young adults. Among the 31 studies that reported the mean age of the total sample, the mean participant age ranged from 15.00 to 51.60 years (mean 31.90, SD 8.21 years). The proportion of female participants in the study samples ranged from 16.1% to 100% (median 76.59%), and 17% (7/42) of the studies included only female or pregnant participants, all of which focused on maternal or perinatal mental health.
Studies were categorized according to the symptoms or disorders targeted, as specified by the study authors. The most common conditions or symptoms targeted were depression (14/42, 33%), social anxiety (9/42, 21%), or depression and anxiety (5/42, 12%). Other symptoms or disorders targeted included panic disorder and agoraphobia (3/42, 7%), postnatal depression (3/42, 7%), perinatal anxiety and depression (2/42, 5%), and anxiety (1/42, 2%). A total of 12% (5/42) of the interventions targeted multiple mental health concerns. Almost half (19/42, 45%) of the trials focused on symptom reduction (ie, the study screened for and targeted individuals with elevated symptom levels not yet meeting diagnostic criteria), 38% (16/42) focused on treatment (ie, the study screened for and targeted individuals meeting diagnostic criteria for a depressive or anxiety disorder), and 17% (7/42) focused on prevention (ie, the study did not screen participants for elevated symptom levels). A total of 57 eligible interventions were tested in the 42 studies.
Most studies (33/42, 79%) evaluated internet-based interventions, with access primarily provided via a computer. The remaining studies tested smartphone apps (3/42, 7%), internet-based programs with an adjunct smartphone app (2/42, 5%), a computerized intervention (delivered via CD-ROM; 1/42, 2%), a conversational agent (1/42, 2%), a multiplatform intervention (1/42, 2%), and an internet-based intervention delivered via a smartphone app versus a computer (1/42, 2%).
A nearly equal number of studies evaluated only guided (19/42, 45%) or unguided (17/42, 40%) interventions, and 5% (2/42) of the studies tested supported interventions (self-guided interventions delivered in a supported environment such as a classroom). In addition, 5% (2/42) of the studies compared digital CBT delivered with and without guidance, 2% (1/42) compared digital CBT delivered with different types of guidance, and another study (1/42, 2%) tested unguided digital CBT compared with 2 types of guidance.
Intervention length ranged from 2 weeks to 4 months (mean 8.21, SD 3.78; median 8.00 weeks). Among the 34 studies that specified the number of modules or sessions, the number of core modules ranged from 3 to 17 (mean 7.21, SD 2.46; median 7.50).
The number of studies that satisfied each of the quality items is shown in
Number (and percentage) of the included studies meeting the criteria on the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Randomized Controlled Trials (n=42).
Item number | JBI checklist item | Studies, n (%) |
1 | True randomization | 31 (74) |
2 | Allocation concealment | 33 (79) |
3 | Treatment groups similar at baselinea | 36 (86) |
4 | Participants blind to treatment assignment | 0 (0) |
5 | Those delivering the intervention blind to treatment assignmentb | 15 (36) |
6 | Outcome assessors blind to treatment assignment | 0 (0) |
7 | Treatment groups treated identically | 42 (100) |
8 | Follow-up complete or differences between groups adequately described and analyzed | 33 (79) |
9 | Participants analyzed in the groups to which they were randomized | 35 (83) |
10 | Outcomes measured in the same way for treatment groups | 42 (100) |
11 | Outcomes measured reliably | 22 (52) |
12 | Appropriate statistical analysis | 39 (93) |
13 | Appropriate trial design and any deviations accounted for | 42 (100) |
aIn 26% (11/42) of the studies, observed differences between treatment groups were accounted for in the analyses.
bStudies testing unguided programs wherein treatment was delivered entirely on the web met this criterion by default.
Of the 43 papers, 29 (67%) reported ≥1 measure of skill enactment [
Of the 29 studies reporting a measure of skill enactment, 26 (90%) were conducted with adults, 2 (7%) were conducted with adolescents, and 1 (3%) was conducted with young adults. The studies evaluated guided (15/29, 52%), unguided (10/29, 34%), or supported (2/29, 7%) interventions, and 7% (2/29) compared ≥1 guided and unguided intervention.
Of the 15 studies reporting a measure of knowledge acquisition, most (n=11, 73%) were conducted with adults, although 3 (20%) included adolescent samples, and 1 (7%) included a young adult sample. The studies evaluated guided (5/15, 33%), unguided (8/15, 53%), or supported (1/15, 7%) interventions, and 7% (1/15) compared guided and unguided interventions.
The methods used to measure skill enactment varied across the studies. These included formal skill enactment measures captured via log data (13/29, 45%) and standardized (8/29, 28%) or intervention-specific (9/29, 31%) questionnaire measures. Among the studies reporting an indicator of formal skill enactment, most targeted social anxiety (7/13, 54%) or panic disorder and agoraphobia (2/13, 15%). Studies reporting a standardized or intervention-specific questionnaire typically evaluated interventions for depression (7/17, 41%), perinatal mental health (4/17, 24%), or depression and anxiety (2/17, 12%). Regardless of the measure used, all but 1 of the studies (28/29, 97%) assessed skill enactment solely as frequency (eg, how often a person performed skills over the past week or the number of in-program activities completed) or time spent practicing skills; the remaining study also assessed the quality of skill enactment. See
Summary of methods used to investigate skill enactment in the included studies (n=29).
Type of skill enactment measure and measure of skill enactment | Times reporteda, n (%)
Formal skill enactment indicators |
  Tracked exposures | 7 (24)
  Cognitive restructuring exercises | 6 (21)
  Anxiety diaries | 4 (14)
  Activity planning | 2 (7)
  Behavioral experiments | 2 (7)
  Attentional training exercises | 1 (3)
  Relaxation exercises | 2 (7)
  Global indicators | 4 (14)
Standardized questionnairesb |
  Behavioral activation skills | 6 (21)
  Cognitive skills | 1 (3)
  Cognitive and behavioral skills | 2 (7)
Intervention-specific questionnairesb |
  Time spent practicing skills (single item) | 5 (17)
  Frequency of practicing skills (single item) | 2 (7)
  Frequency of practicing specific skills (multi-item) | 3 (10)
  Successful use of skillsc | 1 (3)
aNumbers do not add up to 29 as some studies include more than one indicator of skill enactment.
bAll standardized and intervention-specific questionnaires relied on participant self-reports unless otherwise stated.
cOn the basis of coach reports.
Methods used to measure knowledge acquisition included objective tests (9/15, 60%) designed to measure declarative knowledge about CBT principles or mental health literacy, as well as questionnaire measures (6/15, 40%) to assess perceived learning or knowledge. Studies that reported a measure of knowledge acquisition typically targeted symptoms of depression (8/15, 53%) or anxiety and depression (2/15, 13%).
Summary of methods used to investigate knowledge acquisition in the included studies (n=15).
Type of knowledge acquisition measure and measure of knowledge acquisitiona | Times reportedb, n (%)
CBTc knowledge tests |
  Multiple-choice itemsd | 3 (20)
  True-or-false items | 2 (13)
  Multiple-choice + true-or-false itemsd | 1 (7)
Mental health literacy tests |
  Multiple-choice items | 1 (7)
  True-or-false items | 1 (7)
  Helpfulness ratings | 1 (7)
  Multiple-choice items + helpfulness ratings | 1 (7)
  True-or-false items + vignettes | 1 (7)
Questionnaire measures |
  Perceived learning (single item) | 4 (27)
  Perceived knowledge of specific treatment components (multi-item) | 2 (13)
aKnowledge tests and questionnaire measures were completed by the participants in all studies.
bNumbers do not add up to 15 as some studies include more than one measure of knowledge acquisition.
cCBT: cognitive behavioral therapy.
dIn addition to the total number of correct answers, 50% (3/6) of the studies reporting CBT knowledge tests reported weighted scores based on the level of certainty associated with a given response.
A total of 8 studies evaluated skill enactment from baseline to postintervention measurement, all of which used standardized questionnaire measures to assess the frequency of enacting skills. Of these 8 studies, 6 (75%) reported significant findings in favor of the intervention group at postintervention measurement [
A total of 10 studies evaluated knowledge acquisition from baseline to postintervention measurement. The results are grouped by type of knowledge measure.
A total of 6 studies on 10 eligible interventions examined whether CBT knowledge improved at postintervention measurement, and all (6/6, 100%) reported significant improvements across all eligible interventions using between-group [
A total of 5 studies on 7 eligible interventions examined whether mental health literacy improved at postintervention measurement. Of these 5 studies, 2 (40%) reported significant findings in favor of the intervention group at postintervention measurement across all eligible interventions [
A total of 2 studies examined whether self-reported knowledge improved at postintervention measurement, with 1 (50%) reporting significant findings in favor of the intervention group across all learning areas [
A total of 12 studies evaluated the relationship between skill enactment and postintervention mental health outcomes, of which most (n=7, 58%) examined indicators of formal skill enactment and all but 2 (17%) were conducted with adult samples. The results in the following sections are grouped by type of skill enactment measure.
Four studies examined whether frequency of skill enactment mediated the effect of the intervention on postintervention mental health outcomes using standardized questionnaire measures. Of these 4 studies, 3 (75%) found that skill enactment significantly mediated improvements in mental health outcomes at postintervention measurement [
A total of 7 studies on 11 eligible interventions investigated the association between indicators of formal skill enactment and mental health outcomes at postintervention measurement. As some studies included multiple analyses (eg, multiple skill enactment measures investigated or >1 eligible intervention evaluated), results are organized around the specific measure used to predict mental health outcomes and summarized in terms of the number of analyses. We focus on indicators showing significant results in ≥2 analyses, although
Summary of formal skill enactment indicators used to predict postintervention mental health outcomes (n=7).
Formal skill enactment indicator | Studiesa, n/N (%) | Positive analysesb, n/N (%) |
Tracked exposures | 3/3 (100) | 4/5 (80) |
Global indicator | 2/3 (67) | 2/5 (40) |
Cognitive restructuring exercisesc | 1/4 (25) | 1/6 (17) |
Anxiety diaries | 1/2 (50) | 1/4 (25) |
Activity planning | 0/1 (0) | 0/1 (0) |
Relaxation tools | 0/2 (0) | 0/3 (0) |
aNumber of studies reporting ≥1 significant positive associations between a formal skill enactment indicator and mental health outcomes over the total number of studies investigating that indicator.
bNumber of analyses reporting a significant positive association between a formal skill enactment indicator and mental health outcomes over the total number of analyses.
cOne analysis showed a negative association between cognitive restructuring exercises and improvement in stress at postintervention measurement, although this indicator was associated with significant improvements in anxiety and stress at follow-up.
Overall, at least one positive finding was reported in 71% (5/7) of the studies. However, only the number of exposure exercises completed (4/5, 80% of analyses in 3/3, 100% of the studies) and global indicators (2/5, 40% of analyses in 2/3, 67% of the studies) were positively associated with improvement in mental health outcomes at postintervention measurement in ≥2 analyses. At least 1 significant positive finding was reported in each of the 4 studies evaluating improvements in social anxiety symptoms [
A total of 1 study examined whether frequency of enacting intervention-specific skills was associated with improvement in mental health outcomes at postintervention measurement. Results were largely nonsignificant; skill enactment was associated with improvements in support-seeking coping but not in depression, anxiety, well-being, or emotion regulation [
A total of 4 studies evaluated the association between knowledge acquisition and mental health outcomes at postintervention measurement. The results in the following sections are grouped by type of knowledge measure.
A total of 3 studies on 6 eligible interventions examined whether CBT knowledge acquisition was associated with improvement in mental health outcomes at postintervention measurement. None of the studies reported significant findings for the primary outcomes of depression (n=2), anxiety (n=1), or social anxiety (n=1) [
A total of 1 study on 2 eligible interventions investigated whether improvements in mental health literacy were associated with the primary outcomes of depression, anxiety, and mental well-being at postintervention measurement, with no significant findings reported [
In total, 13 studies investigated potential predictors of frequency or time spent using skills, of which 6 (46%) examined formal skill enactment measures, 4 (31%) examined intervention-specific questionnaires, and 3 (23%) examined standardized questionnaire measures. The results in the following sections are grouped according to type of predictor and focus on significant findings. As with review question 3, the results are reported in terms of the number of studies and analyses.
In total, 2/13 (15%) studies examined whether factors related to intervention content were associated with time spent using skills (ie, CBT with or without exposure and CBT with or without mindfulness), and the results were not significant [
A total of 2/13 (15%) studies on 4 eligible interventions examined whether the presence of specific intervention features was associated with frequency of enacting skills, and there were mixed results. The delivery of an adjunct skills-based app during an internet-based intervention (compared with sequential delivery of the same app; 1/1, 100% of the analyses) [
A total of 2/13 (15%) studies examined whether the therapeutic approach was associated with frequency or time spent using skills (ie, cognitive restructuring vs self-compassion intervention and internet-based CBT vs internet-based exposure therapy), and the results were largely nonsignificant (1/5, 20% of the analyses) [
In total, 2/13 (15%) studies on 5 eligible interventions investigated whether the presence and type of guidance were associated with skill enactment frequency. Individual therapist guidance, compared with group-based guidance, was found to be positively associated with skill enactment (2/2, 100% of the analyses) [
A total of 2/13 (15%) studies examined whether psychological factors were associated with frequency of enacting skills, with both studies finding that improvements in skill enactment were associated with improvements in negative thinking (2/2, 100% of the analyses) and savoring (2/2, 100% of the analyses) [
A total of 2/13 (15%) studies investigated whether baseline symptoms predicted the frequency of skill enactment, with neither study reporting significant findings [
A total of 1/13 (8%) studies examined whether completion of a fixed number of modules was associated with skill enactment frequency [
A total of 2 studies examined predictors of knowledge acquisition. The results are grouped according to the type of predictor.
In total, 1/2 (50%) studies examined whether learning support integrated into a web-based program predicted CBT knowledge acquisition, and the results were significant (1/1, 100% of the analyses) [
A total of 1/2 (50%) studies investigated whether weekly therapist support was associated with improvements in CBT knowledge, and the results were not significant [
A total of 1/2 (50%) studies evaluated whether assignment to the active intervention condition was associated with higher levels of perceived learning relative to an attention control program, and the results were significant (1/1, 100% of the analyses) [
This review aimed to systematically examine the literature on skill enactment and knowledge acquisition in the context of digital CBT interventions for symptoms of depression or anxiety. In total, 43 papers (reporting on 42 independent trials) were included, of which 29 (67%) reported a measure of skill enactment and 15 (35%) reported a measure of knowledge acquisition.
Most of the research on engagement with digital CBT interventions for depression and anxiety has not measured skill enactment. Despite the use of broad inclusion criteria to identify skill enactment measures, only approximately 6.4% (29/456) of eligible papers reported a quantitative measure of skill enactment and were included in this review. In contrast, a previous review of adherence to manualized web-based interventions found that 85% of primary publications included information on program use [
This review identified some weaknesses in current methods of measuring skill enactment. Only standardized questionnaires of behavioral activation skills, single items measuring “global” skill enactment, and adherence measures derived from exposure diaries or cognitive restructuring exercises were examined in ≥5 studies. As digital CBT programs target a range of adaptive skills, the use of unidimensional measures may not sufficiently capture the multifaceted nature of these interventions or enable examination of the differential impact of enacting specific skills. A related issue is the relatively common use of automatically captured log data (ie, formal skill enactment measures) as a proxy for skill enactment (although we do not suggest that this was the intention of the study authors). Although these measures provide helpful information on the frequency of using specific intervention tools, they cannot capture the various ways in which people can engage with skills outside their interaction with a program. For example, people might schedule pleasant events or achievement activities on their phones or personal calendars rather than using the tools provided in the program itself. Therefore, it is important to implement measures that reflect the breadth of recommended skills and the various ways in which people can enact them.
The studies also varied in the methods used to report skill enactment data (eg, the number of formal exercises completed and percentage of participants able to regularly perform exercises), and nearly all studies (28/29, 97%) reported the frequency or amount of time spent practicing skills based on retrospective self-reports. These approaches overlook key aspects of skill enactment outlined in the literature on CBT and emotion regulation [
The number of studies that examined changes in skill enactment (n=8) or effects on mental health outcomes (n=12) was small and, again, limited to assessments of skill enactment frequency using questionnaire measures or log data. Nevertheless, some promising findings were reported. Most studies that evaluated changes in skill enactment (6/8, 75%) found that levels of skill enactment increased at postintervention measurement among participants exposed to interventions relative to the comparison groups. Estimated effect sizes were generally medium to large. Overall, these data suggest that interventions targeting depression are effective in improving the frequency of skill enactment. Interventions for postnatal depression or anxiety and depression combined may also be effective, but the findings were mixed [
In contrast, studies that addressed mental health outcomes in relation to formal skill enactment measures provided mixed evidence. Only the number of tracked exposures and global skill enactment indicators were positively associated with outcomes in ≥2 analyses, whereas other indicators (eg, cognitive restructuring and activity planning) were not found to be consistently related to outcomes or were addressed in only 1 analysis. This result is perhaps not surprising given the similarly mixed findings for comparable program use measures, such as number of diary entries, tools used, or activities [
Similar to the selection of skill enactment measures, there was considerable variation in the predictors and types of skill enactment studied. Only 2 predictors were investigated in more than 1 study. There was some evidence that participants used skills more often if they demonstrated improvements in negative thinking and savoring [
Only 3.3% (15/456) of eligible articles reported a quantitative measure of knowledge acquisition. Among these studies, objective tests (eg, multiple-choice or true-or-false tests and objectively scored helpfulness ratings) designed to measure declarative knowledge about CBT or mental health literacy comprised 60% (9/15) of the measures. Self-report questionnaires on perceived learning were also reported, although these measures were typically limited to single items. Regardless of the type of measure reported, more than half (8/15, 53%) of the included studies evaluated interventions for depression. A key limitation of existing approaches to measuring knowledge acquisition is that most studies (9/15, 60%) only considered declarative knowledge (ie, knowledge of facts and information) measured using recognition-based tasks (eg, multiple-choice questions). Other forms of learning and knowledge may also be important to consider, such as more implicit and procedural knowledge (ie, knowledge of how to perform a skill), as well as the ability to generalize and apply learning to novel situations [
Most studies (8/10, 80%) that evaluated changes in knowledge found that intervention group participants significantly improved their levels of knowledge at postintervention measurement. The studies that found at least one positive effect included all the studies (6/6, 100%) that used a test of CBT knowledge, 80% (4/5) of the studies evaluating mental health literacy, and 50% (1/2) of the studies evaluating perceived knowledge acquisition. Overall, these findings indicate that participants can learn and recognize novel information about mood and anxiety disorders and underlying therapeutic principles in digital CBT. These interventions may also be effective in improving perceived knowledge and learning, but the findings were mixed [
There was no evidence that knowledge acquisition, assessed using tests of CBT knowledge or mental health literacy, was associated with improved mental health outcomes at postintervention measurement. Thus, participants can learn the treatment content, but they do not necessarily benefit in terms of symptom reduction. Overall, this pattern of results is in line with studies evaluating digital CBT interventions targeting other common mental disorders such as eating disorders [
Only 2 studies examined potential predictors of knowledge acquisition. Significant predictors included the use of learning support strategies and assignment to the active intervention rather than the attention control group, whereas therapist guidance was nonsignificant [
This review highlighted several gaps in the current state of knowledge in this area. Overall, there was a lack of research that examined skill enactment and knowledge acquisition in the context of digital CBT. Very few studies (6/42, 14%) reported on skill enactment and knowledge acquisition among adolescents and young adults, and most studies (33/42, 79%) evaluated internet-based interventions accessed via a computer. Similar to other authors [
However, future research must go beyond simply measuring skill enactment and knowledge acquisition to also address measurement issues. A key limitation of the literature identified by this review is the heterogeneity in how knowledge acquisition, and especially skill enactment, was measured and analyzed. Heterogeneity was evident not only in the specific construct examined (eg, skill enactment frequency vs quality and CBT knowledge vs mental health literacy) but also in the mode of assessment and the way in which data were reported and analyzed. This heterogeneity hinders the pooling of data for quantitative synthesis and may be due in part to a lack of specific theoretical guidance or standards available to inform the selection, analysis, and reporting of skill enactment and knowledge acquisition measures in DMHIs. To facilitate consistency going forward, we recommend that definitions of skill enactment and knowledge acquisition in DMHIs, and the subsequent selection of measures, be theory-driven. For example, the literature on CBT skill use and emotion regulation provides clear conceptual definitions of skill enactment and its distinct components (eg, frequency, quantity, and quality) that can inform the selection of appropriate measures [
In addition, although the overall quality of the included studies was good, some methodological issues were evident, with most or all studies not meeting quality assessment criteria related to blinding of participants (0/42, 0%), outcome assessors (0/42, 0%), or those delivering treatment (15/42, 36%). This is an inherent problem with RCTs of psychological interventions [
This review has some limitations. The inclusion of an English-language restriction and single screening of titles, abstracts, and initial full texts may have increased the risk of selection bias. Although double screening of 10% of the records at each stage did not result in the inclusion of any additional papers, it is possible that some publications were excluded erroneously. It is also possible that some studies were excluded because of insufficient or unclear reporting of the study methods or results. The review was further limited to RCTs of stand-alone digital CBT interventions for depression and anxiety. Thus, our findings cannot be generalized to open-access settings, other mental health problems, stepped-care interventions, or psychotherapeutic treatments other than CBT. Nevertheless, the examination of knowledge acquisition and skill enactment is likely to be important in any skills training intervention, and this review can inform the development of future studies in this area. In addition, we could not conduct a quantitative synthesis of skill enactment and knowledge acquisition because of methodological diversity in the included studies, meaning that we could not estimate the average effect of skill enactment or knowledge acquisition on depression and anxiety outcomes, nor could we examine the factors that modify these effects. Future meta-analytic studies will be essential to understand therapeutic mechanisms in digital CBT and advance future interventions but will require greater methodological consistency.
In digital interventions for mental health problems, it is essential to consider engagement as a concept that extends beyond program use to encompass actions that are implemented outside initial engagement with an intervention. Given that most studies on engagement with DMHIs focus on program use, this review addresses a substantial gap by focusing on skill enactment and knowledge acquisition during or following program use. This review demonstrated that digital CBT interventions for depression and anxiety appear to be effective in improving skill enactment frequency; levels of CBT knowledge; and, to a lesser extent, mental health literacy. However, only skill enactment frequency was found to be associated with positive mental health outcomes. Few studies have investigated predictors of skill enactment and knowledge acquisition, and those that have done so have mostly been limited to investigations of intervention-related factors and reported null results. This review calls for a more systematic and theory-based approach to studying the role of skill enactment and knowledge acquisition in DMHIs for depression and anxiety. The findings of this review can inform the development and selection of skill enactment and knowledge acquisition measures and promote the inclusion of these types of measures in future studies evaluating DMHIs.
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.
Search strategies for PubMed, PsycINFO, and Cochrane CENTRAL.
Results of the review.
Quality assessment.
CBT: cognitive behavioral therapy
DMHI: digital mental health intervention
ITT: intention-to-treat
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
The authors would like to thank Ms Angelica Trias for assistance with the initial screening of the studies. HMJ is supported by an Australian Government Research Training Program Scholarship and Grace Groom Memorial Scholarship. LMF is supported by an Australian Research Council Discovery Early Career Researcher Award (ARC DECRA) DE190101382. PJB is supported by National Health and Medical Research Council (NHMRC) Fellowship 1158707. ALC is supported by NHMRC Fellowship 1173146. The funders were not involved in the design or conduct of this research.
HMJ conceived and designed the research in consultation with LMF, PJB, ALC, and JLO. HMJ and GMF completed screening and data extraction, and HMJ and LMF conducted the risk-of-bias assessment. HMJ performed the synthesis with input from LMF, PJB, ALC, and JLO. HMJ prepared the first draft of the manuscript. All authors were involved in revising earlier versions of the manuscript and approved the final version.
PJB, ALC, and LMF are codevelopers of FitMindKit, which was evaluated in one of the included trials [