Background: Mindfulness-based mobile apps have become popular tools for enhancing well-being in today’s fast-paced world. Their ability to reduce geographical, financial, and social barriers makes them a promising alternative to traditional interventions.
Objective: As most available apps lack a theoretical framework, this review aimed to evaluate their effectiveness and assess their quality. We expected to find small sample sizes, high dropout rates, and small effect sizes in the included studies.
Methods: A systematic literature search was conducted using PsycInfo, PsycNet, PubMed, an institutional search engine (u:search), and Google Scholar. Randomized controlled trials assessing the impact of mobile mindfulness apps on well-being in nonclinical samples were included. Study selection, risk of bias (using version 2 of the Cochrane risk-of-bias tool for randomized trials), and reporting quality (using selected CONSORT [Consolidated Standards of Reporting Trials] statement criteria) assessments were performed by 2 authors independently and discussed until a consensus was reached.
Results: The 28 included randomized controlled trials differed in well-being measures, apps, and intervention duration (7 to 56 days; median duration 28 days). A wide range of sample sizes (12 to 2282; median 161) and attrition rates (0% to 84.7%; median rate 23.4%) were observed. Most studies (19/28, 68%) reported positive effects on at least one aspect of well-being. The effects were presented using different metrics but were primarily small or small to medium in size. Overall risk of bias was mostly high.
Conclusions: The wide range of sample sizes, attrition rates, and intervention periods and the variation in well-being measures and mobile apps contributed to the limited comparability of the studies. Although most studies (16/28, 57%) reported small or small to medium effects for at least one well-being outcome, this review demonstrates that the generalizability of the results is limited. Further research is needed to obtain more consistent conclusions regarding the impact of mindfulness-based mobile apps on well-being in nonclinical populations.
Mindfulness has its roots in Buddhism and has become a popular field of research in psychology [ ]. There has been an exponential rise in mindfulness meditation research over the past 2 decades [ ]. Although there is not yet a clear definition [ ], an operational working definition describes mindfulness as “the awareness that emerges through paying attention on purpose, in the present moment, and nonjudgmentally to the unfolding of experience moment by moment” [ ]. By helping individuals disengage from automatic thoughts, habits, and unhealthy behavior patterns, mindfulness may play an important role in fostering behavioral regulation [ ] and, thus, is believed to promote well-being [ ].
Well-being is a complex construct consisting of several factors. As with mindfulness, there is no consensus definition, so different well-being definitions include different descriptions, components, and determinants [ ]. In this review, we do not define well-being as the mere absence of illness but use the definition by Diener [ ], which incorporates positive and negative affect as well as life satisfaction and flourishing [ ]. Many studies have found that mindfulness-based interventions have a strong influence on well-being [ ] and are believed to improve mental health in clinical as well as nonclinical populations [ , ]. The term nonclinical refers to individuals in good health or whose symptoms are below the threshold for mental disorders [ , ].
To date, there is a great body of research supporting the benefits of widespread mindfulness-based interventions, such as mindfulness-based stress reduction and mindfulness-based cognitive therapy [ ]. These interventions are associated with several beneficial effects, including reductions in negative affect and increases in positive affect and life satisfaction [ , ]. Nevertheless, some limitations of these programs also need to be considered. Not only are they 8 weeks long, group based, and demanding of up to 45 minutes of daily home practice, but they are also expensive and may put individuals in uncomfortable situations by requiring them to expose themselves [ , ]. All these are potential barriers to their use.
A possible solution to reduce these barriers is offered by the booming field of mindfulness-based mobile apps (MBMAs) [, ], in which mindfulness content is mostly delivered via audio-guided meditation. MBMAs are more easily accessible, diverse, flexible, dynamic, discreet, and cheaper than conventional mindfulness-based interventions [ ]. Owing to their portable nature, interventions delivered via mobile phone can reduce geographical, social, and financial barriers [ ] and, thus, have the potential to reach a wide range of people [ ]. Even though the market for mindfulness meditation apps has grown exponentially in the last few years [ ], most such interventions lack an evidence-based theoretical framework [ ]. Only a small number have been scientifically shown to be effective in increasing well-being [ ].
As mental health problems have become one of the leading health issues of today’s population, often resulting in long-term disability in the Western world [, ], it is essential to offer convenient services to prevent the emergence of disorders (for a review of the effectiveness of prevention, see, eg, the study by Cuijpers et al [ ]). There are numerous reviews and meta-analyses that deal with different disorders in clinical samples [ - ], but only a small share of the extant literature focuses on the potential positive impact of MBMAs on well-being in nonclinical samples. Although a recent meta-analysis [ ] showed the importance of mobile-based mindfulness interventions for depressive, anxiety, and stress symptoms, this review fills the gap in research on well-being outcomes. Available studies suggest that short mindfulness meditations via smartphone apps can improve mental health in nonclinical populations [ ]. However, studies are often characterized by small sample sizes, sometimes because of high attrition rates (eg, Economides et al [ ] and Walsh et al [ ]). In addition, studies use different definitions and measures of well-being, making it hard to compare the reported results. To our knowledge, no review to date has provided a comprehensive overview of the studies in this field of research.
Although smartphone apps may provide an easy and cheap alternative to traditional mindfulness programs, caution is advised as most available apps still lack evidence of effectiveness. Therefore, the primary goal of this systematic review was to provide an overview of the extant research on the impact of MBMAs on well-being in nonclinical samples. Furthermore, we aimed to critically examine the quality of reporting and risk of bias in this field of research by applying version 2 of the Cochrane risk-of-bias tool for randomized trials (RoB 2) [ ] and a selection of CONSORT (Consolidated Standards of Reporting Trials) statement [ ] criteria. We hypothesized that extant studies were characterized by small sample sizes (hypothesis 1) and high attrition rates (hypothesis 2) and expected to find only small effect sizes (hypothesis 3) [ ] regarding well-being.
The systematic literature search used the publicly available databases PsycInfo, PsycNet, and PubMed. In addition to these databases, the search engine Google Scholar and an institutional search engine at the University of Vienna, u:search, were used. The following search terms were used for all databases on February 28, 2023: “mindful*” AND “well-being” OR “wellbeing” OR “well being” AND “rct” OR “randomized controlled trial” OR “randomise* control* trial” AND “app” OR “mobile app” OR “apps” OR “mobile device applications” OR “mobile apps” OR “smartphone.” The term “mindful*” was complemented with “mindfulness” in u:search as the search string with the former term retrieved no studies there. As several relevant papers did not include the terms “mindfulness” or “well-being” in the title, it was also decided to conduct searches without these terms using the following alternative search string: “well-being” OR “wellbeing” OR “well being” AND “rct” OR “randomized controlled trial” OR “randomise* control* trial” AND “app” OR “mobile app” OR “mobile” OR “apps” OR “mobile device applications” OR “mobile apps” OR “smartphone” as well as “mindful*” AND “rct” OR “randomized controlled trial” OR “randomise* control* trial” AND “app” OR “mobile app” OR “mobile” OR “apps” OR “mobile device applications” OR “mobile apps” OR “smartphone.” We conducted 5 searches to increase the chances of finding newly published papers.
Following the advice of one of the reviewers, a further literature search was performed on May 6, 2023, using individually tailored search strategies for each database. To minimize possible selection bias, validated filters [- ] were used in PubMed and PsycInfo for eligible study designs (randomized controlled trials [RCTs]), and the outcome measure criterion (see the following section) was only applied during screening. The final search string for PubMed was “(mindful*) AND (app OR mobile app OR apps OR mobile device applications OR mobile apps OR smartphone) AND (randomized controlled trial [Publication Type] OR randomized[Title/Abstract] OR placebo[Title/Abstract]).” The final search term for PsycInfo was “(mindful*) AND (app OR mobile app OR apps OR mobile device applications OR mobile apps OR smartphone) AND (TX double-blind OR TX random: assigned OR TX control).” No validated filter was available for PsycNet, which is why both the study design criterion and the outcome measure criterion were applied only during screening to minimize selection bias. The final search term for PsycNet was “(mindful*) AND (app OR mobile app OR apps OR mobile device applications OR mobile apps OR smartphone).”
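For illustration only (this helper is not part of the review’s methods), the boolean structure shared by these search strings — OR-combined synonym groups joined by AND, with multiword terms quoted — can be sketched as follows; the function name and term lists simply mirror the strings above.

```python
# Illustrative sketch (not part of the review's methods): assembling a
# database query from OR-groups of synonyms joined by AND.

def build_query(*groups):
    """Quote multiword terms, OR-join each group, then AND-join the groups."""
    def fmt(term):
        return f'"{term}"' if " " in term else term
    return " AND ".join(
        "(" + " OR ".join(fmt(t) for t in group) + ")" for group in groups
    )

mindfulness = ["mindful*"]
design = ["rct", "randomized controlled trial", "randomise* control* trial"]
apps = ["app", "mobile app", "apps", "mobile device applications",
        "mobile apps", "smartphone"]

query = build_query(mindfulness, design, apps)
print(query)
```

Each group contributes one parenthesized OR-clause, so swapping or dropping a group (as done for the alternative search strings) only changes the corresponding clause.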
We searched for studies in English and German, and only papers in English turned out to meet the inclusion criteria (see the following section). All relevant studies were included independent of their publication date. In total, 2 studies were retrieved by searching the reference lists of other relevant papers and were also included as they met all the inclusion criteria.
Selection of Studies
The searches in the databases, as well as the screening of titles, abstracts, and full articles, were conducted by 2 authors independently. After removing duplicates, the titles and abstracts of all the remaining studies were screened for relevance, and studies that did not meet the inclusion criteria were discarded. The predefined inclusion and exclusion criteria are presented in the textbox below. The remaining papers were downloaded and carefully reviewed. Both authors independently assessed the eligibility of the studies using the predefined inclusion and exclusion criteria. Uncertainties or disagreements were discussed between the 2 authors and with the third author of this study until a consensus was reached.
Predefined inclusion and exclusion criteria.
Scientific field of interest

- Inclusion criteria: mindfulness
- Exclusion criteria: no explicit investigation of mindfulness

Study design

- Inclusion criteria: randomized controlled trial
- Exclusion criteria: no randomization, fewer than 2 study arms, no control condition, and no specific manipulation of variables

Outcome measure

- Inclusion criteria: well-being (general well-being, positive affect, negative affect, life satisfaction, and flourishing)
- Exclusion criteria: no standardized assessment of predefined aspects of well-being

How content was provided

- Inclusion criteria: smartphone app
- Exclusion criteria: in person, websites, group interventions, educational literature, and podcasts

Population

- Inclusion criteria: nonclinical
- Exclusion criteria: clinical

Language

- Inclusion criteria: English or German
- Exclusion criteria: every other language
Data Items, Quality Assessment, and Coding of Studies
The following information was coded from the included studies: population, sample size of the intervention and control groups, and attrition rate after randomization. Furthermore, we coded the apps used and control conditions, the number of sessions, the duration of the sessions in minutes, the total duration of the interventions, the well-being measures used, and the effect sizes reported. In total, 2 authors independently rated the risk of bias using the Cochrane RoB 2 and assessed the quality of reporting using a selection of the CONSORT statement [ ] criteria. The RoB 2 allows for risk of bias ratings in the following domains: randomization process, deviations from the intended interventions, missing outcome data, measurement of the outcome, selection of the reported result, and overall bias. Each domain contains signaling questions that were answered using the following response options: yes, probably yes, probably no, no, and no information. After the authors independently rated the single domains, the implemented algorithm was used to decide on the final risk of bias, the ratings being low risk, some concerns, or high risk. In the case of different results, assessments were discussed by the 2 raters and with the third author of this study until a consensus was reached.
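As a rough illustration of how domain ratings feed into the overall judgment, the following simplified sketch approximates the published RoB 2 overall-judgment rule. It is an assumption-laden simplification, not the tool’s implemented algorithm: the official tool derives each domain rating from the signaling questions and can also rate the overall risk as high when several domains raise some concerns.

```python
# Simplified, illustrative approximation of the RoB 2 overall judgment:
# any high-risk domain makes the overall rating high; otherwise any
# some-concerns domain yields "some concerns"; otherwise "low". The
# official tool can additionally escalate multiple some-concerns domains
# to "high", which this sketch omits.

def overall_risk_of_bias(domain_ratings):
    """domain_ratings: iterable of 'low', 'some concerns', or 'high'."""
    ratings = list(domain_ratings)
    if "high" in ratings:
        return "high"
    if "some concerns" in ratings:
        return "some concerns"
    return "low"
```

For example, a study rated low in four domains but high in selection of the reported result would receive an overall high-risk rating under this rule.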
From the CONSORT statement criteria, we chose the most informative and relevant points regarding the reporting quality of the included RCTs:

- identification as a randomized trial in the title
- structured summary of trial design, methods, results, and conclusions
- specific objectives or hypotheses
- eligibility criteria for participants
- the interventions for each group, with sufficient details to allow for replication, including how and when they were actually administered
- completely defined prespecified primary and secondary outcome measures, including how and when they were assessed
- how the sample size was determined
- method used to generate the random allocation sequence
- type of randomization and details of any restriction
- who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
- if done, who was blinded after assignment to interventions
- for each group, the number of participants who were randomly assigned, received intended treatment, and were analyzed for the primary outcome
- for each group, losses and exclusions after randomization, together with reasons
- a table showing baseline demographic and clinical characteristics for each group
- for each primary and secondary outcome, the results for each group, the estimated effect size, and its precision
- results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing prespecified from exploratory
- trial limitations, addressing sources of potential bias, imprecision, and multiplicity of analyses if relevant
- generalizability of the trial findings
- interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence
- registration number and name of trial registry
- where the full trial protocol could be accessed, if available
For the classification of the degree of fulfillment, 3 gradings were used. The term stated means that sufficient information was provided. Partly stated or partly given was used when information was only partially available. For example, Economides et al [ ] mentioned their outcome measures but did not differentiate between primary and secondary outcomes, fulfilling the associated CONSORT point only to some extent. In the case of missing information, the term not stated was used. In the rare cases of different results, assessments were discussed by the 2 raters and the third author of this study until a consensus was reached.
A total of 28 studies were included in this review. The study selection process is presented in a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart. A detailed overview of the included studies is provided in the table below.
| Study | Population | App (number of participants) | Control (number of participants) | Attrition rate after randomization until latest time of assessment (%) | Number of sessions and duration of sessions in minutes | Duration of intervention | Well-being scale | Reported effect sizes (95% CIs if applicable)a |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Bostock et al, 2019 | Employees | Headspace (128) | Waitlist (110) | 21.85b | 45; 10-20 | 8 weeks | WEMWBSc | Time×group interaction: Well-being: ηp2=0.037; Positive affect: ηp2=0.04 |
| Carissoli et al, 2017 | Pregnant women | BenEssereMamma (35) | Childbirth class (43) | —d | 20; free to choose | 4 weeks | PWBe | — |
| Champion et al, 2018 | General population | Headspace (38) | Waitlist (36) | 16.22f | 30; 10-20 | 30 days | SWLSg | Cohen d=0.60 (0.08-1.12) |
| Coelhoso et al, 2019 | Female hospital employees | Self-developed mindfulness app (250) | Self-developed app, monitoring of perceptions (240) | 54b,f | 36; 15 | 8 weeks | WHO-5h | Time×group interaction: ηp2=0.047 |
| Deady et al, 2022 | Employees | HeadGear (1128) | Mood monitoring app (1143) | 81.36f | 30; 5-10 | 30 days | WHO-5 | — |
| Economides et al, 2018 | General population | Headspace (mindfulness content; 87) | Headspace (audiobook; 82) | 56.88f | 10; 10 | 1 month | SPANEi | Cohen d=0.47 (−1.92 to 2.87) |
| Flett et al, 2018 | University students | Headspace (72) and Smiling Mind (63) | Evernote (75) | 17.7b; 8.57f | 10; 10 | 10 days | Flourishing Scale | After the intervention: Headspace: Hedges g=0.08; Smiling Mind: Hedges g=0.12. Follow-up: Headspace: Hedges g=0.01; Smiling Mind: Hedges g=−0.07 |
| Fuller-Tyszkiewicz et al, 2020 | Caregivers | StressLess (73) | StressMonitor (110) | 37.16f | Not reported | 5 weeks | PWIj | Within-group: Baseline–after the intervention: StressLess: Cohen d=0.135; StressMonitor: Cohen d=0.428 (subjective well-being worsened). After the intervention–follow-up: StressLess: Cohen d=0.621; Emotional well-being: Cohen d=0.742 |
| Gnanapragasam et al, 2023 | Health care workers | Foundations (502) | Waitlist (500) | 10.78f | 28; duration not reported | 4 weeks | SWEMWBSk | Cohen d=0.14 (0.05-0.22) |
| Hirshberg et al, 2021 | School employees | Healthy minds program (346) | Waitlist (320) | 13.06f | 10 lessons+14 guided meditations; 5-30 | 4 weeks | WHO-5 | After the intervention: Cohen d=0.42 (0.27-0.58). Follow-up: Cohen d=0.34 (0.19-0.49) |
| Howells et al, 2016 | General population | Headspace (97) | Catch notes (list-making app; 97) | 37.95; 37.63f | 10; 10 | 10 days | SWLS, Flourishing Scale, and PANASl | Time×group interaction: SWLS: ηp2=0.003; Flourishing: ηp2=0.006; Positive affect: ηp2=0.071; Negative affect: ηp2=0.010 |
| Keng et al, 2022 | Health care workers | Headspace (40) | Lumosity (40) | 1.25f | 21; 10 | 3 weeks | PWI | After the intervention: f2=0.03. Follow-up: f2=0.06 |
| Levin et al, 2022 | University students | Stop, Breathe, and Think (10) | Waitlist (13) | 30.43f | 28; 1-10 | 4 weeks | MHC-SFm | Hedges g=0.52 (90% CI −0.31 to 1.41) |
| Lindsay et al, 2018 | General population | Self-developed app (MAn; 58) and MOo (58) | Coping control (37) | 2b; 5.88f | 14; 20 | 2 weeks | State positive and negative affect (momentary and diary assessments) | Positive affect: MA versus MO: Hedges g=0.46; MA versus control: Hedges g=0.71; MO versus control: Hedges g=0.25. Momentary positive affect: MA versus MO: Hedges g=0.41; MA versus control: Hedges g=0.66; MO versus control: Hedges g=0.25. Within-group negative affect: MA: Cohen d=0.40; MO: Cohen d=0.36; Control: Cohen d=0.12. Within-group momentary negative affect: MA: Cohen d=0.38; MO: Cohen d=0.41; Control: Cohen d=0.24 |
| Mak et al, 2018 | General population | Living With Heart MBPp (739); Living With Heart SCPq (748) | Living With Heart cognitive behavioral psychoeducation program (795) | 84.71f | 28; 10-15 | 4 weeks | WHO-5 | Within-group after the intervention: MBP: Cohen d=0.31; SCP: Cohen d=0.40; Control: Cohen d=0.36. Within-group follow-up: MBP: Cohen d=0.51; SCP: Cohen d=0.40; Control: Cohen d=0.38 |
| Noone and Hogan, 2018 | University students | Headspace mindfulness meditation (43) | Headspace sham meditation (48) | 21.98f | 30; 10 | 6 weeks | PANAS and WEMWBS | — |
| Ponzo et al, 2020 | University students | BioBase (130) | Waitlist (132) | 53.1f | 42; 5 | 4 weeks | WEMWBS | Within-group after the intervention: BioBase: Cohen d=0.65; Waitlist: Cohen d=0.15. Follow-up: BioBase: Cohen d=1.16; Waitlist: Cohen d=0.38 |
| Robinson, 2018 | Community support workers | Headspace (8) | Waitlist (4) | 0f | 30; 10 | 30 days | PANAS | Within-group positive affect: Headspace: η2=0.181; Waitlist: η2=0.614. Negative affect: Headspace: η2=0.054; Waitlist: η2=0.54 |
| Schulte-Frankenfeld and Trautwein, 2021 | University students | Balloon (50) | Waitlist (49) | 35.35f | 57; 10 | 8 weeks | LSSr | ηp2=0.032. Time×group interaction: ηp2=0.034 |
| Smith et al, 2020 | Employees | Boosts (107) | Waitlist (108) | 21.4f | 28; 6-9 | 4 weeks | PANAS | Within-group: Boosts: β=.23 (−.36 to −.10); Waitlist: β=−.12 (−.23 to −.01) |
| Taylor et al, 2022 | Health care workers | Headspace (1095) | Moodzone (1087) | 48.53f | 30; 10 | 30 days | SWEMWBS | After the intervention: Hedges g=0.07. Follow-up: Hedges g=0.19 |
| Thabrew et al, 2022 | General population | Whitu: seven ways in seven days (45) | Waitlist (45) | 8.8f | Not reported | 4 weeks | SWEMWBS and WHO-5 | Within-group: WHO-5: f2=0.05; WEMWBS: f2=0.077 |
| Vu, 2018 (pilot study) | University students | Pacifica (full intervention; 21) | Pacifica Lite (20) | 17.07f | 7; duration not reported | 1 week | PANAS-SFs and PROMISt | PANAS: Negative affect: ηp2=0.12; Positive affect: ηp2=0.01; Global mental health: ηp2=0.08. PROMIS: Negative affect: Cohen d=−0.94; Positive affect: Cohen d=−0.60; Global mental health: Cohen d=0.38 |
| Vu, 2018 | University students | Pacifica (full intervention; 140) | Pacifica Lite (138); waitlist (142) | 24.76f | 14; duration not reported | 2 weeks | PANAS-SF and PROMIS | Negative affect: Pacifica versus waitlist: Cohen d=−0.23; Pacifica versus Lite: Cohen d=−0.30; Lite versus waitlist: Cohen d=0.07. Positive affect: Pacifica versus waitlist: Cohen d=−0.07; Pacifica versus Lite: Cohen d=−0.02. Lite versus waitlist: Cohen d=−0.04. Global mental health: Pacifica versus waitlist: Cohen d=0.2; Pacifica versus Lite: Cohen d=0.11; Lite versus waitlist: Cohen d=0.11 |
| Walsh et al, 2019 | University students | Wildflowers (45) | Mobile game “2048” (41) | 20.37f (postintervention measures); 26.85f (state measures) | 21; 10 | 3 weeks | PWBSu | Acceptance: r=0.15; Awareness: r=0.14; Openness: r=0.26; Alerting effect: r=−0.05; Orienting effect: r=−0.05; Conflict monitoring: r=0.15. Time×group interaction: Acceptance: r=0.21; Awareness: r=0.10; Openness: r=−0.05; Alerting effect: r=0.15; Orienting effect: r=0.15; Conflict monitoring: r=−0.24 |
| Xu et al, 2022 | Emergency department staff | Headspace (74) | Waitlist (74) | 35.14f | 28; 10 | 4 weeks | WEMWBS | Within-group after the intervention: Headspace: Cohen d=0.56; Waitlist: Cohen d=0.49. Follow-up: Headspace: Cohen d=0.48; Waitlist: Cohen d=0.51 |
| Yang et al, 2018 | Medical students | Headspace (45) | Waitlist (43) | Not reported | 30; 10-20 | 30 days | GWBSv | — |
| Yoon et al, 2022 | Stressed employees | InMind (22) | Waitlist (23) | 2.22f | 28; 20 | 4 weeks | COMOSWBw | Within-group baseline–after the intervention: Cohen d=0.54; Baseline–follow-up: Cohen d=0.51. After the intervention–follow-up: Cohen d=0.07; Group×time interaction: η2=0.090 |
aIf not stated otherwise in the table, effect sizes are for between-group comparisons.
bAttrition rates as reported in the study.
cWEMWBS: Warwick-Edinburgh Mental Well-being Scale.
ePWB: the Italian version of the psychological well-being questionnaire.
fAttrition rates as calculated by the current authors (based on the reported numbers in the studies).
gSWLS: Satisfaction with Life Scale.
hWHO-5: World Health Organization 5-item Well-Being Index.
iSPANE: Scale of Positive and Negative Experience.
jPWI: Personal Wellbeing Index.
kSWEMWBS: Warwick-Edinburgh Mental Well-being Scale–Short Version.
lPANAS: Positive and Negative Affect Schedule.
mMHC-SF: Mental Health Continuum–Short Form.
oMO: monitor only.
pMBP: mindfulness-based program.
qSCP: self-compassion program.
rLSS: Questionnaire for the Assessment of Happiness (Lebensglückskala in German; the study was conducted with German-speaking participants).
sPANAS-SF: Positive and Negative Affect Schedule–Short-Form.
tPROMIS: Patient-Reported Outcome Measurement Information System.
uPWBS: Psychological Wellbeing Scale.
vGWBS: General Well-Being Schedule.
wCOMOSWB: Concise Measure of Subjective Well-Being.
The included studies provided evidence for 18 different apps. In total, 39% (11/28) of the studies used Headspace, which was thus the single most used app. Other studies (17/28, 61%) used the following apps: HeadGear [ ]; Smiling Mind [ ]; Healthy Minds program [ ]; Stop, Breathe, and Think [ ]; Living With Heart [ ]; Balloon [ ]; Pacifica [ ]; Wildflowers [ ]; BioBase [ ]; Whitu [ ]; Foundations [ ]; BenEssereMamma [ ]; StressLess [ ]; Boosts [ ]; InMind [ ]; and 2 self-developed apps [ , ] (ie, apps that were programmed for the study and are not commercially available in app stores). A total of 11% (3/28) of the studies investigated >1 intervention [ , , ]. Although most apps conveyed mindfulness content solely through audio-guided meditations (eg, Economides et al [ ]), including exercises such as the body scan (eg, Mak et al [ ]), breathing techniques, or the practice of nonjudgment of emotions (eg, Flett et al [ ]) with the common goal of grounding awareness in the present moment [ ], others also implemented educational audio or video lessons aiming to explain the rationale of mindfulness (eg, Bostock et al [ ] and Champion et al [ ]). The intervention periods lasted from 1 week to a maximum of 8 weeks. The number of sessions to be completed by the participants ranged from 7 to 57, with a duration ranging from 1 to 30 minutes. In total, 4% (1/28) of the studies did not state instructions on the number of sessions to be completed or their duration [ ].
For the assessment of well-being, various scales were used. The most common was the World Health Organization 5-item Well-Being Index, measuring general well-being. Other scales used for the assessment of general well-being were the Warwick-Edinburgh Mental Well-being Scale [ ] as well as its short version [ ], the Psychological Wellbeing Scale [ ], the Patient-Reported Outcome Measurement Information System [ ], the Italian version of the Psychological Wellbeing questionnaire [ ], the Personal Wellbeing Index [ ], the Scale of Positive and Negative Experience [ ], and the Concise Measure of Subjective Well-Being [ ]. In addition, some studies (9/28, 32%) measured individual aspects of well-being using the Positive and Negative Affect Schedule [ ], Satisfaction with Life Scale [ ], Questionnaire for the Assessment of Happiness [ ], and Flourishing Scale [ ].
Concerning control conditions, 64% (18/28) of the studies did not implement any active control conditions, meaning that participants in the control groups did not complete any active interventions. Thus, these waitlist control conditions did not control for the digital placebo effect, which may lead to improvements in mental health merely because of downloading and using an app. Other studies used mobile apps that did not offer mindfulness meditations (eg, Flett et al [ ]), educational programs (eg, Mak et al [ ]), or mobile games [ ] as active control conditions.
Impact on Well-Being
The primary aim of this review was to investigate whether the use of MBMAs has an impact on well-being in nonclinical populations. In this regard, ambiguous results were found.
Substantial improvements in at least one aspect of well-being were reported in 68% (19/28) of the RCTs [, , , , , , - , , , , - , ], indicating that participating in the mobile mindfulness intervention enhanced well-being. Although 7% (2/28) of the studies found significant results for positive and negative affect [ , ], others demonstrated changes solely in individual aspects. For example, Howells et al [ ] reported increases in positive affect, whereas Vu [ ] reported decreases in negative affect. Using the Warwick-Edinburgh Mental Well-being Scale and its short version, 21% (6/28) of the studies reported significant improvements in mental well-being [ , , , , , ]. For satisfaction with life, contradictory results were reported. Only 4% (1/28) of the studies reported a significant time×group interaction [ ]; effects with other measures of satisfaction with life in other studies did not reach significance [ , ]. There were no significant changes in flourishing [ , ]. Finally, 18% (5/28) of the RCTs reported no significant changes in well-being outcomes at all [ , , , , ].
Sample Size, Attrition Rates, and Size of Reported Effects
We expected small sample sizes for our first hypothesis (hypothesis 1). The total sample size before attrition ranged from 12 to 2282 [ ], with a median of 161. Participants were mostly evenly allocated to the intervention and control groups. A total of 68% (19/28) of the studies stated how the sample size was determined, although not a single RCT used effect sizes of well-being measures in nonclinical populations in their calculations. Only 4% (1/28) of the studies [ ] based their calculation on effect sizes of well-being measures, but these were taken from a clinical context and concerned web-based interventions. Another study [ ] based its sample size calculation on meta-analytic results concerning web-based mindfulness-based interventions [ ]. However, it remained unclear whether this involved well-being as an outcome of interest. In addition, the meta-analysis examined web-based interventions rather than mobile apps.
High attrition rates were expected for our second hypothesis (hypothesis 2). We calculated dropout rates from 0% to 84.7% [ ], with a median of 23.4%. In total, 11% (3/28) of the studies showed discrepancies between the reported attrition rate and the rate calculated by the authors of this review using the numbers reported in these studies [ , , ]. A total of 79% (22/28) of the studies did not state their attrition rates. It is worth noting that the highest attrition rate was observed for the largest sample [ ]. We calculated a median of 106 for the total sample size after attrition, with a minimum of 12 and a maximum of 894 participants. In line with the RoB 2 [ ], a difference of ≥5% in the attrition rates of the study arms was considered substantial. This was the case in 50% (14/28) of the studies. In total, 39% (11/28) of the studies [ , , , , , , , , , , ] had substantially higher attrition in the intervention arm, whereas 11% (3/28) of the studies [ , , ] had higher attrition in the control condition. In total, 27% (3/11) of the studies in the former group and 67% (2/3) of the latter studies concerned the app Headspace. A table containing an overview of these differences in attrition rate is provided in Table S1 [ , , , , , - ].
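The attrition computations described above can be illustrated as follows. The participant counts in the example are hypothetical; the ≥5 percentage-point threshold for a substantial between-arm difference follows the RoB 2 criterion mentioned above.

```python
# Illustrative sketch with hypothetical counts: attrition rate derived from
# randomized vs analyzed participants, and a >=5 percentage-point
# between-arm difference treated as substantial (per the RoB 2 criterion).

def attrition_rate(randomized, analyzed):
    """Percentage of participants lost after randomization."""
    return 100.0 * (randomized - analyzed) / randomized

intervention = attrition_rate(randomized=128, analyzed=100)  # 21.875%
control = attrition_rate(randomized=110, analyzed=95)        # ~13.64%
substantial_difference = abs(intervention - control) >= 5    # True here
```

Applying this to each study arm from the reported flow numbers is how the review’s recalculated rates can differ from the rates the studies themselves reported.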
We expected to find small effect sizes regarding well-being outcomes for our third hypothesis (hypothesis 3). To account for the different effect size metrics reported, we summarized the effects according to widely used benchmarks rather than their actual values (the actual values are provided in the table above) to allow for better comparability. According to Cohen [ ], a Cohen d or Hedges g of 0.2 was classified as a small effect, 0.5 as a medium effect, and 0.8 as a large effect; a Cohen f2 of 0.02 as small, 0.15 as medium, and 0.35 as large; a partial η2 (ηp2) [ ] of 0.01 as small, 0.06 as medium, and 0.14 as large; and a Pearson r of 0.1 as small, 0.3 as medium, and 0.5 as large. For the standardized regression coefficient β, effects between .10 and .29 were classified as small, between .30 and .49 as medium, and ≥.50 as large. A total of 39% (11/28) of the included studies reported small effects [ , , , , , - , ], 29% (8/28) reported small- as well as medium-sized effects [ , , , , , , , ] depending on the measured aspect of well-being, and 14% (4/28) reported medium-sized effects [ , , , ]. Only 11% (3/28) of the studies reported large effects, namely on negative affect [ ], positive affect [ ], and general well-being [ ]. It is worth mentioning that the study reporting the largest effects [ ] also had the smallest sample size of all the included studies (n=12). A total of 14% (4/28) of the RCTs did not report any effect sizes at all [ , , , ].
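The benchmark-based grading described above amounts to a threshold lookup per metric; a minimal sketch follows (the metric keys and the `classify_effect` helper are our own illustrative labels, while the thresholds are those listed in the text).

```python
# Sketch of the benchmark classification used above (Cohen's conventions).
# Dictionary keys and classify_effect() are illustrative labels; the tuples
# hold the lower bounds for small/medium/large effects listed in the text.

BENCHMARKS = {
    "d": (0.2, 0.5, 0.8),          # Cohen d / Hedges g
    "f2": (0.02, 0.15, 0.35),      # Cohen f2
    "eta_p2": (0.01, 0.06, 0.14),  # partial eta squared
    "r": (0.1, 0.3, 0.5),          # Pearson r
    "beta": (0.10, 0.30, 0.50),    # standardized regression coefficient
}

def classify_effect(metric, value):
    small, medium, large = BENCHMARKS[metric]
    magnitude = abs(value)  # direction does not affect the size category
    if magnitude >= large:
        return "large"
    if magnitude >= medium:
        return "medium"
    if magnitude >= small:
        return "small"
    return "below small"

classify_effect("d", 0.47)        # small (below the 0.5 medium threshold)
classify_effect("eta_p2", 0.071)  # medium
```

Using absolute values means, for example, that a Cohen d of −0.94 on negative affect counts as a large effect.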
Risk of Bias and Quality of Reporting
Overall risk of bias was high for 86% (24/28) of the studies, and there were some concerns for the remaining 14% (4/28) of the studies (see  for a summary of overall and individual domain ratings; a more detailed overview is provided in Figure S1 in ). None of the studies received a low overall risk of bias rating. Of the individual domains, the randomization process appeared to be the least problematic, whereas the selection of the reported result appeared to be the most problematic.
Detailed results on the ratings of the selected CONSORT statement criteria are presented in Table S2 in. A summary is provided in , which presents the percentages of studies that met, partly met, or did not meet each criterion. The 2 criteria that were met with the highest frequency concerned the reporting of specific objectives or hypotheses and the number of participants who were randomly assigned, received the intended treatment, and were analyzed for the primary outcome. The 2 criteria that were partly met with the highest frequency concerned the reporting of detailed results for each group in all primary and secondary outcomes and losses and exclusions after randomization for each group. The 2 criteria that were not met with the highest frequency concerned the accessibility of full trial protocols and the reporting on who generated the random allocation sequence, who enrolled the participants, and who assigned the participants to the interventions. For 2 CONSORT statement criteria, namely generalizability and interpretation, a summary through simple percentages was not possible. Instead, provides an overview of the most salient trial limitations and their degree of fulfillment (all individual trial limitations are provided in Table S2 in ).
Trial limitations, addressing sources of potential bias; imprecision; and, if relevant, multiplicity of analyses, are shown in . Examples of limited generalizability (external validity and applicability) of the trial findings include the selection of very specific samples (eg, university students [ , , ]). In addition, small sample sizes contributed to limited generalizability, especially when combined with high attrition rates [ , , ]. In total, 7% (2/28) of the studies [ , ] interpreted (partially) nonsignificant results (P>.05) as significant; therefore, the positive effects of practicing mindfulness meditation on well-being were not fully supported by their own data. In addition, another 25% (7/28) of the studies [ , , , , , , ] made broad and general statements about the benefits of mindfulness meditation for well-being without clear reference to their exact results. However, their data indicated either only statistical trends (P>.05) or benefits for specific aspects of well-being (eg, positive affect) rather than the multiple aspects (ie, positive and negative affect, life satisfaction, and flourishing) that were often measured in tandem. This interpretation problem was exacerbated by the varying definitions of well-being used in the included studies. Most definitions overlapped to a large degree with the definition used in this review, with the scales used measuring at least one of the aforementioned 4 aspects of well-being. However, 4% (1/28) of the studies [ ] used both life satisfaction and perceived stress as indicators of well-being and interpreted the significant results observed for perceived stress (but not life satisfaction) as evidence of benefits for well-being. Nevertheless, it is also worth mentioning that most studies (22/28, 79%) did embed their findings adequately into the previous literature.
The aim of this systematic review was to examine the impact of mindfulness-based mobile interventions on well-being in nonclinical populations. We obtained evidence from 28 RCTs on interventions that mostly consisted of audio-guided meditations aimed at fostering present-moment awareness. A few studies (6/28, 21%) also delivered audio or video lessons explaining the rationale behind mindfulness. Most of the 28 RCTs (19/28, 68%) reported significant improvements in well-being, even though the effect sizes were mostly small to medium and the overall risk of bias was mostly high. In addition, a wide range of sample sizes (12 to 2282) and attrition rates (0% to 84.7%) was observed. Taking these results into consideration, the findings need to be interpreted with caution.
The highest attrition rate was observed in the largest sample, drastically reducing the number of effective participants in this study (349 vs 2282). The median total sample size after attrition was 106 (range 12 to 894). A medium-sized effect (ie, Cohen d or Hedges g of 0.5) requires a total sample size of at least approximately 100 participants to achieve 80% statistical power; a small effect (ie, Cohen d or Hedges g of 0.2) requires approximately 620 participants. Thus, our first hypothesis was partly confirmed, as only approximately half (14/28, 50%) of the studies appeared to be adequately powered to detect the medium-sized or smaller effects that were of relevance (see the following paragraphs for a more detailed discussion of the magnitude of the reported effects). In addition, it is worth noting that sample size calculations, if conducted, were not based on effect sizes regarding well-being measures in nonclinical populations or concerning mobile apps but, rather, were mostly related to depression (eg, Deady et al [ ]), anxiety [ ], and stress (eg, Economides et al [ ]). Meta-analyses of web-based mindfulness-based interventions have reported inconsistent effects on these outcomes, varying from small to large [ ]. Overall, this might have led researchers to expect larger effects on well-being as well. This, in turn, might have increased the risk of studies being underpowered to reliably detect the mostly smaller effects on the well-being outcomes of interest in this review.
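The cited totals (~100 participants for a medium effect, ~620 for a small effect) are consistent with the standard normal-approximation formula for a two-sample mean comparison, n per group ≈ 2((z₁₋α + z₁₋β)/d)², under a one-tailed α of .05 and 80% power. The following sketch reproduces them; the function name and the one-tailed assumption are ours, not the primary studies' calculations.

```python
# Normal-approximation sample size for a two-sample mean comparison,
# reproducing the approximate totals cited above (~100 for d = 0.5,
# ~620 for d = 0.2) under a one-tailed alpha of .05 and 80% power.
# Illustrative sketch only; the one-tailed assumption is ours.
from math import ceil
from statistics import NormalDist

def total_n(d, alpha=0.05, power=0.80):
    z = NormalDist()
    # n per group = 2 * ((z_{1-alpha} + z_{1-beta}) / d)^2
    n_per_group = 2 * ((z.inv_cdf(1 - alpha) + z.inv_cdf(power)) / d) ** 2
    return 2 * ceil(n_per_group)

print(total_n(0.5))  # 100 (medium effect)
print(total_n(0.2))  # 620 (small effect)
```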
In addition, attrition rates varied widely, from a low of 0% to a high of 84.7%, with a calculated median of 23.4%. Prior work has reported attrition rates of 50% to 60% for electronic health care research delivered via the internet and an average attrition rate of 43.4% in mobile health interventions delivered via smartphones [ ] (for face-to-face mindfulness interventions, attrition rates average approximately 19.1% [ ]). The higher maximum and lower median observed here partly confirm our second hypothesis. However, the study by Mak et al [ ] illustrated that large sample sizes may nonetheless go hand in hand with high attrition rates, which ultimately might lead to low study power. This further highlights the importance of treating the results of extant studies carefully. Attrition rates also differed substantially between study arms in half (14/28, 50%) of the studies, with 39% (11/28) of the studies reporting higher attrition in the intervention arm than in the control condition and 11% (3/28) reporting the reverse. The reasons for this differential attrition need to be investigated in more detail in future research, as it may indicate that the usability of the investigated apps needs improvement or that users need to be made aware of unintended effects that may prompt the discontinuation of app use or mindfulness practice. Context factors also need to be considered, as differential attrition for the most investigated app, Headspace, concerned the intervention group in some studies and the control group in others. Attrition is a problem for intervention research overall, but differential attrition seems to be less of a problem for common, non–mobile-based mindfulness interventions [ ].
Of the 28 RCTs included in this review, most (16/28, 57%) found small or small to medium effect sizes for well-being outcomes, which mostly confirms our third hypothesis of small effect sizes. In this context, it is worth mentioning that different aspects of the well-being construct were measured in these studies and that effects were also reported using different effect size metrics. Although most studies (21/28, 75%) assessed general well-being, others measured only individual aspects (positive or negative affect, life satisfaction, and flourishing). The differences in the underlying definitions of well-being as well as the variety of scales used to measure the construct might have contributed to broad and sometimes inexact interpretations. It is further worth mentioning that the nonclinical populations in many studies likely already had high baseline well-being, making large improvements hard to achieve and overall less likely. This may have contributed to the relatively small effect sizes reported in the included studies. However, in addition to the heterogeneity in the measured outcomes, other factors may limit the comparability of the studies in this review to a larger degree. Headspace was the most frequently investigated app (11/28, 39%); in total, however, 18 different apps were used. Differences in the number of sessions and their duration may further limit the comparability of the investigated apps and the generalizability of the results. Finally, the observed differential attrition rates in the included studies suggest that many of the reported effects may be biased.
Limitations and Future Research
This is the first systematic review of the impact of MBMAs on well-being in nonclinical populations using RCT data and focusing on the quality of reporting and risk of bias. However, some limitations need to be mentioned. First, we did not address study outcomes other than well-being, and the quality assessment was applied to measures of well-being only. Future studies may want to address other outcomes and investigate the effects of MBMAs on these as well. In addition, we recommend that researchers use consistent terminology for well-being in their studies. This would not only enhance clarity and readers’ understanding but also increase the comparability of results across studies. Second, trait mindfulness could serve as an effect moderator of mindfulness interventions, meaning that people with high mindfulness scores might benefit less than those with low scores. Thus, future research could also perform meta-analytic calculations using the data from the primary studies and investigate baseline trait mindfulness as a possible effect moderator. As the focus of this review lies on risk of bias and reporting quality, we did not aim to provide meta-analytic calculations ourselves. However, future meta-analyses would also be well advised to consider possible dose-response relationships with the number of sessions and their duration; this review provided evidence that the dose varies widely in extant studies.
This review provided evidence of relatively high attrition rates. Future research should investigate the possible reasons for attrition so that appropriate actions can be implemented to maintain higher study participation. Future RCTs as well as reviews and meta-analyses should also apply the Mobile App Rating Scale consistently across all apps used to enhance quality and comparability. Moreover, research needs to address which intervention elements might be the most effective or may boost the effects of other intervention elements. Most of the currently available studies include guided meditations but differ with regard to other intervention elements (eg, whether a theoretical explanation and rationale for the effects of mindfulness is provided). As the number of sessions and their duration vary widely, future studies should systematically test which intervention duration provides the most effective support for the promotion of well-being. Finally, using the criteria of the CONSORT statement allowed for a very detailed and extensive assessment of the quality of reporting of the included studies; this approach is therefore also recommended for future research.
Smartphones and mobile apps continue to gain popularity, and their use therefore has far-reaching consequences. This systematic review is consistent with previous studies showing positive but small effects of MBMAs on well-being. It provides another important step in the booming field of mindfulness research, striving to optimize the usability and quality of mobile mindfulness apps. This is especially important considering that most people today own a smartphone and are thus increasingly likely to seek help through mobile apps. Such apps have shown promise in preventing mental health issues; hence, this field of research is of high importance for a great number of people in today’s fast-paced world.
This systematic review showed that some mobile mindfulness interventions have a positive impact on well-being in nonclinical populations in RCT data. Nevertheless, there was large variation in sample sizes and attrition rates, the effects were predominantly of only small to medium size, and the overall risk of bias was mostly high. Assessment of the quality of reporting and risk of bias revealed a lack of a priori power calculations and active control conditions. The use of different well-being measures further limited the comparability of extant studies and the generalizability of their results. These findings emphasize that there is still a need for high-quality research on mobile apps, which are becoming ever more important in a world where smartphones are an essential component of everyday life. Even though mobile apps are easily accessible to a large segment of the population as well as cheap and discreet, more evidence is needed to reliably evaluate their potential for enhancing users’ well-being.
All data generated or analyzed during this study are included in this published article (and its supplementary information files).
Conflicts of Interest
Detailed information regarding the total sample size after attrition and attrition rates within study arms, the CONSORT (Consolidated Standards of Reporting Trials) statement reporting on study quality, and an overview of the risk of bias assessment. PDF File (Adobe PDF File), 448 KB
- Brown KW, Ryan RM. The benefits of being present: mindfulness and its role in psychological well-being. J Pers Soc Psychol 2003 Apr;84(4):822-848 [CrossRef] [Medline]
- Gu J, Strauss C, Bond R, Cavanagh K. How do mindfulness-based cognitive therapy and mindfulness-based stress reduction improve mental health and wellbeing? A systematic review and meta-analysis of mediation studies. Clin Psychol Rev 2015 Apr;37:1-12 [CrossRef] [Medline]
- Mascaro JS, Wehrmeyer K, Mahathre V, Darcher A. A longitudinal, randomized and controlled study of app-delivered mindfulness in the workplace. J Wellness 2020;2(1):1-9 [https://doi.org/10.18297/jwellness/vol2/iss1/4] [CrossRef]
- Jayewardene WP, Lohrmann DK, Erbe RG, Torabi MR. Effects of preventive online mindfulness interventions on stress and mindfulness: a meta-analysis of randomized controlled trials. Prev Med Rep 2016 Nov 14;5:150-159 [https://linkinghub.elsevier.com/retrieve/pii/S2211-3355(16)30146-2] [CrossRef] [Medline]
- Kabat-Zinn J. Mindfulness-based interventions in context: past, present, and future. Clin Psychol Sci 2006 May 11;10(2):144-156 [https://onlinelibrary.wiley.com/doi/full/10.1093/clipsy.bpg016] [CrossRef]
- Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol 2000 Jan;55(1):68-78 [CrossRef] [Medline]
- Brown K, Ryan R, Creswell J. Mindfulness: theoretical foundations and evidence for its salutary effects. Psychol Inq 2007;18(4):211-237 [https://psycnet.apa.org/record/2007-17560-001] [CrossRef]
- Seligman MEP. Authentic Happiness: Using the New Positive Psychology to Realize Your Potential for Lasting Fulfillment. New York, NY: Free Press; 2002.
- Simons G, Baldwin D. A critical review of the definition of 'wellbeing' for doctors and their patients in a post COVID-19 era. Int J Soc Psychiatry 2021 Dec;67(8):984-991 [https://journals.sagepub.com/doi/abs/10.1177/00207640211032259?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub 0pubmed] [CrossRef] [Medline]
- Diener E. Subjective well-being. Psychol Bull 1984;95(3):542-575 [https://doi.org/10.1037/0033-2909.95.3.542] [CrossRef]
- Diener E, Wirtz D, Tov W, Kim-Prieto C, Choi DW, Oishi S, et al. New well-being measures: short scales to assess flourishing and positive and negative feelings. Soc Indic Res 2010;97:143-156 [https://doi.org/10.1007/s11205-009-9493-y] [CrossRef]
- Eberth J, Sedlmeier P. The effects of mindfulness meditation: a meta-analysis. Mindfulness 2012 May 02;3:174-189 [https://doi.org/10.1007/s12671-012-0101-x] [CrossRef]
- Creswell J. Mindfulness interventions. Annu Rev Psychol 2017 Jan;68:491-516 [https://doi.org/10.1146/annurev-psych-042716-051139] [CrossRef]
- Hofmann SG, Sawyer AT, Witt AA, Oh D. The effect of mindfulness-based therapy on anxiety and depression: a meta-analytic review. J Consult Clin Psychol 2010 Apr;78(2):169-183 [https://europepmc.org/abstract/MED/20350028] [CrossRef] [Medline]
- Huppert F. Psychological well-being: evidence regarding its causes and consequences. Appl Psychol Health Well Being 2009;1(2):137-164 [https://doi.org/10.1111/j.1758-0854.2009.01008.x] [CrossRef]
- Mollayeva T, Thurairajah P, Burton K, Mollayeva S, Shapiro CM, Colantonio A. The Pittsburgh sleep quality index as a screening tool for sleep dysfunction in clinical and non-clinical samples: a systematic review and meta-analysis. Sleep Med Rev 2016 Feb;25:52-73 [CrossRef] [Medline]
- Kabat-Zinn J. An outpatient program in behavioral medicine for chronic pain patients based on the practice of mindfulness meditation: theoretical considerations and preliminary results. Gen Hosp Psychiatry 1982 Apr;4(1):33-47 [CrossRef] [Medline]
- Teasdale JD, Segal ZV, Williams JM, Ridgeway VA, Soulsby JM, Lau MA. Prevention of relapse/recurrence in major depression by mindfulness-based cognitive therapy. J Consult Clin Psychol 2000 Aug;68(4):615-623 [CrossRef] [Medline]
- Economides M, Martman J, Bell MJ, Sanderson B. Improvements in stress, affect, and irritability following brief use of a mindfulness-based smartphone app: a randomized controlled trial. Mindfulness (N Y) 2018;9(5):1584-1593 [CrossRef] [Medline]
- Fish J, Brimson J, Lynch S. Mindfulness interventions delivered by technology without facilitator involvement: what research exists and what are the clinical outcomes? Mindfulness (N Y) 2016;7(5):1011-1023 [https://europepmc.org/abstract/MED/27642370] [CrossRef] [Medline]
- Plaza I, Demarzo MM, Herrera-Mercadal P, García-Campayo J. Mindfulness-based mobile applications: literature review and analysis of current features. JMIR Mhealth Uhealth 2013 Nov 01;1(2):e24 [https://mhealth.jmir.org/2013/2/e24/] [CrossRef] [Medline]
- Howells A, Ivtzan I, Eiroa-Orosa FJ. Putting the ‘app’ in happiness: a randomised controlled trial of a smartphone-based mindfulness intervention to enhance wellbeing. J Happiness Stud 2016;17:163-185 [https://doi.org/10.1007/s10902-014-9589-1] [CrossRef]
- Cavanagh K, Strauss C, Forder L, Jones F. Can mindfulness and acceptance be learnt by self-help?: a systematic review and meta-analysis of mindfulness and acceptance-based self-help interventions. Clin Psychol Rev 2014 Mar;34(2):118-129 [https://core.ac.uk/reader/19701137?utm_source=linkout] [CrossRef] [Medline]
- Flett JA, Hayne H, Riordan BC, Thompson LM, Conner TS. Mobile mindfulness meditation: a randomised controlled trial of the effect of two popular apps on mental health. Mindfulness 2019;10:863-876 [https://doi.org/10.1007/s12671-018-1050-9] [CrossRef]
- Coelhoso CC, Tobo PR, Lacerda SS, Lima AH, Barrichello CR, Amaro Jr E, et al. A new mental health mobile app for well-being and stress reduction in working women: randomized controlled trial. J Med Internet Res 2019 Nov 07;21(11):e14269 [https://www.jmir.org/2019/11/e14269/] [CrossRef] [Medline]
- Harvey SB, Henderson M, Lelliott P, Hotopf M. Mental health and employment: much work still to be done. Br J Psychiatry 2009 Mar;194(3):201-203 [CrossRef] [Medline]
- Murray CJ, Vos T, Lozano R, Naghavi M, Flaxman AD, Michaud C, et al. Disability-adjusted life years (DALYs) for 291 diseases and injuries in 21 regions, 1990-2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2012 Dec 15;380(9859):2197-2223 [CrossRef] [Medline]
- Cuijpers P, Van Straten A, Smit F. Preventing the incidence of new cases of mental disorders: a meta-analytic review. J Nerv Ment Dis 2005 Feb;193(2):119-125 [CrossRef] [Medline]
- Gál É, Ștefan S, Cristea I. The efficacy of mindfulness meditation apps in enhancing users' well-being and mental health related outcomes: a meta-analysis of randomized controlled trials. J Affect Disord 2021 Jan 15;279:131-142 [CrossRef] [Medline]
- Josephine K, Josefine L, Philipp D, David E, Harald B. Internet- and mobile-based depression interventions for people with diagnosed depression: a systematic review and meta-analysis. J Affect Disord 2017 Dec 01;223:28-40 [CrossRef] [Medline]
- Firth J, Torous J, Nicholas J, Carney R, Rosenbaum S, Sarris J. Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord 2017 Aug 15;218:15-22 [https://linkinghub.elsevier.com/retrieve/pii/S0165-0327(17)30015-0] [CrossRef] [Medline]
- Tan ZY, Wong SH, Cheng LJ, Lau ST. Effectiveness of mobile-based mindfulness interventions in improving mindfulness skills and psychological outcomes for adults: a systematic review and meta-regression. Mindfulness 2022;13:2379-2395 [https://doi.org/10.1007/s12671-022-01962-z] [CrossRef]
- Walsh KM, Saab BJ, Farb NA. Effects of a mindfulness meditation app on subjective well-being: active randomized controlled trial and experience sampling study. JMIR Ment Health 2019 Jan 08;6(1):e10844 [https://mental.jmir.org/2019/1/e10844/] [CrossRef] [Medline]
- Sterne JA, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019 Aug 28;366:l4898 [https://eprints.whiterose.ac.uk/150579/] [CrossRef] [Medline]
- Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med 2010 Mar 24;7(3):e1000251 [https://dx.plos.org/10.1371/journal.pmed.1000251] [CrossRef] [Medline]
- Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd edition. New York, NY: Routledge; 1988.
- Haynes RB, McKibbon KA, Wilczynski NL, Walter SD, Werre SR, Hedges Team. Optimal search strategies for retrieving scientifically strong studies of treatment from Medline: analytical survey. BMJ 2005 May 21;330(7501):1179 [https://europepmc.org/abstract/MED/15894554] [CrossRef] [Medline]
- Hedges team Medline search filters. McMaster University. URL: https://hiruweb.mcmaster.ca/hkr/hedges/medline/ [accessed 2023-06-07]
- Hedges team PsycInfo search filters. McMaster University. URL: https://hiruweb.mcmaster.ca/hkr/hedges/psycinfo/ [accessed 2023-06-07]
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med 2021 Mar 29;18(3):e1003583 [https://dx.plos.org/10.1371/journal.pmed.1003583] [CrossRef] [Medline]
- Bostock S, Crosswell AD, Prather AA, Steptoe A. Mindfulness on-the-go: effects of a mindfulness meditation app on work stress and well-being. J Occup Health Psychol 2019 Feb;24(1):127-138 [https://europepmc.org/abstract/MED/29723001] [CrossRef] [Medline]
- Carissoli C, Villani D, Gasparri D, Riva G. Enhancing psychological wellbeing of women approaching childbirth: a controlled study with a mobile application. Annu Rev CyberTherapy Telemed 2017;15:45-50 [https://publires.unicatt.it/en/publications/enhancing-psychological-wellbeing-of-women-approaching-the-childb-6]
- Champion L, Economides M, Chandler C. The efficacy of a brief app-based mindfulness intervention on psychosocial outcomes in healthy adults: a pilot randomised controlled trial. PLoS One 2018 Dec 31;13(12):e0209482 [https://dx.plos.org/10.1371/journal.pone.0209482] [CrossRef] [Medline]
- Deady M, Glozier N, Calvo R, Johnston D, Mackinnon A, Milne D, et al. Preventing depression using a smartphone app: a randomized controlled trial. Psychol Med 2022 Feb;52(3):457-466 [CrossRef] [Medline]
- Fuller-Tyszkiewicz M, Richardson B, Little K, Teague S, Hartley-Clark L, Capic T, et al. Efficacy of a smartphone app intervention for reducing caregiver stress: randomized controlled trial. JMIR Ment Health 2020 Jul 24;7(7):e17541 [https://mental.jmir.org/2020/7/e17541/] [CrossRef] [Medline]
- Gnanapragasam SN, Tinch-Taylor R, Scott HR, Hegarty S, Souliou E, Bhundia R, et al. Multicentre, England-wide randomised controlled trial of the 'Foundations' smartphone application in improving mental health and well-being in a healthcare worker population. Br J Psychiatry 2023 Feb;222(2):58-66 [CrossRef] [Medline]
- Hirshberg MJ, Frye C, Dahl CJ, Riordan KM, Vack NJ, Sachs J, et al. A randomized controlled trial of a smartphone-based well-being training in public school system employees during the COVID-19 pandemic. J Educ Psychol 2022 Nov;114(8):1895-1911 [CrossRef] [Medline]
- Keng SL, Chin JW, Mammadova M, Teo I. Effects of mobile app-based mindfulness practice on healthcare workers: a randomized active controlled trial. Mindfulness (N Y) 2022;13(11):2691-2704 [https://europepmc.org/abstract/MED/36160038] [CrossRef] [Medline]
- Levin ME, Hicks ET, Krafft J. Pilot evaluation of the stop, breathe & think mindfulness app for student clients on a college counseling center waitlist. J Am Coll Health 2022 Jan;70(1):165-173 [CrossRef] [Medline]
- Lindsay EK, Chin B, Greco CM, Young S, Brown KW, Wright AG, et al. How mindfulness training promotes positive emotions: dismantling acceptance skills training in two randomized controlled trials. J Pers Soc Psychol 2018 Dec;115(6):944-973 [https://europepmc.org/abstract/MED/30550321] [CrossRef] [Medline]
- Mak WW, Tong AC, Yip SY, Lui WW, Chio FH, Chan AT, et al. Efficacy and moderation of mobile app-based programs for mindfulness-based training, self-compassion training, and cognitive behavioral psychoeducation on mental health: randomized controlled noninferiority trial. JMIR Ment Health 2018 Oct 11;5(4):e60 [https://mental.jmir.org/2018/4/e60/] [CrossRef] [Medline]
- Noone C, Hogan MJ. A randomised active-controlled trial to examine the effects of an online mindfulness intervention on executive control, critical thinking and key thinking dispositions in a university student sample. BMC Psychol 2018 Apr 05;6(1):13 [https://bmcpsychology.biomedcentral.com/articles/10.1186/s40359-018-0226-3] [CrossRef] [Medline]
- Ponzo S, Morelli D, Kawadler JM, Hemmings NR, Bird G, Plans D. Efficacy of the digital therapeutic mobile app BioBase to reduce stress and improve mental well-being among university students: randomized controlled trial. JMIR Mhealth Uhealth 2020 Apr 06;8(4):e17767 [https://mhealth.jmir.org/2020/4/e17767/] [CrossRef] [Medline]
- Robinson CM. Are you in the right headspace? Using a mindfulness-based mobile application as a wellbeing intervention in the workplace. University of Canterbury. 2018. URL: https://ir.canterbury.ac.nz/bitstream/handle/10092/15325/Robinson%2c%20Chelsea_MSc%20Thesis.pdf?sequence=1&isAllowed=y [accessed 2023-05-06]
- Schulte-Frankenfeld PM, Trautwein FM. App-based mindfulness meditation reduces perceived stress and improves self-regulation in working university students: a randomised controlled trial. Appl Psychol Health Well Being 2022 Nov;14(4):1151-1171 [https://europepmc.org/abstract/MED/34962055] [CrossRef] [Medline]
- Smith EN, Santoro E, Moraveji N, Susi M, Crum AJ. Integrating wearables in stress management interventions: promising evidence from a randomized trial. Int J Stress Manag 2020 May;27(2):172-182 [https://doi.org/10.1037/str0000137] [CrossRef]
- Taylor H, Cavanagh K, Field AP, Strauss C. Health care workers' need for headspace: findings from a multisite definitive randomized controlled trial of an unguided digital mindfulness-based self-help app to reduce healthcare worker stress. JMIR Mhealth Uhealth 2022 Aug 25;10(8):e31744 [https://mhealth.jmir.org/2022/8/e31744/] [CrossRef] [Medline]
- Thabrew H, Boggiss AL, Lim D, Schache K, Morunga E, Cao N, et al. Well-being app to support young people during the COVID-19 pandemic: randomised controlled trial. BMJ Open 2022 May 19;12(5):e058144 [https://bmjopen.bmj.com/lookup/pmidlookup?view=long&pmid=35589362] [CrossRef] [Medline]
- Vu AH. Randomized controlled trial of pacifica, a CBT and mindfulness-based app for stress, depression, and anxiety management with health monitoring. University of Minnesota. 2018. URL: https://hdl.handle.net/11299/216811 [accessed 2022-05-18]
- Xu HG, Eley R, Kynoch K, Tuckett A. Effects of mobile mindfulness on emergency department work stress: a randomised controlled trial. Emerg Med Australas 2022 Apr;34(2):176-185 [CrossRef] [Medline]
- Yang E, Schamber E, Meyer RM, Gold JI. Happier healers: randomized controlled trial of mobile mindfulness for stress management. J Altern Complement Med 2018 May;24(5):505-513 [CrossRef] [Medline]
- Yoon SI, Lee SI, Suh HW, Chung SY, Kim JW. Effects of mobile mindfulness training on mental health of employees: a CONSORT-compliant pilot randomized controlled trial. Medicine (Baltimore) 2022 Sep 02;101(35):e30260 [https://europepmc.org/abstract/MED/36107583] [CrossRef] [Medline]
- Home page. Headspace Inc. URL: https://www.headspace.com [accessed 2022-04-05]
- World Health Organization. Wellbeing measures in primary health care/the DepCare project: report on a WHO meeting. World Health Organization Regional Office for Europe. 1998. URL: https://apps.who.int/iris/handle/10665/349766 [accessed 2022-05-24]
- Tennant R, Hiller L, Fishwick R, Platt S, Joseph S, Weich S, et al. The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): development and UK validation. Health Qual Life Outcomes 2007 Nov 27;5:63 [https://hqlo.biomedcentral.com/articles/10.1186/1477-7525-5-63] [CrossRef] [Medline]
- Ng Fat L, Scholes S, Boniface S, Mindell J, Stewart-Brown S. Evaluating and establishing national norms for mental wellbeing using the short Warwick Edinburgh mental well-being scale (SWEMWBS): findings from the health survey for England. Qual Life Res 2017 May;26(5):1129-1144 [https://europepmc.org/abstract/MED/27853963] [CrossRef] [Medline]
- Ryff C, Keyes C. The structure of psychological well-being revisited. J Pers Soc Psychol 1995 Oct;69(4):719-727 [CrossRef] [Medline]
- Cella D, Yount S, Rothrock N, Gershon R, Cook K, Reeve B, PROMIS Cooperative Group. The Patient-Reported Outcomes Measurement Information System (PROMIS): progress of an NIH Roadmap cooperative group during its first two years. Med Care 2007 May;45(5 Suppl 1):S3-11 [https://europepmc.org/abstract/MED/17443116] [CrossRef] [Medline]
- Ruini C, Ottolini F, Rafanelli C, Ryff C, Fava GA. La validazione italiana delle psychological wellbeing scales (PWB). Riv Psichiatr 2003;38(3):117-130 [https://psycnet.apa.org/record/2003-99420-002]
- International Wellbeing Group. Personal wellbeing index. 5th edition. Australian Centre on Quality of Life. 2013. URL: http://www.acqol.com.au/instruments#measures [accessed 2023-05-06]
- Suh EM, Koo J. Concise measure of subjective wellbeing. APA PsycTests. 2011. URL: https://doi.org/10.1037/t67901-000 [accessed 2023-05-06]
- Watson D, Clark LA, Tellegen A. Development and validation of brief measures of positive and negative affect: the PANAS scales. J Pers Soc Psychol 1988 Jun;54(6):1063-1070 [CrossRef] [Medline]
- Diener E, Emmons RA, Larsen RJ, Griffin S. The satisfaction with life scale. J Pers Assess 1985 Feb;49(1):71-75 [CrossRef] [Medline]
- Ciccarello L, Reinhard MA. LGS-Lebensglückskala [LGS life happiness scale]. PsychArchives. 2022. URL: https://doi.org/10.23668/psycharchives.4503 [accessed 2022-05-18]
- Torous J, Firth J. The digital placebo effect: mobile mental health meets clinical psychiatry. Lancet Psychiatry 2016 Feb;3(2):100-102 [CrossRef] [Medline]
- Spijkerman MP, Pots WT, Bohlmeijer ET. Effectiveness of online mindfulness-based interventions in improving mental health: a review and meta-analysis of randomised controlled trials. Clin Psychol Rev 2016 Apr;45:102-114 [https://linkinghub.elsevier.com/retrieve/pii/S0272-7358(15)30062-3] [CrossRef] [Medline]
- Cohen J. Eta-squared and partial eta-squared in fixed factor ANOVA designs. Educ Psychol Meas 1973;33(1):107-112 [https://doi.org/10.1177/001316447303300111] [CrossRef]
- Hochheimer CJ, Sabo RT, Krist AH, Day T, Cyrus J, Woolf SH. Methods for evaluating respondent attrition in web-based surveys. J Med Internet Res 2016 Nov 22;18(11):e301 [https://www.jmir.org/2016/11/e301/] [CrossRef] [Medline]
- Linardon J, Fuller-Tyszkiewicz M. Attrition and adherence in smartphone-delivered interventions for mental health problems: a systematic and meta-analytic review. J Consult Clin Psychol 2020 Jan;88(1):1-13 [CrossRef] [Medline]
- Lam SU, Kirvin-Quamme A, Goldberg SB. Overall and differential attrition in mindfulness-based interventions: a meta-analysis. Mindfulness (N Y) 2022 Nov;13(11):2676-2690 [CrossRef] [Medline]
- Oberleiter S, Wainig H, Voracek M, Tran US. No effects of a brief mindfulness intervention on controlled motivation and amotivation, but effect moderation through trait mindfulness: a randomized controlled trial. Mindfulness (N Y) 2022;13(10):2434-2447 [https://europepmc.org/abstract/MED/36034413] [CrossRef] [Medline]
- Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth 2015 Mar 11;3(1):e27 [https://mhealth.jmir.org/2015/1/e27/] [CrossRef] [Medline]
CONSORT: Consolidated Standards of Reporting Trials
MBMA: mindfulness-based mobile app
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
RoB 2: version 2 of the Cochrane risk-of-bias tool for randomized trials
Edited by A Mavragani; submitted 28.11.22; peer-reviewed by C Bedard, HL Tam, J Zhang; comments to author 30.01.23; revised version received 16.03.23; accepted 21.06.23; published 04.08.23.
Copyright ©Katrin Schwartz, Fabienne Marie Ganster, Ulrich S Tran. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 04.08.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.