This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
Most people with mental health disorders fail to receive timely access to adequate care. US Hispanic/Latino individuals are particularly underrepresented in mental health care and are historically a very difficult population to recruit into clinical trials; however, they have increasing access to mobile technology, with over 75% owning a smartphone. This technology has the potential to overcome known barriers to accessing and utilizing traditional assessment and treatment approaches.
This study aimed to compare recruitment and engagement in a fully remote trial between individuals with depression who did and did not self-identify as Hispanic/Latino. A secondary aim was to assess treatment outcomes in these individuals using three different self-guided mobile apps: iPST (based on evidence-based therapeutic principles from problem-solving therapy, PST), Project Evolution (EVO; a cognitive training app based on cognitive neuroscience principles), and health tips (a health information app that served as an information control).
We recruited Spanish- and English-speaking participants through social media platforms, internet-based advertisements, and traditional fliers posted in select locations in each state across the United States. Assessment and self-guided treatment were conducted on each participant's smartphone or tablet. We enrolled 389 Hispanic/Latino and 637 non-Hispanic/Latino adults with mild to moderate depression, as determined by a Patient Health Questionnaire-9 (PHQ-9) score ≥5 or related functional impairment. Participants were first asked about their preferences among the three apps and were then randomized to one of their top two choices. Outcomes were depressive symptom severity (measured using the PHQ-9) and functional impairment (assessed with the Sheehan Disability Scale), collected over 3 months. Engagement in the study was assessed based on the number of times participants completed active surveys.
We screened 4502 participants and enrolled 1040 participants from throughout the United States over 6 months, yielding a sample of 348 active users. Long-term engagement surfaced as a key issue among Hispanic/Latino participants, who dropped out of the study 2 weeks earlier than their non-Hispanic/Latino counterparts (
Fully remote mobile-based studies can attract a diverse participant pool including people from traditionally underserved communities in mental health care and research (here, Hispanic/Latino individuals). However, keeping participants engaged in this type of “low-touch” research study remains challenging. Hispanic/Latino populations may be less willing to use mobile apps for assessing and managing depression. Future research endeavors should use a user-centered design to determine the role of mobile apps in the assessment and treatment of depression for this population, app features they would be interested in using, and strategies for long-term engagement.
Clinicaltrials.gov NCT01808976; https://clinicaltrials.gov/ct2/show/NCT01808976 (Archived by WebCite at http://www.webcitation.org/70xI3ILkz)
Technology is being leveraged as a way to perform large-scale clinical research targeting typically underrepresented populations. Given the extensive use of mobile devices across communities, remote research methods are becoming widely used. Technology is also seen as a potential means of bridging health disparities, which are typically driven by the limited resources and stigma most apparent in minority communities. Of particular interest is the Hispanic/Latino community: although they comprise one of the fastest-growing demographic segments in the United States [
The widespread availability of digital technology has the potential to drive a sea change in access to psychosocial treatment for mental health problems in Hispanic/Latino communities [
Therefore, the aim of this study was to determine the feasibility of conducting remote research with a Hispanic/Latino adult sample of smartphone users, how they interact with depression apps, and the potential clinical impact mHealth apps may have on treating depression in this population. We report recruitment, engagement, and cost in this 12-week, fully remote randomized controlled trial among Hispanic/Latino individuals with depression and a cohort of non-Hispanic/Latinos with depression to act as a direct comparator group (and extend our previous findings).
Ethical approval for the trial (NCT01808976) was granted by the Institutional Review Board of University of California, San Francisco. Specific research methods for this project replicated the BRIGHTEN V1 study and are described elsewhere [
Three different types of recruitment approaches, including traditional, social networking, and search-engine strategies, were used (
This study used an equipoise stratified clinical trial design [
Interested participants completed a brief Web-based screening consisting of questions about their ability to speak Spanish (“Do you speak Spanish? / ¿Hablas Español?”) and mobile device ownership (“Do you have an iPhone or Android smartphone?”).
Participants were given the University of California, San Francisco consent form to read and were instructed to watch a video that highlighted the goals and procedures of the study, as well as the risks and benefits of participation. After viewing the video, participants had to pass a quiz confirming their understanding that participation was voluntary, that it was not a substitute for treatment, and that they would be randomized to treatment conditions. Each question had to be answered correctly before moving on to the baseline assessment and randomization. Eligibility was established after consent was obtained. Once eligible, participants were sent a link to download their assessment app (Surveytory).
Participants had to speak English or Spanish, be 18 years of age or older, and own either an iPhone with Wi-Fi or 3G/4G/LTE capability or an Android phone paired with an Apple iPad (version 2 or newer). An iOS device was required because one of our intervention apps was available only on iOS at the time of the study; Android phone owners were therefore eligible only if they also owned a qualifying iPad. Participants also had to endorse clinically significant symptoms of depression, indicated by either a score of 5 or higher on the PHQ-9 or a score of 2 or greater on PHQ item 10 (indicating feeling disabled in his or her life because of mood).
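The eligibility rules above can be summarized as a small predicate. The sketch below is illustrative only; the function and parameter names are our own and do not come from the BRIGHTEN codebase.

```python
# Illustrative sketch of the stated eligibility criteria; names are
# hypothetical, not from the actual BRIGHTEN screening system.

def is_eligible(age, language, has_iphone, has_android, has_ipad2_plus,
                phq9_total, phq_item10):
    """Return True if a screened respondent meets the stated criteria."""
    if age < 18:
        return False
    if language not in ("English", "Spanish"):
        return False
    # An iOS device is required: either an iPhone, or an Android phone
    # paired with an Apple iPad (version 2 or newer).
    has_ios_device = has_iphone or (has_android and has_ipad2_plus)
    if not has_ios_device:
        return False
    # Clinically significant depression: PHQ-9 total >= 5, or PHQ item 10
    # >= 2 (mood-related functional impairment).
    return phq9_total >= 5 or phq_item10 >= 2
```

Encoding the criteria this way makes the two independent gates explicit: the device requirement and the symptom threshold must both be satisfied.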
The baseline assessment included collecting demographic variables including age, race/ethnicity, marital and employment status, income, education, smartphone ownership, use of other health apps, and use of mental health services, including use of medications and psychotherapy. We collected information on mental health status using PHQ-9 [
Overall BRIGHTEN V2 study schematic showing participant recruitment, consent, enrollment, and randomization workflow along with weekly and daily data collection. EVO: Project Evolution; GPS: Global Positioning System; PHQ-2: 2-item Patient Health Questionnaire; PHQ-9: 9-item Patient Health Questionnaire; SDS: Sheehan Disability Scale.
SDS assesses perceived functional impairment across 3 domains (work/school, social life, and family/home responsibilities), yielding a sum score of 0-30, in which higher scores represent greater disability. SDS is popular in clinical trials given its sensitivity in detecting treatment effects [
Our custom mobile app, Surveytory, was used to collect all outcome and passive data. The assessments to measure changes in mood (PHQ-9) and disability (SDS) were administered weekly. Daily changes in mood were assessed using the PHQ-2 survey. Passive data collection included daily phone usage logs (call/text time, call duration, and text length) and mobility data (activity type and distance traveled using the phone’s accelerometer and Global Positioning System). Participants were automatically notified every 8 hours for 24 hours if they had not completed a survey within 8 hours of its original delivery. A built-in reminder also prompted the participant to check for any surveys on a daily basis in case they missed a new survey notification. An assessment was considered missing if it was not completed within a 24-hour time frame.
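The notification rules described above (a reminder every 8 hours for up to 24 hours after delivery, and a 24-hour completion window before a survey counts as missing) can be sketched as follows. This is an illustrative reconstruction, not the actual Surveytory implementation.

```python
from datetime import datetime, timedelta

# Illustrative sketch of the reminder schedule described in the Methods;
# function names and structure are hypothetical.

def reminder_times(delivered_at, completed_at=None):
    """Reminders fire at +8 h, +16 h, and +24 h after survey delivery,
    stopping once the survey has been completed."""
    reminders = []
    for hours in (8, 16, 24):
        t = delivered_at + timedelta(hours=hours)
        if completed_at is not None and completed_at <= t:
            break  # survey was done before this reminder would fire
        reminders.append(t)
    return reminders

def is_missing(delivered_at, completed_at):
    """A survey is considered missing if not completed within 24 hours."""
    return (completed_at is None
            or completed_at > delivered_at + timedelta(hours=24))
```

A participant who completes a survey within the first 8 hours receives no reminders; one who never completes it receives all three before the response is marked missing.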
After confirming completion of baseline assessments (or 72 hours after the initiation of these assessments, whichever came first), participants were sent a Web-based survey that described each of the 3 treatment arms. Following this description, participants were asked to select which 2 apps they were most inclined to use in this study. Participants were then randomly assigned to one of these 2 preferred conditions and sent a link to download the intervention app, which included a brief video explaining how to download and use the assigned treatment app. This download also included a custom dashboard to monitor their study progress. Participants were asked to use their assigned app for 1 month.
The first app was a video game-inspired cognitive intervention (Project Evolution, EVO) designed to modulate cognitive control abilities, as declines in these abilities have been associated with depression [
Each of the 3 apps represented the most common type of self-guided depression apps available at the time of the study: apps based on psychotherapy principles, apps that claim to improve mood through therapeutic games, and apps that provide suggestions for mindfulness and behavioral exercises. Similar to the assessment notifications, each intervention app was equipped with built-in reminders asking the participant to use their app on a daily basis (reminders were sent once daily).
Randomized participants were paid a total of US $75 in Amazon gift vouchers for completing all assessments over the 12 weeks. Participants received US $15 for completing the initial baseline assessment and an additional US $20 for each subsequent assessment at the 4-, 8-, and 12-week time points.
“Gaming” is a situation where a user enrolls in a study solely to acquire research payment or attempts to influence specific methodological aspects of the study. We utilized the following safeguards to prevent this: (1) locking the eligibility or treatment randomization survey if a participant tried to change a submitted answer so that only the initial answer was utilized, (2) using study links that are valid for one user/device, and (3) tracking internet protocol addresses to minimize duplicate enrollments.
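A minimal sketch of how the first two safeguards might be implemented is shown below. The class and method names are hypothetical; the study's actual enrollment system is not public.

```python
# Hypothetical sketch of the anti-"gaming" safeguards described above:
# first submitted answers are locked, and IP addresses are tracked to
# flag potential duplicate enrollments.

class EnrollmentGuard:
    def __init__(self):
        self._answers = {}   # (user_id, question_id) -> first answer given
        self._seen_ips = set()

    def submit_answer(self, user_id, question_id, answer):
        """Record only the first submitted answer; later edits are ignored."""
        key = (user_id, question_id)
        self._answers.setdefault(key, answer)
        return self._answers[key]

    def register_ip(self, ip_address):
        """Return False if this IP was already used to enroll
        (a possible duplicate enrollment)."""
        if ip_address in self._seen_ips:
            return False
        self._seen_ips.add(ip_address)
        return True
```

Locking the first answer is what makes later attempts to re-roll a randomization assignment (described in the Results) ineffective.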
Participant self-reported race/ethnicity was used to create 2 groups of Hispanic/Latino and non-Hispanic/Latino adults (eg, all other races and ethnicities) to test our main study aims. Sample demographics and clinical characteristics were calculated using appropriate descriptive statistics. Comparisons between participant demographics were done using a chi-square test of independence for categorical variables and one-way analysis of variance to compare continuous variables across the groups. To assess the marginal effect (ie, association in the entire sample) between longitudinal weekly PHQ-9 and SDS scores and treatment arms, we used generalized estimating equations (GEEs) [
The BRIGHTEN V2 study started recruitment in August 2016, with screening and enrollment continuing for 7 months. A total of 4502 people were screened, and 23.10% (1040/4502) of adults met the eligibility criteria and were enrolled in the study. Of these, 37.40% (389/1040) reported being Hispanic/Latino. As in the BRIGHTEN V1 study [
Enrolled participants lived throughout the United States, with all the metropolitan areas represented (
Of those who were randomized, 31.8% (87/274) attempted to change their assigned intervention by hitting the “back” button to return to the randomization page, and an additional 10.2% (28/274) returned to the survey a second time to change their preferences (9/274, 3.3%, used both methods). These attempts were unsuccessful because randomization was determined by a participant's first answer and not by any subsequent attempts.
See
Overall, the cohort reported moderate depressive symptomatology with a mean baseline PHQ-9 of 13.61 (SD 5.46). There was no difference in baseline depression between Hispanic/Latino and non-Hispanic/Latino participants (
US map showing the location of people who were screened (gray) and enrolled (red) in the BRIGHTEN V2 Study.
The Consolidated Standards of Reporting Trials flow diagram. iPST: internet-based problem-solving therapy; EVO: Project Evolution; N/A: not available.
BRIGHTEN V2 participant characteristics.
Characteristics | Overalla (N=345) | Hispanic/Latino (n=106) | Non-Hispanic/Latino (n=239) | P value
Baseline Patient Health Questionnaire-9, mean (SD) | 13.61 (5.46) | 14.41 (5.69) | 13.26 (5.34) | .08
Gender (female), n (%) | 266 (77.1) | 82 (77.4) | 184 (77.0) | >.99
Age (years), mean (SD) | 34.90 (10.92) | 32.71 (10.10) | 35.88 (11.15) | .02
Age group (years), n (%) | | | | .22
18-30 | 137 (40.2) | 51 (48.6) | 86 (36.4) |
31-40 | 101 (29.6) | 27 (25.7) | 74 (31.4) |
41-50 | 74 (21.7) | 22 (21.0) | 52 (22.0) |
51-60 | 23 (6.7) | 5 (4.8) | 18 (7.6) |
61-70 | 5 (1.5) | 0 (0.0) | 5 (2.1) |
>70 | 1 (0.3) | 0 (0.0) | 1 (0.4) |
Income (US $), n (%) | | | | .005
≤20,000 | 102 (29.6) | 43 (40.6) | 59 (24.7) |
20,000-40,000 | 90 (26.1) | 31 (29.2) | 59 (24.7) |
40,000-60,000 | 76 (22.0) | 20 (18.9) | 56 (23.4) |
60,000-80,000 | 32 (9.3) | 5 (4.7) | 27 (11.3) |
80,000-100,000 | 22 (6.4) | 2 (1.9) | 20 (8.4) |
>100,000 | 23 (6.7) | 5 (4.7) | 18 (7.5) |
Education, n (%) | | | | <.001
Community college | 72 (20.9) | 25 (23.6) | 47 (19.7) |
Graduate degree | 58 (16.8) | 11 (10.4) | 47 (19.7) |
High school | 56 (16.2) | 29 (27.4) | 27 (11.3) |
University | 159 (46.1) | 41 (38.7) | 118 (49.4) |
Device (iPhone), n (%) | 303 (87.8) | 89 (84.0) | 214 (89.5) | .20
Working (Yes), n (%) | 241 (69.9) | 65 (61.3) | 176 (73.6) | .03
Race/ethnicity, n (%) | | | | <.001
Hispanic/Latinos | 106 (30.7) | 106 (100.0) | 0 (0.0) |
Non-Hispanic white | 184 (53.3) | 0 (0.0) | 184 (77.0) |
African-American/black | 25 (7.2) | 0 (0.0) | 25 (10.5) |
American Indian/Alaskan Native | 3 (0.9) | 0 (0.0) | 3 (1.3) |
Asian | 24 (7.0) | 0 (0.0) | 24 (10.0) |
Other | 3 (0.9) | 0 (0.0) | 3 (1.3) |
Speak Spanish (yes), n (%) | 113 (32.8) | 96 (90.6) | 17 (7.1) | <.001
Income satisfaction, n (%) | | | | .09
Comfortable | 71 (20.6) | 17 (16.0) | 54 (22.6) |
Can't make ends meet | 80 (23.2) | 32 (30.2) | 48 (20.1) |
Have enough to get along | 194 (56.2) | 57 (53.8) | 137 (57.3) |
Marital status, n (%) | | | | .28
Married/Partnered | 135 (39.1) | 35 (33.0) | 100 (41.8) |
Separated/Widowed/Divorced | 33 (9.6) | 12 (11.3) | 21 (8.8) |
Single | 177 (51.3) | 59 (55.7) | 118 (49.4) |
aParticipants who did not self-report Hispanic/Latino status (n=3) were excluded from group comparisons.
Association between baseline demographic variables and Patient Health Questionnaire-9 scores.
Baseline variables | Cohen d | P value
Income satisfaction | 0.264 | <.001
Income | 0.226 | .02
Spanish speaker | 0.139 | .029
Education | 0.160 | .076
Working | 0.103 | .096
Hispanic/Latinos | 0.098 | .101
Marital status | 0.107 | .15
Race | 0.161 | .15
Comparison of self-reported income satisfaction and baseline Patient Health Questionnaire-9 (PHQ-9) score between Hispanic/Latino and non-Hispanic/Latino participants.
Study costs beyond the initial infrastructure developed for BRIGHTEN V1 included participant payments (US $7540), website/enrollment portal/database development (US $4601), and total recruitment efforts (US $14,471; see
Overall participation in the study (as measured by assessment completion, as opposed to intervention app use) decreased by approximately 50% from week 1 to week 4, with more than 4 out of 5 participants dropping out by the end of the 12 weeks (14% retained). At week 4, participants contributed twice as much passive data (ie, momentary Global Positioning System data) as they provided in survey assessments requiring active participation (
Changes in weekly PHQ-9 scores were significantly associated with baseline severity of depressive symptoms (ie, mild, moderate, and severe;
At the cohort level, disability based on SDS ratings decreased by an average 0.74 points (
Participant acquisition costs.
Recruitment approach | Amount spent (US $) | Participants reached, n | Cost per participant (US $) |
Targeted Social Media (trialspark.com for Spanish Speakers) | 7800 | 86 | 90.70 |
Craigslist.com (Spanish advertisements) | 5275 | 303 | 17.41 |
Craigslist.com (English advertisements) | 946 | 637 | 1.49 |
Comparison of participant attrition in the study across survey types and passive data, stratified by Hispanic/Latino and non-Hispanic/Latino participants. GPS: Global Positioning System; PHQ-2: 2-item Patient Health Questionnaire; PHQ-9: 9-item Patient Health Questionnaire; SDS: Sheehan Disability Scale.
Comparison of Kaplan-Meier survival estimates for Hispanic/Latino and non-Hispanic/Latino participants over the course of the study (days 1-84).
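For readers unfamiliar with the method, the Kaplan-Meier estimator behind this comparison can be computed from scratch as below. This sketch is purely illustrative; the study's analyses used standard statistical packages.

```python
# From-scratch Kaplan-Meier estimator for study retention.
# times[i] is the day a participant was last seen; events[i] is 1 if
# dropout was observed and 0 if the participant was censored
# (eg, still active at study end).

def kaplan_meier(times, events):
    """Return a list of (time, S(time)) pairs at each observed dropout time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        dropouts = 0
        censored = 0
        # group all observations that share this time point
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                dropouts += 1
            else:
                censored += 1
            i += 1
        if dropouts:
            survival *= 1.0 - dropouts / n_at_risk
            curve.append((t, survival))
        n_at_risk -= dropouts + censored
    return curve
```

Running two such curves (one per ethnicity group) and comparing them is, in essence, what the figure referenced above displays.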
Comparison of number of days participants were active across different treatment arms in the study. EnR: enrolled but not randomized; EVO: Project Evolution; HTips: health tips; iPST: internet-based problem-solving therapy.
Summary of estimates comparing weekly change in Patient Health Questionnaire-9 scores using a generalized estimating equations model.
Fixed effects | Effect size, beta (SE) | P value
Intercept | 8.28 (0.77) | <.001 |
Gender (male) | .09 (0.50) | .85 |
Age | −.02 (0.02) | .23 |
Weeks 1-4 | 1.33 (0.55) | .02 |
Weeks 5-12 | 1.33 (0.72) | .06 |
Treatment (EVOa) | .03 (0.57) | .96 |
Treatment (HTipsb) | −.93 (0.56) | .09 |
Treatment (iPSTc) | −.39 (0.53) | .45 |
Hispanic/Latinos (yes) | −.15 (0.43) | .73
Baseline state (moderate) | 5.35 (0.39) | <.001 |
Baseline state (severe) | 12.26 (0.46) | <.001 |
Weeks 1-4: baseline state (moderate) | −1.96 (0.67) | .004 |
Weeks 5-12: baseline state (moderate) | −2.66 (0.96) | .006 |
Weeks 1-4: baseline state (severe) | −4.19 (0.77) | <.001 |
Weeks 5-12: baseline state (severe) | −4.31 (1.04) | <.001 |
aEVO: Project Evolution.
bHTips: health tips.
ciPST: internet-based problem-solving therapy.
Comparison of weekly mean Patient Health Questionnaire-9 (PHQ-9) scores with mean SEs stratified by baseline depression state.
Summary of estimates comparing weekly change in Sheehan Disability Scale score using a generalized estimating equations model.
Fixed effects | Effect size, beta (SE) | P value
Intercept | 10.91 (1.61) | <.001 |
Gender (male) | .64 (0.85) | .46 |
Age | .00 (0.04) | .89 |
Treatment (EVOa) | .32 (1.14) | .78 |
Treatment (HTipsb) | −.74 (1.07) | .49 |
Treatment (iPSTc) | −.12 (1.04) | .91 |
Weeks 2-4 | −.70 (0.33) | .03 |
Weeks 5-12 | −1.09 (0.47) | .02 |
Hispanic/Latinos (yes) | .12 (0.82) | .88 |
aEVO: Project Evolution.
bHTips: health tips.
ciPST: internet-based problem-solving therapy.
To our knowledge, BRIGHTEN V2 is the first large-scale effort to target the remote recruitment of Hispanic/Latino individuals with depression in the United States using digital health assessments and interventions that were translated into Spanish and administered solely on smartphones. We screened and enrolled one of the largest cohorts of Hispanic/Latino individuals with depression to date. Previous work has suggested that the lack of utilization of mental health care could be attributed to (1) cultural beliefs about mental health problems, (2) ineffective and inappropriate therapies, or (3) access problems or other barriers [
Similar to our previous work [
Potential issues in recruiting US Hispanic/Latino individuals for mental health research may hinge on (1) reluctance to be randomized, given the high number of enrolled participants who tried to switch their initially assigned intervention app, and (2) privacy concerns, such as the possibility that some of our lower-income participants were sharing their smartphones with other family members, potentially reducing willingness to participate and causing high initial dropout [
Another potential issue in the study was the possible delay in receiving the intervention. The equipoise stratified randomization occurred after eligible participants attempted the assigned assessments (or after 72 hours, whichever came first); because participants may have been waiting for their assigned intervention following their initial exposure to the assessment app, they may have lost interest in participating. Another consideration involves the appropriate incentive structure (eg, timing and amount of compensation) to maximize retention and engagement, as this factor is not well understood in underrepresented samples such as ours. How the amount of payment affects participation in a given trial remains an empirical question. Indeed, in the first version of this study (BRIGHTEN V1), we found that participants who received bonus payments remained in the study longer than those who did not receive bonuses [
Despite the poor engagement with the active components in this study, it is clear from our findings (and those of other mobile-based studies) that there is still tremendous potential to capture passive data from smartphone use. This form of data capture is far less burdensome because it does not require the user to actively engage with an app. Comparing passive data compliance with that of the active surveys in our study, passive data offer a viable opportunity to develop an individualized digital baseline (digital fingerprint) and to relate deviations from baseline phone usage to behavioral fluctuations. However, the ability of cohort-level passive-data signals to predict depression states remains modest at best [
Similar to our earlier findings in the original study [
mHealth platforms have the potential to deliver on-demand, as-needed assessment and intervention alternatives despite known barriers of time constraints, cost, stigma, and cultural and language differences. Although mHealth holds great promise for closing the treatment gap for underserved communities, recruitment and retention remain problematic in such populations, and more research is needed to identify engagement strategies that best leverage mobile apps (eg, appropriate incentive levels and culturally responsive content and notifications, along with user-centered design approaches [
Our study offers preliminary lessons learned from conducting such work in an understudied sample of Hispanic/Latino smartphone users. Scaling these types of remote assessments and interventions will hinge on the acceptance of such technology by both care teams and patients. Future research using remote technologies at scale to recruit and engage targeted communities (eg, Hispanic/Latino adults with depression) will depend on understanding the population's needs and addressing barriers to using mental health interventions via mobile apps.
Comparison of demographic variables.
CONSORT‐EHEALTH checklist (V 1.6.1).
GEE: generalized estimating equations
EnR: enrolled but not randomized
EVO: Project Evolution
HTips: health tips
iPST: internet-based problem-solving therapy
PHQ-9: Patient Health Questionnaire-9
SDS: Sheehan Disability Scale
Support for this research was provided by the National Institute of Mental Health (PAA R34MH100466, T32MH0182607, K24MH074717; BNR T32MH073553) and the National Institute on Aging (JAA P30AG15272). The authors thank Thomas Egan and Tojo Chemmachel for their help with data collection and data monitoring; Cecilia & Joaquin Anguera (author JAA’s parents) for their help with culturally relevant translations within each app, website, video, and survey presented; Diana Albert for assistance in Web design; Diego Castaneda & Alinne Barrera for their willingness to speak in our promotional video; and Elias Chaibub Neto for helpful insights during the data analysis phase. The authors would also like to especially thank all the participants whose time and efforts made this work possible. We would also like to thank the entire Akili Interactive team as well as Wow Labz (especially R Omanakuttan
AG is cofounder, chief science advisor, and shareholder of Akili Interactive Labs, a company that develops cognitive training software. AG has a patent for a game-based cognitive training intervention, “Enhancing cognition in the presence of distraction and/or interruption,” on which the cognitive training app (Project: EVO) that was used in this study was based. No other author has any conflict of interest to report.