Published in Vol 24, No 2 (2022): February

Sensing Apps and Public Data Sets for Digital Phenotyping of Mental Health: Systematic Review



1Laboratory of Intelligent Distributed Systems, Federal University of Maranhão, São Luís, Brazil

2Health Research Institute, University of Limerick, Limerick, Ireland

3NeuroInovation & Technological Laboratory, Federal University of Delta do Parnaíba, Parnaíba, Brazil

4College of Computer Science and Technology, China University of Petroleum (East China), Qingdao, China

5Instituto de Telecomunicações, Covilhã, Portugal

6Federal Institute of Maranhão, Araioses, Brazil

Corresponding Author:

Ariel Soares Teles, PhD

Federal Institute of Maranhão

Rua José de Alencar, S/N, Bairro Cumprida

Araioses, 65570-000


Phone: 55 86995501313


Background: Mental disorders are normally diagnosed exclusively on the basis of symptoms, which are identified from patients’ interviews and self-reported experiences. To make mental health diagnoses and monitoring more objective, different solutions have been proposed such as digital phenotyping of mental health (DPMH), which can expand the ability to identify and monitor health conditions based on the interactions of people with digital technologies.

Objective: This article aims to identify and characterize the sensing applications and public data sets for DPMH from a technical perspective.

Methods: We performed a systematic review of scientific literature and data sets. We searched 8 digital libraries and 20 data set repositories to find results that met the selection criteria. We conducted a data extraction process from the selected articles and data sets. For this purpose, a form was designed to extract relevant information, thus enabling us to answer the research questions and identify open issues and research trends.

Results: A total of 31 sensing apps and 8 data sets were identified and reviewed. Sensing apps explore different context data sources (eg, positioning, inertial, ambient) to support DPMH studies. These apps are designed to analyze and process collected data to classify (n=11) and predict (n=6) mental states/disorders, and also to investigate existing correlations between context data and mental states/disorders (n=6). Moreover, general-purpose sensing apps are developed to focus only on contextual data collection (n=9). The reviewed data sets contain context data that model different aspects of human behavior, such as sociability, mood, physical activity, and sleep, with some data sets also being multimodal.

Conclusions: This systematic review provides in-depth analysis regarding solutions for DPMH. Results show growth in proposals for DPMH sensing apps in recent years, as opposed to a scarcity of public data sets. The review shows that there are features that can be measured on smart devices that can act as proxies for mental status and well-being; however, it should be noted that the combined evidence for high-quality features for mental states remains limited. DPMH presents a great perspective for future research, mainly to reach the needed maturity for applications in clinical settings.

J Med Internet Res 2022;24(2):e28735

Introduction

Mental health issues have a high prevalence, with 1 in 10 people worldwide experiencing them at any one time [1] and common mental disorders such as depression being closely linked to suicide [2]. Mental disorders are “generally characterized by some combination of abnormal thoughts, emotions, behavior and relationships with others” [3]. Examples are depression, schizophrenia, excessive anxiety and stress, disorders caused by drug and alcohol abuse, and personality and delusional disorders. These disorders pose a significant burden on societies, both emotionally and financially. For example, the cost of mental health disorders in the European Union is estimated at €600 billion (~US $451 billion), or 4% of gross domestic product [4]. COVID-19 has had a further negative impact on global mental health [5].

Mental disorders are usually diagnosed exclusively on the basis of symptoms, which are identified from patients’ interviews and self-reported experiences. Sometimes these experiences are gathered using ecological momentary assessment (EMA) solutions [6], but mostly therapists rely on patients remembering such experiences during sessions. EMA solutions are used as a research method to collect, at fixed or random moments, reports from individuals about perceptions of their behaviors and feelings, and what they have done or experienced. It is well known that the intervening time and the current state of the patient bias his/her memory of the experience. In addition, biological tests to assist diagnosis remain difficult to develop [7]. Given the need for solutions able to objectively diagnose and monitor mental health, different approaches have been proposed, such as mobile apps [8,9] and machine learning (ML) solutions [10], which are all the more relevant given the global pandemic situation [11,12]. Digital phenotype solutions are examples that can expand the ability to identify and diagnose health conditions from the interactions of people with digital technologies [13]. Specifically, digital phenotyping of mental health (DPMH) [14] seems to be a promising approach not only for diagnosis but also for treatment.
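To make the sampling idea behind EMA concrete, the sketch below generates a day's worth of random prompt times within a waking-hours window, with a minimum gap between prompts. All names, parameters, and thresholds are illustrative assumptions, not taken from any specific EMA platform.

```python
import random
from datetime import datetime, timedelta

def ema_schedule(day_start, waking_minutes=960, n_prompts=5, min_gap_minutes=60, seed=None):
    """Draw n_prompts random prompt times within the waking window,
    keeping consecutive prompts at least min_gap_minutes apart."""
    rng = random.Random(seed)
    while True:
        # Sample distinct minute offsets, then re-draw until gaps are valid.
        offsets = sorted(rng.sample(range(waking_minutes), n_prompts))
        gaps = [b - a for a, b in zip(offsets, offsets[1:])]
        if all(g >= min_gap_minutes for g in gaps):
            return [day_start + timedelta(minutes=m) for m in offsets]

# A day's schedule starting at 7 AM (seeded for reproducibility).
schedule = ema_schedule(datetime(2022, 2, 1, 7, 0), seed=42)
```

Fixed-moment sampling would simply replace the random draw with evenly spaced offsets; the rest of the logic is unchanged.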

The omnipresent adoption of pervasive devices, including smartphones and wearable sensors, provides novel opportunities for tracking mental health status and disorders. Digital phenotyping refers to the “moment-by-moment quantification of the individual-level human phenotype in-situ using data from smartphones and other personal digital devices” [15], thereby removing limitations created by the aforementioned bias in self-reports.

DPMH solutions require collecting and analyzing large amounts of different types of social and behavioral data that can represent experiences of the users and their interactions with people, places, and devices. These context data can be passively gathered, for instance, from ubiquitous sensors, social media, and health care systems [16]. After collection, pieces of raw data are usually preprocessed and transformed into useful data or data sets to be mined [17]. For example, these data sets may be analyzed or used as input to build ML models [18], including for DPMH, to produce valuable insights and evidence. Therefore, DPMH sensing apps are primarily responsible for collecting and preprocessing data, with the data sets produced being important for developing such models. This study systematically reviews the sensing apps and data sets for DPMH.


In the last few years, the number of smart devices, that is, mobile (eg, smartphone, tablet) and wearable (eg, smart band, smartwatch) devices, has grown globally. They have enabled the development of research in the health area, including mental health [10]. The term “digital phenotype”, defined by Jain and colleagues [13], refers to the identification of human behavior patterns, whereas “digital phenotyping” is a monitoring approach that can collect patients’ behavioral markers passively [19]. Therefore, DPMH solutions aim at collecting multimodal pieces of information from digital devices using sensing apps to combine them with electronic medical records to objectively contribute to the identification of symptoms of mental disorders. In this context, sensing apps are tools for mobile and wearable devices used to collect useful user information.

Our vision of the digital phenotyping process organized in layers is presented in Figure 1. The process starts at the first layer with the collection of raw data from different sources (eg, global positioning system [GPS] sensors, keyboard inputs, voice, and social media). These data can be collected both actively, in which user inputs are explicitly required, and passively [20], which only requires the user’s permission to access context data. In the next layer, these data are processed to provide high-level information. High-level information represents not only human behaviors (eg, sociability, physical activity) and habits (eg, mobility, sleep) but also other information of interest for professionals (eg, environmental context, mood). Next, human behavioral patterns that compose digital phenotypes (eg, biomarkers, mood patterns) can be recognized using computational tools (eg, ML, data mining, statistical models). Finally, the application layer corresponds to the use of digital phenotypes by health professionals for evidence-based mental health care.
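The layered flow described above can be sketched as a minimal pipeline in which each stage consumes the previous layer's output. The stage functions, thresholds, and the mobility proxy below are invented for illustration and do not come from any reviewed app.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class PhenotypingPipeline:
    """Chains the layers of Figure 1: raw data -> high-level info -> patterns."""
    stages: list = field(default_factory=list)

    def add_stage(self, fn: Callable):
        self.stages.append(fn)
        return self

    def run(self, raw_data):
        out = raw_data
        for fn in self.stages:
            out = fn(out)
        return out

# Layer 2 (toy): derive high-level information from raw GPS fixes by
# counting distinct visited grid cells as a mobility proxy.
def mobility_info(gps_fixes):
    cells = {(round(lat, 2), round(lon, 2)) for lat, lon in gps_fixes}
    return {"distinct_places": len(cells)}

# Layer 3 (toy): flag a behavioral pattern from the derived information.
def mobility_pattern(info):
    return {**info, "low_mobility": info["distinct_places"] < 3}

pipeline = PhenotypingPipeline().add_stage(mobility_info).add_stage(mobility_pattern)
result = pipeline.run([(52.66, -8.62), (52.66, -8.63), (52.67, -8.62)])
```

A real pipeline would add preprocessing (cleaning, resampling) between layers and feed the recognized patterns to clinician-facing tools in the application layer.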

Figure 1. The process of digital phenotyping.

Related Work

Since the aforementioned concepts were proposed in the literature, many research studies have been performed. For this reason, researchers have also reviewed different aspects regarding this research topic. Table 1 presents a list composed of related reviews.

Table 1. List of related review articles.
Garcia-Ceja et al [10]: A survey on mental health monitoring using mobile and wearable sensors, focused on multimodal sensing and machine learning solutions.
Cornet and Holden [21]: An SLRa on passive sensing using specifically smartphones, focused on health and well-being.
De-La-Hoz-Franco et al [22]: An SLR aimed at finding data sets composed of sensor data for human activity recognition.
Trifan et al [23]: An SLR aimed at identifying studies on the passive use of smartphones for generating outcomes related to health and well-being; it identified that one of the areas most explored by mobile passive sensing is mental health.
Seppälä et al [24]: An SLR on mobile solutions focused on uncovering associations between sensor data and symptoms of mental disorders (ie, behavioral markers).
Liang et al [14]: A comprehensive survey addressing different topics on DPMHb.
Benoit et al [20]: An SLR that sought to map DPMH tools using machine learning algorithms across the schizophrenia spectrum and bipolar disorders.
Antosik-Wójcińska et al [25]: An overview of studies about smartphone systems focused on monitoring or detecting bipolar disorder.

aSLR: systematic literature review.

bDPMH: digital phenotyping of mental health.

This review differs from the previous ones in the following aspects: First, instead of focusing on a specific mental state/disorder, this review presents an overview of how different types of devices and detection modalities have been used to monitor a wide variety of different mental states within the DPMH area. Second, this review covers not only active collection solutions, which are emphasized in most reviews, but also passive sensing proposals. Third, this review focuses on the technical features of sensing apps and data sets (eg, size, sensors used to collect data, and types of context data). Technical features can be identified to serve as a basis for the use or development of new apps (eg, physical and virtual sensors used to collect data, operating systems for which the apps were developed, types of context data collected, inferred information). Finally, not all previous reviews were conducted systematically. Our article therefore provides researchers with an overview of the available technological framework for DPMH and can serve as a preliminary guide for current and further research.

Objectives and Research Questions

This systematic review intends to provide a technical characterization and summary of sensing apps and public data sets for DPMH. By “public” we mean data sets that are available for free download for use in other research endeavors. These 2 topics (ie, sensing apps and public data sets) are jointly addressed in this review as complementary content. When researchers do not have access to DPMH data sets, they need sensing apps. This paper therefore can be a starting point not only to gain knowledge on the current sensing apps for DPMH (which consequently enables the development of new solutions), but also to find reusable ones. Therefore, the objectives of this article are to (1) present results from a systematic search on digital libraries and data set repositories, and then identify and categorize them by considering their characteristics; (2) summarize their main features (measurable pieces of data that can be used for analysis or creation of ML models, such as data collection time stamp, context data produced by DPMH solutions, and data self-reported by users), which are useful for researchers, either mental health or information technology ones, to conduct further investigation and comment on their usefulness; and (3) identify trends in and research opportunities for DPMH. Results of this systematic review are also relevant for data engineers and ML specialists who make efforts in developing DPMH solutions.

To achieve the objectives of this systematic review, we defined the following research questions for sensing apps (SA-RQs) and data sets (DS-RQs):

SA-RQ1: What context data are collected through DPMH sensing apps?

SA-RQ2: What high-level information can be inferred from the context data collected by DPMH sensing apps?

SA-RQ3: How is the identified high-level information used to support mental health?

DS-RQ1: What features are available in public data sets for DPMH?

DS-RQ2: What high-level information can be derived from public data sets for DPMH?

Methods

This study was conducted based on the guidelines for systematic literature reviews in software engineering proposed by Kitchenham and Charters [26]. This review followed 3 main phases: planning, conducting research, and dissemination of results. These phases were supported by a dedicated tool [27] that provides an online shared work environment for planning and executing systematic reviews. In this section, we present how this review was planned and conducted.

Search Strategy

The search aimed to identify data sets and studies that have presented sensing apps capable of collecting data. Two (JM and IM) researchers conducted an exhaustive search on January 14, 2021, on data set repositories and digital libraries. The search for data sets was performed in 20 repositories (Multimedia Appendix 1). The search for articles reporting sensing apps was conducted in the following digital libraries: ACM Digital Library, DOAJ, IEEE Xplore, Web of Science, PubMed, PsycInfo, ScienceDirect, and Scopus. These databases were selected because they collect reliable studies related to mental health informatics.

We designed the search strings to retrieve data sets and articles presenting sensing apps for DPMH (Table 2). These search strings were carefully designed to meet the research focus. In the string to search data sets, we defined the 2 main terms (ie, mental health and digital phenotyping) and decided to use Boolean “OR” as the link for them to get comprehensive results. The search string for articles was developed based on the review objective, research questions, and their motivations. We used keywords and their synonyms to maximize results. To avoid missing papers, we evaluated the suitability of the string in a pilot search, in which we used those studies developed by Liang et al [14] (ScienceDirect) and Torous et al [15] (PubMed) as control articles. This pilot search was able to retrieve the cited studies, thus demonstrating its ability to find articles relevant for this review. At the end of the search, duplicate data sets and articles were identified and removed using the tool.

Table 2. Keywords and their synonyms.
Data sets (searched in data set repositories): “mental health” OR “digital phenotyping”
Sensing apps (searched in digital libraries): (“mental health” OR “mental disorder*” OR “mental illness” OR “mental state” OR “mental disease”) AND (“mobile device” OR “smartphone*” OR “wearable device*” OR “sensor*” OR “wearable*” OR “mobile application*” OR “mobile health” OR “mHealth” OR “mobile phone*” OR “sensor data”) AND (“passive detection” OR “data collection” OR “digital phenotype” OR “digital phenotyping” OR “digital health” OR “monitoring” OR “passive sensing”)

Selection Criteria

A set of selection criteria was defined to track research articles and data sets. Textbox 1 presents the selection criteria for scientific studies with sensing apps and data sets. Importantly, no date range limits were applied to the literature included in the review. In the selection of scientific articles, criterion EC1 excluded studies presenting the development of EMA apps, and papers that do not present a new DPMH solution (eg, studies using a DPMH solution previously described/published in another paper). For data set selection, criterion EC1 excluded those data sets that were not publicly available, that is, those protected and not accessible to be reused by other researchers.

In the selection phase, 2 researchers (JM and IM) performed the data set selection process based on the inclusion and exclusion criteria. In a second step, the same 2 researchers independently performed the study selection process. This process consisted of 3 sequential phases: (1) study screening by means of metadata analysis (ie, title, abstract, and keywords); (2) full-text analysis of the articles selected in the screening phase; and (3) conducting backward snowballing [28]. Next, the level of agreement between the selections was calculated using the Cohen κ coefficient [29]. In the end, the 2 researchers conducted discussions to resolve selection conflicts and, when there was no consensus, judges (2 other authors, namely, AT and DV) deliberated on the disagreements.
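The agreement measure used in this step, Cohen's κ, corrects raw percentage agreement for agreement expected by chance. A minimal stdlib sketch for two raters' include/exclude decisions (the decision labels are illustrative):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from the raters' marginal label rates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative screening decisions for 6 candidate articles.
a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
kappa = cohen_kappa(a, b)  # 5/6 raw agreement, kappa = 2/3
```

Values above 0.80 are conventionally read as almost perfect agreement [29], which is the band reported in the Results.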

Selection criteria.

Inclusion criteria (IC)

Scientific articles

IC1: Primary studies that present pervasive solutions to collect data for digital phenotyping of mental health.

IC2: Full papers.

IC3: Papers in English language.

Data sets

IC1: Available to be downloaded and used in other research studies (ie, public data set).

IC2: Focused on mental health or specific mental disorders.

IC3: Relevant data (eg, behavioral, physiological, social) for mental health collected through pervasive technologies.

IC4: Content in English language.

Exclusion criteria (EC)

Scientific articles

EC1: Articles presenting research on digital phenotyping of mental health without involving a proposal of a pervasive solution.

EC2: Gray literature.

EC3: Articles that have other publications with a more current and complete version of the proposed solution.

Data sets

EC1: Not publicly available.

EC2: With no content related to mental health.

EC3: Data on treatments of patients with mental disorders without using pervasive devices.

EC4: Content in languages different from English.

EC5: Online surveys on ethnographic characteristics and prevalence of mental disorders.

EC6: Composed exclusively by multimedia data (eg, video, audio) or electroencephalography data.

Textbox 1. Selection criteria.

Data Extraction

In this step, data were extracted from the selected articles and data sets to answer the research questions defined in this review. For this purpose, a data extraction form was designed by 2 authors (JM and IM) and validated by the judges. Specifically, we designed the items in the form to extract relevant information presented by the reviewed studies and data sets, thus enabling us to answer the research questions, and identify open issues and research trends. Multimedia Appendix 2 presents the items in the data extraction form.

Results

Study Selection

An overview of the review process and its results is presented in Figures 2 and 3. As shown in Figure 2, 8 digital libraries were searched for scientific articles presenting sensing apps for DPMH. A total of 2374 articles were returned, from which 926 duplicates were removed. The inclusion and exclusion criteria from Textbox 1 were then applied, resulting in 26 selected studies. The Cohen κ statistical test showed an agreement level of ≈0.87 between researchers, which is considered almost perfect agreement [29]. Next, the researchers used the 1-level backward snowballing approach and added 5 articles, resulting in 31 articles for inclusion in the data extraction process.

As shown in Figure 3, 20 data set repositories were searched, returning 2581 data sets, of which 471 duplicates were removed. After applying the selection criteria (Textbox 1) and resolving conflicts, 8 data sets remained for analysis.

Figure 2. PRISMA-based flowchart describing the selection of studies.
Figure 3. Flowchart describing the selection of data sets.

Sensing Apps

Table 3 summarizes the 31 identified apps, presented in ascending order by year of publication. Multimedia Appendix 3 presents the full version of the table. Context data sources are categorized, following Palaghias et al [30], as ambient (eg, microphone, camera), positioning (eg, GPS, Wi-Fi), virtual (eg, phone calls, SMS text messages), and inertial (eg, accelerometer, gyroscope). Table 3 also presents the high-level information inferred and the types of analyses performed on the collected data. Apps marked “It does not infer information” are only intended to collect data from smart devices; in this case, collected data are usually sent to servers for analysis. These apps are flagged as “Raw data collection” in Table 3.

Table 3. Summary of reviewed sensing apps.
App | Context data source | High-level information | Type of analysis
Funf [31] | Positioning, inertial, and virtual | It does not infer information | Raw data collection
Mobilyze [32] | Positioning, inertial, virtual, and ambient | Mood, emotions, cognitive/motivational states, physical activity, social context | Mental state prediction
Purple Robot [33] | Positioning, inertial, and virtual | It does not infer information | Raw data collection
AWARE [34] | Positioning, inertial, and virtual | It does not infer information | Raw data collection
Sensus [35] | Positioning, inertial, virtual, and ambient | It does not infer information | Raw data collection
MOSS [36] | Positioning and virtual | Physical activity, mobility, device usage, sociability, app usage | Mental state classification
Beiwe [15] | Positioning, inertial, virtual, and ambient | It does not infer information | Raw data collection
EVO [37] | Positioning, inertial, and virtual | It does not infer information | Raw data collection
CrossCheck [38] | Positioning, inertial, virtual, and ambient | Sleep, sociability, mobility, physical activity, device usage | Mental state prediction
SituMan [39] | Positioning and inertial | Daily routine situations (eg, working, studying) | It recognizes daily routine situations using fuzzy logic
EmotionSense [40] | Positioning, inertial, virtual, and ambient | Semantic locations, physical activity, sociability | Correlation analysis and mental state classification
StudentLife [41] | Positioning, inertial, virtual, and ambient | Sociability, mobility, physical activity, device usage | Correlation analysis
Undefined [42] | Positioning, inertial, and ambient | Physical activity, mobility, and sociability | Correlation analysis
AMoSS [43] | Positioning | Mobility | Mental state prediction
eB2 [44] | Positioning and virtual | Mobility | Mental state classification
EARS [45] | Positioning, inertial, virtual, and ambient | It does not infer information | Raw data collection
SleepGuard [46] | Inertial and ambient | Posture/position of body when sleeping | Mental state classification
Moment [47] | Virtual | It does not infer information | Mental state classification
TypeOfMood [48] | Virtual | It does not infer information | Mental state classification
RADAR-base [49] | Positioning, inertial, virtual, and ambient | It does not infer information | Raw data collection
SHADO [50] | Positioning, inertial, and ambient | Physical activity, mobility, sleep, sociability | Correlation analysis and mental state classification
InSTIL [51] | Positioning, inertial, virtual, and ambient | It does not infer information | Raw data collection
Lamp [52] | Positioning | Physical activity | Correlation analysis
SOLVD [53] | Positioning, inertial, virtual, and ambient | Mobility, sociability, context of daily life (eg, duration of sleep) | Correlation analysis
STDD [54] | Inertial, virtual, and ambient | Physical activity, mood, sociability, sleep | Mental state classification
Moodable [55] | Positioning, virtual, and ambient | Sociability and mobility | Mental state classification
Cogito Companion [56] | Positioning and virtual | Mood, stress level, and well-being | Mental state classification
Strength Within Me [57] | Virtual | Sleep, mobility, and sociability | Mental state prediction
EuStress [58] | Ambient | It does not infer information | Mental state prediction
Mood Triggers [59] | Positioning, inertial, virtual, and ambient | Mobility and sociability | Mental state prediction
Data Collector [60] | Positioning and inertial | Physical activity and mobility | Mental state classification

Data Set Characterization

Table 4 shows the 8 selected data sets in descending order by number of participants. Two of them, DS1 and DS7, contain sleep quality data derived from activity trackers such as Fitbit devices, smartwatches, and smartphones. Two data sets (DS3 and DS5) contain data collected from various sensors, which we refer to as multimodal; they were generated by the StudentLife [41] and Beiwe [15] sensing apps, respectively (Table 3).

Table 4. Summary of DPMH data sets.
Data set | Study | High-level information | Features | Device type/operating system | Number of participants | Study duration | Size
DS1a [61] | [62] | Sleep quality | Fitbit data (eg, heart rate, sleep duration, sleep time, wake time) | Fitbit watch | 482 | 3-11 nights | 392.32 KB
DS2 [63] | [64] | Activity | Actigraph data (time stamp, activity measurement from the actigraph watch) | Actigraph watch | 55 | Average 12.6 days | 4.3 MB
DS3 [65,66] | [41,67] | Multimodal (stress, sleep, mood, physical activity, sociability, well-being) | Self-report questionnaires, activity, audio, Bluetooth encounters, conversation, lightness, GPSb coordinates, phone charge, screen on/off, Wi-Fi IDs | Smartphone (Android) | 48 | 66 days | 230 MB/5 GB
DS4 [68] | [69,70] | Sociability | Self-reports, battery level, Bluetooth encounters | Smartphone (Android, iOS) | 32 | 4 weeks | 9.7 MB
DS5 [71] | [15] | Multimodal (mobility, sociability, sleep) | Self-report questionnaires, accelerometer, app logs, Bluetooth encounters, call logs, GPS coordinates, power state, Wi-Fi | Smartphone (Android, iOS) | 6 | 3 months | 776.7 MB
DS6 [72] | [73] | Mood, depression symptoms | Self-report questionnaires | Smartphone (Android, iOS) | 3 | 14 days | 2.7 MB
DS7 [74] | | Sleep quality | Start, end, sleep quality, time in bed, wake-up time, sleep notes, heart rate, number of steps | Wearable device and smartphone (iOS) | 1 | 4 years | 66.11 KB
DS8 [75] | | Mood | Self-reported mood | Mobile social network (Twitter app) | 1 | 2 years | 131 KB

aDS: data set.

bGPS: global positioning system.

Data set DS1 [61] presents sleep data (eg, total sleep time and sleep efficiency) obtained from Fitbit Charge HR activity trackers used by 482 individuals [62], while data set DS2 [63] includes actigraphic data collected from patients with unipolar and bipolar disorders and 32 healthy controls [64]. Data set DS3 [65,66] contains data gathered from different sensors and EMA questionnaires collected from smartphones of 48 undergraduate and graduate students over 66 days [41,67]. Data set DS4 [68] comprises Bluetooth device scan, battery level, and EMA data collected at regular intervals for 4 weeks [69,70], while data set DS5 [71] presents passive data (eg, GPS, Wi-Fi, Bluetooth, and accelerometer) and active data (EMA survey responses) collected over 3 months [15]. Data set DS6 [72] contains EMA assessments of depression symptoms using the 9-item Patient Health Questionnaire (PHQ-9) [73]. Data set DS7 [74] presents sleep data collected through the Sleep Cycle mobile app [76]. Finally, data set DS8 [75] presents values extracted from Twitter posts collected from a person using Exist [77] over 2 years.

Context Data Collected by DPMH Sensing Apps (SA-RQ1)

Sensing apps identified in this review collect context data from mobile and wearable devices to support DPMH. At a high level, the sensors that measure context data can be seen as physical and virtual sensors [78], which generate a diversified set of behavioral data. Physical sensors are hardware components embedded in or connected to devices that are responsible for collecting context data. Some examples are accelerometers to measure user activity, light sensors to measure ambient light levels, and GPS to collect the user’s locations. Virtual sensors are software components capable of recording interactions of individuals with devices, or of combining a number of physical sensors (or other virtual ones) to construct a higher-level feature. Examples of such sensors are social interaction sensors, which may use Bluetooth encounters (ie, co-location information between individuals or places), Wi-Fi network data, and sound data to infer social activity; and user–device interaction sensors, which measure user interactions with devices (eg, call logs, SMS text messages, app usage, screen on/off).
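A virtual sensor of the kind described above can be sketched as a function that fuses several lower-level readings into one higher-level feature. Here a toy sociability score combines Bluetooth encounters, call minutes, and ambient sound level; the thresholds and weights are invented for illustration and have no empirical basis.

```python
def sociability_score(bluetooth_encounters, call_minutes, ambient_db):
    """Toy virtual sensor: fuse three lower-level signals into a 0-1
    sociability estimate. Each component is clipped to [0, 1] and the
    result is their weighted average (weights are arbitrary)."""
    encounters = min(bluetooth_encounters / 10, 1.0)  # ~10 encounters/day saturates
    calls = min(call_minutes / 30, 1.0)               # ~30 call-minutes saturates
    voice = 1.0 if ambient_db > 55 else 0.0           # crude conversation proxy
    return 0.4 * encounters + 0.4 * calls + 0.2 * voice

score = sociability_score(bluetooth_encounters=5, call_minutes=15, ambient_db=60)
```

Real systems would learn such weights from labeled data rather than hand-tuning them, but the structure (physical readings in, one higher-level feature out) is the same.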

Figure 4 presents a heat map of the combinations of context data sources across the 31 sensing apps, showing the sensors most used in DPMH solutions. In this analysis, we investigated how frequently each pair of context data sources is combined, highlighting the main sets of sensors explored by the sensing apps. For example, Bluetooth encounters are often combined with the accelerometer (n=10), battery level (n=8), calls (n=10), GPS (n=10), screen on/off (n=7), SMS text messages (n=9), and Wi-Fi (n=8), while app usage logs are often combined with the accelerometer (n=7), calls (n=9), GPS (n=8), and SMS text messages (n=8). By contrast, step count (Fitbit), cell tower ID, and the gyroscope are combined with other sources less often. These combinations reflect an interest in fusing data sources to derive multiple types of high-level information. For example, sociability is recognized by combining call logs with Bluetooth encounters (n=10) and SMS text messages (n=17), and GPS is often combined with Wi-Fi (n=10) to capture mobility. Likewise, monitoring several types of high-level information within the same app leads to further combinations: GPS is combined with call logs (n=20), the accelerometer (n=17), and screen on/off (n=10), reflecting an interest in monitoring sociability, physical activity, and device usage patterns, respectively.
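The counts behind a heat map like Figure 4 can be reproduced by tallying pairwise co-occurrences of data sources across apps. The per-app source lists below are invented for illustration, not the review's actual data.

```python
from collections import Counter
from itertools import combinations

# Illustrative (not the review's actual) per-app context data sources.
apps = {
    "AppA": ["gps", "accelerometer", "calls", "bluetooth"],
    "AppB": ["gps", "calls", "sms"],
    "AppC": ["gps", "accelerometer", "bluetooth"],
}

def cooccurrence(app_sources):
    """Count how many apps use each unordered pair of sources."""
    pairs = Counter()
    for sources in app_sources.values():
        # Sorting gives a canonical key for each unordered pair.
        pairs.update(combinations(sorted(set(sources)), 2))
    return pairs

heat = cooccurrence(apps)
```

Rendering the counter as a symmetric matrix (sources on both axes) yields exactly the kind of heat map shown in the figure.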

Figure 4. Context data sources used in the reviewed studies. GPS: global positioning system.

High-Level Information Identified by Sensing Apps (SA-RQ2)

From the context data collected by sensing apps, researchers can extract high-level information representing different types of situations (eg, sociability, mobility). Table 3 presents the situations of interest identified from context data. Sensing apps aimed to identify information related to the physical and environmental aspects of the monitored individuals, such as mobility patterns [38] (eg, places visited, total distance traveled, time spent in locations), physical activities (activity type and duration), daily routine situations (eg, working, studying), and environmental context (eg, ambient temperature).

Figure 5 shows the types of high-level information generated by the sensing apps. The 3 types of information that stand out are human behavioral patterns related to mobility, sociability, and physical activity (n≥10). Information about the individual’s condition was also derived, such as mood and sleep quality.

Researchers also explored information about device usage, derived from logs such as calls, SMS text messages, screen on/off events, and app usage. In general, studies have been able to build apps that achieve promising results on performance metrics (eg, accuracy, sensitivity, specificity) when identifying high-level information useful for mental health professionals. By contrast, some apps do not transform context data into high-level information (ie, they focus only on raw data collection); these are not shown in Figure 5.

Figure 5. High-level information summary.

Support for Monitoring Mental Health (SA-RQ3)

The identified sensing apps used high-level information to provide a variety of mental health services. Table 3 shows the types of analyses performed based on the identified high-level information. Some apps infer daily routine situations and send recommendations in real time [58], thus aiming to provide tools that improve the services of health professionals. The most common approaches to support mental health monitoring were correlation, classification, and prediction. Correlation analyses associate features extracted from high-level information with mental states of the monitored individual; that is, they aim to find evidence that identified behaviors correlate significantly with psychological well-being [79]. Researchers also used identified behaviors to design ML models capable of classifying and predicting mental states [32,80], which can serve as decision support tools for health professionals. Lastly, some studies [31,81,82] did not report additional analyses but concentrated on describing the features of their sensing apps to facilitate DPMH research.
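The correlation approach can be sketched concretely. The example below computes a Pearson correlation between a behavioral feature and a symptom score; the feature (daily hours spent at home) and the score values are entirely hypothetical and serve only to illustrate the analysis shape:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between a behavioral feature and a symptom score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: daily time spent at home (hours) vs PHQ-9 totals.
time_home = [4, 6, 8, 10, 12]
phq9 = [3, 5, 9, 12, 15]
r = pearson_r(time_home, phq9)
```

In practice, studies report significance tests alongside the coefficient; this sketch shows only the association measure itself.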

Figure 6 shows the mental states/disorders studied by DPMH research. Apps classified as “Mental states in general” did not focus on a specific mental disorder; instead, they are generic and can be used in studies of different mental health disorders. We found 14 articles with a focus on individuals with depression. Other mental states/disorders are schizophrenia, mood, suicidal ideation, stress, loneliness, anxiety, and psychotic symptoms, each with between 1 and 3 studies returned in our search. We identified 11 articles that did not specifically address a particular mental disorder in their studies.

Figure 6. Mental states/disorders targeted by sensing apps.

Features Available in Data Sets (DS-RQ1)

The selected data sets have several types of features extracted from context data collected by sensing apps. These features model various aspects of human behavior that can be applied to the development of new tools for monitoring and intervention in mental health. Table 4 presents the features available in the selected data sets. Data sets DS1 and DS7 contain features related to sleep, providing information such as sleep start and end, sleep quality, time in bed, wake-up time, and sleep notes. Data sets DS4 and DS8 have features related to the social aspect, such as self-reports of social interactions and Bluetooth encounter data, while data sets DS6 and DS8 provide actigraph data and self-reports, respectively. Data sets DS4 and DS5 have features capable of modeling more than 1 human behavior (ie, they are multimodal), thus providing data from different sources. These sources provide multimodal context data that can be fused to generate meaningful high-level information [10]. Moreover, multimodal data sets can support DPMH research on different aspects of interest to professionals, such as a patient’s mobility and sociability.

Possible High-Level Information Derived From Data Sets (DS-RQ2)

The selected data sets have features capable of modeling different types of human behavior. Therefore, to understand the potential of applying these data to DPMH, we identified the high-level information that can be derived from these data sets based on the available context data. Table 4 presents the inferred high-level information. Explicitly, these data sets can model the situations listed in Textbox 2.

Additionally, some data sets contain high-level information such as mood, mental status, and mental disorder symptoms. These types of information are self-reported by participants using questionnaires (eg, PHQ-9) and EMA solutions through smart devices.
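As a concrete reference for one of these self-report instruments, the PHQ-9 consists of 9 items each scored 0-3, and the total maps to standard severity bands. A minimal scorer might look as follows:

```python
def phq9_severity(item_scores):
    """Map 9 PHQ-9 item scores (each 0-3) to the standard severity band.

    Standard cut points: 0-4 minimal, 5-9 mild, 10-14 moderate,
    15-19 moderately severe, 20-27 severe.
    """
    if len(item_scores) != 9 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 requires 9 items scored 0-3")
    total = sum(item_scores)
    for cutoff, label in ((4, "minimal"), (9, "mild"), (14, "moderate"),
                          (19, "moderately severe")):
        if total <= cutoff:
            return total, label
    return total, "severe"
```

Such scores are the self-reported "ground truth" that many reviewed studies attempt to correlate with, or predict from, passively sensed features.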



Sociability

This can be quantified using context data that characterize the social relationships of participants, such as interactions on online social networks and face-to-face and device-mediated interactions [83]. These data sets contain context data such as posts on social networks, Bluetooth encounters, global positioning system (GPS) coordinates, or conversational activity inferred from microphone signals.

Physical activity

This is routinely measured using accelerometer and GPS data, resulting in either a log of user physical activities or an aggregate measure of energy expenditure.


Sleep

This is mostly measured in terms of the sleep quality and sleep duration of the participants. In general, these data sets have features such as sleep quality, total sleep time, time in bed, and wake-up time inferred from context data such as heart rate, screen on/off logs, and ambient light.


Multimodal

These data sets comprise several types of context data (eg, accelerometer, ambient light, battery level, Bluetooth, GPS, screen on/off, questionnaires [9-item Patient Health Questionnaire]), which allow characterizing more than 1 behavior of the participants, such as sociability, mobility, and physical activity.

Textbox 2. Situations modeled by data sets.
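The sleep modeling described above can be approximated, in its crudest form, from screen on/off logs alone: the longest overnight screen-off interval serves as a proxy for time asleep. This is an illustrative sketch only (timestamps are hours on a single continuous clock; real systems additionally fuse ambient light, movement, and heart-rate data):

```python
def longest_screen_off_hours(events):
    """Longest gap between a screen-off event and the next screen-on event.

    `events` is a time-ordered list of (hours, state) tuples,
    where state is "on" or "off". Returns the gap length in hours.
    """
    best, off_since = 0.0, None
    for t, state in events:
        if state == "off" and off_since is None:
            off_since = t
        elif state == "on" and off_since is not None:
            best = max(best, t - off_since)
            off_since = None
    return best

# Hypothetical night: brief check at 20:30, then screen off 23:00-07:00.
events = [(20.0, "off"), (20.5, "on"), (23.0, "off"), (31.0, "on")]
hours_asleep = longest_screen_off_hours(events)
```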

Principal Findings

Our review shows that there are features measurable on smart devices that can act as proxies for mental status and well-being, although it should be noted that the combined evidence for high-quality features for mental states remains limited. Researchers have conducted several types of analysis on the collected data. Overall, we recognize a trend of designing features from the collected data (Figure 7) to train ML models capable of classifying mental states/disorders (n=11) and predicting future mental states/disorders (n=6). We also note a substantial effort in analyzing correlations between features designed from the collected data and mental states/disorders (n=6). This type of analysis aims to find evidence of the viability and usefulness of DPMH for clinical practice. Furthermore, some apps only collect raw context data (n=9) to be analyzed subsequently, and 1 app (SituMan [39]) focused on the recognition of daily routine situations.
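The classification task can be illustrated with a deliberately simple nearest-centroid classifier over hypothetical behavioral features (daily distance traveled in km, call count, sleep hours). This stands in for, and is far simpler than, the ML models used in the reviewed studies:

```python
def nearest_centroid_fit(samples):
    """Compute one centroid per label.

    `samples` maps a label to a list of feature vectors
    (hypothetical: [distance_km, call_count, sleep_hours]).
    """
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n
                            for i in range(len(vecs[0]))]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign x to the label with the closest centroid (squared Euclidean)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Hypothetical training data: two behavioral profiles.
train = {
    "low risk":  [[12.0, 9, 8.0], [10.0, 7, 7.5]],
    "high risk": [[1.5, 1, 4.5], [2.0, 2, 5.0]],
}
model = nearest_centroid_fit(train)
```

Features would normally be standardized before distance computation; that step is omitted here for brevity.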

Figure 7. Number of published studies by year and types of analysis.

The literature mostly reports on the measurement of mobility, sociability, sleep, physical activity, and mood. Mobility represents high-level information derived from the movement sequence of individuals. These patterns are identified by processing GPS and Wi-Fi samples, which allow for the recognition of mobility traces. Sociability is measured using context data sources such as call logs, SMS text messages, Bluetooth encounters, and microphone data. These pieces of data allow identifying physical and virtual social interactions. Sleep information is measured by contextual data fusion such as ambient light, movement activity, screen on/off, and ambient sound. In addition, researchers have used Fitbit data to recognize sleep quality. Physical activity is recognized using data from inertial sensors (eg, accelerometer, magnetometer, gyroscope), making it possible to classify different types of activities such as walking, running, and stationary. Finally, mood has been recognized using different context data sources, such as accelerometer and heart rate monitor of wearable devices, combined with self-reports.
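Physical activity recognition from inertial data often starts from simple statistics of the accelerometer magnitude; higher variability suggests more vigorous movement. The thresholds below are illustrative only and are not taken from the reviewed studies:

```python
from math import sqrt

def classify_activity(window):
    """Crude activity label from the standard deviation of accelerometer
    magnitude over a window of (x, y, z) samples in g units.

    Thresholds are illustrative; real classifiers use many more features.
    """
    mags = [sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    sd = sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    if sd < 0.05:
        return "stationary"
    if sd < 0.4:
        return "walking"
    return "running"
```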

The different ways in which these features are inferred and reported make it impossible to compare results across studies or to combine data sets to achieve greater statistical power. For this reason, we believe the research community would benefit from a clear standard on the measurement of these behaviors. The data sets and studies identified in this review provide an interesting starting point for such consensus building. In particular, the StudentLife data set [41] has been explored by many studies that propose solutions capable of supporting mental health professionals. Different solutions have used this data set to detect human behavioral patterns and perform association, classification, and prediction of mental states. For example, by using the StudentLife data set, Saeb et al [80] analyzed the correlation between mobility patterns identified from GPS samples and depressive symptoms reported by students. Farhan et al [84] designed a multiview biclustering model using various features (accelerometer, screen state, light, conversation data, and GPS) to identify clusters representing behavior subgroups. Morshed et al [81] developed a computational method to predict mood stability from behavioral features (eg, frequency of conversation, number of location changes, and duration of different physical activities) extracted from accelerometer, microphone, GPS, and Wi-Fi. Recently, de Moura et al [82,83] developed a solution capable of detecting sociability patterns and routine changes in social event streams (ie, conversation events).

A related issue is the predominance of solutions developed for Android OS, for which all identified apps have a version. This is expected, as Android provides an open development platform, unlike iOS, with significantly more flexibility to gather the data of interest. The divergent approaches to sensing on iOS and Android create further issues for standardization and for collecting comparable results across large cohorts, which invariably include both Android and iOS users.

Our review further shows that studies use a mix of smartphone-based sensing and wearable device sensing. The latter may be useful where smartphones do not provide quality data (eg, for heart rate, physical activity during sport, or sleep quality), but does pose an issue in terms of data interpretability given the variety of wearable devices available on the market, each of which uses different algorithms. The interpretability of the resulting information is further confounded because some of the most popular devices use proprietary algorithms to measure the behaviors of interest or provide only aggregate data. Standards would need to consider the commercial pressure on device manufacturers that results in algorithms remaining proprietary, thus making it difficult to compare information from different devices.

Regarding the year of publication of the studies, most articles (n=9) have been published in the last 3 years (Figure 7). These data reveal a growing trend in the number of solutions proposed for DPMH.

Research Opportunities

From this review, we are able to identify different research opportunities for DPMH sensing apps, which are open issues for further investigation.

Wearable-Based Solutions

Raw data have been generated mainly by smartphones; few sensing apps have taken advantage of the potential of wearable devices to collect data from monitored individuals ubiquitously. Wearables can provide a wealth of useful information about human behavior [79]. For example, wearable devices such as smartwatches and wristbands can collect users’ context data even when they are performing intense physical activities such as running and swimming. Moreover, because these devices are smaller, and thus less obtrusive to the user, they can enrich physiological data collection [85].

Explainable Models With a Focus on Human Behavior

DPMH sensing apps that perform data analysis to design intelligent models have used traditional ML algorithms in different tasks [20]. These models sometimes lack transparency, which is not helpful for mental health professionals because the evidence in decision support tools is required to be explainable. Although traditional ML models are very useful for generating valuable information that supports mental health treatment, an explanation of how they generate their outputs is desirable. This is fundamental because professionals need to interpret the patient’s behavior to perform assessments and interventions. Therefore, explainable models [86] seem to be a more suitable way to apply machine learning and deep learning techniques to DPMH.

Real-Time Inference Engines

Most sensing apps perform offline data analysis after collecting raw data (eg, to create ML models, to correlate self-reports with context data). Therefore, few solutions provide inference engines that produce high-level information in real time. These inferred situations of interest provide better insight into the patient’s behavior and allow interventions to adapt to this information in real time. This is crucial in extreme cases, such as signs of suicidal ideation, and generally useful where the goal is to implement ecological momentary interventions or just-in-time interventions that rely on just-in-time information on user status. In this sense, both rule-based engines (eg, fuzzy logic [39], complex event processing [82]) and ML-based approaches [20] are promising tools to process context data efficiently and infer high-level information in DPMH.
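A toy rule-based engine of the kind mentioned above can be sketched in a few lines. The rule, event kinds, and thresholds here are illustrative assumptions, not taken from any reviewed system: flag low sociability when too few recent social events (calls, SMS text messages, Bluetooth encounters) have been observed.

```python
def sociability_alert(events, window_hours=24, min_interactions=2):
    """Flag low sociability from a stream of social events.

    `events` is a list of (hours_ago, kind) tuples. Returns True when fewer
    than `min_interactions` social events fall inside the time window.
    Thresholds and event kinds are illustrative only.
    """
    social_kinds = {"call", "sms", "bt_encounter"}
    recent = [kind for hours_ago, kind in events
              if hours_ago <= window_hours and kind in social_kinds]
    return len(recent) < min_interactions
```

A real-time engine would evaluate such rules incrementally as events arrive, rather than scanning a list, but the rule semantics are the same.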

Extensible Solutions

Most sensing apps cannot be customized for use in other research. Although general-purpose (eg, Sensus [35]) and reusable (eg, Beiwe [15] and SituMan [39]) apps can be applied to other research, none of the solutions identified in this review is extensible. Frameworks, middleware, and libraries are examples of extensible solutions that provide services and reusable code and are designed to be modified or consumed by apps. They would be very useful for allowing DPMH researchers to extend a solution’s capabilities to meet different requirements, thereby reducing the cost and time of research in specific scenarios.

By analyzing the results of the public data set review, we clearly identified a scarcity of data sets (n=8). This low number may be related to the privacy of information collected from study participants. DPMH researchers may be concerned that collected data, once made public, could enable participants to be identified. DPMH data sets may contain sensitive personal information about mental health treatment or monitoring; hence, ethical issues arise [87]. Moreover, the ethics committees where studies are registered may restrict sharing collected data with the public. This barrier can greatly hinder the development of new research because new ML models and engines for inferring high-level information cannot be designed and trained. Differential privacy seems to be a promising tool to overcome this barrier [88].
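To make the differential privacy idea concrete, the classic Laplace mechanism releases an aggregate statistic with calibrated noise. The sketch below applies it to a counting query (sensitivity 1), sampling Laplace noise via the inverse-CDF transform; it is a minimal illustration, not a production mechanism:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count, epsilon, rng):
    """Release a counting query with epsilon-differential privacy.

    A count changes by at most 1 when one participant is added or removed
    (sensitivity 1), so the Laplace scale is 1/epsilon.
    """
    return true_count + laplace_noise(1.0 / epsilon, rng)
```

Smaller epsilon means stronger privacy and larger noise; publishing only such noised aggregates is one way to share DPMH statistics without exposing individual participants.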

Another open issue is the standardization of data sets. Currently, there is no standard for data representation (eg, data type, precision, file format) and collection (eg, frequency, duration, presence of time stamps). As a result, data sets cannot be combined, nor can we easily compare the performance of different approaches or algorithms. Proposals for standardization would be a major contribution to the DPMH field.
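One possible shape for such a standard is a common per-sample record schema; the fields and example values below are our own illustration, not an existing proposal:

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class SensorRecord:
    """One hypothetical common schema for DPMH context data: every sample
    carries a participant ID, an ISO 8601 UTC timestamp, the sensor type,
    and a payload whose units are stated explicitly in its keys."""
    participant_id: str
    timestamp_utc: str   # eg, "2022-02-01T08:30:00Z"
    sensor: str          # eg, "gps", "accelerometer", "screen"
    value: dict          # sensor-specific payload with explicit units

# Example record with units encoded in the field names.
rec = SensorRecord("p01", "2022-02-01T08:30:00Z", "gps",
                   {"lat_deg": -2.53, "lon_deg": -44.30, "accuracy_m": 12.0})
```

Agreeing on even this minimal structure (stable identifiers, time stamps, explicit units) would already make data sets combinable and algorithm comparisons meaningful.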

It is beneficial for such standardization that there are efforts to design general-purpose sensing apps. We propose that the research community should endeavor to work on such apps collaboratively and make these apps available on a not-for-profit basis. This could not only result in an efficient use of commonly agreed standards, but would also reduce the wasteful effort of developing custom sensing apps. Such initiatives, however, are difficult to start and maintain, as has been shown by brave endeavors such as Beiwe [15], Funf [31], Purple Robot [33], and Sensus [35], which show that keeping such platforms up-to-date is an expensive process that can only be warranted if continued use guarantees continued resources for maintenance and further development.

Notwithstanding the benefits we believe would be derived from such standards, it should be acknowledged that self-reports will likely remain an important modality to improve the quality of automatically measured behaviors, or to measure behaviors or states that cannot be measured automatically. An opportunity that is not widely leveraged is using the automatically measured behaviors to trigger such self-reports. This would allow self-reports to be more appropriate to the user’s context, to further inform automated measures when sensor measurements do not provide a sufficiently clear picture, and to be less intrusive. Software providing such functionality has been proposed previously [39,89], and we believe it should be part of standardized tools for capturing DPMH data.
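The triggering logic can be sketched as a simple decision function over inferred behaviors. The rule set here is an illustrative assumption (the `confidence` and `routine_change` fields are hypothetical names, not from any cited system): prompt only when the sensed picture is ambiguous or indicates a notable change.

```python
def should_trigger_ema(inferred):
    """Decide whether to prompt a self-report based on sensed behavior.

    `inferred` is a dict of hypothetical engine outputs, eg
    {"confidence": 0.4} or {"confidence": 0.9, "routine_change": True}.
    Rules and thresholds are illustrative only.
    """
    if inferred.get("confidence", 1.0) < 0.6:
        return True   # sensors unclear: ask the user directly
    if inferred.get("routine_change", False):
        return True   # notable behavior change: confirm via self-report
    return False      # clear, stable picture: stay unobtrusive
```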

Finally, the data sets comprise few study participants. Researchers may find it difficult to attract participants and, at the same time, to retain them until the end of the study. The low number of participants can potentially compromise the use and validation of some data contained in the data sets, and this directly affects the reuse of these data sets in other DPMH studies, which require a large number of participants for validation.

Limitations and Future Work

A first limitation is that data sets and articles published in languages other than English were not included in this review. Second, the search for sensing apps was restricted to 8 digital libraries, although we searched 20 sources with numerous public data sets. Finally, our review is limited by studies reported in the published literature and data sets available to be downloaded.

In addition, we did not focus on the security and privacy aspects of DPMH apps in this review. Our plans therefore include a systematic analysis of the security and privacy features provided by DPMH apps. As this is an extremely sensitive aspect of the development of new functionalities for current and new DPMH mobile systems, a dedicated characterization with deeper analysis is required, and we plan to dedicate efforts to this topic in future work.


Conclusions

In this article, we described a systematic review that resulted in a deep analysis of 31 sensing apps and 8 public data sets for DPMH. Results showed a growth in DPMH sensing apps in recent years, as opposed to a scarcity of public data sets. We answered the research questions, showing, for example, the most used context data and their respective sources, the different types of high-level information generated by the analysis of the collected data, the features available in data sets, and the mental disorders on which researchers have focused. From the results, we were able to identify trends and open issues that hinder the development of research in the DPMH area. Consequently, considering the growth in proposals for DPMH sensing apps and the impact of the COVID-19 outbreak on global mental health, we believe that DPMH offers great potential for future research, not only to overcome the open issues discussed in this review but also to reach the maturity needed for application in clinical settings.


Acknowledgments

This research was funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES - Finance code 001); by the Fundação de Amparo à Pesquisa e ao Desenvolvimento Científico e Tecnológico do Maranhão - FAPEMA (via Grants UNIVERSAL-00745/19, BD-01075/20, BEPP-01608/21, BEPP-01783/21, BEPP-01768/21), the state of Maranhão research agency; by the FCT/MCTES through national funds and when applicable co-funded European Union funds under the Project UIDB/50008/2020; and by the Brazilian National Council for Scientific and Technological Development - CNPq, via Grants 313036/2020-9 and 305133/2019-5.

Conflicts of Interest

None declared.

Multimedia Appendix 1

List of 20 data set repositories.

XLSX File (Microsoft Excel File), 102 KB

Multimedia Appendix 2

Items used in the data extraction process.

DOCX File , 8 KB

Multimedia Appendix 3

Reviewed sensing apps.

DOCX File , 10 KB

  1. Ritchie H, Roser M. Mental Health. Our World in Data.   URL: [accessed 2021-06-15]
  2. World Health Organization. Suicide Data.   URL: [accessed 2021-06-15]
  3. World Health Organization. Mental Disorders.   URL: [accessed 2021-06-15]
  4. Health at a Glance: Europe 2018: State of Health in the EU Cycle.   URL: https:/​/www.​​social-issues-migration-health/​health-at-a-glance-europe-2018_health_glance_eur-2018-en [accessed 2021-06-15]
  5. Mack DL, DaSilva AW, Rogers C, Hedlund E, Murphy EI, Vojdanovski V, et al. Mental Health and Behavior of College Students During the COVID-19 Pandemic: Longitudinal Mobile Smartphone and Ecological Momentary Assessment Study, Part II. J Med Internet Res 2021 Jun 04;23(6):e28892 [FREE Full text] [CrossRef] [Medline]
  6. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol 2008 Apr 01;4(1):1-32. [CrossRef] [Medline]
  7. Kapur S, Phillips AG, Insel TR. Why has it taken so long for biological psychiatry to develop clinical tests and what to do about it? Mol Psychiatry 2012 Dec 7;17(12):1174-1179. [CrossRef] [Medline]
  8. Teles A, Rodrigues I, Viana D, Silva F, Coutinho L, Endler M. Mobile Mental Health: A Review of Applications for Depression Assistance. 2019 Presented at: 2019 IEEE 32nd International Symposium on Computer-Based Medical Systems (CBMS); June 5-7, 2019; Cordoba, Spain p. 708-713. [CrossRef]
  9. Bauer M, Glenn T, Geddes J, Gitlin M, Grof P, Kessing LV, et al. Smartphones in mental health: a critical review of background issues, current status and future concerns. Int J Bipolar Disord 2020 Jan 10;8(1):2-19 [FREE Full text] [CrossRef] [Medline]
  10. Garcia-Ceja E, Riegler M, Nordgreen T, Jakobsen P, Oedegaard KJ, Tørresen J. Mental health monitoring with multimodal sensing and machine learning: A survey. Pervasive and Mobile Computing 2018 Dec;51:1-26. [CrossRef]
  11. Wind TR, Rijkeboer M, Andersson G, Riper H. The COVID-19 pandemic: The 'black swan' for mental health care and a turning point for e-health. Internet Interv 2020 Apr;20:100317 [FREE Full text] [CrossRef] [Medline]
  12. Torous J, Jän Myrick K, Rauseo-Ricupero N, Firth J. Digital Mental Health and COVID-19: Using Technology Today to Accelerate the Curve on Access and Quality Tomorrow. JMIR Ment Health 2020 Mar 26;7(3):e18848 [FREE Full text] [CrossRef] [Medline]
  13. Jain SH, Powers BW, Hawkins JB, Brownstein JS. The digital phenotype. Nat Biotechnol 2015 May;33(5):462-463. [CrossRef] [Medline]
  14. Liang Y, Zheng X, Zeng DD. A survey on big data-driven digital phenotyping of mental health. Information Fusion 2019 Dec;52:290-307. [CrossRef]
  15. Torous J, Kiang MV, Lorme J, Onnela J. New Tools for New Research in Psychiatry: A Scalable and Customizable Platform to Empower Data Driven Smartphone Research. JMIR Ment Health 2016 May 05;3(2):e16 [FREE Full text] [CrossRef] [Medline]
  16. Teles A, Barros F, Rodrigues I, Barbosa A, Silva F, Coutinho L. Internet of things applied to mental health: Concepts, applications, and perspectives. In: IoT and ICT for Healthcare Applications. Cham, Switzerland: Springer International Publishing; 2020:33.
  17. Tsai C, Lai C, Chao H, Vasilakos AV. Big data analytics: a survey. Journal of Big Data 2015 Oct 1;2(1):1-32. [CrossRef]
  18. Lan K, Wang D, Fong S, Liu L, Wong KKL, Dey N. A Survey of Data Mining and Deep Learning in Bioinformatics. J Med Syst 2018 Jun 28;42(8):139-120. [CrossRef] [Medline]
  19. Insel TR. Digital Phenotyping: Technology for a New Science of Behavior. JAMA 2017 Oct 03;318(13):1215-1216. [CrossRef] [Medline]
  20. Benoit J, Onyeaka H, Keshavan M, Torous J. Systematic Review of Digital Phenotyping and Machine Learning in Psychosis Spectrum Illnesses. Harv Rev Psychiatry 2020 Aug 12;28(5):296-304. [CrossRef]
  21. Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform 2018 Jan;77:120-132 [FREE Full text] [CrossRef] [Medline]
  22. De-La-Hoz-Franco E, Ariza-Colpas P, Quero JM, Espinilla M. Sensor-Based Datasets for Human Activity Recognition – A Systematic Review of Literature. IEEE Access 2018;6:59192-59210. [CrossRef]
  23. Trifan A, Oliveira M, Oliveira JL. Passive Sensing of Health Outcomes Through Smartphones: Systematic Review of Current Solutions and Possible Limitations. JMIR Mhealth Uhealth 2019 Aug 23;7(8):e12649 [FREE Full text] [CrossRef] [Medline]
  24. Seppälä J, De Vita I, Jämsä T, Miettunen J, Isohanni M, Rubinstein K, M-RESIST Group, et al. Mobile Phone and Wearable Sensor-Based mHealth Approaches for Psychiatric Disorders and Symptoms: Systematic Review. JMIR Ment Health 2019 Feb 20;6(2):e9819 [FREE Full text] [CrossRef] [Medline]
  25. Antosik-Wójcińska AZ, Dominiak M, Chojnacka M, Kaczmarek-Majer K, Opara KR, Radziszewska W, Święcicki. Smartphone as a monitoring tool for bipolar disorder: a systematic review including data analysis, machine learning algorithms and predictive modelling. Int J Med Inform 2020 Jun;138:104131. [CrossRef] [Medline]
  26. Kitchenham B, Charters S. EBSE Technical Report. Guidelines for Performing Systematic Literature Reviews in Software Engineering. 2007.   URL: [accessed 2021-06-15]
  27. Perform Systematic Literature Reviews.   URL: [accessed 2021-06-15]
  28. Wohlin C. Guidelines for snowballing in systematic literature studies and a replication in software engineering. In: Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering - EASE '14. New York, NY: ACM Press; 2014 Presented at: 18th International Conference on Evaluation and Assessment in Software Engineering; May 13-14, 2014; London, UK. [CrossRef]
  29. Viera A, Garrett J. Understanding interobserver agreement: the kappa statistic. Fam Med 2005;37(5):360-363.
  30. Palaghias N, Hoseinitabatabaei SA, Nati M, Gluhak A, Moessner K. A Survey on Mobile Social Signal Processing. ACM Comput. Surv 2016 May 02;48(4):1-52. [CrossRef]
  31. Aharony N, Pan W, Ip C, Khayal I, Pentland A. Social fMRI: Investigating and shaping social mechanisms in the real world. Pervasive and Mobile Computing 2011 Dec;7(6):643-659. [CrossRef]
  32. Burns MN, Begale M, Duffecy J, Gergle D, Karr CJ, Giangrande E, et al. Harnessing context sensing to develop a mobile intervention for depression. J Med Internet Res 2011 Aug 12;13(3):e55 [FREE Full text] [CrossRef] [Medline]
  33. Schueller SM, Begale M, Penedo FJ, Mohr DC. Purple: a modular system for developing and deploying behavioral intervention technologies. J Med Internet Res 2014 Jul 30;16(7):e181 [FREE Full text] [CrossRef] [Medline]
  34. Ferreira D, Kostakos V, Dey AK. AWARE: Mobile Context Instrumentation Framework. Front. ICT 2015 Apr 20;2:1-9. [CrossRef]
  35. Xiong H, Huang Y, Barnes L, Gerber M. Sensus: A Cross-Platform, General-Purpose System for Mobile Crowdsensing in Human-Subject Studies. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, NY: ACM Press; 2016 Presented at: 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 12-16, 2016; Heidelberg, Germany p. 415-426. [CrossRef]
  36. Wahle F, Kowatsch T, Fleisch E, Rufer M, Weidt S. Mobile Sensing and Support for People With Depression: A Pilot Trial in the Wild. JMIR Mhealth Uhealth 2016 Sep 21;4(3):e111 [FREE Full text] [CrossRef] [Medline]
  37. Anguera JA, Jordan JT, Castaneda D, Gazzaley A, Areán PA. Conducting a fully mobile and randomised clinical trial for depression: access, engagement and expense. BMJ Innov 2016 Jan 04;2(1):14-21 [FREE Full text] [CrossRef] [Medline]
  38. Wang R, Scherer E, Tseng V, Ben-Zeev D, Aung M, Abdullah S. CrossCheck: Toward passive sensing and detection of mental health changes in people with schizophrenia. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp '16. New York, NY: ACM Press; 2016 Presented at: 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 12-16, 2016; New York, NY p. 886-897. [CrossRef]
  39. Soares Teles A, Rocha A, José da Silva E Silva F, Correia Lopes J, O'Sullivan D, Van de Ven P, et al. Enriching Mental Health Mobile Assessment and Intervention with Situation Awareness. Sensors (Basel) 2017 Jan 10;17(1):127 [FREE Full text] [CrossRef] [Medline]
  40. Servia-Rodríguez S, Rachuri K, Mascolo C, Rentfrow P, Lathia N, Sandstrom G. Mobile Sensing at the Service of Mental Well-being: a Large-scale Longitudinal Study. In: Proceedings of the 26th International Conference on World Wide Web. New York, NY: ACM Press; 2017 Presented at: 26th International Conference on World Wide Web; April 3-7, 2017; Perth, Australia p. 103-112. [CrossRef]
  41. Wang R, Chen F, Chen Z, Li T, Harari G, Tignor S. StudentLife: Using smartphones to assess mental health and academic performance of college students. In: Mobile Health. Cham, Switzerland: Springer International Publishing; 2017:7.
  42. Ben-Zeev D, Scherer EA, Brian RM, Mistler LA, Campbell AT, Wang R. Use of Multimodal Technology to Identify Digital Correlates of Violence Among Inpatients With Serious Mental Illness: A Pilot Study. Psychiatr Serv 2017 Oct 01;68(10):1088-1092 [FREE Full text] [CrossRef] [Medline]
  43. Palmius N, Saunders KEA, Carr O, Geddes JR, Goodwin GM, De Vos M. Group-Personalized Regression Models for Predicting Mental Health Scores From Objective Mobile Phone Data Streams: Observational Study. J Med Internet Res 2018 Oct 22;20(10):e10194 [FREE Full text] [CrossRef] [Medline]
  44. Berrouiguet S, Ramírez D, Barrigón ML, Moreno-Muñoz P, Carmona Camacho R, Baca-García E, et al. Combining Continuous Smartphone Native Sensors Data Capture and Unsupervised Data Mining Techniques for Behavioral Changes Detection: A Case Series of the Evidence-Based Behavior (eB2) Study. JMIR Mhealth Uhealth 2018 Dec 10;6(12):e197 [FREE Full text] [CrossRef] [Medline]
  45. Lind MN, Byrne ML, Wicks G, Smidt AM, Allen NB. The Effortless Assessment of Risk States (EARS) Tool: An Interpersonal Approach to Mobile Sensing. JMIR Ment Health 2018 Aug 28;5(3):e10334 [FREE Full text] [CrossRef] [Medline]
  46. Chang L, Lu J, Wang J, Chen X, Fang D, Tang Z, et al. SleepGuard. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol 2018 Sep 18;2(3):1-34. [CrossRef]
  47. Elhai JD, Tiamiyu MF, Weeks JW, Levine JC, Picard KJ, Hall BJ. Depression and emotion regulation predict objective smartphone use measured over one week. Personality and Individual Differences 2018 Oct;133:21-28. [CrossRef]
  48. Mastoras R, Iakovakis D, Hadjidimitriou S, Charisis V, Kassie S, Alsaadi T, et al. Touchscreen typing pattern analysis for remote detection of the depressive tendency. Sci Rep 2019 Sep 16;9(1):13414 [FREE Full text] [CrossRef] [Medline]
  49. Ranjan Y, Rashid Z, Stewart C, Conde P, Begale M, Verbeeck D, Hyve, RADAR-CNS Consortium. RADAR-Base: Open Source Mobile Health Platform for Collecting, Monitoring, and Analyzing Data Using Sensors, Wearables, and Mobile Devices. JMIR Mhealth Uhealth 2019 Aug 01;7(8):e11734 [FREE Full text] [CrossRef] [Medline]
  50. Sarda A, Munuswamy S, Sarda S, Subramanian V. Using Passive Smartphone Sensing for Improved Risk Stratification of Patients With Depression and Diabetes: Cross-Sectional Observational Study. JMIR Mhealth Uhealth 2019 Jan 29;7(1):e11041 [FREE Full text] [CrossRef] [Medline]
  51. Barnett S, Huckvale K, Christensen H, Venkatesh S, Mouzakis K, Vasa R. Intelligent Sensing to Inform and Learn (InSTIL): A Scalable and Governance-Aware Platform for Universal, Smartphone-Based Digital Phenotyping for Research and Clinical Applications. J Med Internet Res 2019 Nov 06;21(11):e16399 [FREE Full text] [CrossRef] [Medline]
  52. Wisniewski H, Henson P, Torous J. Using a Smartphone App to Identify Clinically Relevant Behavior Trends Symptom Report, Cognition Scores, and Exercise Levels: A Case Series. Front Psychiatry 2019 Sep 23;10:652 [FREE Full text] [CrossRef] [Medline]
  53. Cao J, Truong AL, Banu S, Shah AA, Sabharwal A, Moukaddam N. Tracking and Predicting Depressive Symptoms of Adolescents Using Smartphone-Based Self-Reports, Parental Evaluations, and Passive Phone Sensor Data: Development and Usability Study. JMIR Ment Health 2020 Jan 24;7(1):e14045 [FREE Full text] [CrossRef] [Medline]
  54. Narziev N, Goh H, Toshnazarov K, Lee SA, Chung K, Noh Y. STDD: Short-Term Depression Detection with Passive Sensing. Sensors (Basel) 2020 Mar 04;20(5):1396 [FREE Full text] [CrossRef] [Medline]
  55. Dogrucu A, Perucic A, Isaro A, Ball D, Toto E, Rundensteiner EA, et al. Moodable: On feasibility of instantaneous depression assessment using machine learning on voice samples with retrospectively harvested smartphone and social media data. Smart Health 2020 Jul;17:100118. [CrossRef]
  56. Betthauser LM, Stearns-Yoder KA, McGarity S, Smith V, Place S, Brenner LA. Mobile App for Mental Health Monitoring and Clinical Outreach in Veterans: Mixed Methods Feasibility and Acceptability Study. J Med Internet Res 2020 Aug 11;22(8):e15506 [FREE Full text] [CrossRef] [Medline]
  57. Haines-Delmont A, Chahal G, Bruen AJ, Wall A, Khan CT, Sadashiv R, et al. Testing Suicide Risk Prediction Algorithms Using Phone Measurements With Patients in Acute Mental Health Settings: Feasibility Study. JMIR Mhealth Uhealth 2020 Jun 26;8(6):e15901 [FREE Full text] [CrossRef] [Medline]
  58. Silva E, Aguiar J, Reis LP, Sá JOE, Gonçalves J, Carvalho V. Stress among Portuguese Medical Students: the EuStress Solution. J Med Syst 2020 Jan 02;44(2):45. [CrossRef] [Medline]
  59. Jacobson NC, Chung YJ. Passive Sensing of Prediction of Moment-To-Moment Depressed Mood among Undergraduates with Clinical Levels of Depression Sample Using Smartphones. Sensors (Basel) 2020 Jun 24;20(12):1-16 [FREE Full text] [CrossRef] [Medline]
  60. Masud MT, Mamun MA, Thapa K, Lee D, Griffiths MD, Yang S. Unobtrusive monitoring of behavior and movement patterns to detect clinical depression severity level via smartphone. J Biomed Inform 2020 Mar;103:103371 [FREE Full text] [CrossRef] [Medline]
  61. Teo J, Davila S, Yang C, Hii A, Pua C, Yap J. Supplementary Data 1 for "Digital phenotyping by consumer wearables identifies sleep-associated markers of cardiovascular disease risk and biological aging". Figshare. 2019.   URL: [accessed 2021-06-15]
  62. Teo JX, Davila S, Yang C, Hii AA, Pua CJ, Yap J, et al. Digital phenotyping by consumer wearables identifies sleep-associated markers of cardiovascular disease risk and biological aging. Commun Biol 2019 Oct 04;2(1):361 [FREE Full text] [CrossRef] [Medline]
  63. Garcia-Ceja E, Riegler M, Jakobsen P, Torresen J, Nordgreen T, Oedegaard K. Depresjon Dataset. Zenodo. 2018.   URL: [accessed 2021-06-15]
  64. Garcia-Ceja E, Riegler M, Jakobsen P, Tørresen J, Nordgreen T, Oedegaard K. Depresjon: A motor activity database of depression episodes in unipolar and bipolar patients. In: Proceedings of the 9th ACM Multimedia Systems Conference. New York, NY: ACM Press; 2018 Presented at: 9th ACM Multimedia Systems Conference; June 12-15, 2018; Amsterdam, the Netherlands p. 472-477. [CrossRef]
  65. Fryer D. StudentLife data in RData format. Zenodo. 2019.   URL: [accessed 2021-06-15]
  66. Campbell A. StudentLife Study.   URL: [accessed 2021-06-15]
  67. Wang R, Chen F, Chen Z, Li T, Harari G, Tignor S. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In: Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, NY, USA: ACM; 2014 Presented at: 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing; September 13-17, 2014; Seattle, Washington p. 3-14. [CrossRef]
  68. Boonstra T, Nicholas J, Wong Q, Shaw F, Townsend S, Christensen H. Using the Socialise app to collect smartphone sensor data for mental health research: A feasibility study. Zenodo. 2018.   URL: [accessed 2021-06-15]
  69. Boonstra T, Werner-Seidler A, O'Dea B, Larsen M, Christensen H. Smartphone app to investigate the relationship between social connectivity and mental health. 2017 Presented at: 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; July 11-15, 2017; Jeju Island, South Korea p. 287-290. [CrossRef]
  70. Boonstra TW, Nicholas J, Wong QJ, Shaw F, Townsend S, Christensen H. Using Mobile Phone Sensor Technology for Mental Health Research: Integrated Analysis to Identify Hidden Challenges and Potential Solutions. J Med Internet Res 2018 Jul 30;20(7):e10131 [FREE Full text] [CrossRef] [Medline]
  71. Kiang M, Lorme J, Onnela JP. Public Sample Beiwe Dataset. Zenodo. 2018.   URL: [accessed 2021-06-15]
  72. Burchert S, Kerber A, Zimmermann J, Knaevelsrud C. 14-day smartphone ambulatory assessment of depression symptoms and mood dynamics in a general population sample: comparison with the PHQ-9 depression screening. Zenodo. 2019.   URL: [accessed 2021-06-15]
  73. Burchert S, Kerber A, Zimmermann J, Knaevelsrud C. Screening accuracy of a 14-day smartphone ambulatory assessment of depression symptoms and mood dynamics in a general population sample: Comparison with the PHQ-9 depression screening. PLoS One 2021;16(1):e0244955 [FREE Full text] [CrossRef] [Medline]
  74. Diotte D. Sleep Data. Kaggle.   URL: [accessed 2021-06-15]
  75. Gorgolewski C. Self-tracking. Kaggle. 2018.   URL: [accessed 2021-06-15]
  76. Apple Inc. Sleep Cycle - Sleep Tracker. App Store.   URL: [accessed 2021-06-15]
  77. Sharp J, Cooper B. Exist.   URL: [accessed 2021-06-15]
  78. Kabadayi S, Pridgen A, Julien C. Virtual sensors: abstracting data from physical sensors. In: Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks. 2006 Presented at: 2006 International Symposium on a World of Wireless, Mobile and Multimedia Networks; June 26-29, 2006; Buffalo-Niagara Falls, NY p. 587-592. [CrossRef]
  79. Rohani DA, Faurholt-Jepsen M, Kessing LV, Bardram JE. Correlations Between Objective Behavioral Features Collected From Mobile and Wearable Devices and Depressive Mood Symptoms in Patients With Affective Disorders: Systematic Review. JMIR Mhealth Uhealth 2018 Aug 13;6(8):e165 [FREE Full text] [CrossRef] [Medline]
  80. Saeb S, Lattie EG, Schueller SM, Kording KP, Mohr DC. The relationship between mobile phone location sensor data and depressive symptom severity. PeerJ 2016;4:e2537 [FREE Full text] [CrossRef] [Medline]
  81. Morshed MB, Saha K, Li R, D'Mello SK, De Choudhury M, Abowd GD, et al. Prediction of Mood Instability with Passive Sensing. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol 2019 Sep 09;3(3):1-21 [FREE Full text] [CrossRef]
  82. de Moura IR, Teles AS, Endler M, Coutinho LR, da Silva E Silva FJ. Recognizing Context-Aware Human Sociability Patterns Using Pervasive Monitoring for Supporting Mental Health Professionals. Sensors (Basel) 2020 Dec 25;21(1):86 [FREE Full text] [CrossRef] [Medline]
  83. Moura I, Teles A, Silva F, Viana D, Coutinho L, Barros F, et al. Mental health ubiquitous monitoring supported by social situation awareness: A systematic review. J Biomed Inform 2020 Jul;107:103454 [FREE Full text] [CrossRef] [Medline]
  84. Farhan A, Lu J, Bi J, Russell A, Wang B, Bamis A. Multi-view Bi-clustering to Identify Smartphone Sensing Features Indicative of Depression. In: Proceedings of the 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies. 2016 Presented at: 2016 IEEE First International Conference on Connected Health: Applications, Systems and Engineering Technologies; June 27-29, 2016; Washington, DC p. 264-273. [CrossRef]
  85. Seneviratne S, Hu Y, Nguyen T, Lan G, Khalifa S, Thilakarathna K, et al. A Survey of Wearable Devices and Challenges. IEEE Commun. Surv. Tutorials 2017;19(4):2573-2620. [CrossRef]
  86. Adadi A, Berrada M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018;6:52138-52160. [CrossRef]
  87. Maher NA, Senders JT, Hulsbergen AF, Lamba N, Parker M, Onnela J, et al. Passive data collection and use in healthcare: A systematic review of ethical issues. Int J Med Inform 2019 Sep;129:242-247. [CrossRef] [Medline]
  88. Gong M, Xie Y, Pan K, Feng K, Qin A. A Survey on Differentially Private Machine Learning [Review Article]. IEEE Comput. Intell. Mag 2020 May;15(2):49-64. [CrossRef]
  89. van de Ven P, O'Brien H, Henriques R, Klein M, Msetfi R, Nelson J, E-COMPARED Consortium. ULTEMAT: A mobile framework for smart ecological momentary assessments and interventions. Internet Interv 2017 Sep;9:74-81 [FREE Full text] [CrossRef] [Medline]

DPMH: digital phenotyping of mental health
DS-RQs: research questions for data sets
EC: exclusion criteria
EMA: ecological momentary assessment
GPS: global positioning system
IC: inclusion criteria
ML: machine learning
PHQ-9: 9-item Patient Health Questionnaire
SA-RQs: research questions for sensing apps

Edited by A Mavragani; submitted 12.03.21; peer-reviewed by C Smith, MA Ahmad, H Mehdizadeh; comments to author 01.05.21; revised version received 20.06.21; accepted 23.12.21; published 17.02.22


©Jean P M Mendes, Ivan R Moura, Pepijn Van de Ven, Davi Viana, Francisco J S Silva, Luciano R Coutinho, Silmar Teixeira, Joel J P C Rodrigues, Ariel Soares Teles. Originally published in the Journal of Medical Internet Research (, 17.02.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on, as well as this copyright and license information must be included.