HOPES -- An Integrative Digital Phenotyping Platform for Data Collection, Monitoring and Machine Learning

We describe the development of, and early experiences with, a comprehensive digital phenotyping platform: Health Outcomes through Positive Engagement and Self-Empowerment (HOPES). HOPES is based on the open-source Beiwe platform but adds a much wider range of data collection, including the integration of wearable data sources and further sensor collection from the smartphone. Requirements were in part derived from a concurrent clinical trial for schizophrenia. This trial required the development of significant capabilities in HOPES in security, privacy, ease-of-use and scalability, based on a careful combination of public cloud and on-premises operation. We describe new data pipelines to clean, process, present and analyze data. This includes a set of dashboards customized to the needs of the research study operations and for clinical care. We illustrate a test use of HOPES by analyzing the digital behaviors of 20 participants during the SARS-CoV-2 pandemic.

provider. Another major concern revolves around data security and privacy preservation. These two aspects have been primary motivators in our design choices and investigations.

Clinical Study on Digital Phenotyping in Schizophrenia
The HOPES platform was designed, developed and refined concurrently to support a clinical study. The HOPE-S (Health Outcomes via Positive Engagement in Schizophrenia) study [21] was launched in November 2019. HOPE-S is an observational study on individuals with schizophrenia who were recently discharged from a psychiatric hospitalization. The aim of the study is to determine whether digital phenotyping data is associated with clinical and health utilization outcomes. Key events recorded over the 6-month observation period include readmission, outpatient non-attendances (i.e., defaults) and unscheduled service use, e.g., emergency department attendance and Mobile Crisis Team (MCT) activations. The primary study outcomes are the ability to predict relapse and/or readmission within 6 months, with secondary outcomes being the associations between digital phenotyping data and healthcare utilization, psychiatric symptoms severity and functional status assessed during research visits. Ethics approval has been granted by Singapore's National Healthcare Group (NHG) Domain Specific Review Board (DSRB Reference no.: 2019/00720) and it has been registered on clinicaltrials.gov.
The first phase of HOPE-S is observational. During this phase, we are examining the deployment feasibility and acceptability of a wide range of digital sensors, while performing the analyses required to achieve the outcomes illustrated above. In the process we are collecting large amounts of data for our analyses. This data will subsequently be used to develop machine learning algorithms to predict changes in symptom severity and other important clinical outcomes, as opposed to merely analyzing associations. During a subsequent phase of the study, we will deploy interventions such as early warnings of relapses, which will allow pre-emptive steps to be taken to prevent participant setbacks or rehospitalization.

1.3 HOPES: A General-Purpose Platform for Digital Phenotyping
HOPES stands for Health Outcomes through Positive Engagement and Self-Empowerment. It is based on, and extends, the existing platform Beiwe [22,23]. Our contributions include:
1. the integration of wearable devices, where we have experimented with both wrist and ring devices;
2. the use of further sensors on the smartphone;
3. an efficient onboarding method for participants;
4. a suite of user interfaces, including data collection and quality management tools, clinical summarization dashboards, and general-purpose research dashboards for use in exploratory data analysis and building anomaly detection algorithms;
5. assurances for data security and the preservation of user privacy.
The platform is designed to be reliably deployed at scale and makes use of both public cloud and controlled on-premises computing infrastructure. We recognize the broad spectrum of potential applications beyond mental health and the growing set of digital sensors and their capabilities that may be appropriate for different applications. We have therefore designed HOPES to be flexible and extensible to accommodate new device and sensor integration, new data dashboards, etc.
While the data collected during the HOPE-S study is rich, it is also at times noisy and incomplete, as is to be expected when dealing with real human behavior and varying data reliability among sensors. To address these challenges, we have developed a data collection dashboard and multiple data visualization and exploration tools, which have proven invaluable for monitoring and ensuring participant compliance on a daily basis in the research study. We have also developed a feature engineering pipeline to construct useful insights for the HOPE-S study and to compensate for various shortfalls in the raw data. These dashboards have been found to be easy to use by research coordinators involved in the HOPE-S study, who have been able to easily recognize problems and contact the participant if their data is not being received. We will also illustrate the dashboards that our data scientists have used to look for patterns and an anomaly detection dashboard that raises alerts on irregularities in the data. All the data then feeds into full-blown statistical analysis and our ongoing development of predictive machine learning algorithms.
The rest of the paper is organized as follows. In Section 2, we review several existing open-source digital phenotyping platforms, highlighting their respective strengths and weaknesses. In Section 3, we describe the overall architecture of the HOPES platform. Section 4 describes the enhancements to Beiwe that the HOPES platform provides, guided by the requirements of the HOPE-S study and other planned future uses (including for purposes beyond mental health). In Section 5, we show an early and simple example of the use of our collected data on 20 participants, in which we compare user data before and after Singapore's SARS-CoV-2 "lockdown" went into effect. In Section 6, we give some overall conclusions we can draw from our experiences in digital phenotyping.

Existing Platforms
There are several existing open-source digital phenotyping platforms, including Beiwe [22,23], Purple Robot [24,25,26], AWARE [27,28], and RADAR-base (Remote Assessment of Disease And Relapse) [29,30,31]. Each contains a core smartphone application (or "app") that performs passive sensor data collection in the background and a server backend in charge of receiving the data. Note that digital phenotyping is not limited to smartphones; indeed, wearables provide some significant differentiated capabilities, and there are other sources such as fixed detectors. Some platforms, such as Beiwe and RADAR-base, support active data collection in the form of surveys and/or capture data from wearables such as wrist- or arm-worn devices by providing a common data interface.
From our assessment, Purple Robot has the most complete coverage of Android sensors and features amongst the platforms we reviewed. The user can select which sensors to turn on and set the data sampling frequency; however, the platform does not support iOS. AWARE supports both Android and iOS, and has nearly full coverage of Android sensors and features. Like Purple Robot, it also allows the user to configure sensors and features. RADAR-base has recently added iOS support and uses both passive (phone/sensor) and active (survey and questionnaire) data collection. Although it covers fewer phone features and sensors than Purple Robot and AWARE, it has a very attractive user interface and a very robust system for surveys and questionnaires. Beiwe is a smartphone-based digital phenotyping research platform that supports both Android and iOS, and has decent coverage of phone sensors and features. Moreover, the platform supports active feature collection from simple surveys. Apart from the data collection backend that receives data from participants' phones, Beiwe also has a backend for data analytics.
We have based the framework for the HOPES platform on Beiwe for several reasons. Firstly, Beiwe supports both Android and iOS, a requirement for any generic digital phenotyping platform to be widely adopted. Secondly, our platform analysis and comparison tests during March 2019 showed that Beiwe was the most ready at that time to deploy. Our decision was also based on our review of a number of Git repositories and publications, as well as previous practical applications of the platforms in clinical studies and trials.
To access data beyond the smartphone sensors, we chose the Fitbit wrist device after carrying out a technical and usability comparison of several popular devices in the commercial market. Specifically, we compared the Fitbit Charge 3, Huawei Honor A2, Xiaomi Mi Band 3, Actxa Spur+ and Hey Plus. We found that Fitbit was distinguished by ease-of-use, battery life, and reliability, and it has been validated to be reasonably accurate against gold-standard devices for the measurement of sleep [20]. We also evaluated a number of external sleep measurement devices (such as mattress pads) but did not find them suitable for our purposes.

The HOPES Platform and Its First Use in the HOPE-S Study
To support large-scale data aggregation from wearables, mobile phones, and other data sources, we defined a set of requirements and then built our platform to be secure and scalable. Building on top of the existing Beiwe platform, we created the HOPES platform by expanding the functional capabilities for easier participant onboarding, enhanced data collection monitoring, optimized data upload, extended security features, an expanded data processing and analytics pipeline, and a scalable deployment architecture. The goal was to obtain easy and secure onboarding, almost unlimited scaling, high operational security and improved privacy assurances. While we were immediately driven by meeting the strict requirements of the HOPE-S study, along the way we became aware of requirements for a wider range of participant monitoring, and we took those into account in our architecture and design so we would be ready for further deployments. In this section we describe the platform requirements and our resulting HOPES system architecture; we then detail the features collected for the HOPE-S study, the enhancements we made to the Android app, the platform backend and the security protocols. We provide our motivation and a high-level description, leaving further details and information about miscellaneous improvements to the supplementary materials.

Platform Requirements
The HOPES platform is designed to be a reliable, low-maintenance digital phenotyping collection and aggregation platform. It is designed to support research protocols as well as scale to larger production platforms including self-service registration. The requirements and their implemented capabilities are enumerated in Table 1. To successfully implement such a broad set of requirements, we carefully studied and focused on the user experience for onboarding new participants, and built a platform that leverages the best software engineering, design principles, and cloud architecture capabilities.

Overall System Architecture
The high-level solution architecture of HOPES, as used for the HOPE-S study, is shown in Figure 1. On each participant's smartphone, we install two apps: the Fitbit app and the HOPES app. Every participant is required to wear a Fitbit watch for a certain portion of the day (enough to collect the required data, but removable for charging, showering, etc.). Fitbit raw data is collected by the Fitbit app and sent to a Fitbit server (the Fitbit Cloud) for processing and computation of high-level features (e.g., the estimation of sleep stages). Phone data is collected on the smartphone by the HOPES app and sent to a data upload server hosted in a public cloud; we used Amazon Web Services (AWS). The data processing backend server (at the R&D or clinical premises) periodically pulls data from both the Fitbit cloud and the AWS data upload server for subsequent processing, described in the following sections. The data is always de-identified when in a publicly accessible cloud environment, and all transmission and storage are encrypted. Certain variables such as location are also obfuscated at the time of collection for privacy preservation. More details on the solution architecture are described in the supplementary material.
For the backend R&D analytics we developed a set of data processing pipelines and various dashboards for monitoring, visualizing and analyzing data (as shown in Figure 2). The data processing pipelines clean (manage missing, duplicated and erroneous data), convert and reorganize data into more usable forms. These dashboards are used by research coordinators and clinicians, researchers, data analysts and technical team members involved in the conduct of the study. A general-purpose research dashboard supports exploratory analytics. In each case, roles and responsibilities determine access controls to various attributes of the data. Physical controls, supervision and accountability measures are also deployed to make sure there is no unauthorized access to data. Further description is given in subsequent sections, and greater detail in the supplementary material.

Features Supported by the Platform
The following six categories of features are obtained from the HOPES smartphone app. In each case we indicate "new" if it is a feature or enhancement added by us; otherwise, it is an existing feature in the Beiwe distribution.

Location:
GPS coordinates are used to detect deviations from typical travel patterns and to compute a measure of variance or entropy in the locations visited by a participant. To protect user privacy, the raw GPS coordinates are obfuscated via a random displacement (from the origin) which is unique for every participant.

Sociability indices (new):
The Beiwe app can capture incoming and outgoing phone calls and SMS messages. However, in many countries most people use free social messaging apps as their primary method for text and voice communication; e.g., in Singapore WhatsApp is dominant. We therefore make use of the Android Accessibility Service API to acquire message metadata from social messaging apps. So far, we have only implemented this for WhatsApp, but it can be easily extended to other social messaging apps. The duration and timing of mobile phone calls and WhatsApp calls sent and received are recorded. Likewise, the length and timing of SMS and WhatsApp messages sent and received are also recorded. Importantly, for privacy protection, we never record or transmit any content of any communication, and we hash the identity or contact number of the counterparty.
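The two privacy measures described above (a fixed per-participant GPS displacement and one-way hashing of counterparty identifiers) can be sketched as follows. This is an illustrative sketch only, not the HOPES implementation; the offset magnitude, the salt value, and the function names are assumptions.

```python
import hashlib
import random

def make_offset(participant_id: str, max_deg: float = 0.05) -> tuple:
    """Derive a fixed displacement for one participant (illustrative magnitude).

    Seeding from the participant ID keeps the offset stable across sessions,
    so relative movement patterns are preserved while the true origin is hidden.
    """
    rng = random.Random(participant_id)  # deterministic per participant
    return (rng.uniform(-max_deg, max_deg), rng.uniform(-max_deg, max_deg))

def obfuscate(lat: float, lon: float, offset: tuple) -> tuple:
    """Shift raw GPS coordinates by the participant's fixed offset."""
    return (lat + offset[0], lon + offset[1])

def hash_counterparty(contact: str, salt: str = "study-salt") -> str:
    """One-way hash of a contact identifier; the raw number is never stored."""
    return hashlib.sha256((salt + contact).encode()).hexdigest()

off = make_offset("P001")
print(obfuscate(1.3521, 103.8198, off))  # shifted, but deltas are preserved
print(hash_counterparty("+6591234567")[:16])
```

Because the offset is constant for a participant, distances and entropy measures computed on the obfuscated trace match those of the true trace, which is what makes downstream mobility features usable.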

Finger taps (new):
Taps provide two types of information that may be related to a person's health. The speed at which a person taps may give a hint of their neuropsychological function [32]; for example, a fatigued person may tap more slowly, or some diseases may cause small, uncontrollable movements. There is also some evidence that finger taps may be used to detect depression [33]. The apps a person uses (determined from their taps) also give an indication of their behavior. For example, a patient with mental illness who is relapsing might be found to have significantly altered communications, reflected in the number and speed of taps made in the various apps. We seek to capture typing error rates, which could be affected by physical or mental condition; we can determine this from how often the delete or backspace key on the keyboard is tapped. To measure tapping speed, we also need to know whether the person is typing on the keyboard or navigating in a social messaging app. Characteristics and metadata of finger taps on the phone screen are recorded: the number and timestamps of taps in each app, the type of key struck (enter, delete, backspace, alphabetic, numeric, or punctuation keys), and the category of the app tapped. As a privacy preservation measure, captured keystrokes are converted into a token (such as "alphabetic", "numerical", "punctuation", etc.). The app only stores and uploads the token; the specific keys that are struck are not recorded.
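The tokenization and error-rate ideas above can be sketched as follows. This is a minimal illustration, not the HOPES app code; the token names and the deletion-based error proxy are assumptions based on the description in the text.

```python
import string

def tokenize_key(key: str) -> str:
    """Map a raw keystroke to a privacy-preserving token.

    Only the token is stored; the specific key struck is discarded.
    """
    specials = {"\n": "enter", "\b": "delete", " ": "space"}
    if key in specials:
        return specials[key]
    if key in string.ascii_letters:
        return "alphabetic"
    if key in string.digits:
        return "numerical"
    if key in string.punctuation:
        return "punctuation"
    return "other"

def error_rate(tokens: list) -> float:
    """Fraction of keystrokes that are deletions, a proxy for typing errors."""
    if not tokens:
        return 0.0
    return tokens.count("delete") / len(tokens)

keys = list("hi5!") + ["\b"]
tokens = [tokenize_key(k) for k in keys]
print(tokens)              # ['alphabetic', 'alphabetic', 'numerical', 'punctuation', 'delete']
print(error_rate(tokens))  # 0.2
```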

Phone states:
The app can record the Wi-Fi state, the Bluetooth state, and the power state (screen on/off and power-down events) of the phone. The Wi-Fi and Bluetooth scan results can, to some extent, reveal information about the location of the device, especially when GPS location is not available. However, this data is sensitive and needs to be de-identified and encrypted. The power state feature is usually combined with other features, such as taps, to characterize the participant's phone usage behavior.

Ambient light (new):
The app can record the intensity of ambient light through the smartphone's built-in light sensor (not the camera). This could detect, for example, whether a participant is in a comfortable sleeping environment, and studies have suggested there is a correlation between a patient's mental health and their preferred environmental lighting [34].
Since sleep and heart rate are important indicators of mental health status, we record the following three categories of features from the Fitbit wearable (obtained directly from the Fitbit cloud).
Sleep: Sleep information during the day and night are recorded, including a breakdown of different sleep stages with timestamps.
Steps: The total number of steps in time intervals specified by Fitbit.
Heart rate: The number of heart beats in time intervals specified by the Fitbit. Approximations of other measures of interest, such as heart rate variability, may be computed from the heart rate data.
For the HOPE-S study, we have used the following features from above: location, sociability indices, finger taps, accelerometer, power state, ambient light, sleep, steps, and heart rate.

Backend Data Processing Pipeline
We have rebuilt the Beiwe data processing backend in Python 3 to systematically process data files, reformat the raw data, and extract high-level features. A considerable amount of feature engineering is performed on the backend to clean the data, correct data shortcomings, combine different data sources into joint features, and feed downstream machine learning systems. For example, upon consultation with our clinical partners, we construct high-level features that are likely to provide useful signals regarding the mental health of the participants in the HOPE-S study. Our current analyses in the study make use of time series of daily or hourly samples of intuitively identified measurements from sleep, steps, heart rate, location, and sociability indices. Some examples include daily totals of the number of hours of sleep, steps, and communications initiated and received. Constructing such features is often necessary in situations with small amounts of data or noisy data. As an example, when no sleep data is recorded by the Fitbit for a whole day, it is not clear whether the participant did not sleep, or whether they simply did not wear the Fitbit to bed. We can resolve this ambiguity by looking at the heart rate measurements, which are recorded continually while the Fitbit is worn. If heart rate data is missing for longer than an allotted allowance, we can reasonably assume the participant was not wearing the Fitbit during sleep. As another example, we have developed an Android app grouper that uses information from the Google Play Store to classify all apps into seven classes defined by us (i.e., social messenger, social media, entertainment, map navigation, utility tools, games, and Android system (other vendor-specific or system apps that cannot be found in the Play Store)). This class information is used in the taps data features when classifying a user's phone activity, e.g., "in social media apps", "in gaming apps", etc.
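The missing-sleep disambiguation described above can be sketched as a simple heuristic. This is an illustrative sketch under stated assumptions, not the production pipeline: the night window, the coverage threshold, and the labels are all hypothetical choices.

```python
def was_fitbit_worn_overnight(hr_samples, night_hours=8, min_coverage=0.5):
    """Heuristic: decide whether the Fitbit was worn during the night.

    hr_samples: list of (hour_offset, bpm) tuples within the night window.
    If heart-rate coverage falls below a threshold (an illustrative 50%),
    we assume the device was not worn, rather than that the participant
    did not sleep.
    """
    covered_hours = {int(h) for h, _ in hr_samples if 0 <= h < night_hours}
    return len(covered_hours) / night_hours >= min_coverage

def interpret_missing_sleep(sleep_minutes, hr_samples):
    """Label a day with no recorded sleep using heart-rate coverage."""
    if sleep_minutes is not None:
        return "sleep recorded"
    if was_fitbit_worn_overnight(hr_samples):
        return "device worn, no sleep detected"
    return "device likely not worn"

# Sparse heart-rate data -> the device was probably charging on the nightstand.
print(interpret_missing_sleep(None, [(0.5, 62)]))
```

Distinguishing "no sleep" from "device not worn" matters because the two cases feed very different signals into the downstream models.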
In summary, this step bridges the gap between data collection and downstream machine learning modules. Details on the data processing pipeline, high-level feature extraction, and the seven classes of the app grouper are given in the supplementary material.

3.5 Platform Improvements
We have made many improvements to the Android app and are in the process of extending these improvements to the iOS app. In this section, we describe only the most significant improvements; the others are covered in the supplementary material. We also use two system variations: the prototype (or development) system and the deployed system. Some features may apply to only one of the systems.

Scanning QR Codes for Simple User Registration
To facilitate the user registration process and to allow one-way encryption for better data security, study participant kits were prepared, and a single-page onboarding document was generated with all the information necessary to onboard a participant. The process is designed to support non-technical, self-service onboarding. In the deployed system, multiple QR codes are scanned. They include information for certificate-based authentication, which further strengthens security via host verification. For details on QR registration, please refer to Section 2 of the supplementary material.

Data Compression
In order to scale up to a very large number of users, we need to reduce the utilized communication bandwidth as much as possible. One solution is to compress the data before sending it to the server. We have therefore added this option when creating a study, which may be selected on the backend console by checking the "enable compression" checkbox. Since the efficiency of compression is reduced significantly on encrypted data, compression must be performed before encryption. This feature is implemented only in the prototype system.
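The compress-before-encrypt ordering can be demonstrated with a small experiment. The "encryption" here is only a pseudo-random XOR keystream used to produce high-entropy output for the demonstration; HOPES uses proper cryptography, and the payload format is hypothetical.

```python
import hashlib
import os
import zlib

def xor_keystream(data: bytes, key: bytes) -> bytes:
    """Stand-in 'encryption': XOR with a pseudo-random keystream.

    Illustration only -- it merely produces high-entropy output so the
    effect on compressibility is visible.
    """
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# Repetitive sensor-style payload (hypothetical record format).
payload = b'{"sensor": "gps", "lat": 1.3521}' * 100
key = os.urandom(32)

compress_then_encrypt = xor_keystream(zlib.compress(payload), key)
encrypt_then_compress = zlib.compress(xor_keystream(payload, key))

# Compressing first shrinks the upload; compressing ciphertext gains nothing,
# because encrypted data looks random to the compressor.
print(len(payload), len(compress_then_encrypt), len(encrypt_then_compress))
```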

Security Enhancements
The HOPES platform is redesigned on top of Beiwe to ensure data confidentiality, data integrity, system auditing, high availability, authentication, and authorization. It supports large-scale deployments with a distributed pipeline and separation of duties throughout the architecture to minimize the risk of data breach and preserve the privacy of data throughout its lifecycle. In the original Beiwe platform, data is decrypted on the data collection server and re-encrypted using the study key. This poses a certain amount of risk because the data collection server directly faces the public Internet. In the HOPES platform, data is encrypted at all times while on the phone and in the data collection infrastructure, and is only decrypted on clinical or R&D premises. The decryption key is accessible only on clinical or R&D premises, so in principle the data cannot be decrypted on the phone or in the data collection infrastructure. Data is only re-identified when needed for qualified clinical purposes, and only by clinical staff.

Dashboards, User Interfaces and Data Analysis
Ensuring complete data collection is important. A variety of issues can result in not receiving data as expected, including technical failures, participants not adhering to the guidelines on device usage, or participants failing to wear their device. Monitoring this process becomes especially challenging at scale. We have therefore created a data collection dashboard (see Figure 3) to facilitate monitoring of the collected data.
The data collection dashboard is populated using the metadata extracted while downloading the Fitbit and phone data. An AWS Lambda function, triggered every five minutes, retrieves these data from their respective S3 buckets and creates an HTML file. To ensure that participants comply with the study requirements, the following data types are closely monitored on the dashboard: Location, Sociability, Taps in App, Last HOPES Uploaded, Last Fitbit Uploaded, and Sleep. Color codes denote the severity of the data collection status: red means "need to take action", orange means "need to closely monitor", and green means "normal".
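The traffic-light logic for upload recency can be sketched as below. The 24- and 48-hour thresholds are illustrative assumptions; the cut-offs actually used on the dashboard may differ.

```python
from datetime import datetime, timedelta

def upload_status(last_upload: datetime, now: datetime) -> str:
    """Map the time since a participant's last upload to a color code.

    Thresholds are hypothetical: green = normal, orange = monitor closely,
    red = take action (e.g., contact the participant).
    """
    gap = now - last_upload
    if gap <= timedelta(hours=24):
        return "green"
    if gap <= timedelta(hours=48):
        return "orange"
    return "red"

now = datetime(2020, 5, 1, 12, 0)
print(upload_status(now - timedelta(hours=6), now))   # green
print(upload_status(now - timedelta(hours=72), now))  # red
```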
The data collection dashboard does not require decrypted data and thus is constructed before decryption. As a result, it can be hosted on the upload server with little security risk. However, it does not show full historical data completion status which is sometimes needed. Hence, we developed the data completion dashboard which is described in detail in the supplementary material.

4.1 Data Visualization Toolkit
We have developed a data visualization toolkit for visualizing and exploring the collected data. The toolkit can also perform some basic statistical analyses, such as the comparison of features between defined date ranges. Figure 4 gives an illustration of the various types of graphs that can be plotted using the visualizer; most of the features discussed above can be viewed in graphical form. For further details on the usage and capability of the data visualizer, please refer to the supplementary material.

Clinician Dashboard
The clinician dashboard, illustrated in Figure 5, is designed for clinicians to preview general trends in participants' digital marker data, and may be useful during clinical encounters. Based on previous studies and the observations of our clinical partners, we have decided to report sleep, sociability and mobility data in the current version of the clinician dashboard.
Sleep is plotted based on total sleep duration and sleep efficiency, the latter depicted by color. Sociability is plotted using the number of messages exchanged and the number of calls longer than one minute. Mobility is based on the time away from home (time spent away from the sleeping location) and the radius of gyration (maximum distance travelled from home). These graphs are drawn as averages over three time frames: the current week (the seven days before 0:00 of the current day), the past week (the 7 days prior to the current week), and the past month (the 30 days prior to the past week). An example further explaining the clinician dashboard can be found in the supplementary material.
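The two mobility measures above can be sketched from location samples. This is an illustrative computation, not the dashboard code; the 200 m "away" threshold and the sample format are assumptions, and "radius of gyration" follows the text's definition as the maximum distance from home.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def mobility_summary(points, home, away_km=0.2):
    """Time away from home (hours) and maximum distance from home (km).

    points: list of (hours_at_location, (lat, lon)) samples; away_km is an
    illustrative threshold for counting a sample as away from home.
    """
    dists = [(h, haversine_km(loc, home)) for h, loc in points]
    time_away = sum(h for h, d in dists if d > away_km)
    radius = max((d for _, d in dists), default=0.0)
    return time_away, radius

home = (1.3000, 103.8000)
samples = [(8.0, home), (2.0, (1.3100, 103.8000))]  # hours spent at each spot
print(mobility_summary(samples, home))
```

Because each participant's GPS trace is shifted by a fixed offset, distances like these are unaffected by the obfuscation.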

Anomaly Detection Dashboard
In order to support a wide variety of applications attempting to analyze and identify interesting changes amongst the many features collected by the platform, we have implemented a general-purpose anomaly detection system and dashboard. The system comprises several anomaly detection algorithms on the backend that report their findings via an anomaly detection dashboard. The dashboard is designed to create alerts about possible irregularities arising in the digital phenotyping data each day.
There are many approaches in machine learning to anomaly detection in time series data. One approach is to train a time series model on historical data from the participant and then have that model forecast what the data will look like in the future. When new data arrives, we compare it with the forecast and score the prediction based on how "good" or "bad" it is. For example, a simple scoring mechanism compares the empirical distribution of the residuals (i.e., the errors of the fitted model's predictions on the training set) to the realized prediction error on new data.
We have experimented with several time series models, including the broad class of autoregressive integrated moving average (ARIMA) models [35] and the class of Gaussian processes [36], fitting them to a subset of digital phenotyping features that we have initially selected as important for our HOPE-S study (see the supplementary material for details of the features). We note that these two choices of models are able to capture periodic effects, which are important for our HOPE-S study, since participants' behaviors may change markedly on the weekends. Selecting the most appropriate model will depend on the data and application at hand. We train the models every day on all past data and compute the predictions of the digital phenotyping features for the next day. At the end of the following day, the realized digital phenotyping features are compared to the predictions and scored, and these scores are transformed to be interpreted as "the probability that the observed data is an anomaly." The final score is therefore a number between zero and one, where higher values constitute alerts.
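The residual-based scoring idea can be illustrated with an empirical-CDF score. This is a simplified stand-in for the scoring used on the dashboard, not the actual implementation; the data values are invented for illustration.

```python
def anomaly_score(residuals, new_error):
    """Score a new prediction error against the training residuals.

    Returns the fraction of historical residuals whose magnitude is at
    most |new_error| -- an empirical-CDF score in [0, 1], where values
    near 1 flag the observation as a likely anomaly.
    """
    if not residuals:
        return 0.0
    within = sum(1 for r in residuals if abs(r) <= abs(new_error))
    return within / len(residuals)

# Hypothetical residuals from a fitted time series model (e.g., ARIMA).
history = [0.2, -0.1, 0.4, -0.3, 0.1, 0.5, -0.2, 0.3]
print(anomaly_score(history, 0.15))  # ordinary error -> 0.25
print(anomaly_score(history, 2.0))   # extreme error -> 1.0
```

In practice, more care is needed (e.g., calibrating scores across features and participants), but the output is, as in the dashboard, a number between zero and one where higher values constitute alerts.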
In Figure 6, we display an example of what the anomaly detection dashboard looks like on a given day. Each row corresponds to a participant, and each column corresponds to a different anomaly detection score. The participant's identifier and the last date their scores were successfully updated is displayed, along with the anomaly scores for each feature. The score from a multivariate model is also displayed, which may capture interdependencies between features that affect whether or not a measurement is anomalous. For example, major disruptions in sleep naturally coincide with periods of long-distance travel (abnormally large radius of gyration). Note that cells are colored according to the severity of the scores. While this dashboard is mainly used for research at this point, if reliable anomalies are detected they could be promoted to be used on the clinician dashboard.

Example Analysis: Measuring the Effect of Singapore's "Circuit-breaker"
Due to the SARS-CoV-2 (COVID-19) pandemic, Singapore imposed a stay-at-home order, or cordon sanitaire, formally called "the 2020 Singapore Circuit Breaker measures" (CB). This lockdown was in effect from 7 April 2020 until 1 June 2020, after which reopening occurred in gradual stages. During this period, people were required to stay at home as much as possible, avoid non-essential travel and social visits, and maintain social distancing in public. As a result of the lockdown, we would expect to see effects in some digital phenotyping features. As a test of our digital phenotyping system, we performed and report here a comparison of 20 participants' data before and after the start of this "circuit breaker".

Table 2: Comparison of 6-week digital phenotyping data before (from 45 days before to 3 days before) and after (from 3 days after to 45 days after) Singapore's Circuit Breaker (CB) was instituted on 7 April 2020. Data recorded for participants with complete data both before and after the start of the circuit breaker.

Table 2 shows a subset of features with statistically significant differences before and after the circuit breaker was instituted on 7 April 2020. Not surprisingly, since people were required to stay at home, home-time increased and the number of significant locations visited decreased. Features related to physical activity (heart rate, steps, and acceleration) also decreased, as might be expected. Both sleep duration and sleep efficiency also decreased amongst these participants. It is also noteworthy that participants appear to use fewer apps, perhaps because there is no need for some apps, such as maps for navigation or apps for checking bus arrival times; however, it appears they spend more time in entertainment apps. Moreover, ambient light indoors is generally dimmer than outdoors, so the observed decrease in maximum ambient light is also as expected.
We compared our results with another study based on Fitbit use, the Health Insights Singapore (hiSG) study [37]. Daily steps count decreased by ~35% in our study and ~42% in hiSG; the minimum heart rate decreased by 1.1 bpm in our study and the resting heart rate decreased by 1.6 bpm in the hiSG study; sleep efficiency decreased by 0.8% in our study and by 0.2% in the hiSG study. All comparisons between both studies were consistent in demonstrating changes before and after onset of circuit breaker measures in Singapore.
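As an illustration of the kind of paired before/after comparison reported above, here is a minimal paired sign test. This is a stand-in for whatever statistical tests the study actually used, and the step counts are invented for illustration.

```python
import math

def sign_test_p(before, after):
    """Two-sided exact paired sign test.

    Counts pairs whose value decreased and computes the exact binomial
    p-value under the null hypothesis of no change (ties are dropped).
    """
    diffs = [b - a for b, a in zip(before, after) if b != a]
    n = len(diffs)
    k = sum(1 for d in diffs if d > 0)  # number of decreases
    tail = min(k, n - k)
    p = sum(math.comb(n, i) for i in range(tail + 1)) / 2 ** n
    return min(1.0, 2 * p)

# Hypothetical daily step counts for 8 participants, before vs. after lockdown.
before = [9000, 11000, 7500, 10200, 8800, 9600, 12000, 7000]
after = [6000, 7200, 5100, 6600, 5400, 6400, 8000, 4900]
print(sign_test_p(before, after))  # all 8 decreased -> p = 2/256 ~= 0.0078
```

The sign test makes no distributional assumptions, which suits small, noisy digital phenotyping samples; with more data a paired t-test or Wilcoxon signed-rank test would have more power.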

Conclusion
Digital phenotyping is a promising area in healthcare but requires great care and effort in designing a system that is easy to use, safe in terms of data security and privacy, and collects data with enough detail and reliability to be useful in research and patient care. We found the Beiwe platform to be a suitable base that we could use and extend to create the HOPES platform. Our main extensions have been adding many more data sources for collection, integrating a wearable device, and developing a large set of monitoring and participant management tools.
We were also driven by meeting all the requirements of a clinical research study in schizophrenia (HOPE-S). This required us to develop significant enhancements in security, privacy, ease-of-use and scalability, choosing a careful combination of public cloud and on-premises operation.
Since massive amounts of diverse data are collected in digital phenotyping, we have had to create new mechanisms to clean, process, present, explore and analyze data. These must serve the needs of clinical research study operations, clinical care, platform developers and researchers; hence a range of dashboards and data platforms have been developed.
Our initial platform is in use in a clinical trial (HOPE-S) and interim results will soon be reported. An initial test using the SARS-CoV-2 lockdown as a test case yielded meaningful results consistent with expected lockdown behaviors, and consistent with an independently conducted study in the same country.

HOPES Solution Architecture
The HOPES platform infrastructure is separated into the functions of administration, data upload, data encryption, wearable data collection, and operational management. Figure 1 shows the separation of layers between the cloud environment and the on-premises data analytics environment. The data collection, monitoring, and aggregation infrastructure is separated into logical networks that share common encrypted storage and access to the appropriate encryption keys, ensuring the data collected is consistent. Access to the encrypted study data and the study decryption keys is provided exclusively to authorized data analytics services on the R&D premises for processing.

The collection of wearable data from cloud data sources is independent of the collection of HOPES application data, but both are normalized into a common set of formats and encrypted using each participant's public key to ensure consistent data processing. The private key is accessible only to authorized data analytics pipelines.

Administration is restricted to authorized administrators accessing through private VPN connections. Monitoring dashboards sit in a separate logical network accessible through a different VPN gateway, and only metadata is visible to operators. Secure download is made possible through private authorized connections and secured credentials. Once data is processed by the analytics pipeline, de-identified processed data is made available to data scientists and clinicians for further analysis in the data exploration tooling, the anomaly dashboard, the clinician dashboard, and as raw information for further analysis. Data is encrypted at all times while in the data collection infrastructure, and is decrypted only within the analytics pipeline.
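The per-participant encryption described above can be sketched as a standard hybrid construction: each record is encrypted with a fresh symmetric key, which is itself wrapped with the participant's public key, so that only the analytics pipeline, which holds the private key, can recover the data. This is an illustrative sketch, not the exact HOPES implementation; it assumes the widely used Python `cryptography` package.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Key pair generated at participant registration; the private key
# never leaves the analytics environment.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_record(record: bytes, pub):
    # Fresh symmetric key per record, wrapped with the participant's public key.
    data_key = Fernet.generate_key()
    return pub.encrypt(data_key, OAEP), Fernet(data_key).encrypt(record)

def decrypt_record(wrapped_key: bytes, ciphertext: bytes, priv) -> bytes:
    # Only the holder of the private key can unwrap the symmetric key.
    return Fernet(priv.decrypt(wrapped_key, OAEP)).decrypt(ciphertext)

wrapped, ct = encrypt_record(b"heart_rate,72,1588291200", public_key)
assert decrypt_record(wrapped, ct, private_key) == b"heart_rate,72,1588291200"
```

The design consequence is that collection-side components only ever hold public keys and ciphertext, so a compromise of the cloud collection layer does not expose participant data.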
Figure 2 shows the data collection infrastructure, enhanced to follow cloud and security architecture best practices through separation of data collection from data administration, with auto-scaling and load balancing at every level. The infrastructure is stateless from transaction to transaction. The architecture implements secure certificate-based authentication and rotating credentials to ensure that only authorized, valid participants connect to the infrastructure. Automated patching, web application firewalls, distributed denial-of-service protection, credential vaults, and secure software development practices ensure robust integrity of the infrastructure. An automation framework was written to deploy the software reliably and securely.

QR Scanning for Participant Registration
To facilitate user registration, we implemented a QR code reader to replace manual entry of credentials. To register a participant's phone with the study server, the registration information is generated on the server side together with an asymmetric encryption/decryption key pair. The registration information and the encryption key are stored in the QR code, which can be sent to the participant via email or as a print-out. A migration feature allows users to move to new study phones while maintaining their de-identification credentials and the integrity of the study data. As shown in Figure 3(a-d), the participant's onboarding sheet is securely generated by the study administrators and passed to the clinicians; when participants are ready to be onboarded, the sheet is used to configure the HOPES app by scanning the QR codes. The onboarding consists of six color-coded steps used to input the information needed for logging into the server, defining the data randomization, and connecting to the infrastructure. For simplicity, the application ensures each step is followed in order. The QR codes contain the randomized de-identification information, the encryption keys for encoding the data, and the secret key for connecting to the infrastructure. In the latest version, the QR codes are themselves encrypted to ensure confidentiality of the information offline. The keys are valid only for the study duration, and are invalidated when the participant completes the study or the study ends. This information is never made available during processing; it is securely stored offline by the study administrators and destroyed after the study is complete.
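The payload carried by the QR codes can be illustrated as follows. This is a simplified sketch, not the actual HOPES format: the field names (`participant_id`, `server_url`, `public_key`, `infra_secret`) are hypothetical, and the JSON payload is simply base64-encoded for embedding in a QR code.

```python
import base64
import json

def build_qr_payload(participant_id: str, server_url: str,
                     public_key_pem: str, infra_secret: str) -> str:
    # Bundle the registration information the app needs: the randomized
    # de-identified participant ID, the server address, the participant's
    # public encryption key, and the infrastructure connection secret.
    payload = {
        "participant_id": participant_id,
        "server_url": server_url,
        "public_key": public_key_pem,
        "infra_secret": infra_secret,
    }
    # base64-encode the JSON so it can be embedded in a QR code.
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def parse_qr_payload(qr_text: str) -> dict:
    # Inverse operation performed by the app after scanning.
    return json.loads(base64.urlsafe_b64decode(qr_text.encode()))
```

In the latest HOPES version the payload would additionally be encrypted before encoding, so that a lost print-out does not reveal the credentials.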
This process has allowed us to onboard participants easily, eliminating errors from manually entering credentials and server addresses, and enabling simple, self-service onboarding for our participants.

HOPES Debug Interface
The debug console helps test almost every functionality of the app directly on the phone, without the need to connect to a computer running Android Studio. It logs every feature that can be collected. It serves two main purposes: technical troubleshooting (certain features may behave differently on certain brands of phones with certain Android versions) and giving more technically-oriented users or inquirers some degree of privacy assurance by showing exactly what information is collected and sent to the server. For example, Figure 4 shows the taps log being captured. The debug console can connect to every feature logger; data from that feature's listener can be displayed in the console before being encrypted and written to file. The debug console is directly accessible when no study has been registered. After study registration, it is locked with a password or disabled entirely, depending on the study settings, to prevent interference with data collection for the study.

Newly-Added Digital Phenotyping Features
Below, we describe features that are newly added or enhanced on top of the Beiwe distribution, expanding on those already stated in the main paper.
i. Sensor: Pedometer
Although step counts are readily captured by most wrist-wearable devices like Fitbit, it is still beneficial to capture step counts on the phone, since some people do not wear smart bands or watches. For those who do wear wrist devices, the differences between the two sources of step data can provide interesting information. For example, if the wrist device registers steps during a particular period but the phone does not, this may suggest that the user is merely walking around their home or office (i.e., not travelling), in which case their phone may have been left on a desk or table.
ii. Sensor: Ambient Light
We capture ambient light since studies have suggested a correlation between a patient's mental health and their preferred environmental lighting. Additionally, the ambient light in a person's sleep environment is likely to affect sleep quality, which may in turn influence their mental wellness. In our implementation of the "study settings", the researcher can set a time interval during which the ambient light reading is taken.
iii. Sensor: Magnetometer
The magnetometer returns the direction of magnetic fields passing through the phone. This tells us about the orientation and alignment of the phone, from which the rotational motion of the phone can be determined.
iv. Capturing Taps
Taps provide two types of information that may be related to a person's health. The speed at which a person taps may give a hint of their wellbeing; for example, a fatigued person may tap more slowly, or some diseases may cause small, uncontrollable movements. The apps a person uses (determined from their taps) also give an indication of their behavior. For example, a relapsing schizophrenia patient may show significantly altered communication, reflected in the number and speed of taps made in each app. We make use of an Android application overlay to capture taps. In particular, it creates an invisible tiny popup window on the screen and watches for every tap outside the window. It then queries the Android UsageStatsManager to get the name of the app in which the last tap was made. We record the timestamp, app name, and screen orientation of the phone for each tap.
v. Accessibility Taps
The taps-capturing method (described in Section 4.1.4) using the Android application overlay cannot capture Android system buttons such as Home, Back, and Recent Apps. Additionally, it cannot capture more detailed information about the button that has been tapped, due to Android's privacy-preserving strategies. We attempt to capture typing error rates, which we believe can be affected by a person's physical or mental condition; we can determine this from how often the DELETE key on the keyboard is tapped. To measure tapping speed, we also need to know whether the person is typing on the keyboard or navigating in a social messaging app.

vi. Sociability Messages
For our study, changes in a participant's sociability (i.e., their communication with others) may be related to their mental health status. This may be reflected in their activity in social messaging apps. In particular, the number of incoming and outgoing messages, the number of message senders to whom the participant has replied, and the lengths of outgoing messages may indicate a degree of social engagement. The original Beiwe app captures incoming and outgoing SMS messages. However, in Singapore most people use social messaging apps like WhatsApp as their primary method for text communication. We therefore make use of the Android Accessibility service privilege to acquire message metadata from social messaging apps. We have so far only implemented this for WhatsApp, but it can easily be extended to other social messaging apps in the future. To protect user privacy, we do not capture the content of messages, only the metadata of each message, i.e., the timestamp, direction (incoming or outgoing), hashed sender/receiver identity (not their names), message length, and message type (image, text, voice, etc.).
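Hashing of sender/receiver identities can be done with a keyed hash, so that the same contact always maps to the same opaque identifier while the real name never leaves the phone. A minimal sketch, assuming a hypothetical per-participant secret salt (the actual HOPES hashing scheme may differ):

```python
import hashlib
import hmac

def hash_contact(contact_name: str, salt: bytes) -> str:
    # Keyed (HMAC-SHA256) hash of the contact identity: stable per contact,
    # so per-contact message counts can still be computed downstream,
    # but the real name is never stored or transmitted.
    return hmac.new(salt, contact_name.encode("utf-8"), hashlib.sha256).hexdigest()

salt = b"per-participant-secret"  # hypothetical per-participant salt
record = {
    "timestamp": 1588291200,
    "direction": "incoming",
    "contact": hash_contact("Alice", salt),  # opaque, stable identifier
    "length": 42,
    "type": "text",
}
assert record["contact"] == hash_contact("Alice", salt)  # deterministic
assert record["contact"] != hash_contact("Bob", salt)    # contacts remain distinct
```

Using a per-participant salt also means the same contact hashes differently across participants, preventing linkage of contact graphs between users.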

vii. Sociability Calls
Similar to Sociability Messages, our app also leverages the Android Accessibility service privilege to capture calls within social messaging apps since, in Singapore, a significant number of phone calls are made using social messaging apps rather than the phone's SIM card.
Following the data format of the call log feature implemented in Beiwe, the HOPES app records the timestamp, duration, direction (incoming or outgoing), type (voice or video call), and hashed sender/receiver identity of every call within WhatsApp. Depending on the need, this can be easily extended to other social messaging apps.
viii. Time Zones
Changes in time zone can significantly affect both one's physical and mental health and one's usage data. If not handled properly, they can cause large discrepancies in clinical predictions for participants who often travel across time zones. For every feature that we capture, we have therefore added one additional field recording the current system time zone.
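Recording the time zone alongside each sample can be as simple as the following sketch (the field names are illustrative, not the exact HOPES schema):

```python
from datetime import datetime

def tag_with_timezone(record: dict) -> dict:
    # Attach the current system time zone name and UTC offset (in seconds)
    # to a feature record, so downstream models can normalize timestamps
    # for participants who travel across time zones.
    now = datetime.now().astimezone()
    record["tz_name"] = str(now.tzinfo)
    record["utc_offset_s"] = int(now.utcoffset().total_seconds())
    return record
```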

4.1 Fitbit Setup
We used the Fitbit Charge 3 in our study. User accounts were pre-created with system-generated email addresses and passwords. A separate "main" application was created in order to download data from user accounts, following the "Authorization Code Grant Flow". Data was downloaded using the Fitbit Web API as intraday time series.

4.2 Fitbit Data Download Architecture
We used AWS Lambda to extract user data from the Fitbit cloud, update access tokens, write encrypted data to files, calculate metadata, and upload the results to AWS S3 buckets. The Fitbit data downloader Lambda is triggered every hour by a rule set up in AWS CloudWatch. The access token of each user is stored in AWS Secrets Manager and updated as necessary.

Back-end Data Processing Pipeline
The data processing back-end is designed to reformat and process data for downstream machine learning models. We created a master script, ./periodic-run.sh, which is scheduled to run at a configurable time interval and executes its processing steps sequentially.

Data Visualization Toolkit
The data visualization toolkit has been developed as a web interface using Jupyter notebooks. The dashboard is highly configurable. The user specifies an input root path, inside which the directory structure can be either 'RootPath/StudyName/PatientName/FeatureName/timestamp-files' or 'RootPath/StudyName/PatientName/FeatureName.csv(.gz)', i.e., it can view data files both before and after concatenation (as described in Section 4.3). In the master configuration dashboard (shown in Figure 5), the user can choose which study, participant, and feature to view. Through checkboxes, drop-down lists, and sliders, the user can set various options and choose different types of graphs to plot. For example, given the heart-rate data (a time series with a heart-rate value every 5 seconds), the user can plot the maximum, minimum, mean, etc. for every interval of 1 week, 1 day, 1 hour, and so on. The user can also choose different columns of the CSV file. These can be plotted as a line plot, a bar plot, or a scatter plot, among others. The example in Figure 5 shows a box plot of heart rate for every day.
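The interval aggregation behind these plots can be sketched with pandas; this is illustrative only, and the toolkit's internals may differ.

```python
import pandas as pd

# Simulated heart-rate stream: one reading every 5 seconds for two days,
# lower at night (before 08:00) and higher during the day.
idx = pd.date_range("2020-04-01", periods=2 * 24 * 60 * 12, freq="5s")
hr = pd.DataFrame({"heart_rate": 70 + (idx.hour >= 8).astype(int) * 10}, index=idx)

# Aggregate to one row per day with min / max / mean, as the dashboard
# does before drawing line, bar, or box plots.
daily = hr["heart_rate"].resample("1D").agg(["min", "max", "mean"])
print(daily)
```

The same `resample` call with a different rule string ("1W", "1h", ...) yields the other interval choices exposed in the dashboard's drop-down lists.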
The main advantage of this toolkit is that it is highly customizable. In practice, users often want to display a fixed set of specific graphs on specific data. Instead of clicking through the various control items in the master configuration dashboard every time, they only need to do this once, then copy and paste the configuration parameters (shown in Figure 5 immediately below the Update Plot button) into their custom scripts when calling the draw function. An example is shown in the overview dashboard in Figure 6. This toolkit is designed to be general-purpose: the user can also manually load individual CSV files, not necessarily from this study, and visualize them. The only requirement on a CSV file is that it must contain a column called timestamp or datetime.

In the overview dashboard, the color of the graph indicates the average radius of gyration in kilometers. As shown in Figure 8, these features are calculated by taking the average over the following time frames:
• CW - Current Week (the 7 days before 0000 hours today)
• PW - Past Week (the 7 days before the current week)
• PM - Past Month (the 30 days before the past week)
Figure 9a shows an exemplary working adult of age 35, who goes to the office, actively associates with friends, has kids and is married, generally gets 6 hours of good sleep, often takes calls and sends messages to communicate with friends and colleagues, commutes to the office during the weekdays and goes out during the weekend.
Observations: Sleep duration is high compared to a working adult. Similarly, sociability is low compared to a working adult, as he does not have many friends. Since he prefers to stay at home and play video games, mobility is low compared to a working adult. Because of a possible emerging relapse during the last week, total sleeping hours and sleep efficiency have dropped. Because of pandemic restrictions, this participant may be isolating himself, and sociability and mobility have dropped significantly.

Data Completion Dashboard
The data completion dashboard (see Figure 10) shows the historical completion status of the actual decrypted data from every participant over a customizable period (the default is the last 90 days). It is implemented as a component module of the data processing pipeline, and is thus refreshed every time the pipeline runs.

Features Selected in Anomaly Detection Dashboard
We have fitted several univariate time-series models to each of the following 12 daily digital phenotyping features, motivated by our HOPE-S study:
• sleep mean efficiency - the mean of the sleep efficiency scores over all periods of sleep;
• sleep tot hrs - the total time (in hours) spent asleep;
• # steps - the total number of steps taken throughout the day;
• # walks - the total number of consecutive periods of steps (the sampling interval is one minute), which we define as walks;
• steps / min walk - the rate (in steps per minute) during all walks;
• social # sent - the number of messages and images sent on WhatsApp;
• social # recv - the number of messages and images received on WhatsApp;
• social # contact exch - the number of unique contacts with whom the participant both sent and received at least one message on WhatsApp;
• # taps - the number of taps in all apps;
• mean intap dur - the mean duration of the intervals between screen taps;
• RoG - the radius of gyration as measured by GPS and computed by the Beiwe back-end;
• light mean lum - the mean ambient light level recorded by the light sensor.
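As an illustration of the kind of univariate screening such models perform, a simple robust baseline flags a day as anomalous when it deviates strongly from a trailing window. This is a generic sketch, not the specific models fitted in HOPES:

```python
import statistics

def flag_anomalies(daily_values, window=14, threshold=3.0):
    # Flag day t when it lies more than `threshold` standard deviations
    # from the mean of the preceding `window` days (a rolling z-score).
    flags = []
    for t in range(len(daily_values)):
        history = daily_values[max(0, t - window):t]
        if len(history) < 3:
            flags.append(False)  # not enough history to judge
            continue
        mu = statistics.mean(history)
        sd = statistics.stdev(history)
        flags.append(sd > 0 and abs(daily_values[t] - mu) > threshold * sd)
    return flags

# e.g. a sudden collapse in total sleep hours on the last day:
sleep_hours = [7.1, 6.9, 7.3, 7.0, 7.2, 6.8, 7.1, 2.5]
print(flag_anomalies(sleep_hours))
```

Each of the 12 features above would be screened independently in this fashion, with flagged days surfaced in the anomaly detection dashboard.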