
Published on 22.10.20 in Vol 22, No 10 (2020): October

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/23297, first published Aug 06, 2020.

    Original Paper

    COVID-19 Self-Reported Symptom Tracking Programs in the United States: Framework Synthesis

    1Uniformed Services University, Bethesda, MD, United States

    2Health Services Research Program, Henry M Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, United States

    Corresponding Author:

    Miranda Lynn Janvrin, MPH

    Health Services Research Program

    Henry M Jackson Foundation for the Advancement of Military Medicine

    6720B Rockledge Dr

    Suite 605

    Bethesda, MD

    United States

    Phone: 1 6035403059

    Email: miranda.janvrin.ctr@usuhs.edu


    ABSTRACT

    Background: With the continued spread of COVID-19 in the United States, identifying potential outbreaks before infected individuals cross the clinical threshold is key to allowing public health officials time to ensure local health care institutions are adequately prepared. In response to this need, researchers have developed participatory surveillance technologies that allow individuals to report emerging symptoms daily so that their data can be extrapolated and disseminated to local health care authorities.

    Objective: This study uses a framework synthesis to evaluate existing self-reported symptom tracking programs in the United States for COVID-19 as an early-warning tool for probable clusters of infection. This in turn will inform decision makers and health care planners about these technologies and the usefulness of their information to aid in federal, state, and local efforts to mobilize effective current and future pandemic responses.

    Methods: Programs were identified through keyword searches and snowball sampling, then screened for inclusion. A best fit framework was constructed for all programs that met the inclusion criteria by collating information collected from each into a table for easy comparison.

    Results: We screened 8 programs; 6 were included in our final framework synthesis. We identified multiple common data elements, including demographic information like race, age, gender, and affiliation (all were associated with universities, medical schools, or schools of public health). Dissimilarities included collection of data regarding smoking status, mental well-being, and suspected exposure to COVID-19.

    Conclusions: Several programs currently exist that track COVID-19 symptoms from participants on a semiregular basis. Coordination between symptom tracking program research teams and local and state authorities is currently lacking, presenting an opportunity for collaboration to avoid duplication of efforts and more comprehensive knowledge dissemination.

    J Med Internet Res 2020;22(10):e23297

    doi:10.2196/23297


    Introduction

    Background

    A 2019 outbreak of febrile respiratory illness in Wuhan, China, quickly evolved into the COVID-19 pandemic [1]. The disease has affected over 200 countries and territories worldwide. Globally, there are more than 18 million confirmed cases and over 700,000 deaths attributed to this flu-like illness, as of August 6, 2020 [2]. In the United States alone, there are more than 4.5 million confirmed cases and over 150,000 deaths [3]. The true number of those affected may be much higher due to the slow rollout and lack of availability of testing in the United States compared to other countries [4].

    The United States, as well as other countries, has combatted this pandemic and sought to flatten the curve via social distancing, testing, isolation, and contact tracing [5]. Despite best efforts, the virus spread quickly with serious implications. In the first month of testing, the hospitalization rate was 4.6 per 100,000 people in the United States. Hospitalization rates were highest among adults over 65 years as well as those with underlying conditions [6]. At the present time, there is no specific antiviral treatment for COVID-19. Management of symptoms focuses on supportive care and oxygen therapy, both of which involve a plethora of hospital resources [5]. Modeling of COVID-19 shows that the pandemic has the potential to cause regional shortages of hospital beds, intensive care unit (ICU) beds, ventilators, and medical staff, which could lead to difficult ethical decisions [7]. A recent study suggests that COVID-19 will likely become endemic like cold and flu viruses [8]. There is a need to predict where resources should be distributed before potential patients with COVID-19 enter the hospital setting to alleviate strain on medical staff and facilities.

    Epidemiological surveillance is fundamental in coordinating both immediate and long-term strategies for the detection and prevention of infectious disease outbreaks [9,10]. However, because collecting and disseminating these data takes several weeks, during highly transmissible outbreaks they may not reflect the current prevalence of the disease. As these data are used to inform health authorities and prompt a public response, the resulting time delay can lead to responses that are inappropriate or inadequate to actual need. Additionally, the collected data may be incomplete or insufficient to discern regional demographics that may impact effective intervention and treatment [11,12].

    To overcome the limitations of traditional epidemiological surveillance, internet-based technologies have been developed that solicit participation from the public at large to estimate and monitor changes in population health in real time [12,13]. One such approach is self-reported symptom tracking, a form of crowd-sourced participatory surveillance in which individuals report their health status on a daily or weekly basis, often prompted by emails or notifications to encourage timely responses. This allows researchers to detect potential changes in the population before changes in clinical presentation appear at hospitals and medical centers. Symptom tracking has been used primarily to track and forecast influenza activity throughout the country; however, researchers have been looking to apply this technology to other diseases, such as COVID-19 [12,13]. Participatory surveillance of this kind may prove vital as a complement to epidemiological surveillance during highly transmissible epidemics, as it allows outbreaks to be detected before they reach the clinical threshold, affording more time for logistical support and appropriate allocation of resources [11]. Research by Baltrusaitis et al [13] indicates that participatory surveillance data on influenza later correlated with confirmed epidemiological surveillance data. With the current highly transmissible and deadly COVID-19 pandemic straining portions of the United States health care system, participatory surveillance is more important than ever to bolster local prevention efforts [14].

    Research Purpose

    This study uses a framework synthesis to inform decision making about the utility of existing self-reported symptom tracking programs for COVID-19, with a focus on the US population, as an early-warning tool for probable clusters of infection. Due to the rapidly changing nature of both the pandemic and work in this area, this research will be updated at 6- and 12-month intervals.

    Objective

    The purpose of this framework analysis is to assess the number and scope of self-reported symptom tracker programs focused on the United States and COVID-19. An innovative best fit framework analysis was chosen because of its strength, utility, and appropriateness in drawing conclusions for an evolving subject [15,16]. According to Booth and Carroll [17], the best fit framework approach is considered a highly structured and pragmatic methodology for research synthesis suited for qualitative research with specific questions, a limited time frame, and issues that have been previously identified; this served the purpose of our research well [18]. The outcomes of this synthesis and its updates should inform decision makers and health care planners about these technologies and the information they can ascertain from them in order to aid in federal, state, and local efforts to combat the pandemic both now and in the future.


    Methods

    A framework analysis was conducted to assess symptom tracking programs. A best fit framework was constructed by collating information collected from each program into a table for easy comparison between programs.

    Target Population

    This framework synthesis sought to identify programs that track COVID-19 symptoms in the US population for all ages, genders, and ethnicities. Inclusion and exclusion criteria are shown below:

    • Inclusion criteria: programs were included if they aimed to capture and geographically collate self-reported potential symptoms of COVID-19 and if they were available for use in the United States. For our purpose, a symptom tracking tool is defined as a program that allows individuals to report symptoms of COVID-19 in order to identify geographic areas with emerging disease or changes in disease progression.
    • Exclusion criteria: programs were excluded if they did not track specific symptoms for COVID-19, were symptom checkers for individual use only, or were not targeting the US population.
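
    Screening against these criteria amounts to a simple filter. The sketch below illustrates the logic with hypothetical program records; the program names and field names are illustrative assumptions, not the actual extraction form used by the reviewers.

```python
# Minimal sketch of the inclusion/exclusion screen using hypothetical
# program records. Field names are illustrative, not the real form.
candidates = [
    {"name": "TrackerA", "tracks_covid_symptoms": True,
     "geographically_collates": True, "us_available": True,
     "individual_checker_only": False},
    {"name": "CheckerB", "tracks_covid_symptoms": True,
     "geographically_collates": False, "us_available": True,
     "individual_checker_only": True},   # a symptom checker, excluded
]

def meets_criteria(program):
    """Apply inclusion criteria, then the exclusion criteria."""
    included = (program["tracks_covid_symptoms"]
                and program["geographically_collates"]
                and program["us_available"])
    excluded = program["individual_checker_only"]
    return included and not excluded

screened = [p["name"] for p in candidates if meets_criteria(p)]
print(screened)  # → ['TrackerA']
```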

    Program Identification

    Programs were identified using Google searches for the keywords (“symptom trackers covid,” “symptom trackers coronavirus,” “symptom tracking covid,” “symptom tracking coronavirus,” “daily symptom tracking covid,” “daily symptom tracking coronavirus,” “self-reporting covid,” “self-reporting coronavirus”). The time frame for the program search ranged from April 7, 2020, to May 9, 2020. Further, we used snowball sampling to identify other symptom tracker programs for COVID-19.

    Screening Method

    Reviewers (JK, MJ, TK) screened programs to determine if inclusion criteria were met. Reviewers (MJ, JK, TK) then extracted data from program websites using a standardized form. To complete the collection of information not available via the program webpages, we contacted the managers of the programs via email.

    Synthesis Method

    Data relating to program characteristics were extracted from all included programs and organized into a table format, which was used to guide data collection and build the framework for analysis. Data were then synthesized in order to form meaningful statements about the programs.
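
    The collation step described above can be sketched as follows; the program names and characteristic values are placeholders standing in for the extracted data, not the study's actual framework table.

```python
# Sketch of the best fit framework collation step: characteristics
# extracted from each program are arranged into one table for
# side-by-side comparison. Programs and values are placeholders.
extracted = {
    "ProgramA": {"platform": "website", "follow_up": "none"},
    "ProgramB": {"platform": "app", "follow_up": "notification every 3 days"},
}

fields = ["platform", "follow_up"]  # framework dimensions to compare
header = ["program"] + fields
rows = [[name] + [attrs.get(field, "not collected") for field in fields]
        for name, attrs in sorted(extracted.items())]

# Render a simple aligned comparison table
widths = [max(len(row[i]) for row in [header] + rows)
          for i in range(len(header))]
for row in [header] + rows:
    print("  ".join(cell.ljust(width) for cell, width in zip(row, widths)))
```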


    Results

    We identified 6 programs that met the inclusion criteria. Information was gathered from the public webpages of all eligible symptom trackers (BeatCOVID19Now, COVIDcast, COVIDNearYou, COVID Symptom Tracker, HelpBeatCOVID19, and HowWeFeel) (Table 1). Two programs, C19Check and the Department of Defense’s MySymptoms.mil, were excluded from our synthesis since they are symptom checkers that do not identify probable clusters of emerging infection.

    All of the included programs were affiliated with a university, school of medicine, or school of public health. Half of the programs (n=3) included were based in Boston, Massachusetts, and affiliated with Harvard University (COVIDNearYou, COVID Symptom Tracker, and HowWeFeel), with COVIDNearYou also collecting data from participants in Canada and Mexico. Two other programs are based elsewhere within the United States (COVIDcast and HelpBeatCOVID19), and one is based in Australia, designed for international use (BeatCOVID19Now).

    The number of responses, defined as unique symptom entries by an individual, to each program varied widely, with the lowest being ~27,000 (BeatCOVID19Now) and the highest being 2,573,240 (COVIDcast). COVID Symptom Tracker collected data from patients currently enrolled in large cohort studies and clinical trials not related to COVID-19 and had obtained much of their initial influx of responses through that mechanism. Two-thirds of the programs had fewer than 100,000 responses. Three programs utilized a website to collect data, while two exclusively used an app available for both Apple and Android devices (COVID Symptom Tracker and HowWeFeel), and only one utilized a survey on a social media platform (Facebook). While most of the programs had no form of follow-up with participants, COVIDNearYou and HelpBeatCOVID19 sent text message reminders, and COVID Symptom Tracker sent phone notifications every third day.

    Table 1. Overview of self-reported symptom tracker programs.

    The programs collected a variety of data elements, but several were common among them (Table 2). All of the symptom trackers collected demographic data on the participant’s age, gender, and zip code. They also all collected information on symptoms experienced by the participant, although the time frame considered varied from the present to 7 days prior. Additionally, every program asked if the participant had been tested for COVID-19 at the time of the survey. Five of the six trackers also asked about any chronic conditions the participant was experiencing and whether they smoked.

    Some of the programs had special interest in certain topics that were not explored by others. Only four of the programs asked the participant if they had been exposed to anyone who had COVID-19, while two asked if the participant came into direct contact with the public. Two programs asked if participants had received a flu shot in the past year. Two programs asked questions related to the impact of the pandemic on participants’ mental health. Two programs asked the participant to answer questions about others in their household in addition to themselves.
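
    The split between universally collected and program-specific elements can be summarized from a presence matrix. The sketch below uses hypothetical programs (A, B, C) and an illustrative matrix, not the actual extraction.

```python
# Sketch: classify data elements as collected by all programs vs.
# only some, from a presence matrix. Programs A/B/C are hypothetical.
presence = {
    "zip code":      {"A", "B", "C"},
    "tested":        {"A", "B", "C"},
    "mental health": {"B"},
    "flu shot":      {"A", "C"},
}
n_programs = 3

common = sorted(element for element, progs in presence.items()
                if len(progs) == n_programs)
partial = sorted(element for element, progs in presence.items()
                 if 0 < len(progs) < n_programs)

print("collected by all:", common)   # → ['tested', 'zip code']
print("collected by some:", partial) # → ['flu shot', 'mental health']
```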

    Table 2. Data elements across programs.

    Discussion

    Principal Results

    Self-reported symptom trackers have been shown to be beneficial in tracking and monitoring the spread and progression of influenza each year and may prove to be vital as the United States continues to loosen shelter-in-place guidelines across the country. Due to the nature of the rapidly changing pandemic, this resource will be updated at both 6- and 12-month intervals to better reflect the evolving pandemic response.

    Two of the programs were created by groups with existing infrastructure for tracking seasonal influenza outbreaks: BeatCOVID19Now, a derivative of Flu-iiQ, and COVIDNearYou, the sister tracker to FluNearYou. Flu-iiQ, in particular, was developed to solicit patient-reported outcome measures during large-scale clinical trials to measure the presence or absence of disease within a small subset of a population, allowing for extremely sensitive measurements without requiring thousands of responses [19]. The flexibility of these programs to track symptoms associated with diverse flu-like illness is imperative in identifying outbreaks of disease, both for the purposes of this current pandemic as well as future flu and other respiratory disease outbreaks [20].

    The data elements collected varied between programs, but all asked for zip code data, which means that even groups that do not currently have their data geolocated on maps have the potential to do so in the future in order to make data accessible to state and local health officials. They also all collect data regarding testing status, which enables local, state, or national program managers or planners to see the impact of current testing expansion efforts. Almost all of the programs asked about race and/or ethnicity, which may highlight racial disparities in testing, symptoms, unemployment status, and other chronic health conditions. The similarities in the data elements being collected by the different programs indicate that collaboration to build a larger, single picture is a possibility; standardization could be beneficial to the programs and to the local leaders and planners, health care providers, and researchers who would receive the outputs. The differences in collected data highlight areas of focus between the programs that other programs may want to consider incorporating as well.

    Notable differences between the programs include unique data elements as well as the manner of recruitment. Two of the programs, BeatCOVID19Now and COVIDcast, are collecting information related to the mental health impact of the pandemic. This topic is currently being discussed in the scientific community since individuals with current mental health conditions can be at higher risk for infections [21,22]. Additionally, mental health conditions can be made worse by the anxiety and fear brought on by the pandemic [21]. Individuals without existing mental health conditions may develop emotional responses to the pandemic similar to disaster scenarios, particularly those who are working in response to the pandemic or those who are more susceptible to infection. Quarantine in general can spur a number of emotional responses that can remain after stay-at-home orders are lifted [22]. These programs could help to track the effect of mental health during the COVID-19 pandemic and help to inform prevention efforts for future pandemics requiring social isolation and quarantine. Another key difference was the reach of each program. Programs that partnered with or heavily relied on social media platforms (COVIDcast and HowWeFeel) had significantly more responses than those that did not utilize social media, suggesting that social media is a powerful recruitment tool for these efforts, even more so now since people depend on these platforms to stay connected due to social distancing measures. Therefore, its use should be considered by other groups going forward.

    One of the notable results of this synthesis is the demonstrated overlap or duplication of effort between the programs. Each program is competing for the same group of potential respondents, who are more than likely going to be completing only one group’s survey. Without ongoing coordination between groups, the beneficiaries of their work—the public, lawmakers, state and local health care officials, etc—will not obtain information reflective of the full potential of symptom tracking. Although many of the groups recognize this, active collaboration between the groups has been a difficult process, even among the groups located in the same city (eg, Boston, Massachusetts) and based in the same institution.

    A key challenge facing these programs is a lack of recognition at the national level. Only one of the trackers, COVIDNearYou, had a partnership with the Centers for Disease Control and Prevention, an extension of their ongoing partnership for FluNearYou. Despite this long-term collaboration, there is no outward support from the agency urging people to engage with this new program. The lack of local, state, or national promotion or outward partnership further exacerbates the potential for gaps between programs. Additionally, endorsement by local authorities or agencies could increase the number of responses by reaching people who were previously unaware of these programs and influencing them to contribute their data, which in turn would allow for more complete data. This has been found to be the case in the United Kingdom, where the National Health Service has endorsed the sister application to COVID Symptom Tracker, based at King’s College London. Because of this, at the time of interview, they had received ten times as many responses as their US counterparts [23].

    Limitations

    Several limitations must be acknowledged for this study. First, our analysis was limited to English-language programs and therefore may have missed nuances of data collection that are more important to non-English-speaking residents. Second, although the speed of framework analysis enables rapid evaluation of commonalities, it does not provide the in-depth rigor of a full systematic review. Third, our collected data did evaluate differences in the number of responses to each program but did not analyze the effectiveness, market penetration, or user demographics of the evaluated programs. Fourth, we recognize that program participation is limited to only those who have access to the internet or cellular phone service, creating an unintended disparity among respondents based on their access to and utilization of technology. Therefore, the underlying reasons for the difference in response rate remain beyond the scope of this study. Last, this synthesis does not provide critical appraisal of programs or evaluate programs for effectiveness.

    Conclusion

    Self-reported symptom tracking programs offer potential benefits as states and counties continue to reopen after the large-scale stay-at-home orders. Frequently reported data with high participation in geographic areas would allow officials to better monitor potential emerging hotspots and institute public health policy and reallocate resources more quickly to combat the spread of disease. However, there are unique challenges to address with self-reported symptom tracking programs to ensure successful implementation. Recognition or endorsement at the national, state, or local levels; increased funding to expand social media advertisements and partnerships; and collaboration between existing programs to generate a more comprehensive data picture would be essential steps in bolstering the utility of symptom tracking programs to achieve optimal effectiveness. If these challenges are addressed and symptom tracking programs become more widely used, the reopening process could be safer in the short term with the potential to monitor communities more closely for long-term management of the COVID-19 pandemic or future outbreaks.

    Acknowledgments

    The contents of this publication are the sole responsibility of the authors and do not necessarily reflect the views, assertions, opinions, or policies of the Uniformed Services University of the Health Sciences, the Henry M Jackson Foundation for the Advancement of Military Medicine, or the Departments of the Army, Navy, or Air Force. Mention of trade names, commercial products, or organizations does not imply endorsement by the US Government.

    This study was funded through a grant from the US Department of Defense, Defense Health Agency (award #HU0001-17-2-0001). The funding agency played no part in the design, analysis, and interpretation of data or writing of the manuscript.

    Conflicts of Interest

    None declared.

    References

    1. Hui D, I Azhar E, Madani T, Ntoumi F, Kock R, Dar O, et al. The continuing 2019-nCoV epidemic threat of novel coronaviruses to global health - The latest 2019 novel coronavirus outbreak in Wuhan, China. Int J Infect Dis 2020 Feb;91:264-266 [FREE Full text] [CrossRef] [Medline]
    2. COVID19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU). Johns Hopkins Corona Virus Resource Center.   URL: https://coronavirus.jhu.edu/map.html [accessed 2020-08-06]
    3. Coronavirus Disease 2019 (COVID-19). Centers for Disease Control and Prevention.   URL: https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html [accessed 2020-08-06]
    4. Cohen J, Kupferschmidt K. Countries test tactics in 'war' against COVID-19. Science 2020 Mar 20;367(6484):1287-1288. [CrossRef] [Medline]
    5. Nicola M, O'Neill N, Sohrabi C, Khan M, Agha M, Agha R. Evidence based management guideline for the COVID-19 pandemic - Review article. Int J Surg 2020 May;77:206-216 [FREE Full text] [CrossRef] [Medline]
    6. Garg S, Kim L, Whitaker M, O'Halloran A, Cummings C, Holstein R, et al. Hospitalization Rates and Characteristics of Patients Hospitalized with Laboratory-Confirmed Coronavirus Disease 2019 - COVID-NET, 14 States, March 1-30, 2020. MMWR Morb Mortal Wkly Rep 2020 Apr 17;69(15):458-464 [FREE Full text] [CrossRef] [Medline]
    7. Emanuel EJ, Persad G, Upshur R, Thome B, Parker M, Glickman A, et al. Fair Allocation of Scarce Medical Resources in the Time of Covid-19. N Engl J Med 2020 May 21;382(21):2049-2055. [CrossRef]
    8. Kissler SM, Tedijanto C, Goldstein E, Grad YH, Lipsitch M. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science 2020 May 22;368(6493):860-868 [FREE Full text] [CrossRef] [Medline]
    9. Buehler JW, Centers for Disease Control and Prevention. CDC's vision for public health surveillance in the 21st century. MMWR Suppl 2012 Jul 27;61(3):1-2. [Medline]
    10. Burrell C, Howard C, Murphy F. Fenner and White's Medical Virology, Fifth Edition. London, UK: Academic Press; Nov 30, 2016.
    11. Brownstein JS, Chu S, Marathe A, Marathe MV, Nguyen AT, Paolotti D, et al. Combining Participatory Influenza Surveillance with Modeling and Forecasting: Three Alternative Approaches. JMIR Public Health Surveill 2017 Nov 01;3(4):e83 [FREE Full text] [CrossRef] [Medline]
    12. Lu FS, Hou S, Baltrusaitis K, Shah M, Leskovec J, Sosic R, et al. Accurate Influenza Monitoring and Forecasting Using Novel Internet Data Streams: A Case Study in the Boston Metropolis. JMIR Public Health Surveill 2018 Jan 09;4(1):e4 [FREE Full text] [CrossRef] [Medline]
    13. Baltrusaitis K, Brownstein JS, Scarpino SV, Bakota E, Crawley AW, Conidi G, et al. Comparison of crowd-sourced, electronic health records based, and traditional health-care based influenza-tracking systems at multiple spatial resolutions in the United States of America. BMC Infect Dis 2018 Aug 15;18(1):403 [FREE Full text] [CrossRef] [Medline]
    14. Abir M, Cutter C, Nelson C. COVID-19: A Stress Test for A US Health Care System Already Under Stress. Health Affairs 2020 Mar 11 [FREE Full text] [CrossRef]
    15. Brunton G, Oliver S, Thomas J. Innovations in framework synthesis as a systematic review method. Res Synth Methods 2020 May 03;11(3):316-330. [CrossRef] [Medline]
    16. Koehlmoos T, Gazi R, Rashid M. Social franchising evaluations: A scoping review. London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London; 2011 Jun.   URL: https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3085 [accessed 2020-04-14]
    17. Booth A, Carroll C. How to build up the actionable knowledge base: the role of 'best fit' framework synthesis for studies of improvement in healthcare. BMJ Qual Saf 2015 Nov 25;24(11):700-708. [CrossRef] [Medline]
    18. Carroll C, Booth A, Cooper K. A worked example of "best fit" framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol 2011 Mar 16;11(1):29 [FREE Full text] [CrossRef] [Medline]
    19. Osborne R, Norquist J, Elsworth G, Busija L, Mehta V, Herring T, et al. Development and Validation of the Influenza Intensity and Impact Questionnaire (FluiiQ™). Value in Health 2011 Jul;14(5):687-699. [CrossRef] [Medline]
    20. Chunara R, Aman S, Smolinski M, Brownstein J. Flu Near You: An Online Self-Reported Influenza Surveillance System in the USA. Online J of Public Health Inform 2013;5(1):e [FREE Full text] [CrossRef]
    21. Menni C, Valdes AM, Freidin MB, Sudre CH, Nguyen LH, Drew DA, et al. Real-time tracking of self-reported symptoms to predict potential COVID-19. Nat Med 2020 Jul 11;26(7):1037-1040. [CrossRef] [Medline]
    22. Lochlainn M, Lee K, Sudre C. Key predictors of attending hospital with COVID19: An association study from the COVID Symptom Tracker App in 2,618,948 individuals. Preprint posted on April 29, 2020. medRxiv [FREE Full text] [CrossRef]
    23. Drew D, Nguyen L, Steves C, Menni C, Freydin M, Varsavsky T, COPE Consortium. Rapid implementation of mobile technology for real-time epidemiology of COVID-19. Science 2020 Jun 19;368(6497):1362-1367 [FREE Full text] [CrossRef] [Medline]


    Abbreviations

    CDC: Centers for Disease Control and Prevention
    ICU: intensive care unit


    Edited by G Eysenbach; submitted 06.08.20; peer-reviewed by A Cravioto, T Burke; comments to author 01.09.20; revised version received 02.09.20; accepted 14.09.20; published 22.10.20

    ©Tracey Pérez Koehlmoos, Miranda Lynn Janvrin, Jessica Korona-Bailey, Cathaleen Madsen, Rodney Sturdivant. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 22.10.2020.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.