Published on 19.12.19 in Vol 21, No 12 (2019): December

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/16661, first published Oct 11, 2019.


    Viewpoint

    A Call for a Public Health Agenda for Social Media Research

    UConn Center for mHealth and Social Media, University of Connecticut, Storrs, CT, United States

    Corresponding Author:

    Sherry Pagoto, PhD

    UConn Center for mHealth and Social Media

    University of Connecticut

    2006 Hillside Road, Unit 1248, Room 22

    Storrs, CT, 06268

    United States

    Phone: 1 6178770923

    Email: Sherry.Pagoto@uconn.edu


    ABSTRACT

    Research has revealed both the benefits and harms of social media use, but the public has very little guidance on how best to use social media to maximize the benefits to their health and well-being while minimizing the potential harms. Given that social media is intricately embedded in our lives, and we now have an entire generation of social media natives, the time has come for a public health research agenda to guide not only the public’s use of social media but also the design of social media platforms in ways that improve health and well-being. In this viewpoint, we propose such a public health agenda for social media research that is framed around three broad questions: (1) How much social media use is unhealthy, and what individual and contextual factors shape that relationship? (2) What are ways social media can be used to improve physical and mental well-being? (3) How does health (mis)information spread, how does it shape attitudes, beliefs, and behavior, and what policies or public health strategies are effective in disseminating legitimate health information while curbing the spread of health misinformation? We also discuss four key challenges that impede progress on this research agenda: negative sentiment about social media among the public and scientific community, a poorly regulated research landscape, poor access to social media data, and the lack of a cohesive academic field. Social media has revolutionized modern communication in ways that bring us closer to a global society, but we currently stand at an inflection point. A public health agenda for social media research will serve as a compass to guide us toward social media becoming a powerful tool for the public good.

    J Med Internet Res 2019;21(12):e16661

    doi:10.2196/16661


    Introduction

    Recent leaks of Facebook Chief Executive Officer (CEO) Mark Zuckerberg warning employees about the implications for Facebook if a top Democratic candidate became president resulted in yet another wave of negative press for the company [1], continuing a backlash initially provoked by a 2014 Facebook research study that manipulated user newsfeeds [2,3]. The hashtag “#deleteFacebook” often goes viral following these incidents, as angry users publicly vow to disconnect their accounts [4]. Despite the backlash, in 2019 the Pew Research Center reported no decline in Facebook users over this period [5]. Although many people seemingly have serious concerns about Facebook and other social media platforms, social media bring a particular value to people’s lives that makes us reluctant to disconnect. To the extent that people enjoy and even depend on the community and resources that social media bring to their lives, disconnecting may feel like more of a loss than a gain. We know surprisingly little about the value that social media bring to people’s lives.

    Social media is a burgeoning area of study in the field of public health; however, much research covered in the popular media has focused on its harms [6-9]. Persistent negative headlines drive a narrative about social media that likely deteriorates public sentiment. To be sure, enough research has been conducted to know that some uses of social media are indeed harmful, but the public has very little guidance on how to modify their use to maximize the benefits to health and well-being while minimizing the potential harms. Given that social media are intricately embedded in our lives and we now have an entire generation of social media natives, the time has come to create a public health research agenda to guide the use and design of social media in ways that improve health and well-being. In this viewpoint, we propose such a public health agenda for social media research that is framed around three broad questions (Textbox 1):

    • How much social media use is unhealthy, and what individual and contextual factors shape that relationship?
    • What are the ways social media can be used to improve physical and mental well-being?
    • How does health (mis)information spread, how does it shape attitudes, beliefs, and behavior, and what policies or public health strategies are effective in disseminating legitimate health information while curbing the spread of health misinformation?

    Textbox 1. A public health agenda for social media research: three areas of inquiry and specific research questions.

    Public Health Agenda for Social Media Research

    How Much Social Media Use Is Unhealthy, and What Individual and Contextual Factors Shape That Relationship?

    Although studies demonstrating an association between greater social media use and poor health outcomes (eg, depression, loneliness, poor body image) [10-13] are plentiful and make for popular headlines, the literature on social media use and well-being is quite mixed [14]. Indeed, a recent meta-analysis of 31 studies found that higher social media use was associated with greater online and offline social support [15]. Nonlinear relationships have also been observed, such that both heavy use and no use of social media have predicted worse mental health. Increasingly, studies are pointing to moderating variables that influence the relationship between social media use and health-related outcomes [16,17]. At this point, studies that report simple associations between time spent broadly on social media and a specific health outcome do little to advance our knowledge about the impact of social media use on health [18]. More research is now very much needed on specific types of social media activities, the individual and contextual determinants of these activities, and how they lead to positive versus adverse health outcomes [19]. Further, longitudinal studies that identify factors predicting change in time spent in health-promoting versus harm-promoting social media activities, and the impact of those changes on health outcomes, would create knowledge that could inform public health guidelines [19].

    Given how little we know about how to define the boundaries of problematic social media use, it is premature to devise solutions to curb it. Nevertheless, solutions are being proposed, most of which involve calls to limit social media use [20]. Notably, Senator Josh Hawley proposed the Social Media Addiction Reduction Technology (SMART) Act [21]. If passed, the act would ban social media platform features assumed to promote compulsive use, such as infinite scrolling and autoplay, while requiring platforms to include features that implement time limits. However, little research has explored whether infinite scrolling and autoplay features negatively impact users.

    Further, encouraging time limits on social media usage overlooks that not all users have negative experiences on social media, and not all social media uses are inherently harmful. Policy and public health recommendations that focus on quantity instead of quality essentially equate a patient spending 6 hours a week engaging with her cancer Facebook group to help her cope with treatment to a cyberbully spending the same amount of time harassing classmates on Snapchat. Evidence-based guidelines are needed to prevent real harm that can result from discouraging or shaming social media use in people whose well-being is enriched by, or even dependent upon, their social media activities.

    New self-report measures are emerging that allow researchers to assess the degree to which one’s social media experiences are negative or positive [22,23]. Tools like these could be useful as we attempt to understand which specific uses are associated with negative and positive experiences and health outcomes. More objective and granular measures that capture time spent engaging in specific social media activities, such as engaging with a particular Facebook feed or group, are also needed to advance the research agenda. Currently, no such tools exist [24]. While self-report measures are easy to administer, self-reported estimates of time spent in activities that are brief, unplanned, and unstructured (such as typical social media use [25]) are less accurate than estimates of time spent in more structured and prolonged activities, such as working or commuting [24]. While software programs and mobile tools exist that track time spent using Facebook via a mobile app or web browser (eg, Screen Time [iPhone], Stay Free app [Android]) [26], these tools do not capture use across multiple devices, nor do they capture how much time the participant spent engaging in different activities. Until new tools to objectively measure specific social media use become available, ecological momentary assessment could be leveraged to track time spent engaging in specific social media uses, along with factors such as positive and negative affect, body dissatisfaction, sense of belonging, and loneliness, to provide insights into the relationship between specific uses and these outcomes. This approach could also be leveraged in interventions in which users self-monitor their social media activities and these factors, receive feedback about the relationships between them, and set goals that maximize positive uses while minimizing negative uses.

    What Are Ways Social Media Can Be Used to Improve Physical and Mental Well-Being?

    Some commercial social media platforms are investing in health-related tools that have promising public health implications. For example, Facebook partnered with America’s Blood Centers and the American Red Cross to produce a blood donation tool that alerts users to blood supply shortages in their area and notifies them of locations where they can donate [27]. Also, Facebook, Instagram, Pinterest, and Twitter all provide suicide hotline information to users who use suicide-related search terms. Pinterest removed antivaccine content from its platform and adopted a policy under which vaccine-related searches produce health information from leading public health organizations [28]. Relatedly, Facebook and Instagram reduced the ranking of groups and pages that spread misinformation about vaccines, rejected ads containing vaccine-related misinformation, and provided pop-ups for users entering vaccine-related search terms [29]. YouTube has pulled ads from videos containing antivaccine content to prevent sites from monetizing this content [30]. That commercial social media platforms are taking actions to improve public health is a promising trend, but very little research has explored the impact of these initiatives, most of which are very new. Facebook offers research awards on specific topics [31], but the public health community should attempt to shape that agenda. As is beginning to occur in the social sciences [32], partnerships between commercial platforms and public health scientists are needed to research the impact of these initiatives and to inform the development of new public health initiatives.

    Users are also discovering unique ways to leverage social media to improve their health. Online patient communities have proliferated on social media as a way for patients to connect with one another on specific health topics. “Peer-to-peer healthcare,” a term coined in 2011 by Susannah Fox at the Pew Research Center, refers to patients using social media to connect with other patients about their health [33]. Online patient communities have organically formed on nearly all social media platforms. They are often patient-initiated, patient-moderated, and open forum, meaning conversation threads can be initiated by members as they wish. Research has begun to examine online patient communities and the benefits members gain from participation. For example, a review of 47 studies of online diabetes communities on multiple platforms found evidence for positive clinical, behavioral, psychosocial, and community outcomes, and very few negative outcomes [34]. A survey study of a breast cancer community on Twitter found that 80.9% of respondents reported that participation increased their breast cancer knowledge, 66% learned of clinical trial opportunities, 31% learned information that led them to seek a second opinion, 71.9% reported being inspired to increase their advocacy efforts, and 67% of those with high anxiety about their diagnosis reported declines in anxiety [35].

    Online patient communities may even impact health care costs. A longitudinal study of new enrollees in HealthUnlocked, a collection of online patient communities in the United Kingdom, revealed declines in emergency room visits [36]. Much more research is needed to fully understand the range of benefits people experience in these communities and how to amplify beneficial experiences. This research faces logistical and ethical challenges because many online communities are private, and their sheer size makes obtaining informed consent from all members difficult, if not infeasible. Further, the presence of researchers may not be desirable in spaces created by patients for patients [37]. For this reason, a community-based participatory research model, adapted for online communities, may be needed to create partnerships. Regardless, researchers must establish the value proposition for online community leaders and members to engage with them.

    Online patient communities are used not only for peer-to-peer support but also for advocacy efforts. Donna Helm Regen, moderator of the Pull the Plug on Tanning Beds Facebook page [38], lost her daughter, a tanning bed user, to melanoma before the dangers of tanning beds were well publicized. She uses her Facebook page to inform the public about the harms of tanning beds and to advocate for tanning bed legislation, often in partnership with nonprofit organizations and other advocacy groups with whom she has connected on social media. Her efforts afforded her opportunities to make a national public service announcement [39] and to testify at state legislative hearings supporting tanning legislation. Social media give grassroots health advocacy efforts a platform to shore up support for their cause. This is another example of how encouraging arbitrary limits on social media use may inadvertently squelch important public health efforts. People have discovered ways to use social media to impact their health, their community’s health, and public health, much of which has gone entirely unstudied. Research on what makes these efforts work and how to enhance their efficacy is needed to strengthen their impact on changing individual behavior, shifting public attitudes, and influencing public health policy.

    Patients are not the only ones creating online health communities. Researchers have begun to create online patient communities to implement and evaluate the efficacy of social media–delivered behavioral interventions and public health campaigns [40]. Typically, investigators will start a private online forum on a commercial social media platform (eg, a private Facebook group), recruit members of a target population to join the group, host a feed that delivers behavioral intervention or public health messaging for a period of time, and then measure outcomes. Often these programs employ a health care professional or health coach as a moderator. Systematic reviews of studies evaluating such interventions for weight management [41], smoking [42], HIV testing [43], and cancer prevention [44] reveal promising outcomes. These online communities provide the benefit of both peer-to-peer support and evidence-based health information. However, this model has drawbacks. Researcher-created communities are smaller than real-world online patient communities, which raises concerns about scalability. They are also time-limited, which necessitates aggressive engagement strategies to strengthen ties between members of the community.

    Further, we know of no examples of evidence-based social media interventions being implemented in real-world settings. Many nonprofit public health organizations have online patient communities but do not provide health promotion interventions within those communities, which presents an opportunity for partnerships with intervention researchers. The closest example is the Truth Initiative’s BecomeAnEX community for smokers [45]. The platform provides evidence-based online resources for smoking cessation that community members can use to assist their attempts to quit. Much research is needed to examine how to generate meaningful patient engagement in online communities and how to scale these models for implementation at the community or population level [46].

    The promising data emerging about the health benefits of online patient communities support the notion that online social interactions occurring in tighter-knit communities formed around shared life experiences may have the most potential to impact health and well-being. A movement toward a tighter, community-focused model of social media appears to be afoot at Facebook and Instagram. In March 2019, CEO Mark Zuckerberg announced [47]:

    Over the last 15 years, Facebook and Instagram have helped people connect with friends, communities, and interests in the digital equivalent of a town square. But people increasingly also want to connect privately in the digital equivalent of the living room. As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today's open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.

    To the extent that social media use shifts toward private forums built around shared life experiences, the quality of interactions and the impact on well-being are likely to improve, which may, by design, diminish negative experiences. Research should inform the types of shifts in platform features, tools, and design needed to shape users’ experiences in this way.

    How Does Health (Mis)information Spread, How Does It Shape Attitudes, Beliefs, and Behavior, and What Policies or Public Health Strategies Are Effective in Disseminating Legitimate Health Information While Curbing the Spread of Health Misinformation?

    Overview

    Long before the term “fake news” was coined, concerns were expressed regarding the veracity of health claims online and elsewhere [48]. Bogus health claims come in two forms: misinformation and disinformation. Health misinformation refers to false information spread by someone who believes it to be true, whereas health disinformation refers to false information spread by someone who knows it is false and intends to mislead [49]. The volume of both on social media has grown into a public health epidemic [50]. While research on strategies to combat this problem is nascent, a multipronged approach is needed, including curbing the spread of false messages, inoculating people against them, and developing effective counter-messaging. Tackling this complex and urgent public health problem will require an interdisciplinary research agenda.

    Curbing the Spread of Health Misinformation

    To curb the spread of health misinformation and disinformation on social media, we must first have the means to identify three things: (1) false messaging; (2) the messengers who spread it and their reasons for doing so (ie, misinformation or disinformation); and (3) the means they are using to do so. While studies of health misinformation on social media have covered a variety of health topics, including infectious disease (eg, vaccines [51] and antibiotics [52]), cancer [53,54], electronic cigarettes [55], eating disorders [56,57], and nutrition [58,59], a recent systematic review of 57 health misinformation studies revealed that this area of work is dominated by infectious disease studies [60]. This review reported that the most common purveyors of health misinformation appear to be people with no institutional affiliations.

    Studies of “social bot” activity reveal that these accounts flood the conversation on particular health topics [61,62]. Social bots are social media accounts designed to appear human-operated but automated to post or respond to content in specific ways [63]. Not all social bot accounts have nefarious intentions; some serve helpful purposes, such as content aggregation [63] or gamifying health challenges [40]. Trolls are accounts operated by humans who are intentionally disruptive towards others and may or may not be compensated for doing so [64]. One study of antivaccination content spread by social bots and trolls found that these accounts are pervasive and designed to create false equivalency and sow discord about vaccines [61]. Social bot and troll accounts proliferate rapidly, which gives them extraordinary power to flood social media with messaging in ways that convey the false sense that a certain message is popular or well accepted [63]. The use of social bots to conduct misinformation campaigns may be one reason health misinformation (and disinformation) is so much more plentiful on social media than legitimate health information [60].

    Commercial platforms have developed algorithms to identify the nefarious activities of bots and trolls, reduce these activities, and delete the offending accounts [65]; however, new accounts designed to circumvent the algorithms eventually emerge, making for a seemingly never-ending cat-and-mouse game [63]. Troll accounts are exceptionally resistant to algorithmic detection because they are operated by real people, who may be paid to do so [64]. Surveillance studies are needed to understand the bot and troll ecosystem around health-related messaging and how it is evolving. Further, while social bots have been used with ill intent, research should examine how they might be used for good. For example, as part of an anti-bullying public service announcement, Monica Lewinsky released the “Goodness Bot,” a Twitter account that, when mentioned in reply to a bullying tweet, responds with a positive message [66].

    Research is also needed to improve our understanding of how legitimate health information is produced and spreads on social media. When it comes to health misinformation, a strong defense may require a strong offense. Many public health departments and nonprofit organizations use social media to disseminate health messaging. A study of state public health departments revealed that most have a social media presence [67]. Another study focusing specifically on state public health departments’ use of video-based messaging found that 43 of 51 departments have YouTube channels, with a total of 6302 subscribers, 3957 videos, and 12,151,720 views [68]. Most large, health-related nonprofit organizations also have a social media presence. A study of the Facebook feeds of 24 of the largest nonprofit skin cancer organizations found that the organizations had a collective total of 225,113 followers and in one year produced 824 posts that received 92,004 likes, 4148 comments, and 82,791 shares [69]. Interestingly, the message types that received the most shares were fear appeals and myth busters. While these organizations seem to be producing impressive reach, it is unclear how these numbers compare to the reach of health misinformation on the same topics. For example, a recent study of 1000 tweets using the words “tanning bed” and “tanning salon” revealed that most were made by tanners expressing positive sentiment about tanning, with only 4.3% of the tweets containing health warnings [70]. This suggests that skin cancer organizations’ messaging may not be penetrating the social media spaces where people at risk discuss their habits. More research is needed to understand the audiences of health-related organizations’ social media feeds, the extent to which their messaging reaches people at highest risk for the target health condition, and the impact of their messaging on attitudes, beliefs, and behavior. Research is also needed to inform health organizations’ social media–messaging content strategy.

    While health misinformation and disinformation may be abundant in some spaces on social media, we know little about the social media spaces that contain very little of either, or how that has come about. For example, a review of online diabetes communities revealed very low rates of health misinformation [34]. This suggests that health misinformation is not a given in online spaces where patients connect but instead may tend to flourish under specific circumstances. Some communities may self-police health misinformation, but how effective they are depends on their ability to identify it. Moderators of online patient communities, who are essentially a self-appointed army of volunteer community health workers, have very little training and guidance on how to identify and stem health misinformation, although many are likely motivated to do so. Public health researchers could inform the work of moderators, but we see few examples of such partnerships so far.

    Health Misinformation Inoculation

    Inoculating the public against health misinformation (and disinformation) requires an understanding of whether and how they influence attitudes, beliefs, and behavior, and who is susceptible to being unduly influenced. The mere existence of health misinformation on certain topics does not necessarily mean it is changing people’s attitudes and behavior, and the same goes for legitimate health information. However, exposure to health misinformation may strengthen preexisting misinformed beliefs. Some evidence suggests that users seek out messaging and online communities that confirm their preexisting beliefs [51]. That said, certain message characteristics may influence message effectiveness. Recent measles outbreaks strongly suggest that antivaccination messages have had an impact on beliefs and behavior [71,72]. These messages often stoke fear or leverage conspiracy theories, which we know to be particularly effective message strategies [73]. Message effectiveness also depends on the characteristics of the messenger and the recipient. Transportation theory suggests that narrative messages are particularly powerful at affecting beliefs, attitudes, and behavior to the extent that people identify with the person sharing their story [74]. Research is needed on whether misinformed narratives affect message recipients more powerfully than health misinformation conveyed in other forms (eg, links). Chou and colleagues at the National Cancer Institute have called for research into how characteristics of both the messenger and the message recipient affect message receptivity [75].

    Scientists, health care professionals, and medical journal editors have been called on to be messengers of evidence-based health information [76]; however, they have little evidence-based guidance on how to do so. One study of the impact of scientist demeanor on message credibility found that scientists exhibiting hostility in a debate were perceived as less credible and trustworthy [77]. Knowledge about the complex relationships between message, messenger, and recipient is necessary to inform strategies that inoculate people against health misinformation. Finally, health literacy training is an inoculation strategy that may be useful in primary through postsecondary educational settings [75,78]. Low health literacy has been shown to affect how people evaluate health information they encounter online [79]. Given the volume of health information on social media (and the internet in general), children and adults alike require skills for sorting, vetting, and processing this information, as well as an understanding of how cognitive biases affect information processing. Research on health literacy education that effectively decreases susceptibility to online health misinformation is much needed.

    Developing Effective Counter-Messaging

    Some, albeit limited, research has focused on developing effective counter-messaging. For example, one study found that platform-based warnings such as “this tweet may contain misinformation” decreased the likelihood of a post being shared [80]. Similarly, another study revealed that misinformation correction by platforms and peers reduced misperceptions [81]. A significant challenge to developing effective counter-messaging is that some groups may be especially resistant to it, such as those who have already strongly embraced misinformation. Individual characteristics, such as the sunk costs invested in the misinformation (eg, time and resources spent disseminating it), the negative consequences of changing positions (eg, embarrassment, loss of community or social capital, shame about harm that may have resulted from action or inaction related to the misinformation), and literacy, likely affect resistance [60]. Researchers focused on developing counter-messaging campaigns should leverage the expertise of human-computer interaction and social media marketing experts who can help guide design and dissemination strategies best matched to each social media platform. A great need exists to update traditional health communication theories and strategies, which were not informed by the unique form of communication afforded by social media, to meet the ever-evolving challenges of the modern social media landscape.

    Challenges to a Public Health Agenda for Social Media Research

    Overview

    Four key challenges constrain progress on a public health agenda for social media research: (1) negative sentiment about social media and villainization of social media companies among the public and scientific community; (2) a poorly regulated research landscape; (3) inadequate access to social media data; and (4) lack of a cohesive academic field.

    Negative Sentiment

    Negative public sentiment about social media may be part of society’s general tendency to push back on new technology, a phenomenon with a long history [82]. To be sure, scandals involving how social media companies handle privacy and data access have amplified the public response. A persistently negative narrative, however, makes it difficult to shore up support for healthy solutions and may lead to the abrupt dismissal of signs of progress. To the extent that the academic community adopts a defeatist view of social media, we may squander our opportunity to shape the social media landscape in ways that improve public health.

    Poorly Regulated Research Landscape

    Research involving social media platforms is emerging rapidly in a poorly regulated landscape [83]. Federal guidelines for the ethical conduct of research do not specifically address the unique ethical challenges of working with social media data [84]. As such, scientists and institutional review boards (IRBs) have little guidance for addressing the ethical, legal, and social implications of social media data use. A qualitative study of university IRB members revealed concerns that investigators do not adequately describe potential risks or clear plans to minimize them in IRB applications, that IRB members often lack the knowledge to review such protocols adequately, and that IRBs have difficulty keeping pace with rapidly changing technologies used in research [85].

    As discussed in two recent reviews [86,87], privacy, confidentiality, and informed consent are key ethical issues in social media research [88-90], though specific ethical considerations differ across study types (eg, surveillance, surveys or interviews, interventions) [91]. While not all research involving social media data meets the federal definition of human subjects research, and thus does not require informed consent, researchers still need to be aware that users may not realize that their public social media content can be used in research studies and may prefer that it not be used this way [88]. Even though use agreements address the potential for public social media content to be used in research and otherwise, the majority of users accept lengthy, difficult-to-read use agreements without reading them or understanding how their data can be accessed and used [92]. Standard rules and protocols to guide social media data use, as well as data sharing between industry and academia, are needed and must be rigorous in terms of privacy and data security. One collaborative effort working toward this goal is ReCODE Health [93], an academic group that has created a community for resource-sharing (eg, sample IRB documents) and education about the protection of human subjects [94]. For studies that require informed consent, we encourage researchers to include detailed information about the procedures for the collection, storage, and analysis of social media data [89]. For studies involving publicly posted content, we suggest researchers refrain from including exact quotes, which can be traced back to users in ways that could cause harm [88-90].

    Data Access

    Access to much social media data is limited, including both public data and data that users give researchers consent to access. Following the Cambridge Analytica scandal, Facebook and Instagram changed their application program interfaces (APIs) to restrict data access by third parties [95]. While restricting data access is a plausible response to privacy concerns, it also limits how data can be used for the public good. Furthermore, frequent changes to data access rules force researchers to bear the exorbitant costs of acquiring data from third-party applications that are agile to API changes but typically geared toward commercial use. Such tools also add another layer of privacy risk. When data access changes during a research study, it can compromise the ability to answer key research questions, forcing researchers to redesign studies midstream or compromise research quality.

    Facebook’s partnership with Social Science One, an organization designed to facilitate academic-industry partnerships “to advance the goals of social science in understanding and solving society’s great challenges,” [96] is a promising example of how such partnerships can work [97]. Social Science One has developed an industry-academic partnership model that is designed to navigate barriers to data access, such as consumer privacy, trade secrets, and proprietary content, within a structure that is committed to securing the trust of the academic community and the general public [32]. Facebook partnered with Social Science One to allow scientists to access Facebook data to study how social media influences democracy. Progress has moved slowly [98], and results of the partnership remain to be seen, but this effort is the first of its kind to tackle the challenges of academic-industry partnerships involving social media data.

    The barriers to academic researchers’ access to social media data are deeply troubling. Academic researchers have limited ability to conduct rigorous research that could change the way people use social media for the better. Technology and marketing companies have a greater ability to afford and access social media data, face fewer regulations, and have less oversight, particularly as it relates to protecting users. Companies also monetize these data in ways that are not transparent to the public [78]. More research is critically needed to develop new business models and technologies to protect user privacy while facilitating collaboration and data sharing between industry and academia [99].

    Lack of a Cohesive Academic Field

    Social media research has flourished in recent years, but a cohesive field has not yet emerged to bring this scientific community together. A cocitation analysis of 121 journals that have been cited at least five times in studies of health misinformation on social media revealed four disciplinary clusters: social psychology/communication, general science/medicine, infectious disease/vaccine and public health, and medical internet and biomedical science [60]. While the breadth of fields involved in this research is encouraging, several important fields are missing, including human-computer interaction, engineering, data science, and computer science. The researchers also found limited cross-citations between the psychology/communication cluster and the general science and medicine cluster, suggesting that researchers in different silos may not be exposed to each other’s work. The lack of a scientific field disperses this research not only across disparate journals but also across myriad scientific conferences, which provides few opportunities for scientists doing this work to build collaborations. Scientific networking opportunities are needed, as well as more transdisciplinary training programs. For example, public health training programs that offer coursework in analytic approaches, such as machine learning, natural language processing, and social network analyses, would better equip public health scientists to do this work, just as ethics, social science, health communications, and health policy coursework could better equip computer scientists, engineers, and data scientists to do this work.
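    The cocitation analysis referenced above [60] rests on a simple computation: two journals are cocited when the same study cites both, and journals that are frequently cocited tend to cluster into disciplines. A minimal sketch of the counting step, using illustrative made-up data rather than the actual 121-journal dataset:

```python
from collections import Counter
from itertools import combinations

# Hypothetical corpus: for each study, the journals it cites.
# (Illustrative data only, not the dataset analyzed in [60].)
studies = [
    ["Vaccine", "Am J Public Health", "Soc Sci Med"],
    ["Vaccine", "Am J Public Health", "JAMA"],
    ["Health Commun", "J Pers Soc Psychol", "Soc Sci Med"],
    ["Health Commun", "J Pers Soc Psychol"],
]

def cocitation_counts(studies):
    """Count how often each pair of journals is cited by the same study."""
    counts = Counter()
    for refs in studies:
        # Deduplicate and sort so each pair has one canonical ordering.
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

counts = cocitation_counts(studies)
# Journal pairs with high counts form the edges used to cluster
# journals into disciplinary groups.
print(counts[("Am J Public Health", "Vaccine")])  # → 2
```

In practice, the resulting cocitation network would be fed to a community-detection algorithm to recover clusters like those reported in [60].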


    Discussion

    Social media has revolutionized modern communication in ways that bring us closer to a global society than ever before. How social media has been used thus far reflects the full range of human proclivities. However, we have the power to shape its course. The field of public health is an obvious leader in the charge to inform the use and design of social media to benefit the public good. We put forth a public health agenda for social media research, not to be exhaustive, but to start a dialogue about the need for such an agenda. Other areas of research to be further explored include public health surveillance with social media data, social media marketing of healthy and unhealthy products, behavioral pattern analysis, social-behavioral biometrics, and pharmacoepidemiology, to name a few. Federal funding agencies can take the lead in shaping the course of this work by prioritizing it in their strategic plans and among their funding opportunities. The vast range of uses of social media data speaks to the need to finally convene an interdisciplinary scientific field devoted to public health social media research.

    Acknowledgments

    Support was provided by NIH grants K24HL124366 and R34HL136979.

    Conflicts of Interest

    None declared.

    References

    1. Newton C. The Verge. 2019 Oct 01. All Hands on Deck   URL: https:/​/www.​theverge.com/​2019/​10/​1/​20756701/​mark-zuckerberg-facebook-leak-audio-ftc-antitrust-elizabeth-warren-tiktok-comments [accessed 2019-11-25]
    2. Goel V. The New York Times. 2014 Jun 29. Facebook Tinkers With Users' Emotions in News Feed Experiment, Stirring Outcry   URL: https:/​/www.​nytimes.com/​2014/​06/​30/​technology/​facebook-tinkers-with-users-emotions-in-news-feed-experiment-stirring-outcry.​html [accessed 2019-12-12]
    3. Kramer ADI, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci U S A 2014 Jun 17;111(24):8788-8790 [FREE Full text] [CrossRef] [Medline]
    4. Gilbert B. Business Insider. 2019 Jul 25. Want to get rid of Facebook for Good? Here's how to do it   URL: https://www.businessinsider.com/how-to-delete-facebook-2018-3 [accessed 2019-11-25]
    5. Perrin A, Anderson M. Pew Research Center. 2019 Apr 10. Share of U.S. adults using social media, including Facebook, is mostly unchanged since 2018   URL: https:/​/www.​pewresearch.org/​fact-tank/​2019/​04/​10/​share-of-u-s-adults-using-social-media-including-facebook-is-mostly-unchanged-since-2018/​ [accessed 2019-10-11]
    6. Woods HC, Scott H. #Sleepyteens: Social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. J Adolesc 2016 Aug;51:41-49. [CrossRef] [Medline]
    7. Hunt MG, Marx R, Lipson C, Young J. No More FOMO: Limiting Social Media Decreases Loneliness and Depression. J Soc Clin Psychol 2018 Dec;37(10):751-768. [CrossRef]
    8. Oyeyemi SO, Gabarron E, Wynn R. Ebola, Twitter, and misinformation: a dangerous combination? BMJ 2014 Oct 14;349:g6178. [CrossRef] [Medline]
    9. Kata A. Anti-vaccine activists, Web 2.0, and the postmodern paradigm--an overview of tactics and tropes used online by the anti-vaccination movement. Vaccine 2012 May 28;30(25):3778-3789. [CrossRef] [Medline]
    10. Rounsefell K, Gibson S, McLean S, Blair M, Molenaar A, Brennan L, et al. Social media, body image and food choices in healthy young adults: A mixed methods systematic review. Nutr Diet 2019 Oct 03:1. [CrossRef] [Medline]
    11. Marino C, Gini G, Vieno A, Spada MM. The associations between problematic Facebook use, psychological distress and well-being among adolescents and young adults: A systematic review and meta-analysis. J Affect Disord 2018 Jan 15;226:274-281. [CrossRef] [Medline]
    12. Primack BA, Shensa A, Sidani JE, Whaite EO, Lin LY, Rosen D, et al. Social Media Use and Perceived Social Isolation Among Young Adults in the U.S. Am J Prev Med 2017 Jul;53(1):1-8 [FREE Full text] [CrossRef] [Medline]
    13. Twenge J, Joiner T, Rogers M, Martin G. Increases in Depressive Symptoms, Suicide-Related Outcomes, and Suicide Rates Among U.S. Adolescents After 2010 and Links to Increased New Media Screen Time. Clin Psychol Sci 2017 Nov 14;6(1):3-17. [CrossRef]
    14. Muench F, Hayes M, Kuerbis A, Shao S. The independent relationship between trouble controlling Facebook use, time spent on the site and distress. J Behav Addict 2015 Sep;4(3):163-169 [FREE Full text] [CrossRef] [Medline]
    15. Liu D, Ainsworth SE, Baumeister RF. A Meta-Analysis of Social Networking Online and Social Capital. Rev Gen Psychol 2016 Dec;20(4):369-391. [CrossRef]
    16. Cole DA, Nick EA, Varga G, Smith D, Zelkowitz RL, Ford MA, et al. Are Aspects of Twitter Use Associated with Reduced Depressive Symptoms? The Moderating Role of In-Person Social Support. Cyberpsychol Behav Soc Netw 2019 Nov;22(11):692-699. [CrossRef] [Medline]
    17. Martinez-Pecino R, Garcia-Gavilán M. Likes and Problematic Instagram Use: The Moderating Role of Self-Esteem. Cyberpsychol Behav Soc Netw 2019 Jun;22(6):412-416. [CrossRef] [Medline]
    18. Kuss DJ, Griffiths MD. Social Networking Sites and Addiction: Ten Lessons Learned. Int J Environ Res Public Health 2017 Mar 17;14(3):311 [FREE Full text] [CrossRef] [Medline]
    19. Valkenburg PM, Koutamanis M, Vossen HGM. The concurrent and longitudinal relationships between adolescents' use of social network sites and their social self-esteem. Comput Human Behav 2017 Nov;76:35-41 [FREE Full text] [CrossRef] [Medline]
    20. Council on Communications and Media. Media Use in School-Aged Children and Adolescents. Pediatrics 2016 Nov 21;138(5):e20162592 [FREE Full text] [CrossRef] [Medline]
    21. Social Media Addiction Reduction Technology Act. 2019.   URL: https:/​/www.​hawley.senate.gov/​sites/​default/​files/​2019-07/​Social-Media-Addiction-Reduction-Technology-Act.​pdf [accessed 2019-11-25]
    22. Kent de Grey RG, Uchino BN, Baucom BR, Smith TW, Holton AE, Diener EF. Enemies and friends in high-tech places: the development and validation of the Online Social Experiences Measure. Digit Health 2019;5 [FREE Full text] [CrossRef] [Medline]
    23. Andreassen CS, Torsheim T, Brunborg GS, Pallesen S. Development of a Facebook Addiction Scale. Psychol Rep 2012 Apr;110(2):501-517. [CrossRef] [Medline]
    24. Sonnenberg B, Riediger M, Wrzus C, Wagner GG. Measuring time use in surveys - Concordance of survey and experience sampling measures. Soc Sci Res 2012 Sep;41(5):1037-1052. [CrossRef] [Medline]
    25. Junco R. Too much face and not enough books: The relationship between multiple indices of Facebook use and academic performance. Comput Human Behav 2012 Jan;28(1):187-198. [CrossRef]
    26. Revilla M, Ochoa C, Loewe G. Using Passive Data From a Meter to Complement Survey Data in Order to Study Online Behavior. Soc Sci Comput Rev 2016 Mar 17;35(4):521-536. [CrossRef]
    27. Budaraju H. Facebook. 2019 Jun 12. Helping Increase Blood Donations in the US   URL: https://newsroom.fb.com/news/2019/06/us-blood-donations/ [accessed 2019-12-12]
    28. Ozoma I. Pinterest Newsroom. 2019 Aug 28. Bringing authoritative vaccine results to Pinterest search   URL: https://newsroom.pinterest.com/en/post/bringing-authoritative-vaccine-results-to-pinterest-search [accessed 2019-12-12]
    29. Bickert M. Facebook. 2019 Mar 07. Combatting Vaccine Misinformation   URL: https://newsroom.fb.com/news/2019/03/combatting-vaccine-misinformation/ [accessed 2019-12-12]
    30. YouTube Help. 2019 Jun. Advertiser-friendly content guidelines   URL: https://support.google.com/youtube/answer/6162278?hl=en [accessed 2019-12-12]
    31. Facebook Research. 2019. Research Awards   URL: https://research.fb.com/programs/research-awards/ [accessed 2019-12-12]
    32. King G, Persily N. A New Model for Industry–Academic Partnerships. APSC 2019 Aug 19:1-7. [CrossRef]
    33. Fox S. Pew Research Center. 2011 Feb 28. Peer-to-Peer Health Care   URL: https://www.pewresearch.org/internet/2011/02/28/peer-to-peer-health-care-2/ [accessed 2019-12-12]
    34. Litchman ML, Walker HR, Ng AH, Wawrzynski SE, Oser SM, Greenwood DA, et al. State of the Science: A Scoping Review and Gap Analysis of Diabetes Online Communities. J Diabetes Sci Technol 2019 May;13(3):466-492. [CrossRef] [Medline]
    35. Attai DJ, Cowher MS, Al-Hamadani M, Schoger JM, Staley AC, Landercasper J. Twitter Social Media is an Effective Tool for Breast Cancer Patient Education and Support: Patient-Reported Outcomes by Survey. J Med Internet Res 2015 Jul 30;17(7):e188 [FREE Full text] [CrossRef] [Medline]
    36. Costello RE, Anand A, Jameson Evans M, Dixon WG. Associations Between Engagement With an Online Health Community and Changes in Patient Activation and Health Care Utilization: Longitudinal Web-Based Survey. J Med Internet Res 2019 Aug 29;21(8):e13477 [FREE Full text] [CrossRef] [Medline]
    37. Chiauzzi E, Wicks P. Digital Trespass: Ethical and Terms-of-Use Violations by Researchers Accessing Data From an Online Patient Community. J Med Internet Res 2019 Feb 21;21(2):e11985 [FREE Full text] [CrossRef] [Medline]
    38. @BanTheBeds. Facebook. 2019. Ban Tanning Beds   URL: https://www.facebook.com/BanTheBeds/ [accessed 2019-11-25]
    39. American Academy of Dermatology Association. 2019. Donna's Personal Story   URL: https://www.aad.org/skin-cancer-awareness/story-donna-regan [accessed 2019-12-12]
    40. Pagoto S, Waring ME, May CN, Ding EY, Kunz WH, Hayes R, et al. Adapting Behavioral Interventions for Social Media Delivery. J Med Internet Res 2016 Jan 29;18(1):e24 [FREE Full text] [CrossRef] [Medline]
    41. An R, Ji M, Zhang S. Effectiveness of Social Media-based Interventions on Weight-related Behaviors and Body Weight Status: Review and Meta-analysis. Am J Health Behav 2017 Nov 01;41(6):670-682. [CrossRef] [Medline]
    42. Thrul J, Tormohlen KN, Meacham MC. Social media for tobacco smoking cessation intervention: a review of the literature. Curr Addict Rep 2019 Jun;6(2):126-138. [Medline]
    43. Cao B, Gupta S, Wang J, Hightow-Weidman LB, Muessig KE, Tang W, et al. Social Media Interventions to Promote HIV Testing, Linkage, Adherence, and Retention: Systematic Review and Meta-Analysis. J Med Internet Res 2017 Nov 24;19(11):e394 [FREE Full text] [CrossRef] [Medline]
    44. Han CJ, Lee YJ, Demiris G. Interventions Using Social Media for Cancer Prevention and Management: A Systematic Review. Cancer Nurs 2018;41(6):E19-E31 [FREE Full text] [CrossRef] [Medline]
    45. Lluch A, King NA, Blundell JE. Exercise in dietary restrained women: no effect on energy intake but change in hedonic ratings. Eur J Clin Nutr 1998 Apr;52(4):300-307 [FREE Full text] [CrossRef] [Medline]
    46. Pagoto S, Waring ME. A Call for a Science of Engagement: Comment on Rus and Cameron. Ann Behav Med 2016 Oct;50(5):690-691. [CrossRef] [Medline]
    47. Zuckerberg M. Facebook. 2019 Mar 06. A Privacy-Focused Vision for Social Networking   URL: https:/​/www.​facebook.com/​notes/​mark-zuckerberg/​a-privacy-focused-vision-for-social-networking/​10156700570096634/​ [accessed 2019-10-11]
    48. Lellis JC. Waving the Red Flag: FTC Regulation of Deceptive Weight-Loss Advertising 1951-2009. Health Commun 2016;31(1):47-59. [CrossRef] [Medline]
    49. Tandoc E, Lim D, Ling R. Diffusion of disinformation: How social media users respond to fake news and why. Journalism 2019 Aug 07:1. [CrossRef]
    50. Southwell BG, Niederdeppe J, Cappella JN, Gaysynsky A, Kelley DE, Oh A, et al. Misinformation as a Misunderstood Challenge to Public Health. Am J Prev Med 2019 Aug;57(2):282-285. [CrossRef] [Medline]
    51. Schmidt AL, Zollo F, Scala A, Betsch C, Quattrociocchi W. Polarization of the vaccination debate on Facebook. Vaccine 2018 Jun 14;36(25):3606-3612. [CrossRef] [Medline]
    52. Scanfeld D, Scanfeld V, Larson EL. Dissemination of health information through social networks: twitter and antibiotics. Am J Infect Control 2010 Apr;38(3):182-188 [FREE Full text] [CrossRef] [Medline]
    53. Park S, Oh H, Park G, Suh B, Bae WK, Kim JW, et al. The Source and Credibility of Colorectal Cancer Information on Twitter. Medicine (Baltimore) 2016 Feb;95(7):e2775 [FREE Full text] [CrossRef] [Medline]
    54. Chen L, Wang X, Peng T. Nature and Diffusion of Gynecologic Cancer-Related Misinformation on Social Media: Analysis of Tweets. J Med Internet Res 2018 Oct 16;20(10):e11515 [FREE Full text] [CrossRef] [Medline]
    55. Lazard AJ, Saffer AJ, Wilcox GB, Chung AD, Mackert MS, Bernhardt JM. E-Cigarette Social Media Messages: A Text Mining Analysis of Marketing and Consumer Conversations on Twitter. JMIR Public Health Surveill 2016 Dec 12;2(2):e171 [FREE Full text] [CrossRef] [Medline]
    56. Syed-Abdul S, Fernandez-Luque L, Jian W, Li Y, Crain S, Hsu M, et al. Misleading health-related information promoted through video-based social media: anorexia on YouTube. J Med Internet Res 2013 Feb 13;15(2):e30 [FREE Full text] [CrossRef] [Medline]
    57. Park M, Sun Y, McLaughlin ML. Social Media Propagation of Content Promoting Risky Health Behavior. Cyberpsychol Behav Soc Netw 2017 May;20(5):278-285. [CrossRef] [Medline]
    58. Dickinson KM, Watson MS, Prichard I. Are Clean Eating Blogs a Source of Healthy Recipes? A Comparative Study of the Nutrient Composition of Foods with and without Clean Eating Claims. Nutrients 2018 Oct 05;10(10):1-10 [FREE Full text] [CrossRef] [Medline]
    59. Ramachandran D, Kite J, Vassallo AJ, Chau JY, Partridge S, Freeman B, et al. Food Trends and Popular Nutrition Advice Online - Implications for Public Health. Online J Public Health Inform 2018;10(2):e213 [FREE Full text] [CrossRef] [Medline]
    60. Wang Y, McKee M, Torbica A, Stuckler D. Systematic Literature Review on the Spread of Health-related Misinformation on Social Media. Soc Sci Med 2019 Nov;240:112552 [FREE Full text] [CrossRef] [Medline]
    61. Broniatowski DA, Jamison AM, Qi S, AlKulaib L, Chen T, Benton A, et al. Weaponized Health Communication: Twitter Bots and Russian Trolls Amplify the Vaccine Debate. Am J Public Health 2018 Oct;108(10):1378-1384. [CrossRef] [Medline]
    62. Allem J, Ferrara E, Uppu SP, Cruz TB, Unger JB. E-Cigarette Surveillance With Social Media Data: Social Bots, Emerging Topics, and Trends. JMIR Public Health Surveill 2017 Dec 20;3(4):e98 [FREE Full text] [CrossRef] [Medline]
    63. Ferrara E, Varol O, Davis C, Menczer F, Flammini A. The rise of social bots. Commun ACM 2016 Jun 24;59(7):96-104. [CrossRef]
    64. Jamison AM, Broniatowski DA, Quinn SC. Malicious Actors on Twitter: A Guide for Public Health Researchers. Am J Public Health 2019 May;109(5):688-692. [CrossRef] [Medline]
    65. Timberg C, Dwoskin E. The Washington Post. 2018 Jul 06. Twitter is sweeping out fake accounts like never before, putting user growth at risk   URL: https:/​/www.​washingtonpost.com/​technology/​2018/​07/​06/​twitter-is-sweeping-out-fake-accounts-like-never-before-putting-user-growth-risk/​ [accessed 2019-12-12]
    66. Heusner M. Campaign US. 2019 Nov 07. Ad of the Week: Monica Lewinsky's anti-bullying bot   URL: https://www.campaignlive.com/article/ad-week-monica-lewinskys-anti-bullying-bot/1664696 [accessed 2019-12-12]
    67. Thackeray R, Neiger BL, Smith AK, Van Wagenen SB. Adoption and use of social media among public health departments. BMC Public Health 2012 Mar 26;12:242 [FREE Full text] [CrossRef] [Medline]
    68. Duke CH, Yin J, Zhang X, Blankenship EB, Akuse SE, Shah GH, et al. Adopting YouTube to Promote Health: Analysis of State Health Departments. Perm J 2019;23:1-6 [FREE Full text] [CrossRef] [Medline]
    69. Nosrati A, Pimentel MA, Falzone A, Hegde R, Goel S, Chren M, et al. Skin cancer prevention messages on Facebook: Likes, shares, and comments. J Am Acad Dermatol 2018 Sep;79(3):582-585.e1 [FREE Full text] [CrossRef] [Medline]
    70. Waring ME, Baker K, Peluso A, May CN, Pagoto SL. Content analysis of Twitter chatter about indoor tanning. Transl Behav Med 2019 Jan 01;9(1):41-47 [FREE Full text] [CrossRef] [Medline]
    71. Jung M, Lin L, Viswanath K. Effect of media use on mothers' vaccination of their children in sub-Saharan Africa. Vaccine 2015 May 21;33(22):2551-2557. [CrossRef] [Medline]
    72. Moran MB, Frank LB, Chatterjee JS, Murphy ST, Baezconde-Garbanati L. Information scanning and vaccine safety concerns among African American, Mexican American, and non-Hispanic White women. Patient Educ Couns 2016 Jan;99(1):147-153 [FREE Full text] [CrossRef] [Medline]
    73. Wood MJ. Propagating and Debunking Conspiracy Theories on Twitter During the 2015-2016 Zika Virus Outbreak. Cyberpsychol Behav Soc Netw 2018 Aug;21(8):485-490 [FREE Full text] [CrossRef] [Medline]
    74. Green MC, Brock TC. The role of transportation in the persuasiveness of public narratives. J Pers Soc Psychol 2000 Nov;79(5):701-721. [CrossRef] [Medline]
    75. Chou WS, Oh A, Klein WMP. Addressing Health-Related Misinformation on Social Media. JAMA 2018 Dec 18;320(23):2417-2418. [CrossRef] [Medline]
    76. Armstrong PW, Naylor CD. Counteracting Health Misinformation: A Role for Medical Journals? JAMA 2019 May 21;321(19):1863-1864. [CrossRef] [Medline]
    77. König L, Jucks R. Hot topics in science communication: Aggressive language decreases trustworthiness and credibility in scientific debates. Public Underst Sci 2019 May;28(4):401-416. [CrossRef] [Medline]
    78. Christakis DA. The Challenges of Defining and Studying "Digital Addiction" in Children. JAMA 2019 Jun 18;321(23):2277-2278. [CrossRef] [Medline]
    79. Diviani N, van den Putte B, Meppelink CS, van Weert JCM. Exploring the role of health literacy in the evaluation of online health information: Insights from a mixed-methods study. Patient Educ Couns 2016 Jun;99(6):1017-1025. [CrossRef] [Medline]
    80. Ozturk P, Li H, Sakamoto Y. Combating Rumor Spread on Social Media: The Effectiveness of Refutation and Warning. In: IEEE Xplore Digital Library. 2015 Presented at: 48th Hawaii International Conference on System Sciences; 5-8 January; Kauai, HI, USA. [CrossRef]
    81. Bode L, Vraga EK. See Something, Say Something: Correction of Global Health Misinformation on Social Media. Health Commun 2018 Sep;33(9):1131-1140. [CrossRef] [Medline]
    82. McAweeney E, Madden M. Points Data Society. 2019 Sep 30. Beyond Tech Addiction   URL: https://points.datasociety.net/beyond-tech-addiction-965ce8e80c5c [accessed 2019-10-11]
    83. Pagoto S, Nebeker C. How scientists can take the lead in establishing ethical practices for social media research. J Am Med Inform Assoc 2019 Apr 01;26(4):311-313. [CrossRef] [Medline]
    84. Office for Human Research Protections. HHS. 2018 Jan 17. HHS and 15 Other Federal Departments and Agencies Announce an Interim Final Rule That Delays Both the Effective Date and General Compliance Date of the Revisions to the Federal Policy for the Protection of Human Subjects to July 19, 2018   URL: https://www.hhs.gov/ohrp/interim-final-rule-common-rule.html [accessed 2019-10-11]
    85. Nebeker C, Harlow J, Espinoza Giacinto R, Orozco-Linares R, Bloss CS, Weibel N. Ethical and regulatory challenges of research using pervasive sensing and other emerging technologies: IRB perspectives. AJOB Empir Bioeth 2017;8(4):266-276. [CrossRef] [Medline]
    86. Waring ME, Jake-Schoffman DE, Holovatska MM, Mejia C, Williams JC, Pagoto SL. Social Media and Obesity in Adults: a Review of Recent Research and Future Directions. Curr Diab Rep 2018 Apr 18;18(6):34. [CrossRef] [Medline]
    87. Arigo D, Pagoto S, Carter-Harris L, Lillie SE, Nebeker C. Using social media for health research: Methodological and ethical considerations for recruitment and intervention delivery. Digit Health 2018;4:2055207618771757 [FREE Full text] [CrossRef] [Medline]
    88. Henderson M, Johnson NF, Auld G. Silences of ethical practice: dilemmas for researchers using social media. Educ Res Eval 2013 Aug;19(6):546-560. [CrossRef]
    89. Hunter RF, Gough A, O'Kane N, McKeown G, Fitzpatrick A, Walker T, et al. Ethical Issues in Social Media Research for Public Health. Am J Public Health 2018 Mar;108(3):343-348. [CrossRef] [Medline]
    90. Moreno MA, Goniu N, Moreno PS, Diekema D. Ethics of social media research: common concerns and practical considerations. Cyberpsychol Behav Soc Netw 2013 Sep;16(9):708-713 [FREE Full text] [CrossRef] [Medline]
    91. Sinnenberg L, Buttenheim AM, Padrez K, Mancheno C, Ungar L, Merchant RM. Twitter as a Tool for Health Research: A Systematic Review. Am J Public Health 2017 Jan;107(1):e1-e8. [CrossRef] [Medline]
    92. Sauro J. MeasuringU. 2011 Jan 11. Do users read license agreements?   URL: http://www.measuringu.com/blog/eula.php [accessed 2019-12-12]
    93. Torous J, Nebeker C. Navigating Ethics in the Digital Age: Introducing Connected and Open Research Ethics (CORE), a Tool for Researchers and Institutional Review Boards. J Med Internet Res 2017 Feb 08;19(2):e38 [FREE Full text] [CrossRef] [Medline]
    94. ReCODE Health. 2019. The ReCODE Health Team   URL: https://recode.health/team/ [accessed 2019-11-25]
    95. Schroepfer M. Facebook. 2018 Apr 04. An Update on Our Plans to Restrict Data Access on Facebook   URL: https://newsroom.fb.com/news/2018/04/restricting-data-access/ [accessed 2019-10-11]
    96. Social Science One. 2018.   URL: https://socialscience.one/ [accessed 2019-11-25]
    97. Social Science One. 2018. Our Facebook Partnership   URL: https://socialscience.one/our-facebook-partnership [accessed 2019-11-25]
    98. Social Science One. 2019 Sep 18. Social Science One Announces Access to Facebook Dataset of Publicly Shared URLs for Research   URL: https:/​/socialscience.​one/​blog/​social-science-one-announces-access-facebook-dataset-publicly-shared-urls [accessed 2019-11-25]
    99. Lazer D, Pentland A, Adamic L, Aral S, Barabasi A, Brewer D, et al. Social science. Computational social science. Science 2009 Feb 06;323(5915):721-723 [FREE Full text] [CrossRef] [Medline]


    Abbreviations

    API: application program interface
    CEO: Chief Executive Officer
    IRB: institutional review board
    SMART: Social Media Addiction Reduction Technology


    Edited by G Eysenbach; submitted 11.10.19; peer-reviewed by C Nebeker, JP Allem; comments to author 11.11.19; revised version received 25.11.19; accepted 09.12.19; published 19.12.19

    ©Sherry L. Pagoto, Molly E Waring, Ran Xu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 19.12.2019.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.