According to the Pew Research Center, the majority of Americans use social media websites such as Facebook (68%) and YouTube (75%), with roughly a quarter to one-third using other sites such as Snapchat, Instagram, LinkedIn, and Twitter. The sheer volume of data generated has proved an inviting target for social good and ethically questionable practices alike. Social media data have driven important public health research, including monitoring disease outbreaks [ ], predicting health risk behaviors [ ], accessing hard-to-reach populations [ ], promoting health [ ], characterizing user health-communication patterns [ ], and supporting mutual medical data sharing between patients [ ]. Some researchers have adopted a more participatory approach by engaging high-risk groups such as drug users to detect trends and encourage harm reduction [ ]. The analysis of these data has ushered in a variety of innovative analytic techniques, such as natural language processing, network analysis, deep learning, and geolocation, that provide further insight into these large datasets [ ].
With such a rapidly evolving landscape, this area has been no stranger to ethical controversy [ , ]. Ethical questions have arisen in highly publicized cases such as the Facebook social contagion study [ , ], the release of an OKCupid dataset of 70,000 users [ ], and, most recently, the use of 50 million Facebook user profiles by Cambridge Analytica during the 2016 US presidential campaign [ ]. In each of these cases, the use of large quantities of user profile data compromised user privacy or enabled the manipulation of users through targeted messaging.
Academic reviews suggest that there is “widespread neglect” of ethical considerations by social media researchers, such as inadequate informed consent, lack of researcher boundaries, reposting of personally identifiable content, and deliberate misrepresentation or deception [ , , ]. A recent study found that verbatim Twitter quotes reproduced in journal articles could be traced back to individual users through online searches 84% of the time [ ], despite users being largely unaware of such sharing, resistant to being studied, and desirous of being asked for consent [ , ]. Some researchers misrepresent themselves or engage in deception to interact with social media participants [ ]. Many researchers assume that social media data are in the public domain, obviating the need for consent altogether [ ].
Outside of the United States, there is a wide variety of national research ethics governing bodies, and over 1000 laws, regulations, and standards provide oversight for human subjects research in 130 countries. The rigor of ethical review varies widely across countries. In Europe, ethics review is generally stringent and managed through national bioethics agencies, health ministries, food and drug safety organizations, and national research committees [ ]. Ethical review processes in countries such as China are less well developed, with a lack of standardization in operating procedures, professional ethics training, protection of vulnerable groups, and privacy safeguards [ ]. In both settings, issues of privacy, data trustworthiness, and consent have yet to be resolved, even with the advent of the European Union General Data Protection Regulation (GDPR) [ ]. Research ethics committees (RECs) often lack the expertise to evaluate technical standards, methodologies, data ownership, and group-level ethical harms in big data studies [ ]. Taken together, these issues suggest that international ethical review frameworks continue to be highly challenged by the current dynamic social media research environment.
Second, accessing and de-anonymizing social media data is not difficult. Data transgressions are enabled by the ready availability of user data combined with the dissemination of “scraping” technologies that allow easy extraction. Data scraping and de-anonymizing can be accomplished by individuals with no more than basic programming and statistics skills [ ]. Unfortunately, privacy and identifiability are often treated as binary values (either “public” or “private”) [ ] rather than as a continuum based on the nature and extent of the data [ ]. While some researchers assume that information shared in public spaces is inherently available for public consumption and may be used without consent, it is important to respect the nature of the data, the collection context, and user expectations [ ]. Attempts at de-identification are necessary but insufficient to ensure safe use of data [ ], with some researchers warning that true de-identification is a “false promise” [ ]. Re-identification has been accomplished with relatively limited data, such as Netflix subscriber movie ratings [ ] or simple demographics [ ].
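The power of even simple demographics to re-identify can be illustrated with a toy k-anonymity check. The records, fields, and threshold below are fabricated for illustration only; any quasi-identifier combination held by a single record singles out an individual.

```python
from collections import Counter

# Toy records: each tuple is (zip_code, birth_year, sex).
# All values are fabricated for illustration.
records = [
    ("02139", 1961, "F"), ("02139", 1961, "F"),
    ("02139", 1974, "M"), ("02140", 1961, "F"),
    ("02141", 1988, "M"),
]

def k_anonymity_violations(rows, k=2):
    """Return quasi-identifier combinations shared by fewer than k records.

    Any combination appearing fewer than k times singles out an
    individual: the intuition behind demographic re-identification.
    """
    counts = Counter(rows)
    return {combo: n for combo, n in counts.items() if n < k}

unique_combos = k_anonymity_violations(records)
# Three of the four distinct combinations appear only once, so three
# of the five records could be matched by anyone holding the same
# three demographic fields.
print(unique_combos)
```

Even in this five-record sketch, most demographic combinations are already unique, which is why de-identification that leaves quasi-identifiers intact offers weak protection.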
Third, the perception that big data are somehow “objective” and can be analyzed independent of context is an illusion [ , , ]. Social media users post information for reasons that differ widely from what researchers may imagine. For example, within the PatientsLikeMe platform [ ], patients adopt a broader definition of “treatments” than clinicians and researchers: for patients, treatments may include “pets” and “handicapped parking stickers” just as much as medications, medical procedures, and therapies. Faulty data assumptions and researcher biases may cascade into poorly built algorithms that lead to inaccurate (and possibly harmful) conclusions, termed by O’Neil [ ] “weapons of math destruction.” It is important not to dissociate the data from the people behind them [ ]. Even when aggregate data are used and no individual identification has been made, researchers need to be sensitive to the potential psychological and behavioral consequences of findings (particularly with stigmatized or vulnerable groups) as well as the scale and generalizability of conclusions [ , ]. There is a risk of type I error when findings are overgeneralized [ ], thus requiring more mixed-methods and longitudinal data gathering [ ].
Fourth, health research has traditionally been conducted by researchers trained in human subjects ethics and overseen by established ethics panels. However, the recent growth of “big data” sets in health has attracted computer science researchers who may be less well versed in, or less monitored with regard to, key ethical issues. Wright [ ] warns that many computer scientists are skirting the ethical traditions of medical and social science professionals, who abide by guidelines such as the Belmont Report [ ] and the USDHHS Common Rule [ ]. Buchanan et al [ ] suggest that computer science researchers “may not fully understand or believe that their projects align with the same ethical concerns that pertain to human subjects, such as the minimization of risk or harm to individuals, confidentiality, privacy, or just recruitment methods.”
Several questions arise in this context. How do these ethical violations occur? How are they discovered and remedied by data producers? Most importantly, what corrective actions can and should be taken to prevent violations that compromise the privacy of social media users? To address these questions, we share four cases involving ethical and terms-of-use violations that highlight the four challenges described above. These violations involved the use, interpretation or misinterpretation, and dissemination of patient self-reported data and forum posts available at PatientsLikeMe. In this manuscript, our goal is to use these cases as a springboard for protecting patient privacy while meeting investigators’ legitimate public health research objectives.
Case Studies: Real-World Experiences From an Online Health Community
Each of the cases illustrates a different set of ethical problems. We have applied the health-related research ethics guidelines created by the Council for International Organizations of Medical Sciences (CIOMS) in conjunction with the World Health Organization as a primary framework for these cases.
In the interest of transparency, we emailed the prepublication manuscript to the researchers represented in Cases 2-4 below (Case 1 had already been publicized in the national press). After allowing 1 month for responses and receiving none, we moved forward with the final manuscript. We have not named specific researchers or papers in Cases 2-4 in order to preserve their anonymity.
Case 1: Large-Scale Data Scraping by Commercial Market Researchers
In a well-publicized 2009 incident reported in the Wall Street Journal, staff at the company Nielsen Media sought to understand how patients with mental health conditions talked about the company. The company created an unauthorized account on PatientsLikeMe and used automated “scraper” software to copy open-text discussion data from the message board forums. In total, they harvested about 5% of the mood disorder forum’s qualitative discussion content for an undisclosed commercial client. Our team detected the scraping software, suspended the account (and three others linked to it) shortly after the scraping began, and emailed the company to ask them to stop.
Because this was considered “market research,” no institutional review board (IRB) was involved. The level of ethical oversight for market researchers is not the same as that for academic researchers in most studies. However, professional bodies such as the Market Research Association state that members should “Protect the rights of respondents, including the right to refuse to participate in part or all of the research process,” among other guidelines. Market researchers may need to develop their own standards for health data gathered online or endorse existing guidelines. For example, the Association of Internet Researchers recommends that researchers obtain consent either from participants individually or from community owners [ ]. Harvesting sensitive data from people with mental health issues also warrants consideration of vulnerable populations; without proper procedures in place to ensure data were handled correctly, there was a risk of re-identification. Scraping only the visible data (as opposed to accessing a full dataset) also risks drawing spurious or biased conclusions.
We emailed the company a cease-and-desist letter. PatientsLikeMe sent a private message to its entire membership describing the incident and wrote a blog post about it. As a result, about 200 members decided to close their accounts. Six months later, reporters at the Wall Street Journal investigated the story as part of a series examining scraping activity on the Web, and the incident was reported on the newspaper’s front page.
Description of PatientsLikeMe.
PatientsLikeMe has adopted this model because patients lack access to information that can affect their treatment decisions. Sharing “real world” data allows patients, providers, and researchers to collaborate in evaluating current treatment effectiveness, gaps in treatment, and potential new and better treatments. This collaboration can speed the pace of research and improve health care delivery. To facilitate this mission, PatientsLikeMe is funded through investment, as well as commercial and academic research partnerships, rather than advertising or member fees. Because of the serious nature of health data, PatientsLikeMe has been committed to applying these data responsibly toward patient-centered goals and implementing a “data for good” philosophy. Responsible big data research seeks soundness and accuracy of data while maximizing good and minimizing harm.
|Violation type||Case 1 - Commercial scraping||Case 2 - De-anonymization of forum user||Case 3 - Fake profile data||Case 4 - Multiple scraper bots||Relevant CIOMSa guideline number|
|PLMb terms-of-use violations|
|Not a patient, caregiver, health care professional, or visitor with legitimate reasons to participatec||✓||✓||✓||✓||7, 22|
|Posting false contentd||✓||4, 11|
|Use of any robot, spider, scraper, or other automated means to access the site or contente||✓||✓||✓||7, 12, 22|
|Lack of research authorization by PLMf||✓||✓||✓||✓||7, 8, 9, 10, 22, 25|
|De-anonymizing patient data in any way||✓||4, 11, 14, 15, 22|
|Inadequate/no informed consent||✓||✓||✓||9, 10, 12, 22|
|False identification or misrepresentation||✓||✓||4, 22|
|Verbatim use of user posts||✓||4, 11, 12, 14, 15, 22|
aCIOMS: Council for International Organizations of Medical Sciences.
bPLM: PatientsLikeMe.
cPLM user agreement: “To become a member and access the area on this Site reserved for members (the ‘Member Area’), PatientsLikeMe requires that you are either a (a) diagnosed patient of the particular community you are joining or a parent or legal guardian acting for such a patient who is under 18 years of age or incapacitated, (b) caregiver for a patient eligible to join such community, (c) health care professional (e.g. doctor, nurse, health researcher, etc.), (d) guest with legitimate, non-commercial reasons to participate in the community and who agrees to respect the privacy and preserve the dignity of all community participants or (e) guest as authorized by a PatientsLikeMe member or employee.”
dPLM user agreement: “Members shall not post or upload any information or other content on the Site that (a) is false, inaccurate or misleading; (b) is obscene or indecent; (c) infringes any copyright, patent, trademark, trade secret or other proprietary rights or rights of publicity or privacy of any party; or (d) is defamatory, libelous, threatening, abusive, hateful, or contains pornography.”
ePLM user agreement: “You may not use any robot, spider, scraper, or other automated means to access the Site or content or services provided on the Site for any purposes.”
fPLM user agreement: “Please note that under our terms of service, you are not permitted to capture or utilize data from within the site nor to solicit members through our forums or private message to take part in your study.”
In the Wall Street Journal article, a company representative stated, “It was a bad legacy practice that we don't do anymore...It’s something that we decided is not acceptable, and we stopped.” Corrective efforts included upgrading our automated scraper-detection software, clarifying how commercial researchers could contact PatientsLikeMe for authorization, determining which actions are permissible and not permissible on the site, and sustaining communication with our members about the implications for their data and further participation on the site.
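For illustration, the kind of rate-based heuristic that scraper-detection systems build on can be sketched as follows. This is not PatientsLikeMe’s actual system; the log format, threshold, and account names are assumptions made for the example.

```python
from collections import defaultdict

# A human reader is unlikely to sustain more than this many page
# views per minute (threshold is an illustrative assumption).
MAX_PAGES_PER_MINUTE = 30

def flag_scrapers(access_log):
    """access_log: iterable of (account_id, minute_bucket) page views.

    Returns the set of account IDs whose page views in any single
    minute exceed MAX_PAGES_PER_MINUTE.
    """
    per_minute = defaultdict(int)
    for account, minute in access_log:
        per_minute[(account, minute)] += 1
    return {acct for (acct, _), n in per_minute.items()
            if n > MAX_PAGES_PER_MINUTE}

# Hypothetical log: "user_a" views 200 pages in one minute (bot-like),
# "user_b" views 5 (human-like); only "user_a" is flagged.
log = [("user_a", 0)] * 200 + [("user_b", 0)] * 5
print(flag_scrapers(log))
```

Real detectors combine many such signals (navigation patterns, account linkage, IP ranges), but even this simple rate check captures why automated harvesting is distinguishable from ordinary member activity.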
Case 2: De-anonymization of Individual Forum Members
Around 2014, computer science researchers at a European university developed an algorithm that could be used to de-identify highly sensitive medical data that individuals might choose to share on social networks, thereby reducing their risk of personal identification. The system involved automated methods for determining the “identifying information content” of a given piece of data (eg, “I’m a woman living with a mental health condition for the past two years” vs “my name is Susan and I was diagnosed with bipolar disorder in Boston on June 2, 2016”). To illustrate their approach, they included in their manuscript a verbatim text quote from a member discussing how they came to be diagnosed with HIV. The authors published their study, whereupon a Google Scholar Alert notified us that the research had taken place.
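The notion of “identifying information content” can be conveyed with a toy heuristic. This is not the researchers’ actual algorithm; the patterns and scoring below are illustrative assumptions that simply count how many categories of concrete identifiers (names, exact dates, places) a text contains.

```python
import re

# Toy identifier categories -- purely illustrative, not the
# researchers' system. Each regex detects one kind of concrete detail.
PATTERNS = {
    "name":  r"\b[Mm]y name is [A-Z][a-z]+",
    "date":  r"\b(January|February|March|April|May|June|July|August|"
             r"September|October|November|December) \d{1,2}\b",
    "place": r"\bin [A-Z][a-z]+\b",
}

def identifying_score(text):
    """Count how many identifier categories the text matches."""
    return sum(bool(re.search(p, text)) for p in PATTERNS.values())

generic = "I'm a woman living with a mental health condition."
specific = "My name is Susan and I was diagnosed in Boston on June 2, 2016."
# The generic statement matches no category (score 0); the specific
# one matches a name, a date, and a place (score 3).
print(identifying_score(generic), identifying_score(specific))
```

The point of such scoring is that identifiability grows with each concrete detail, which is why a verbatim quote containing diagnosis, place, and date carries far more re-identification risk than a paraphrase.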
No formal ethics review was conducted, which may have contributed to the oversight. In terms of accessibility, although this story was “shared online,” it appeared on a private profile accessible only to other patients logged into the site. Searching for the verbatim text within the logged-in area of PatientsLikeMe quickly identified the member concerned. Although de-identification is never foolproof (indeed, that was the point of the study itself), if the patient had changed their mind and deleted the data or closed their PatientsLikeMe account, the quote and the patient’s association with it could have persisted permanently within the scientific literature. CIOMS guideline 22 on the use of data obtained from the online environment states, “When researchers use the online environment and digital tools to obtain data for health-related research they should use privacy-protective measures to protect individuals from the possibility that their personal information is directly revealed or otherwise inferred when datasets are published, shared, combined or linked.” Additional consideration should have been given because HIV is a highly stigmatized condition.
In similar incidents in the past, reaching out solely to the authors or their institutions often failed to yield a response. As a result, we emailed both the authors and the journal editor with our concerns to ensure the issue would be dealt with appropriately.
Beyond the verbatim quote, no specific patient data were mentioned in the paper, and no data were scraped from the site. The focus was a theoretical algorithm, and all parties quickly realized their error. A partial retraction was agreed upon to replace the verbatim quote with a synthetic one, and PatientsLikeMe notified the member concerned. Although CIOMS guideline 22 addresses research in the online environment, its guidance is general rather than offering best-practice recommendations for every platform. More specific advice for reducing risk to patients can be found in NatCen Social Research’s guidance, which recommends “[Testing] the traceability of a tweet or post and [taking] responsible steps to inform the user and protect their identity, if desired. Best practices include paraphrasing instead of verbatim quotes and not using an individual’s handle/user name.”
Case 3: Researcher Misrepresentation and Fake Profile Data
Deceptive practices, such as researchers misleading participants about their identity, are never acceptable, and we were surprised that an REC had approved such activity. In this case, researchers prompted students to enter fake data into a system requiring log-in, one used by patients, regulators, and health care professionals to guide practice and conduct medical research. CIOMS guideline 1 states, “Although scientific and social value are the fundamental justification for undertaking research, researchers, sponsors, research ethics committees and health authorities have a moral obligation to ensure that all research is carried out in ways that uphold human rights, and respect, protect, and are fair to study participants and the communities in which the research is conducted. Scientific and social value cannot legitimate subjecting study participants or host communities to mistreatment, or injustice.”
The researchers believed that their activities took place “outside the logged-in” parts of the site (they did not) and that students had never re-accessed their accounts after the initial study (they had). The REC agreed that entering false data was suboptimal behavior, admitted to confusion around some of the complex technical issues surrounding online research, and agreed this was an area to learn more about in the future. The funding body claimed that because the institution had its own REC, it had no further responsibility to check that permissions were in place.
Case 4: Repeated Scraping Through Multiple Accounts
Computer science researchers at an Asian university sought to build a neural network capable of determining whether side effects that members attributed to a treatment might, in fact, be symptoms of their condition; for example, “trouble sleeping” might be caused by depression rather than by a drug. To gather test data, they created an account on PatientsLikeMe and began “scraping” data from patient profiles with automated software. When our security systems were tripped by the software activity, the account was closed. Over the following 2 weeks, multiple, seemingly related accounts, many created from “disposable” email addresses, continued the scraping; we closed each as soon as we identified it. With data from over 5000 users, the researchers prepared a manuscript, comparing the reported experience of patients with a third-party data source and describing their algorithm, for a computer science conference to be held a year later. The authors published a preconference proceeding, whereupon a Google Scholar Alert notified us that the research had taken place 10 months earlier.
Multiple CIOMS guidelines appeared to be breached, including respect for rights (guideline 1: no permission or consent was requested), balancing individual risks and benefits to participants (guideline 4: no steps were taken to minimize harm to patients), community engagement (guideline 7: the data were gathered covertly), consent (guidelines 9 and 10: no consent was requested or exempted), use of health data (guideline 12: patients’ responses to treatment were scraped and analyzed), vulnerable persons (guideline 15: the focus included members with severe mental health issues), online environment (guideline 22: the researchers did not inform the community), and ethics committee review (guideline 23: this work did not undergo formal ethics review). The researchers neither appreciated that using a logged-in account crossed a boundary nor recognized that the active shutdown of their accounts by our security team was a “no entry” signal. In our discussions, the researchers appeared to feel that because the emphasis of their research was neural networks, they were “far” from medical data. More traditional medical researchers would have had to comply with considerable ethical oversight, consent requirements, and data privacy policies to access similar data from a hospital or insurer. Building systems that use such algorithms to judge the soundness of a patient report risks diminishing the fidelity of patients’ lived experiences; many, if not most, patient experiences with disease and treatments cannot be found in medical texts, and few medical researchers would assume that divergence means the patients are automatically “wrong.”
We emailed the authors, the conference chairs, and the chair of their department with our concerns and requested full retraction of the paper, identification of all scraper accounts, and deletion of all data. The researchers stated that they had only accessed “public” parts of the site, denied having used multiple scraper accounts, said that the data had been held securely, and requested that they be allowed to anonymize the data source. In mitigation, they noted that the paper had received positive peer reviews from the community. Initially, the conference chairs opposed retraction on the grounds that no “material harm” had been done to PatientsLikeMe, that scraping the data was technically easy for a researcher to perform, and that it was unclear whether any laws had been violated. However, further careful investigation by our security team revealed that over 50 “bot” accounts had been created from the same rather narrow geographical region during a period consistent with the conduct of the methods detailed in the paper. On further discussion, the authors admitted that “maybe” an intern had done this; however, no systematic records had been kept to verify this claim.
The authors apologized and deleted all locally held data. The conference chairs concluded that the authors had not been truthful, and the study was therefore retracted from the conference proceedings. PatientsLikeMe notified the members concerned. Because the authors were not forthcoming about their activities, our security team had to expend significant resources determining which accounts were bots and which users’ data had been accessed, and refuting the authors’ claims. In addition, significant management resources were consumed in communicating with the authors and other parties, and communication resources were used in messaging the affected users.
We believe there are many ways in which the analysis of social media data can contribute to the public good as well as inform individuals about ways to improve and maintain their health. However, the lack of equitable data access, underlying biases in data interpretation, and inadequate transparency between those who provide and those who analyze data risk squandering the many potential advantages of algorithmic decision making. Throughout these cases, we believe that researchers based their treatment of study participants’ data on several false assumptions that violated a number of ethical guidelines.
Faulty Assumption 1: “The Internet” Is Not Subject to Ethical Review
Throughout our experiences, we perceived a sense that data (and the “social media users” contributing them) are less worthy of respect or protection when users participate online than when the same “patients” receive care in a brick-and-mortar health institution such as a hospital. Compounding the matter, members of ethics review boards may not consider social media studies to be human subjects research under current legal definitions and may not believe that data scraping requires informed consent. In our view, social media and “big data” research is not ethically exceptional and should be treated in the same manner as traditional forms of research [ ]. Of the cases reported here, only Case 3 obtained ethical approval, and even then, the behaviors exhibited fell short of what we would consider ethical. Terminology may cloud matters, as existing guidelines may confine themselves to “biomedical” or “medical” fields, which may lead some researchers to exclude their projects from ethical oversight on the basis that their focus or branch of study is computer science, business, or design. However, CIOMS [ ] uses the broader term “health-related research” to encourage greater inclusiveness rather than focusing on researchers’ occupation or training. Online contexts should be compared with offline analogues to highlight considerations that may affect informed consent; if it would not be acceptable to do something in a hospital waiting room, doing it on the internet does not absolve researchers of responsibility. We believe that interpreting the USDHHS Common Rule provision for “existing data sets” as “free access to any health data set on the Internet” is a faulty assumption.
Faulty Assumption 2: Social Media Spaces Are “Public”
In our discussions with individuals involved in the cases reported here, we encountered a lack of cultural sensitivity to the “perceived privacy” of individuals choosing to share information within a “closed network” as opposed to an open forum. It is probably best to take a conservative approach and assume that any content requiring an email address or log-in for access may not be considered public by a site’s users.
Where trespasses were acknowledged, they were claimed to be justified by good intentions. For example, while few would argue against the potentially good intentions behind gathering and analyzing social media posts in Case 1 to try to understand mental health problems, such good intentions do not act as blanket absolution from ethical considerations such as consent, privacy, de-identification, or minimization of harms. In the real world, reading and analyzing the diaries or written correspondence of patients with mental health problems would not be deemed acceptable even if they were left unsecured.
Faulty Assumption 3: Data Can Be Analyzed Independent of Context
Although large datasets may appear alluring because of their sheer scale, in practice they can introduce larger errors of interpretation by inspiring false confidence in the conclusions drawn. In Case 4, the researchers were unaware of a host of additional contextual data recorded about patients, such as their multiple comorbidities, their understanding of the purpose of their medications, and their use of some treatments for off-label purposes rather than their standard indications. The absence of trained health professionals on their team also obscured important context about the relationship between a condition’s symptoms and the common side effects of treatments used for that condition. Without understanding how a data set was sampled, the limits of meaningful questions and interpretations may not be observed [ ]. Scientifically, data scraping without context may produce inaccurate algorithms that are then reported and reused in applications, leading to potentially harmful consequences [ ]. Our discussions with researchers revealed a general lack of care and rigor that would be of scientific concern even apart from the ethical considerations. We explained the importance of understanding the context and structure of the scraped data in order to produce meaningful scientific results, and we requested retraction of questionable findings and interpretations to avoid contaminating the literature.
Faulty Assumption 4: Computer Science Research Does Not Need to Abide by Health Research Guidelines When It Is Only Accessing “Data”
While computer science researchers were responsible for only Case 4 reported here, computer science practitioners are responsible for the bulk of our other, unreported cases, confirming Wright’s assertion that the field needs to adjust its practices before further incidents undermine its social license to practice. Computer scientists are “largely focused on the care and feeding of electronic devices” and may have different conceptions of what constitutes a “human subject”: a living person or data that are representative of a living person [ ]. Involving computer scientists on ethics review boards may be an effective way of encouraging ownership of ethics issues from the inside out, as well as assuring more technology expertise in medical and other studies. This would also encourage more complete paper trails when untangling ethics transgressions.
Appropriately Resolving Terms-of-Use Violations
We have shared our experiences, in part, to guide other practitioners in the field. Unfortunately, the effects of violations on data reporting may be difficult to detect and may not be caught until publications and conference papers appear. Resolving the scientific inaccuracies, handling communications, and ensuring deletion of scraped data often required difficult conversations over extended periods. We recommend that data producers develop their own standard operating procedures and rehearse practice scenarios for responding to violations.
For instance, because substantial time and effort are devoted to research planning, execution, and publication, a recently published or in-process journal article represents a considerable “sunk cost.” As a result, researchers, funders, conference organizers, and journal editors may pressure data producers to “allow” publications to proceed with corrections rather than retract findings. Over the course of the cases our team experienced, nearly every supervisor, institution, conference chair, or publisher challenged over a violation first asked (politely) for clemency, forgiveness, “retrospective consent,” or even “post-hoc ethical approval.” Rather than adopt a punitive philosophy, we respectfully reminded these researchers of our responsibilities to the patients who are our members and from whom we have earned a social license to use and maintain their data responsibly. However, having policies and prepared communications in place early on would reduce the burden on staff members who may find such interactions challenging.
Our report has several limitations. First, the authors are employees of a for-profit company and therefore have a conflict of interest in “protecting” network data; we hope others in the academic or nonprofit sphere will share similar experiences. Second, because of our desire to preserve anonymity where possible, the cases reported here are relatively brief and offer little additional detail for interested readers. Third, as this is a complex and emerging area, our conclusions are necessarily editorial rather than evidence based; for example, future work could survey social network users whose data have been shared without their consent. Finally, the individuals described herein may not feel they have had an adequate “right to reply”; we would welcome divergent views on the topics we have outlined here.
Future Directions: Prevention Rather Than Cure
Standard operating procedures should be developed and followed for how data collected without authorization are managed, deleted, and verified as deleted. University information technology departments could take a lead in this regard. Further attention to ethical issues in computer and data science training and conduct may help prevent the violations discussed in this paper while recognizing the value of important research questions. Data producers (such as PatientsLikeMe) and data scientists can enhance each other’s work if an appropriate dialogue can take place. Data producers can adopt a proactive stance by finding ways to curate and expand access to views of their data (such as through application programming interfaces), so that important scientific research can be encouraged while minimizing ethical and terms-of-use violations. To meet the needs of computer science and other researchers, PatientsLikeMe has started investigating ways to provide tools for researchers to interrogate datasets while posing less risk to member privacy.
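To make the idea of privacy-aware curated access concrete, the sketch below shows one common pattern such a tool might use: exposing only aggregate counts and suppressing any group smaller than a threshold k, so that small cells cannot single out individual members. All names, thresholds, and data here are hypothetical illustrations, not PatientsLikeMe’s actual interface.

```python
from collections import Counter

# Hypothetical sketch of an aggregate-only query layer a data producer might
# expose to researchers instead of raw records. Field names and the threshold
# are invented for illustration.

K_THRESHOLD = 5  # suppress groups smaller than k to reduce re-identification risk


def aggregate_counts(records, group_field, k=K_THRESHOLD):
    """Return per-group counts, omitting any group with fewer than k members."""
    counts = Counter(r[group_field] for r in records)
    return {group: n for group, n in counts.items() if n >= k}


# Researchers receive only group-level counts, never individual rows.
records = (
    [{"condition": "ALS"}] * 12
    + [{"condition": "MS"}] * 7
    + [{"condition": "rare_condition_x"}] * 2  # fewer than k members: suppressed
)
print(aggregate_counts(records, "condition"))  # {'ALS': 12, 'MS': 7}
```

Small-cell suppression is only one building block; a production system would combine it with access agreements, audit logging, and possibly formal techniques such as differential privacy.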
Such strategies would only be the beginning of addressing social media privacy challenges, but we welcome further enhancement of and feedback on these ideas. A group of data scientists recently reported on a crowdsourced “Hippocratic Oath for Data Science” that calls upon their peers to “Ensure that all data practitioners take responsibility for exercising ethical imagination in their work, including considering the implication of what came before and what may come after, and actively working to increase benefit and prevent harm to others.”
The authors would like to thank the following reviewers of the manuscript: James Heywood, Benjamin Heywood, Steve Hammond, Greg Ploussios, and John Torous. We would also like to thank the following members of the PatientsLikeMe Ethics & Compliance Advisory Board for their input: Sally Okun (Board Chair), Hans van Delden, Letitia Browne-James, and Gary Rafaloff.
Both authors contributed to the conceptualization, writing, and review of this manuscript.
Conflicts of Interest
Both authors are employed by and own stock options in PatientsLikeMe, Inc. Paul Wicks is also an academic section editor for JMIR Publications.
- Smith A, Anderson M. Pew Research Center - Internet & Technology. Social Media Use in 2018 URL: http://www.pewinternet.org/2018/03/01/social-media-use-in-2018/ [accessed 2019-02-14]
- Herrera JL, Srinivasan R, Brownstein JS, Galvani AP, Meyers LA. Disease Surveillance on Complex Social Networks. PLoS Comput Biol 2016 Dec;12(7):e1004928 [FREE Full text] [CrossRef] [Medline]
- Young SD. Social Media as a New Vital Sign: Commentary. J Med Internet Res 2018 Apr 30;20(4):e161 [FREE Full text] [CrossRef] [Medline]
- Capurro D, Cole K, Echavarría MI, Joe J, Neogi T, Turner AM. The use of social networking sites for public health practice and research: a systematic review. J Med Internet Res 2014;16(3):e79 [FREE Full text] [CrossRef] [Medline]
- Hudnut-Beumler J, Po'e E, Barkin S. The Use of Social Media for Health Promotion in Hispanic Populations: A Scoping Systematic Review. JMIR Public Health Surveill 2016 Jul 11;2(2):e32 [FREE Full text] [CrossRef] [Medline]
- Hswen Y, Naslund JA, Chandrashekar P, Siegel R, Brownstein JS, Hawkins JB. Exploring online communication about cigarette smoking among Twitter users who self-identify as having schizophrenia. Psychiatry Res 2017 Dec;257:479-484 [FREE Full text] [CrossRef] [Medline]
- Wicks P, Massagli M, Frost J, Brownstein C, Okun S, Vaughan T, et al. Sharing health data for better outcomes on PatientsLikeMe. J Med Internet Res 2010;12(2):e19 [FREE Full text] [CrossRef] [Medline]
- Barratt M, Lenton S. Beyond recruitment? Participatory online research with people who use drugs. International Journal of Internet Research Ethics 2010;3:69-86 [FREE Full text]
- Yeung D. Social Media as a Catalyst for Policy Action and Social Change for Health and Well-Being: Viewpoint. J Med Internet Res 2018 Mar 19;20(3):e94 [FREE Full text] [CrossRef] [Medline]
- Eysenbach G, Till JE. Ethical issues in qualitative research on internet communities. BMJ 2001 Nov 10;323(7321):1103-1105 [FREE Full text] [Medline]
- Kraut R, Olson J, Banaji M, Bruckman A, Cohen J, Couper M. Psychological research online: report of Board of Scientific Affairs' Advisory Group on the Conduct of Research on the Internet. Am Psychol 2004;59(2):105-117. [CrossRef] [Medline]
- Kramer AD, Guillory JE, Hancock JT. Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci U S A 2014 Dec 17;111(24):8788-8790 [FREE Full text] [CrossRef] [Medline]
- Verma IM. Editorial expression of concern: Experimental evidence of massive-scale emotional contagion through social networks. Proc Natl Acad Sci U S A 2014 Jul 22;111(29):10779 [FREE Full text] [CrossRef] [Medline]
- Zimmer M. Wired Internet. 2016. OkCupid Study Reveals the Perils of Big-Data Science URL: https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/ [accessed 2019-01-15] [WebCite Cache]
- Rosenberg M, Confessore N, Cadwalladr C. New York Times. 2018 Mar 17. How Trump Consultants Exploited the Facebook Data of Millions URL: https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html [accessed 2019-02-15] [WebCite Cache]
- Taylor J, Pagliari C. Mining social media data: How are research sponsors and researchers addressing the ethical challenges? Research Ethics 2017 Oct 26;14(2):1-39. [CrossRef]
- Ayers J, Caputi T, Nebeker C, Dredze M. Don’t quote me: reverse identification of research participants in social media studies. npj Digital Med 2018 Aug 2;1(1):30 [FREE Full text] [CrossRef]
- Fiesler C, Proferes N. “Participant” Perceptions of Twitter Research Ethics. Social Media + Society 2018 Mar 10;4(1):205630511876336. [CrossRef]
- Williams ML, Burnap P, Sloan L. Towards an Ethical Framework for Publishing Twitter Data in Social Research: Taking into Account Users' Views, Online Context and Algorithmic Estimation. Sociology 2017 Dec;51(6):1149-1168 [FREE Full text] [CrossRef] [Medline]
- Roberts L. Ethical Issues in Conducting Qualitative Research in Online Communities. Qualitative Research in Psychology 2015 Jan 29;12(3):314-325. [CrossRef]
- Markham A, Buchanan E. AOIR. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0) URL: http://aoir.org/reports/ethics2.pdf [accessed 2019-02-15]
- Metcalf J, Keller E, Boyd D. Perspectives on Big Data, Ethics, and Society. 2016. Council on Big Data, Ethics, and Society URL: https://bdes.datasociety.net/council-output/perspectives-on-big-data-ethics-and-society/ [accessed 2019-02-15]
- Conway M, O'Connor D. Social Media, Big Data, and Mental Health: Current Advances and Ethical Implications. Curr Opin Psychol 2016 Jun;9:77-82 [FREE Full text] [CrossRef] [Medline]
- Vitak J, Proferes N, Shilton K, Ashktorab Z. Ethics Regulation in Social Computing Research: Examining the Role of Institutional Review Boards. J Empir Res Hum Res Ethics 2017 Dec;12(5):372-382. [CrossRef] [Medline]
- Vayena E, Gasser U, Wood A, O'Brien D, Altman M, See T, et al. Washington and Lee Law Rev. 2016. Elements of a new ethical framework for big data research URL: https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=1040&context=wlulr-online [accessed 2019-02-15] [WebCite Cache]
- Hutton L, Henderson T. I didn't sign up for this!: Informed consent in social network research. 2015 Presented at: Proceedings of the Ninth International AAAI Conference on Web and Social Media; 2015; University of Oxford, Oxford, UK p. 178-187 URL: https://research-repository.st-andrews.ac.uk/handle/10023/6691
- Office for Human Research Protections, HHS.gov. 2018. International Compilation of Human Research Standards URL: https://www.hhs.gov/ohrp/international/compilation-human-research-standards/index.html [accessed 2019-02-15] [WebCite Cache]
- TRUST - Equitable Research Partnerships. The Chinese Ethical Review System and its Compliance Mechanisms URL: http://trust-project.eu/wp-content/uploads/2016/03/Chinese-Ethics-Review-System.pdf [accessed 2019-01-15] [WebCite Cache]
- Hand DJ. Aspects of Data Ethics in a Changing World: Where Are We Now? Big Data 2018 Sep 01;6(3):176-190 [FREE Full text] [CrossRef] [Medline]
- Ienca M, Ferretti A, Hurst S, Puhan M, Lovis C, Vayena E. Considerations for ethics review of big data health research: A scoping review. PLoS One 2018;13(10):e0204937 [FREE Full text] [CrossRef] [Medline]
- Glez-Peña D, Lourenço A, López-Fernández H, Reboiro-Jato M, Fdez-Riverola F. Web scraping technologies in an API world. Brief Bioinform 2014 Sep;15(5):788-797. [CrossRef] [Medline]
- Narayanan A, Felten E. RandomWalker. 2014. No silver bullet: De-identification still doesn't work URL: http://randomwalker.info/publications/no-silver-bullet-de-identification.pdf [accessed 2019-02-15] [WebCite Cache]
- Zook M, Barocas S, Boyd D, Crawford K, Keller E, Gangadharan SP, et al. Ten simple rules for responsible big data research. PLoS Comput Biol 2017 Dec;13(3):e1005399 [FREE Full text] [CrossRef] [Medline]
- Bishop L. UK Data Service. 2017. Big data and data sharing: Ethical issues URL: https://www.ukdataservice.ac.uk/media/604711/big-data-and-data-sharing_ethical-issues.pdf [accessed 2019-02-15] [WebCite Cache]
- Ohm P. UCLA Law Review. 2010. Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization URL: https://www.uclalawreview.org/broken-promises-of-privacy-responding-to-the-surprising-failure-of-anonymization-2/ [accessed 2019-02-15]
- Sweeney L. Data Privacy Lab. 2000. Simple demographics often identify people uniquely URL: http://dataprivacylab.org/projects/identifiability/paper1.pdf [accessed 2019-02-15] [WebCite Cache]
- Boyd D, Crawford K. Critical Questions for Big Data. Information, Communication & Society 2012 Jun;15(5):662-679. [CrossRef]
- Mittelstadt BD, Floridi L. The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts. Sci Eng Ethics 2016 Apr;22(2):303-341. [CrossRef] [Medline]
- Nissenbaum H. A contextual approach to privacy online. Daedalus 2011;140(4):48 [FREE Full text]
- PatientsLikeMe. URL: https://www.patientslikeme.com/ [accessed 2019-02-08]
- O'Neil C. Weapons of Math Destruction: How Big Data Increases Inequality And Threatens Democracy. USA: Crown; 2019.
- Kim SJ, Marsch LA, Hancock JT, Das AK. Scaling Up Research on Drug Abuse and Addiction Through Social Media Big Data. J Med Internet Res 2017 Oct 31;19(10):e353 [FREE Full text] [CrossRef] [Medline]
- Daniulaityte R, Carlson R, Falck R, Cameron D, Perera S, Chen L, et al. "I just wanted to tell you that loperamide WILL WORK": a web-based study of extra-medical use of loperamide. Drug Alcohol Depend 2013 Jun 01;130(1-3):241-244 [FREE Full text] [CrossRef] [Medline]
- Wright D. Research ethics and computer science. 2006 Oct 18 Presented at: Proceedings of the 24th Annual Conference on Design of Communication; 2006; Myrtle Beach, SC, USA. [CrossRef]
- National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Office for Human Research Protections, HHS.gov. 2014 Dec. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research URL: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html [accessed 2019-02-15]
- Government Publishing Office. 2017. The Common Rule URL: https://www.gpo.gov/fdsys/pkg/FR-2017-01-19/pdf/2017-01058.pdf [accessed 2019-02-15]
- Buchanan E, Aycock J, Dexter S, Dittrich D, Hvizdak E. Computer Science Security Research and Human Subjects: Emerging Considerations for Research Ethics Boards. Journal of Empirical Research on Human Research Ethics 2011 Jun;6(2):71-83. [CrossRef]
- Council for International Organizations of Medical Sciences. 2016. International Ethical Guidelines for Health-Related Research Involving Humans URL: http://www.sciencedirect.com/science/article/B6VC6-45F5X02-9C/2/e44bc37a6e392634b1cf436105978f01 [accessed 2019-01-15] [WebCite Cache]
- Obar J, Oeldorf-Hirsch A. The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services. 2018 Presented at: TPRC 44: 44th Research Conference on Communication, Information and Internet Policy; 2018; George Mason University, Fairfax, VA URL: https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3208371_code962598.pdf?abstractid=2757465&mirid=1
- Angwin J, Stecklow S. The Wall Street Journal. 2010. 'Scrapers' Dig Deep for Data on Web URL: https://www.wsj.com/articles/SB10001424052748703358504575544381288117888 [accessed 2019-02-15] [WebCite Cache]
- Brownstein CA, Brownstein JS, Williams DS, Wicks P, Heywood JA. The power of social networking in medicine. Nat Biotechnol 2009 Oct;27(10):888-890. [CrossRef] [Medline]
- PatientsLikeMe. Terms and Conditions of Use URL: https://www.patientslikeme.com/about/user_agreement [accessed 2019-02-07] [WebCite Cache]
- Insights Association. 2013. MRA Code of Marketing Research Standards URL: https://www.insightsassociation.org/sites/default/files/misc_files/mra_code.pdf [accessed 2019-02-15]
- CIOMS. 2009. International Ethical Guidelines for Epidemiological Studies URL: https://cioms.ch/wp-content/uploads/2017/01/International_Ethical_Guidelines_LR.pdf [accessed 2019-02-15] [WebCite Cache]
- Olhede S, Wolfe P. The growing ubiquity of algorithms in society: implications, impacts and innovations. Philos Trans A Math Phys Eng Sci 2018 Sep 13;376(2128):1-16 [FREE Full text] [CrossRef] [Medline]
- Metcalf J, Crawford K. Big Data and Society. 2016. Where are Human Subjects in Big Data Research? The Emerging Ethics Divide URL: https://ssrn.com/abstract=2779647 [accessed 2019-01-15] [WebCite Cache]
- Gelinas L, Pierce R, Winkler S, Cohen IG, Lynch HF, Bierer BE. Using Social Media as a Research Recruitment Tool: Ethical Issues and Recommendations. Am J Bioeth 2017 Mar;17(3):3-14 [FREE Full text] [CrossRef] [Medline]
- Frost J, Okun S, Vaughan T, Heywood J, Wicks P. Patient-reported outcomes as a source of evidence in off-label prescribing: analysis of data from PatientsLikeMe. J Med Internet Res 2011 Jan;13(1):e6 [FREE Full text] [CrossRef] [Medline]
- Community Principles on Ethical Data Practices. URL: https://datapractices.org/community-principles-on-ethical-data-sharing/ [accessed 2019-02-07] [WebCite Cache]
CIOMS: Council for International Organizations of Medical Sciences
GDPR: General Data Protection Regulation
IRB: institutional review board
REC: research ethics committee
USDHHS: United States Department of Health and Human Services
Edited by G Eysenbach; submitted 20.08.18; peer-reviewed by E Buchanan, A Jobin, C Fiesler, R Daniulaityte; comments to author 27.09.18; revised version received 16.11.18; accepted 03.02.19; published 21.02.19
Copyright
©Emil Chiauzzi, Paul Wicks. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 21.02.2019.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.