
Journal of Medical Internet Research



Published on 15.05.18 in Vol 20, No 5 (2018): May

Preprints (earlier versions) of this paper are available at http://preprints.jmir.org/preprint/9330, first published Nov 02, 2017.


    Review

    Mapping of Crowdsourcing in Health: Systematic Review

    1INSERM UMR1153, Methods Team, Epidemiology and Statistics Sorbonne Paris Cité Research Center, Paris Descartes University, Paris, France

    2Centre d’Epidémiologie Clinique, Hôpital Hôtel Dieu, Assistance Publique des Hôpitaux de Paris, Paris, France

    3Cochrane France, Paris, France

    4Department of Epidemiology, Columbia University, Mailman School of Public Health, New York, NY, United States

    Corresponding Author:

    Perrine Créquit, MD, PhD

    INSERM UMR1153, Methods Team

    Epidemiology and Statistics Sorbonne Paris Cité Research Center

    Paris Descartes University

    1 place du Parvis Notre Dame

    Paris, 75004

    France

    Phone: 33 142348932

    Email:


    ABSTRACT

    Background: Crowdsourcing involves obtaining ideas, needed services, or content by soliciting Web-based contributions from a crowd. The 4 types of crowdsourced tasks (problem solving, data processing, surveillance or monitoring, and surveying) can be applied in the 3 categories of health (promotion, research, and care).

    Objective: This study aimed to map the different applications of crowdsourcing in health to assess the fields of health that are using crowdsourcing and the crowdsourced tasks used. We also describe the logistics of crowdsourcing and the characteristics of crowd workers.

    Methods: MEDLINE, EMBASE, and ClinicalTrials.gov were searched for available reports from inception to March 30, 2016, with no restriction on language or publication status.

    Results: We identified 202 relevant studies that used crowdsourcing, including 9 randomized controlled trials, of which only one had posted results at ClinicalTrials.gov. Crowdsourcing was used in health promotion (91/202, 45.0%), research (73/202, 36.1%), and care (38/202, 18.8%). The 4 most frequent areas of application were public health (67/202, 33.2%), psychiatry (32/202, 15.8%), surgery (22/202, 10.9%), and oncology (14/202, 6.9%). Half of the reports (99/202, 49.0%) referred to data processing, 34.6% (70/202) referred to surveying, 10.4% (21/202) referred to surveillance or monitoring, and 5.9% (12/202) referred to problem-solving. Labor market platforms (eg, Amazon Mechanical Turk) were used in most studies (190/202, 94%). The crowd workers’ characteristics were poorly reported, and crowdsourcing logistics were missing from two-thirds of the reports. When reported, the median size of the crowd was 424 (first and third quartiles: 167-802); crowd workers’ median age was 34 years (32-36). Crowd workers were mainly recruited nationally, particularly in the United States. For many studies (58.9%, 119/202), previous experience in crowdsourcing was required, and passing a qualification test or training was seldom needed (11.9% of studies; 24/202). For half of the studies, monetary incentives were mentioned, with mainly less than US $1 to perform the task. The time needed to perform the task was mostly less than 10 min (58.9% of studies; 119/202). Data quality validation was used in 54/202 studies (26.7%), mainly by attention check questions or by replicating the task with several crowd workers.

    Conclusions: The use of crowdsourcing, which allows access to a large pool of participants as well as saving time in data collection, lowering costs, and speeding up innovations, is increasing in health promotion, research, and care. However, the description of crowdsourcing logistics and crowd workers’ characteristics is frequently missing in study reports and needs to be precisely reported to better interpret the study findings and replicate them.

    J Med Internet Res 2018;20(5):e187

    doi:10.2196/jmir.9330

    KEYWORDS



    Introduction

    Scientific research performed with the involvement of the broader public, the crowd, is attracting increasing attention from scientists and policy makers. Crowdsourcing uses the power of many, using the collective wisdom and resources of the crowd, to complete human intelligence tasks (ie, tasks that cannot be entirely automated and require human intelligence). Crowdsourcing is not a new concept and has often been used in the past as a competition to discover a solution. It originated in 1714 in England, where the British Government proposed £20,000 to anyone who could find a solution for calculating the longitudinal position of a ship [1], and then it was applied in a variety of fields such as astronomy, energy system research, genealogy and genetic research, journalism, linguistics, ornithology, public policy, seismology, and molecular biology [2].

    Crowdsourcing currently involves a network of people, the “crowd workers,” responding to an open call and completing Web-based tasks of requesters [3]. These crowd workers provide a wide range of activities, especially via the internet, using specific platforms, but have no formal training in the topic of investigation [4]. They have access to the crowdsourcing websites from anywhere at times convenient for them. They carry out tasks posted by requesters, who accept or reject their work and may or may not pay them for it. Crowdsourcing has grown rapidly with the evolution of technology, with 2.3 billion internet users and 6 billion mobile phone subscribers [5]. The main Web platform for crowdsourcing is Amazon Mechanical Turk (MTurk), which scientists have been exploiting for about 5 years. In 1 month—May 2016—23,000 people completed 230,000 tasks on their computers in 3.3 million min, corresponding to a total of more than 6 years of effort [6].

    Crowdsourcing has several benefits. It provides easy access to a potentially large pool of participants for a research problem, particularly for increasing the number of respondents for mining crowd data (eg, Web-based surveys) and active crowdsourcing (eg, data processing). It offers important time savings, in that a large number of contributors working in parallel reduces the time required to perform a fixed amount of work, mainly the elapsed time to collect data. Project organizers can also lower the cost of labor inputs. Finally, by soliciting ideas from a large group of people through the internet, crowdsourcing can be used to speed up innovation, particularly through challenges.

    Crowdsourcing has been used primarily in nonmedical fields [7]. The Galaxy Zoo project successfully classified about 900,000 galaxies with the help of hundreds of thousands of Web-based volunteers [8]. The eBird project collected more than 48 million bird observations from more than 35,000 contributors [9]. The fields of research using Amazon MTurk are psychology, marketing, management, business, political science, computer science (improvement of artificial intelligence software, for example, by naming objects to help the computer identify the content of a photograph), and neuroscience [6]. Crowdsourcing is thus attracting increasing attention from the scientific community and from researchers who need to obtain data in any domain.

    Crowdsourcing represents a great opportunity in health and medical research. As mentioned by Swan [10], crowdsourced health research studies are the nexus of 3 contemporary trends: “citizen science,” crowdsourcing, and Medicine 2.0. Medicine 2.0 or Health 2.0 refers to the active participation of individuals in their health care, particularly using Web 2.0 technologies.

    Crowdsourcing is not limited to health research but can also be used in health promotion or health care. Crowdsourcing could be a great way to solve a specific scientific mission that cannot be entirely automated and requires human intelligence in these 3 health categories. However, mapping of crowdsourcing use in health is needed to describe all its applications and to detail specificities, so that health researchers can assess whether they can use this approach in their research.

    The aim of the study was to map the different applications of crowdsourcing used in health to outline the fields of health that are using crowdsourcing and the type of crowdsourced tasks involved. We also describe the logistics of crowdsourcing and the characteristics of crowd workers.


    Methods

    Design

    We conducted a systematic review to identify studies using crowdsourcing in health. We uploaded a prespecified protocol to a publicly accessible institutional Website (Multimedia Appendix 1) and followed standard procedures for systematic reviews and reported processes and results according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [11].

    Criteria for Considering Studies for This Review

    The inclusion criteria were as follows:

    1. Studies reporting on health, considering the definition proposed by Prpic [12], with the activities of the 3 categories of health:
      • Health promotion: disease detection and surveillance, behavioral interventions, health literacy, and health education
      • Health research: pharmaceutical research, clinical trials and health experiment methodology, and improving health care research knowledge
      • Health maintenance (here “health care”): patient- or physician-related, diagnostics, medical practice, and treatment support.
    2. Studies conducted with a crowdsourced population: workers are recruited by crowdsourcing (ie, recruited via a website [labor markets such as Amazon MTurk or Crowdflower] or an open call to a large audience using internet-related technologies [eg, scientific games or community challenges with dedicated platforms]) [13]. Studies can refer either to a feasibility study (can crowdsourcing be used for a specific task?) or to the use of crowdsourcing to supply data that support a finding in some research activity.

    We excluded studies considering structural and molecular biology (eg, studies reporting Web-based games to manipulate the 3D structures of proteins or moving colored blocks representing different nucleotide sequences).

    Search Method for Identification of Studies

    We performed an electronic search of MEDLINE via PubMed and EMBASE to identify all reports published from inception to March 30, 2016, with no restriction on date, language, study design, or publication status (published papers or conference abstracts). All databases were searched using both controlled vocabulary (namely, MeSH terms in MEDLINE and Emtree terms in EMBASE) and a wide range of free-text terms. Indeed, crowdsourced health studies may be a blend of crowdsourcing and citizen science (ie, nonprofessionally trained individuals conducting science-related activities); these terms can be used interchangeably and so were included in our search equation. We used different terms referring to crowdsourcing, citizen science, and Web platforms. The search strategy used to search MEDLINE and EMBASE is in Multimedia Appendix 2. We also screened ClinicalTrials.gov (search strategy in Multimedia Appendix 3) and the reference lists of previous systematic reviews [5,10] and selected papers to identify additional studies.

    Selection of Studies

    Two reviewers (PC and GM) independently examined each title and abstract identified to exclude irrelevant reports. The 2 reviewers then independently examined full-text articles to determine eligibility. Disagreements were discussed to reach consensus. We documented the primary reason for exclusion of full-text articles. For ClinicalTrials.gov, only studies with posted results were included.

    Definition of the Crowdsourcing Tasks

    We used the classification described by Ranard [5] with 4 tasks of crowdsourcing: (1) problem-solving: to propose empirical solutions to scientific problems; (2) data processing: to perform several human intelligence microtasks to provide in total an analysis of a large amount of data; (3) surveillance or monitoring: to find and collect information into a common location and format such as the creation of collective resources; and (4) surveying: to answer a Web-based survey. Surveillance or monitoring and surveying belong to mining crowd data described by Khare [4] and are defined as data collected and analyzed by crowd workers for the knowledge discovery process. Problem-solving and data processing belong to active crowdsourcing, which refers to crowd workers recruited to solve scientific problems.
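    The grouping of the 4 task types into active crowdsourcing and mining crowd data can be expressed as a small lookup table. The sketch below (in Python, purely illustrative and not part of the review's methods) encodes only what this paragraph states, following Ranard [5] and Khare [4]:

```python
# The 4 crowdsourced task types and their higher-level categories,
# as described by Ranard [5] and Khare [4] (illustrative lookup table only).
TASK_CATEGORY = {
    "problem solving": "active crowdsourcing",
    "data processing": "active crowdsourcing",
    "surveillance or monitoring": "mining crowd data",
    "surveying": "mining crowd data",
}

def category_of(task: str) -> str:
    """Return the higher-level category a crowdsourced task belongs to."""
    return TASK_CATEGORY[task]

print(category_of("surveying"))  # -> mining crowd data
```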

    Data Extraction and Management

    The data were extracted from reports by the two reviewers (PC and GM) who used a standardized data extraction form (provided with the protocol as Multimedia Appendix 1). Disagreements were discussed to reach consensus. From each study, we extracted the following characteristics.

    Publication Characteristics of the Study

    Publication characteristics of the study were as follows: Journal Citation Reports categories (ie, general medicine and health care science, biomedical informatics and technology, or medical specialty journals); impact factor (Clarivate Analytics); average journal impact factor percentile from Journal Citation Reports (classified in four categories: >90th percentile, 70th-90th percentile, <70th percentile, and not indexed); and year of publication.

    Characteristics of Crowdsourcing Applications in Health

    The following characteristics of crowdsourcing applications were extracted:

    1. We determined the category of health the study referred to (health promotion, research, or care [12]) and health field (eg, public health, surgery, oncology [details in Multimedia Appendix 4]).
    2. We classified the tasks into 1 of the 4 categories of crowdsourcing tasks defined: problem-solving, data processing, surveillance or monitoring, and surveying.
    3. We determined whether the study was led by researchers (ie, a traditional study led by institutionally trained researchers) or by participants (ie, studies designed and operated by patients or citizen scientists) [10].
    Logistics of Crowdsourcing and Characteristics of Crowd Workers

    Considering the logistics of crowdsourcing and characteristics of crowd workers, the following points were extracted:

    1. We defined how the crowdsourcing was applied: whether a large task was divided into microtasks and distributed to workers [13] or whether the same task—a high-difficulty task called a megatask, such as a challenge—was given to several groups of workers [14].
    2. We extracted the type of platform used (labor markets, scientific games, mobile phone apps, social media, or community challenges with dedicated platforms) [13]; whether monetary incentives were offered and their amount; the time to perform the task; whether a data quality validation was performed; whether the task performed by the crowd workers was compared with that performed by experts (which corresponds to a feasibility study).
    3. We extracted the number of crowd workers, the median age, the proportion of women, their status (eg, researchers, physicians, and students), their geographic location, their motivations, whether a skill set was required to perform the task, and whether they had to undergo training and pass a qualification test to be recruited.
    4. We also assessed the proportion of studies not reporting all these data.
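    The microtask pattern described in point 1 (a large task divided into small units and distributed to workers, as opposed to a megatask given whole to several groups) can be sketched in a few lines. The function, batch size, and item names below are hypothetical, for illustration only:

```python
# Illustrative sketch of the microtask pattern: a large task (here, a list
# of hypothetical videos to rate) is split into small batches that can be
# distributed to different crowd workers. A megatask (eg, a challenge)
# would instead be given whole to several groups of workers.
def to_microtasks(items, batch_size):
    """Split a large task (a list of items) into microtask batches."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

videos = [f"video_{i}" for i in range(10)]  # hypothetical items to rate
batches = to_microtasks(videos, batch_size=3)
print(len(batches))  # -> 4 (three full batches of 3 items, one of 1)
```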

    Analysis

    The analysis was descriptive. Data are summarized as number (%) for qualitative variables and median (Q1-Q3) for continuous variables. All analyses involved the use of R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria) [15].
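    The authors performed the analysis in R. As an illustration only (not the authors' code, and with made-up data), the same two descriptive summaries, n (%) for qualitative variables and median (Q1-Q3) for continuous ones, can be sketched as follows:

```python
# Illustrative sketch of the paper's descriptive summaries:
# n (%) for qualitative variables, median (Q1-Q3) for continuous ones.
# The crowd sizes below are invented for demonstration.
from statistics import median, quantiles

def n_percent(count, total):
    """Format a qualitative variable as 'n/total (x.x%)'."""
    return f"{count}/{total} ({100 * count / total:.1f}%)"

def median_iqr(values):
    """Return (median, Q1, Q3) for a continuous variable."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    return median(values), q1, q3

crowd_sizes = [5, 120, 167, 424, 500, 802, 2_000_000]  # hypothetical
m, q1, q3 = median_iqr(crowd_sizes)
print(n_percent(99, 202))                 # a qualitative summary
print(f"median {m} (Q1-Q3: {q1}-{q3})")   # a continuous summary
```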


    Results

    Systematic Literature Search

    The flow of study selection is in Multimedia Appendix 5. Briefly, the electronic search yielded 2354 references; 326 were selected for further evaluation, and 202 studies were included (182 published papers and 20 conference abstracts [3,16-216]).

    More than half of the included studies (108/202, 53.5%) were published during the last 2 years. The median impact factor of the journals of publication was 3.2 (Q1-Q3: 2.1-3.5); for 42/202 studies (20.8%), reports were published in a journal with very high relative impact factor (>90th percentile of journal impact factors averaged across journal categories). Reports for two-thirds of studies (129/202) were published in medical specialty journals and for one-fourth (50/202) in biomedical informatics and technology journals. All these publication characteristics are in Figure 1. A total of 9 studies corresponded to randomized controlled trials, only 1 with results posted on ClinicalTrials.gov.

    Mapping of Crowdsourcing Applications in Health

    Crowdsourcing applications were more frequent in studies of health promotion (91/202, 45.0%) and health research (72/202, 35.7%) than health care (39/202, 19.3%). More than half of the studies concerned active crowdsourcing (data processing [99/202, 49.0%] and problem-solving [12/202, 5.9%]), and 45% of the studies involved mining crowd data (surveying [70/202, 34.6%] and surveillance or monitoring [21/202, 10.4%]). Examples of crowdsourced tasks by health category are provided in Figure 2.

    Almost 50% of the studies related to health promotion used surveys to conduct their research, whereas studies related to health care mainly used data processing. All included studies were led by researchers.

    Figure 1. Publication characteristics of included studies. Two-thirds of the studies were published in one of the 18 medical specialty journals, covering almost all medical fields and showing the widespread use of crowdsourcing, and sometimes in a journal with a very high relative impact factor.
    Figure 2. Examples of crowdsourced tasks according to health category. EEG: electroencephalography.
    Figure 3. Mapping of crowdsourcing applications in health. Sankey diagram representing the distribution of medical fields applying crowdsourcing for each of the 4 types of task. Width of links is proportional to the number of studies. Medical specialties: anatomopathology (n=3), cardiology (n=5), dermatology (n=5), endocrinology (n=1), gynecology (n=2), infectiology (n=6), nephrology (n=1), neurology (n=7), pediatrics (n=2), pneumology (n=3), radiology (n=2), and rheumatology (n=2).

    In Figure 3, we provide a mapping of crowdsourcing applications in health, detailing the medical fields in which each type of task was applied.

    Data Processing

    One-fourth of studies (27/99) involved public health, one-fifth (21/99) involved surgery, and one-fifth involved medical specialties (20/99). For example, in the Ghani et al study, published in 2016, crowd workers used the Global Evaluative Assessment of Robotic Skills tool to assess surgical skill in a video recording of a nerve-sparing robot-assisted radical prostatectomy [77].

    Surveying

    A total of 43% of studies (30/70) involved public health, and 37% (26/70) involved psychiatry. In the Stroh et al study, published in 2015, crowd workers completed a questionnaire related to public views on organ donation for people who need transplantation because of alcohol abuse [192]. The survey measured attitudes on liver transplantation in general and early transplantation for this patient population.

    Surveillance or Monitoring

    A total of 43% of studies (9/21) concerned public health and 24% (5/21) concerned dermatology. In the Merchant et al study, published in 2012, crowd workers had 2 months to locate, photograph, and submit eligible automated external defibrillators in Philadelphia [168].

    Problem-Solving

    One-third of studies (4/12) concerned oncology and one-fourth (3/12) concerned medical education. In the Margolin et al study, published in 2013, crowd workers were challenged over 6 months to develop computational models predicting the overall survival of breast cancer patients based on clinical information [131].

    Reporting of the Logistics of Crowdsourcing and Crowd Workers’ Characteristics

    For data processing and surveillance or monitoring, a large task was divided into microtasks and distributed to crowd workers. For problem-solving, a megatask was given to several groups of crowd workers. We identified 7 challenges in our sample. A Web platform was used in 190/202 studies (94.1%), of which 133/190 (70.0%) were labor markets (eg, Amazon MTurk; Table 1).

    Crowd workers’ characteristics and crowdsourcing logistics were poorly reported. Reports for almost one-fourth of studies (47/202) did not mention monetary incentives, and for two-thirds of studies (130/202), the time to perform the task was not mentioned. Crowd workers’ characteristics were frequently missing: age and gender were not reported for about 60% of the studies (128/202 and 105/202, respectively), and crowd workers’ location was not reported for one-fourth of the studies (50/202).

    For 109/202 studies (53.9%), reports mentioned monetary incentives, mainly less than US $1 to perform a task. When reported, the time needed to perform the task was mostly less than 10 min (42/72, 58% of studies). For one-fourth of studies (54/202), reports mentioned using data quality validation, mainly by attention check questions (19/54, 35%) or by replicating the task by several crowd workers (16/54, 30%). About one-fifth of studies (36/202) compared crowd workers’ performance with that of experts (corresponding in these cases to a feasibility study), mainly for evaluating surgical skills (15/36, 42%).

    The number of crowd workers was reported for 176 studies (87.1%), and the size of the crowd varied from 5 to about 2 million, with median 424 (first and third quartiles Q1-Q3: 167-802; Table 2). When specified, crowd workers’ median age was 34 years (Q1-Q3: 32-36) and 55% were men. Crowd workers were recruited nationally in 93/152 studies (61.2%), mainly the United States (83/93, 89%).

    Table 1. Logistics of crowdsourcing in systematic review studies.
    Table 2. Characteristics of crowd workers.

    The motivations of crowd workers were recorded for 5/202 studies (2.5%) and included fun, curiosity, altruism, compensation, contribution to an important cause, personal reasons, research education, and advancing science [82,139,168,183,193]. A skill set was required in 74/202 studies (36.7%); for 60% of these, it involved previous experience in crowdsourcing. For two-thirds of studies (128/202), the required skill set was not specified. The Web-based tasks required passing a qualification test for only 12.8% of studies (26/202) and training for 10.9% (22/202).


    Discussion

    Principal Findings

    In this systematic review of the use of crowdsourcing in studies of health promotion, research, and care, we included 202 studies, mainly published in the last 2 years, with one-fifth published in a journal with a very high relative impact factor. Data processing was the most frequent type of task used (mainly in public health and surgery), followed by surveying (public health and psychiatry), then surveillance or monitoring (public health and dermatology), and finally problem-solving (oncology). Labor market platforms (Amazon MTurk) were mainly used. The description of crowdsourcing logistics and crowd workers’ characteristics was frequently missing from reports. When reported, the median size of the crowd was less than 500; crowd workers’ median age was around 34 years, and 55% were men. Crowd workers were mainly recruited in the United States. Previous experience in crowdsourcing was required in about 60% of the studies, whereas passing a qualification test or training was needed in only about 12%. The time needed to perform the task was mostly less than 10 min, for monetary incentives of less than US $1. Data quality validation was used in less than one-third of studies.

    Our systematic review has advantages over previous ones on the same topic [5,10]. The systematic review conducted by Ranard et al in March 2013 described the scope of crowdsourcing in health and medical research but included only 21 articles [5]. The narrative review conducted by Swan described the use of crowdsourcing in health research studies up to 2011 [10]. Our mapping is more exhaustive—covering not only health research but also health promotion and health care—and up-to-date. Many of our studies (80%) were published after the last search date of the Ranard et al systematic review [5]. This point highlights the increasing use of crowdsourcing in health during the last few years. Indeed, many health fields have since used crowdsourcing, with 20 medical fields identified in our systematic review compared with 8 fields in the Ranard et al study [5]. Moreover, crowdsourcing use is still growing, as shown by the 11 articles published in the Journal of Medical Internet Research since our last search date, mainly involving a survey task (9/11, 82%) [217-227].

    Our study has some limitations. First, we did not search the gray literature to identify unpublished studies; however, the EMBASE search allowed us to identify 20 studies (10%) corresponding to conference abstracts. Second, we did not search Google Scholar because of the number of records found (about 30,000); screening all these references would be extremely time-consuming for only 2 reviewers without using a crowdsourcing process. Third, we did not include studies related to biology, such as studies using the “Fold it” platform to solve protein-folding problems [228], because we did not consider this topic in our definition of health. Finally, we included only crowdsourcing performed via the internet; for example, we did not include studies in which the crowdsourced tasks were performed in a particular workshop without individual data collected online. Therefore, we may have underestimated the number of studies using crowdsourcing in health.

    Every health category (promotion, research, and care) has a potential need for human computing power that crowdsourcing could fulfill to accelerate the process. Our systematic review, focusing on peer-reviewed papers, may have not captured some kinds of crowdsourcing. Studies recruiting crowd workers with social media platforms were few in our selection (12/202 studies [5.9%]). This type of recruitment seems less attractive than labor markets, although it is free and easier to use, perhaps because it is considered less reliable or used for purposes other than publication. Another way of exploiting social media data is under development, whereby tweets referring to a specific disease are analyzed as part of a health maintenance approach (eg, HIV in the Adrover et al’s study to identify adverse effects of drug treatment in tweets using crowdsourcing [229]). Considering health research, a fundamental aspect of this crowdsourcing is that it allows research to be performed with patients and not only to them or on them. However, studies with patients as crowd workers represented only 10% of our included studies, perhaps because the primary aim of collecting these data was not to conduct research with the data. Nevertheless, in 2013, the PatientsLikeMe platform [230] had more than 220,000 members sharing health data on more than 2000 diseases and conditions [231]. Using these data and conducting research with the data represent a great future challenge of mining crowd data and a real opportunity to collect large amounts of data on symptoms of diseases, drug efficacy, or adverse events to solve a wide range of health issues with a more real-life approach. Crowdsourcing also has potential in health promotion, especially preventive medicine, by taking it one step further. For example, specific tips in the form of slides or films could be added to the end of a Web-based survey about addiction to conduct a behavioral intervention, in addition to a simple survey. 
In some cases, data processing tasks may require thinking about a healthier lifestyle, for example, by suggesting healthier alternatives in addition to gathering information on the nutritional characteristics of packaged foods. Such crowdsourced tasks could be expanded to change dietary behaviors, exercise, or adherence to treatment. Finally, the combination of crowdsourcing and mobile health technologies could be the ultimate step in providing an ideal vehicle for behavioral interventions that can reach users in real time, in real life, without being resource-intensive.

    Crowdsourcing allows a large number of crowd workers to be mobilized in record time and at low cost. For instance, in the Peabody et al study [158], experts completed 318 video ratings in 15 days, whereas crowd workers completed 2531 ratings in 21 hours. These crowdsourced resources might be further harnessed in a world of high health costs. Crowdsourcing also allows innovations to be sped up when used in the form of collaborative scientific competitions—challenges—to solve diverse and important biomedical problems. Problem-solving was the fourth task we identified in terms of frequency, and only 7 challenges were identified, perhaps because challenges are an emerging form of crowdsourcing that should become more prominent in the next few years and lead to more publications [232]. In the future, it will be necessary to facilitate and promote the use of this type of crowdsourced task in health research, given the amount of data to be considered (big data) and the complexity of medical issues, which will require increasingly skilled and qualified individuals to resolve them.
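    Using the figures quoted from the Peabody et al example (318 expert ratings in 15 days vs 2531 crowd ratings in 21 hours), the speed advantage can be made concrete with a rough back-of-envelope calculation; the arithmetic below is ours, not the source's:

```python
# Rough throughput comparison based on the Peabody et al figures:
# experts completed 318 video ratings in 15 days, whereas crowd workers
# completed 2531 ratings in 21 hours.
expert_rate = 318 / (15 * 24)  # ratings per hour, experts
crowd_rate = 2531 / 21         # ratings per hour, crowd workers
speedup = crowd_rate / expert_rate
print(f"experts: {expert_rate:.2f}/h, crowd: {crowd_rate:.1f}/h, "
      f"~{speedup:.0f}x faster")
```

    On these numbers, the crowd delivered roughly two orders of magnitude more ratings per hour than the expert panel.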

    As previously mentioned, crowdsourcing has many advantages: lower cost and greater speed, quality, flexibility, scalability, and diversity. However, some points remain controversial, including the impact of crowdsourcing on product quality and its ethical aspects. The first potential concern of crowdsourced studies in health is the validity of their results. Some studies have assessed whether we should trust Web-based studies, and it appears that data provided by internet methods are of at least as good quality as those provided by traditional paper-and-pencil methods [233]. In our review, for data processing tasks, 36/202 feasibility studies (17.8%) compared crowd workers’ performance with that of an expert group considered as the reference. These studies mainly considered surgical skills evaluation (15/36, 42%) and parasite identification in infectious diseases (4/36, 11%). In each case, the performance of crowd workers was similar to that of the reference group. However, because participation is anonymous and compensated, participants may provide data of unsatisfactory quality. In our review, 54/202 studies (26.7%) reported using data quality validation. Several types of validation techniques were found, from inserting questions with known answers into the task and screening out crowd workers who answered them incorrectly (31/54, 57%), to comparing responses among multiple crowd workers to discard outliers (16/54, 30%). The second concern is ethical: Amazon MTurk is a bargain for researchers but not for crowd workers [234]. Indeed, many MTurk tasks are completed by a small set of workers who spend long hours on the website, many with low income.
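    The two validation techniques most often found in our review, attention-check questions with known answers and replication of the task across several workers, can be sketched as follows. This is a hypothetical illustration: the worker names, field names, and threshold are invented, not drawn from any included study:

```python
# Hypothetical sketch of two data quality validation techniques:
# (1) attention-check questions with known answers, used to screen out
#     careless crowd workers, and
# (2) replicating each task across several workers and keeping the
#     majority answer. All data and names below are invented.
from collections import Counter

def passes_attention_checks(answers, gold, threshold=1.0):
    """Keep a worker only if enough known-answer items are correct."""
    correct = sum(answers.get(q) == a for q, a in gold.items())
    return correct / len(gold) >= threshold

def majority_label(labels):
    """Aggregate replicated judgments of one item by majority vote."""
    return Counter(labels).most_common(1)[0][0]

gold = {"check_1": "B", "check_2": "D"}  # items with known answers
workers = {
    "w1": {"check_1": "B", "check_2": "D", "item": "tumor"},
    "w2": {"check_1": "B", "check_2": "D", "item": "tumor"},
    "w3": {"check_1": "A", "check_2": "D", "item": "normal"},  # fails checks
}
kept = {w: a for w, a in workers.items() if passes_attention_checks(a, gold)}
print(majority_label([a["item"] for a in kept.values()]))  # -> tumor
```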

    A detailed description of the crowdsourcing logistics in the Methods section and of all the characteristics of the crowd workers (the study population) should be provided in high-quality research, even if its importance depends on the type of study. In surveying and surveillance or monitoring studies related to illness, crowd workers’ characteristics need to be precisely described to better interpret the study findings and to judge their external validity. In data processing and problem-solving studies, crowd workers’ characteristics also need to be reported to allow reproducibility and to select more quickly and easily the best population of crowd workers for a future similar study. In our review, the lack of detail on crowd workers’ characteristics in one-third of the included studies impedes the interpretation of their results. Rather than being a virtually infinite subject pool, crowd workers are far less diverse than was previously thought. As we found, although crowd workers could be recruited from all over the world, 61% were actually recruited nationally, mainly in the United States (89%). Previous work found that crowd workers were mainly young, urban, and single and more often had postsecondary education [6]. In our review, the median age of crowd workers was 34 years, 55% were men, and half reported a high level of education. Therefore, the logistics of crowdsourcing and crowd workers’ characteristics must be reported, and standardized guidelines on the crowdsourcing metrics that need to be collected and reported could be useful to improve the quality of such studies.

    Conclusions

    Crowdsourcing appears to be a trendy, efficient, competitive, and useful tool to improve health actions, whether in preventive medicine, research, or care. Its use in health is increasing, particularly in public health, psychiatry, surgery, and oncology. Crowdsourcing provides access to a large pool of participants, reduces the time needed to collect data, lowers costs, and speeds up innovation. Each health field could benefit from tasks that could be crowdsourced to facilitate advances in research. To optimize the use of crowdsourcing in health, the logistics of crowdsourcing and crowd workers’ characteristics must be reported.

    Acknowledgments

    The authors thank Ludovic Trinquart for helping with the initial protocol conception. The authors also thank Laura Smales (BioMedEditing, Toronto, Canada) for language revision of the manuscript and Elise Diard for the layout of the figures. This study was supported by a grant from the French National Cancer Institute (Institut National du Cancer, INCa; no. 2016-020/058/AB-KA). The funding source had no role in the design of this study, its execution, the analyses, the interpretation of the data, or the decision to submit results.

    Authors' Contributions

    PC was involved in the study conception, selection of trials, data extraction, data analysis, interpretation of results, and drafting the manuscript and revision. GM was involved in the study conception, the selection of trials, and data extraction. MB was involved in the interpretation of results and drafting the manuscript. AV was involved in the study conception, data analysis, interpretation of results, and drafting the manuscript and revision. PR was involved in the study conception, interpretation of results, and drafting the manuscript and revision. All authors read and approved the final manuscript.

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    Protocol of the systematic review.

    PDF File (Adobe PDF File), 12KB

    Multimedia Appendix 2

    Search terms for MEDLINE and EMBASE (March 30, 2016).

    PDF File (Adobe PDF File), 241KB

    Multimedia Appendix 3

    Search strategy for ClinicalTrials.gov.

    PDF File (Adobe PDF File), 17KB

    Multimedia Appendix 4

    Details of the health fields considered.

    PDF File (Adobe PDF File), 31KB

    Multimedia Appendix 5

    Flow diagram of selection of studies applying crowdsourcing in health.

    PDF File (Adobe PDF File), 157KB

    References

    1. Sobel D. Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. New York: Walker Publishing Company; 1995.
    2. Wikipedia. Crowdsourcing   URL: https://en.wikipedia.org/wiki/Crowdsourcing [accessed 2018-02-06] [WebCite Cache]
    3. Deal SB, Lendvay TS, Haque MI, Brand T, Comstock B, Warren J, et al. Crowd-sourced assessment of technical skills: an opportunity for improvement in the assessment of laparoscopic surgical skills. Am J Surg 2016 Feb;211(2):398-404. [CrossRef] [Medline]
    4. Khare R, Good BM, Leaman R, Su AI, Lu Z. Crowdsourcing in biomedicine: challenges and opportunities. Brief Bioinform 2016 Jan;17(1):23-32. [CrossRef] [Medline]
    5. Ranard BL, Ha YP, Meisel ZF, Asch DA, Hill SS, Becker LB, et al. Crowdsourcing--harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med 2014 Jan;29(1):187-203 [FREE Full text] [CrossRef] [Medline]
    6. Bohannon J. Psychology. Mechanical Turk upends social sciences. Science 2016 Jun 10;352(6291):1263-1264. [CrossRef] [Medline]
    7. Dawson R, Bynghall S. Getting Results From Crowds: The Definitive Guide to Using Crowdsourcing to Grow Your Business. San Francisco: Advanced Human Technologies; 2012.
    8. Lintott C, Schawinski K, Bamford S, Slosar A, Land K, Thomas D, et al. Galaxy Zoo 1: data release of morphological classifications for nearly 900 000 galaxies. Mon Not R Astron Soc 2011 Jan 01;410(1):166-178. [CrossRef]
    9. Marris E. Supercomputing for the birds. Nature 2010 Aug 12;466(7308):807. [CrossRef] [Medline]
    10. Swan M. Crowdsourced health research studies: an important emerging complement to clinical trials in the public health research ecosystem. J Med Internet Res 2012;14(2):e46 [FREE Full text] [CrossRef] [Medline]
    11. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol 2009 Oct;62(10):e1-34 [FREE Full text] [CrossRef] [Medline]
    12. Prpić PJ. Health Care Crowds: Collective Intelligence in Public Health Internet. Rochester, NY: Social Science Research Network; 2015.
    13. Michelucci P, Dickinson JL. Human Computation. The power of crowds. Science 2016 Jan 01;351(6268):32-33. [CrossRef] [Medline]
    14. Silberzahn R, Uhlmann EL. Crowdsourced research: many hands make tight work. Nature 2015 Oct 08;526(7572):189-191. [CrossRef] [Medline]
    15. R-project. Vienna, Austria: R Foundation for Statistical Computing R: A language and environment for statistical computing   URL: https://www.r-project.org/ [accessed 2018-04-05] [WebCite Cache]
    16. Adams SA. Using patient-reported experiences for pharmacovigilance? Stud Health Technol Inform 2013;194:63-68. [Medline]
    17. Aghdasi N, Bly R, White LW, Hannaford B, Moe K, Lendvay TS. Crowd-sourced assessment of surgical skills in cricothyrotomy procedure. J Surg Res 2015 Jun 15;196(2):302-306. [CrossRef] [Medline]
    18. Albarqouni S, Baur C, Achilles F, Belagiannis V, Demirci S, Navab N. AggNet: deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Trans Med Imaging 2016 Dec;35(5):1313-1321. [CrossRef] [Medline]
    19. Kuerbis A, Muench F. Normative feedback for adult drinkers: intervention through developing discrepancy or by validating pre-existing worries? 2015 Presented at: Personalized Feedback Interventions for Individuals with Substance-Related Problems; January 14-18, 2015; New Orleans, LA.
    20. Allam A, Schulz PJ, Nakamoto K. The impact of search engine selection and sorting criteria on vaccination beliefs and attitudes: two experiments manipulating Google output. J Med Internet Res 2014 Apr;16(4):e100 [FREE Full text] [CrossRef] [Medline]
    21. Alvare G, Gordon R. CT brush and CancerZap!: two video games for computed tomography dose minimization. Theor Biol Med Model 2015 May 12;12(1):7. [CrossRef]
    22. Alvaro N, Conway M, Doan S, Lofi C, Overington J, Collier N. Crowdsourcing Twitter annotations to identify first-hand experiences of prescription drug use. J Biomed Inform 2015 Dec;58:280-287 [FREE Full text] [CrossRef] [Medline]
    23. Andover MS. Non-suicidal self-injury disorder in a community sample of adults. Psychiatry Res 2014 Oct;219(2):305-310. [CrossRef]
    24. Arditte KA, Morabito DM, Shaw AM, Timpano KR. Interpersonal risk for suicide in social anxiety: the roles of shame and depression. Psychiatry Res 2016 May;239:139-144. [CrossRef]
    25. Armstrong AW, Cheeney S, Wu J, Harskamp CT, Schupp CW. Harnessing the power of crowds. Am J Clin Dermatol 2012;13(6):405-416. [CrossRef]
    26. Armstrong AW, Harskamp CT, Cheeney S, Schupp CW. Crowdsourcing for research data collection in rosacea. Dermatol Online J 2012 Mar 15;18(3):15. [Medline]
    27. Armstrong AW, Harskamp CT, Cheeney S, Wu J, Schupp CW. Power of crowdsourcing: novel methods of data collection in psoriasis and psoriatic arthritis. J Am Acad Dermatol 2012 Dec;67(6):1273-1281.e9. [CrossRef] [Medline]
    28. Armstrong AW, Harskamp CT, Cheeney S, Schupp CW. Crowdsourcing in eczema research: a novel method of data collection. J Drugs Dermatol 2012 Oct;11(10):1153-1155. [Medline]
    29. Armstrong AW, Wu J, Harskamp CT, Cheeney S, Schupp CW. Crowdsourcing for data collection: a pilot study comparing patient-reported experiences and clinical trial data for the treatment of seborrheic dermatitis. Skin Res Technol 2013 Feb;19(1):55-57. [CrossRef] [Medline]
    30. Bahk CY, Goshgarian M, Donahue K, Freifeld CC, Menone CM, Pierce CE, et al. Increasing patient engagement in pharmacovigilance through online community outreach and mobile reporting applications: an analysis of adverse event reporting for the Essure device in the US. Pharmaceut Med 2015;29(6):331-340 [FREE Full text] [CrossRef] [Medline]
    31. Baruch RL, Vishnevsky B, Kalman T. Split-care patients and their caregivers: how collaborative is collaborative care? J Nerv Ment Dis 2015 Jun;203(6):412-417 [FREE Full text] [CrossRef] [Medline]
    32. Bell RA, McGlone MS, Dragojevic M. Bacteria as bullies: effects of linguistic agency assignment in health message. J Health Commun 2014 Sep;19(3):340-358. [CrossRef] [Medline]
    33. Bevelander KE, Kaipainen K, Swain R, Dohle S, Bongard JC, Hines PD, et al. Crowdsourcing novel childhood predictors of adult obesity. PLoS One 2014 Feb;9(2):e87756 [FREE Full text] [CrossRef] [Medline]
    34. Bickel WK, George WA, Franck CT, Terry ME, Jarmolowicz DP, Koffarnus MN, et al. Using crowdsourcing to compare temporal, social temporal, and probability discounting among obese and non-obese individuals. Appetite 2014 Apr;75:82-89 [FREE Full text] [CrossRef] [Medline]
    35. Bickel WK, Jarmolowicz DP, Mueller ET, Franck CT, Carrin C, Gatchalian KM. Altruism in time: social temporal discounting differentiates smokers from problem drinkers. Psychopharmacology (Berl) 2012 Nov;224(1):109-120. [CrossRef] [Medline]
    36. Borgida E, Loken B, Williams AL, Vitriol J, Stepanov I, Hatsukami D. Assessing constituent levels in smokeless tobacco products: a new approach to engaging and educating the public. Nicotine Tob Res 2015 Nov;17(11):1354-1361 [FREE Full text] [CrossRef] [Medline]
    37. Bow HC, Dattilo JR, Jonas AM, Lehmann CU. A crowdsourcing model for creating preclinical medical education study tools. Acad Med 2013 Jun;88(6):766-770. [CrossRef] [Medline]
    38. Boynton MH, Richman LS. An online daily diary study of alcohol use using Amazon's Mechanical Turk. Drug Alcohol Rev 2014 Jul;33(4):456-461 [FREE Full text] [CrossRef] [Medline]
    39. Brady CJ, Villanti AC, Pearson JL, Kirchner TR, Gupta OP, Shah CP. Rapid grading of fundus photographs for diabetic retinopathy using crowdsourcing. J Med Internet Res 2014 Oct 30;16(10):e233 [FREE Full text] [CrossRef] [Medline]
    40. Brooks SC, Simmons G, Worthington H, Bobrow BJ, Morrison LJ. The PulsePoint Respond mobile device application to crowdsource basic life support for patients with out-of-hospital cardiac arrest: challenges for optimal implementation. Resuscitation 2016 Jan;98:20-26. [CrossRef] [Medline]
    41. Brown AW, Allison DB. Using crowdsourcing to evaluate published scientific literature: methods and example. PLoS One 2014;9(7):e100647 [FREE Full text] [CrossRef] [Medline]
    42. Burger JD, Doughty E, Khare R, Wei CH, Mishra R, Aberdeen J, et al. Hybrid curation of gene-mutation relations combining automated extraction and crowdsourcing. Database (Oxford) 2014;2014:- [FREE Full text] [CrossRef] [Medline]
    43. Cabrera LY, Fitz NS, Reiner PB. Reasons for comfort and discomfort with pharmacological enhancement of cognitive, affective, and social domains. Neuroethics 2014 Oct 10;8(2):93-106. [CrossRef]
    44. Campisi J, Folan D, Diehl G, Kable T, Rademeyer C. Social media users have different experiences, motivations, and quality of life. Psychiatry Res 2015 Aug 30;228(3):774-780. [CrossRef] [Medline]
    45. Candido Dos Reis FJ, Lynn S, Ali HR, Eccles D, Hanby A, Provenzano E, et al. Crowdsourcing the general public for large scale molecular pathology studies in cancer. EBioMedicine 2015 Jul;2(7):681-689 [FREE Full text] [CrossRef] [Medline]
    46. Carter RR, DiFeo A, Bogie K, Zhang GQ, Sun J. Crowdsourcing awareness: exploration of the ovarian cancer knowledge gap through Amazon Mechanical Turk. PLoS One 2014 Jan;9(1):e85508 [FREE Full text] [CrossRef] [Medline]
    47. Chaitoff A, Niforatos J, Vega J. Exploring the effects of medical trainee naming: a randomized experiment. Perspect Med Educ 2016 Apr;5(2):114-121 [FREE Full text] [CrossRef] [Medline]
    48. Chan TM, Thoma B, Lin M. Creating, curating, and sharing online faculty development resources: the medical education in cases series experience. Acad Med 2015 Jun;90(6):785-789. [CrossRef] [Medline]
    49. Chen C, White L, Kowalewski T, Aggarwal R, Lintott C, Comstock B, et al. Crowd-sourced assessment of technical skills: a novel method to evaluate surgical performance. J Surg Res 2014 Mar;187(1):65-71. [CrossRef] [Medline]
    50. Chenette P, Martinez C. Synchronization of women's cycles: a big data and crowdsourcing approach to menstrual cycle analysis. Fertil Steril 2014 Sep;102(3):e250. [CrossRef]
    51. Chen SP, Kirsch S, Zlatev DV, Chang T, Comstock B, Lendvay TS, et al. Optical biopsy of bladder cancer using crowd-sourced assessment. J Am Med Assoc Surg 2016 Jan;151(1):90-93. [CrossRef] [Medline]
    52. Cheung SY, Delfabbro P. Who is a cancer survivor? A study into the lay understanding of cancer identity using a crowdsourced population. Ann Oncol 2015 Dec 19;26(suppl 9):ix122-ix123. [CrossRef]
    53. Aaron HJ, Deepti A, Bill C, Eyler AA, Pless RB. 2013. Emerging Technologies: Webcams and Crowd-Sourcing to Identify Active Transportation   URL: https://openscholarship.wustl.edu/brown_facpubs/3/ [accessed 2018-04-05] [WebCite Cache]
    54. Chunara R, Chhaya V, Bane S, Mekaru SR, Chan EH, Freifeld CC, et al. Online reporting for malaria surveillance using micro-monetary incentives, in urban India 2010-2011. Malar J 2012 Feb 13;11:43 [FREE Full text] [CrossRef] [Medline]
    55. Coley HL, Sadasivam RS, Williams JH, Volkman JE, Schoenberger YM, Kohler CL, National Dental PBRN and QUITPRIMO Collaborative Group. Crowdsourced peer- versus expert-written smoking-cessation messages. Am J Prev Med 2013 Nov;45(5):543-550 [FREE Full text] [CrossRef] [Medline]
    56. Collier N, Son NT, Nguyen NM. OMG U got flu? Analysis of shared health messages for bio-surveillance. J Biomed Semantics 2011;2 Suppl 5:S9 [FREE Full text] [CrossRef] [Medline]
    57. Corpas M, Valdivia-Granda W, Torres N, Greshake B, Coletta A, Knaus A, et al. Crowdsourced direct-to-consumer genomic analysis of a family quartet. BMC Genomics 2015 Nov 07;16:910 [FREE Full text] [CrossRef] [Medline]
    58. Corrigan PW, Bink AB, Fokuo JK, Schmidt A. The public stigma of mental illness means a difference between you and me. Psychiatry Res 2015 Mar 30;226(1):186-191. [CrossRef] [Medline]
    59. Crangle CE, Kart JB. A questions-based investigation of consumer mental-health information. PeerJ 2015;3:e867 [FREE Full text] [CrossRef] [Medline]
    60. McGregor JA. 2014. Crowdsourced Analysis of GBS Perinatal Disease as a Sexually Transmissible Infection (STI) Underscores Need for GBS Vaccine and Patient Education regarding GBS as an STI to Be Able to Make Well-Informed Sexual Practice Choices   URL: https://cdc.confex.com/cdc/std2014/webprogram/Paper34267.html [accessed 2018-04-05] [WebCite Cache]
    61. See C. Crowdsourcing in medical education: a visual example using anatomical henna. FASEB J 2014;28(supplement 1): [FREE Full text]
    62. Li L, Adam P, Townsend A, Koehn C, Memetovic J, Esdaile J. Crowdsourcing Priority Setting: A Survey of Canadians Priorities and Views about Using Digital Media in Arthritis Prevention and Treatment. 2015 Jul Presented at: J Rheumatol Conference: 70th Annual Meeting of the Canadian-Rheumatology-Association (CRA); 2015; Quebec, Canada.
    63. Sreih A, Aldaghlawi F. 2013. Crowdsourcing Using The Audience Response System To Solve Medical Problems: A Pilot Study   URL: http:/​/acrabstracts.​org/​abstract/​crowdsourcing-using-the-audience-response-system-to-solve-medical-problems-a-pilot-study/​ [accessed 2018-04-05] [WebCite Cache]
    64. Crump MJ, McDonnell JV, Gureckis TM. Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLoS One 2013 Mar;8(3):e57410 [FREE Full text] [CrossRef] [Medline]
    65. Cui L, Carter R, Zhang GQ. Evaluation of a novel conjunctive exploratory navigation Interface for consumer health information: a crowdsourced comparative study. J Med Internet Res 2014 Feb;16(2):e45 [FREE Full text] [CrossRef] [Medline]
    66. Dakun T, Rui Z, Jinbo S, Wei Q. Sleep spindle detection using deep learning: a validation study based on crowdsourcing. Conf Proc IEEE Eng Med Biol Soc 2015 Aug;2015:2828-2831. [CrossRef] [Medline]
    67. Dart RC, Surratt HL, Le Lait M, Stivers Y, Bebarta VS, Freifeld CC, et al. Diversion and illicit sale of extended release tapentadol in the United States. Pain Med 2016 Aug;17(8):1490-1496 [FREE Full text] [CrossRef] [Medline]
    68. Dasgupta N, Freifeld C, Brownstein JS, Menone CM, Surratt HL, Poppish L, et al. Crowdsourcing black market prices for prescription opioids. J Med Internet Res 2013 Aug 16;15(8):e178 [FREE Full text] [CrossRef] [Medline]
    69. David SV, Hayden BY. Neurotree: a collaborative, graphical database of the academic genealogy of neuroscience. PLoS One 2012 Oct;7(10):e46608 [FREE Full text] [CrossRef] [Medline]
    70. De Weger LA, Hiemstra PS, Op den Buysch E, van Vliet AJ. Spatiotemporal monitoring of allergic rhinitis symptoms in The Netherlands using citizen science. Allergy 2014 Aug;69(8):1085-1091 [FREE Full text] [CrossRef] [Medline]
    71. Donaldson CD, Siegel JT, Crano WD. Nonmedical use of prescription stimulants in college students: attitudes, intentions, and vested interest. Addict Behav 2016 Feb;53:101-107. [CrossRef] [Medline]
    72. Douzgou S, Pollalis YA, Vozikis A, Patrinos GP, Clayton-Smith J. Collaborative crowdsourcing for the diagnosis of rare genetic syndromes: the DYSCERNE experience. Public Health Genomics 2016 Oct;19(1):19-24. [CrossRef] [Medline]
    73. Dunford E, Trevena H, Goodsell C, Ng KH, Webster J, Millis A, et al. FoodSwitch: a mobile phone app to enable consumers to make healthier food choices and crowdsourcing of National Food Composition Data. JMIR Mhealth Uhealth 2014 Aug 21;2(3):e37 [FREE Full text] [CrossRef] [Medline]
    74. Dye T, Li D, Demment M, Groth S, Fernandez D, Dozier A, et al. Sociocultural variation in attitudes toward use of genetic information and participation in genetic research by race in the United States: implications for precision medicine. J Am Med Inform Assoc 2016 Jul;23(4):782-786 [FREE Full text] [CrossRef] [Medline]
    75. Fischetti C, Obidegwu A, Love SM. Crowdsourcing the collateral damage from breast cancer treatment. Ann Epidemiol 2014 Sep;24(9):683. [CrossRef]
    76. Gesser-Edelsburg A, Shir-Raz Y, Hayek S, Sassoni-Bar Lev O. What does the public know about Ebola? The public's risk perceptions regarding the current Ebola outbreak in an as-yet unaffected country. Am J Infect Control 2015 Jul 01;43(7):669-675. [CrossRef] [Medline]
    77. Ghani KR, Miller DC, Linsell S, Brachulis A, Lane B, Sarle R, Michigan Urological Surgery Improvement Collaborative. Measuring to improve: peer and crowd-sourced assessments of technical skill with robot-assisted radical prostatectomy. Eur Urol 2016 Apr;69(4):547-550. [CrossRef] [Medline]
    78. Gipson J, Kahane G, Savulescu J. Attitudes of lay people to withdrawal of treatment in brain damaged patients. Neuroethics 2014 Jan;7:1-9 [FREE Full text] [CrossRef] [Medline]
    79. Golding S, Nadorff MR, Winer ES, Ward KC. Unpacking sleep and suicide in older adults in a combined online sample. J Clin Sleep Med 2015 Dec 15;11(12):1385-1392 [FREE Full text] [CrossRef] [Medline]
    80. Gonzales L, Davidoff KC, DeLuca JS, Yanos PT. The mental illness microaggressions scale-perpetrator version (MIMS-P): reliability and validity. Psychiatry Res 2015 Sep 30;229(1-2):120-125. [CrossRef] [Medline]
    81. Good BM, Loguercio S, Griffith OL, Nanis M, Wu C, Su AI. The cure: design and evaluation of a crowdsourcing game for gene selection for breast cancer survival prediction. JMIR Serious Games 2014 Jul 29;2(2):e7 [FREE Full text] [CrossRef] [Medline]
    82. Good BM, Nanis M, Wu C, Su AI. Microtask crowdsourcing for disease mention annotation in PubMed abstracts. Pac Symp Biocomput 2015:282-293 [FREE Full text] [Medline]
    83. Gore WL, Widiger TA. Assessment of dependency by the FFDI: comparisons to the PID-5 and maladaptive agreeableness. Personal Ment Health 2015 Nov;9(4):258-276. [CrossRef] [Medline]
    84. Gornick MC, Ryan KA, Kim SY. Impact of non-welfare interests on willingness to donate to biobanks: an experimental survey. J Empir Res Hum Res Ethics 2014 Oct;9(4):22-33 [FREE Full text] [CrossRef] [Medline]
    85. Gottlieb A, Hoehndorf R, Dumontier M, Altman RB. Ranking adverse drug reactions with crowdsourcing. J Med Internet Res 2015 Mar 23;17(3):e80 [FREE Full text] [CrossRef] [Medline]
    86. Grasso KL, Bell RA. Understanding health information seeking: a test of the risk perception attitude framework. J Health Commun 2015 Jul;20(12):1406-1414. [CrossRef] [Medline]
    87. Greshake B, Bayer PE, Rausch H, Reda J. openSNP--a crowdsourced web resource for personal genomics. PLoS One 2014 Mar;9(3):e89204 [FREE Full text] [CrossRef] [Medline]
    88. Guinney J, Wang T, Bare C, Norman T, Bot B, Shen L, et al. Abstract B204: a DREAM Challenge to improve prognostic models in patients with metastatic castrate-resistant prostate cancer. Mol Cancer Ther 2016 Jan 07;14(12 Supplement 2):B204-B204. [CrossRef]
    89. Hand J, Heil SH, Sigmon SC, Higgins ST. Cigarette smoking and other behavioral risk factors related to unintended pregnancy. Drug Alcohol Depend 2015 Jan;146:e134. [CrossRef]
    90. Harber P, Leroy G. Assessing work-asthma interaction with Amazon Mechanical Turk. J Occup Environ Med 2015 Apr;57(4):381-385. [CrossRef] [Medline]
    91. Harris JK, Mart A, Moreland-Russell S, Caburnay CA. Diabetes topics associated with engagement on Twitter. Prev Chronic Dis 2015 May 07;12:E62 [FREE Full text] [CrossRef] [Medline]
    92. Henshaw EJ. Too sick, not sick enough? Effects of treatment type and timing on depression stigma. J Nerv Ment Dis 2014 Apr;202(4):292-299. [CrossRef] [Medline]
    93. Hildebrand M, Ahumada C, Watson S. CrowdOutAIDS: crowdsourcing youth perspectives for action. Reprod Health Matters 2013 May 14;21(41):57-68. [CrossRef] [Medline]
    94. Holst D, Kowalewski TM, White LW, Brand TC, Harper JD, Sorensen MD, et al. Crowd-sourced assessment of technical skills: differentiating animate surgical skill through the wisdom of crowds. J Endourol 2015 Oct;29(10):1183-1188. [CrossRef] [Medline]
    95. Holst D, Kowalewski TM, White LW, Brand TC, Harper JD, Sorenson MD, et al. Crowd-sourced assessment of technical skills: an adjunct to urology resident surgical simulation training. J Endourol 2015 May;29(5):604-609. [CrossRef] [Medline]
    96. Hougen HY, Lobo JM, Corey T, Jones R, Rheuban K, Schenkman NS, et al. Optimizing and validating the technical infrastructure of a novel tele-cystoscopy system. J Telemed Telecare 2016 Oct;22(7):397-404. [CrossRef] [Medline]
    97. Hsi RS, Hotaling JM, Hartzler AL, Holt SK, Walsh TJ. Validity and reliability of a smartphone application for the assessment of penile deformity in Peyronie's disease. J Sex Med 2013 Jul;10(7):1867-1873. [CrossRef] [Medline]
    98. Hughes ML, Lowe DA, Shine HE, Carpenter BD, Balsis S. Using the Alzheimer's Association web site to improve knowledge of Alzheimer's disease in health care providers. Am J Alzheimers Dis Other Demen 2015 Feb;30(1):98-100. [CrossRef] [Medline]
    99. Ilakkuvan V, Tacelosky M, Ivey KC, Pearson JL, Cantrell J, Vallone DM, et al. Cameras for public health surveillance: a methods protocol for crowdsourced annotation of point-of-sale photographs. JMIR Res Protoc 2014 Apr 09;3(2):e22 [FREE Full text] [CrossRef] [Medline]
    100. Irshad H, Montaser-Kouhsari L, Waltz G, Bucur O, Nowak JA, Dong F, et al. Crowdsourcing image annotation for nucleus detection and segmentation in computational pathology: evaluating experts, automated methods, and the crowd. Pac Symp Biocomput 2015:294-305 [FREE Full text] [Medline]
    101. Kaczynski AT, Wilhelm Stanis SA, Hipp JA. Point-of-decision prompts for increasing park-based physical activity: a crowdsource analysis. Prev Med 2014 Dec;69:87-89 [FREE Full text] [CrossRef] [Medline]
    102. Dainty KN, Brooks SC. A North American survey of public opinion on the acceptability of crowdsourcing basic life support for out-of-hospital cardiac arrest. Circulation 2015;132:A19542.
    103. Kaufman EA, Cundiff JM, Crowell SE. The development, factor structure, and validation of the Self-concept and Identity Measure (SCIM): a self-report assessment of clinical identity disturbance. J Psychopathol Behav Assess 2014 Jun 28;37(1):122-133. [CrossRef]
    104. Khare R, Burger JD, Aberdeen JS, Tresner-Kirsch DW, Corrales TJ, Hirchman L, et al. Scaling drug indication curation through crowdsourcing. Database (Oxford) 2015 Mar;2015:- [FREE Full text] [CrossRef] [Medline]
    105. Kiefner-Burmeister AE, Hoffmann DA, Meers MR, Koball AM, Musher-Eizenman DR. Food consumption by young children: a function of parental feeding goals and practices. Appetite 2014 Mar;74:6-11. [CrossRef] [Medline]
    106. Kiefner-Burmeister A, Hoffmann D, Zbur S, Musher-Eizenman D. Implementation of parental feeding practices: does parenting style matter? Public Health Nutr 2016 Sep;19(13):2410-2414. [CrossRef] [Medline]
    107. Kim M, Jung Y, Jung D, Hur C. Investigating the congruence of crowdsourced information with official government data: the case of pediatric clinics. J Med Internet Res 2014 Feb;16(2):e29 [FREE Full text] [CrossRef] [Medline]
    108. McKenna MT, Wang S, Nguyen TB, Burns JE, Petrick N, Summers RM. Strategies for improved interpretation of computer-aided detections for CT colonography utilizing distributed human intelligence. Med Image Anal 2012 Aug;16(6):1280-1292 [FREE Full text] [CrossRef] [Medline]
    109. Kowalewski TM, Comstock B, Sweet R, Schaffhausen C, Menhadji A, Averch T, et al. Crowd-sourced assessment of technical skills for validation of basic laparoscopic urologic skills tasks. J Urol 2016 Jun;195(6):1859-1865. [CrossRef] [Medline]
    110. Kowalewski T, Sweet R, Menhadji A, Averch T, Box G, Brand T, et al. High-volume assessment of surgical videos via crowd-sourcing: the Basic Laparoscopic Urologic Skills (BLUS) Initiative. J Urol 2015 Apr;193(4):e393. [CrossRef]
    111. Krauss MJ, Sowles SJ, Moreno M, Zewdie K, Grucza RA, Bierut LJ, et al. Hookah-related Twitter chatter: a content analysis. Prev Chronic Dis 2015 Jul 30;12:E121 [FREE Full text] [CrossRef] [Medline]
    112. Krieke LV, Jeronimus BF, Blaauw FJ, Wanders RB, Emerencia AC, Schenk HM, et al. HowNutsAreTheDutch (HoeGekIsNL): A crowdsourcing study of mental symptoms and strengths. Int J Methods Psychiatr Res 2016 Dec;25(2):123-144. [CrossRef] [Medline]
    113. Kristan J, Suffoletto B. Using online crowdsourcing to understand young adult attitudes toward expert-authored messages aimed at reducing hazardous alcohol consumption and to collect peer-authored messages. Transl Behav Med 2015 Mar;5(1):45-52 [FREE Full text] [CrossRef] [Medline]
    114. Kuang J, Argo L, Stoddard G, Bray BE, Zeng-Treitler Q. Assessing pictograph recognition: a comparison of crowdsourcing and traditional survey approaches. J Med Internet Res 2015 Dec;17(12):e281 [FREE Full text] [CrossRef] [Medline]
    115. Kudesia R, Chernyak E, McAvey B. Creation & validation of the fertility and infertility treatment knowledge survey (FIT-KS). Fertil Steril 2015 Sep;104(3):e118. [CrossRef]
    116. Küffner R, Zach N, Norel R, Hawe J, Schoenfeld D, Wang L, et al. Crowdsourced analysis of clinical trial data to predict amyotrophic lateral sclerosis progression. Nat Biotechnol 2015 Jan;33(1):51-57. [CrossRef] [Medline]
    117. Kwitt R, Hegenbart S, Rasiwasia N, Vécsei A, Uhl A. Do we need annotation experts? A case study in celiac disease classification. Med Image Comput Comput Assist Interv 2014;17(Pt 2):454-461. [Medline]
    118. Lee YO, Jordan JW, Djakaria M, Ling PM. Using peer crowds to segment black youth for smoking intervention. Health Promot Pract 2014 Jul;15(4):530-537 [FREE Full text] [CrossRef] [Medline]
    119. Leifman G, Swedish T, Roesch K, Raskar R. Leveraging the crowd for annotation of retinal images. Conf Proc IEEE Eng Med Biol Soc 2015;2015:7736-7739. [CrossRef] [Medline]
    120. Leiter A, Sablinski T, Diefenbach M, Foster M, Greenberg A, Holland J, et al. Use of crowdsourcing for cancer clinical trial development. J Natl Cancer Inst 2014 Oct;106(10):-. [CrossRef] [Medline]
    121. Levick N, Dore A, Ralphs S, Arkus B, Nemirovsky D, Deville S, et al. iRescU. 2013. The Use of Social Media and Innovative Technology to Create a Global AED Geolocation Database: The iRescU project   URL: https://www.irescu.info/HRS2013finalprint.pdf [accessed 2018-04-05] [WebCite Cache]
    122. Li S, Feng B, Chen M, Bell RA. Physician review websites: effects of the proportion and position of negative reviews on readers' willingness to choose the doctor. J Health Commun 2015 Apr;20(4):453-461. [CrossRef] [Medline]
    123. Lloyd JC, Yen T, Pietrobon R, Wiener JS, Ross SS, Kokorowski PJ, et al. Estimating utility values for vesicoureteral reflux in the general public using an online tool. J Pediatr Urol 2014 Dec;10(6):1026-1031 [FREE Full text] [CrossRef] [Medline]
    124. Luengo-Oroz MA, Arranz A, Frean J. Crowdsourcing malaria parasite quantification: an online game for analyzing images of infected thick blood smears. J Med Internet Res 2012 Nov 29;14(6):e167 [FREE Full text] [CrossRef] [Medline]
    125. MacLean DL, Heer J. Identifying medical terms in patient-authored text: a crowdsourcing-based approach. J Am Med Inform Assoc 2013;20(6):1120-1127 [FREE Full text] [CrossRef] [Medline]
    126. Magid KH, Matlock DD, Thompson JS, McIlvennan CK, Allen LA. The influence of expected risks on decision making for destination therapy left ventricular assist device: An MTurk survey. J Heart Lung Transplant 2015 Jul;34(7):988-990. [CrossRef] [Medline]
    127. Maier-Hein L, Mersmann S, Kondermann D, Bodenstedt S, Sanchez A, Stock C, et al. Can masses of non-experts train highly accurate image classifiers? A crowdsourcing approach to instrument segmentation in laparoscopic images. Med Image Comput Comput Assist Interv 2014;17(Pt 2):438-445. [Medline]
    128. Maier-Hein L, Mersmann S, Kondermann D, Stock C, Kenngott H, Sanchez A, et al. Crowdsourcing for reference correspondence generation in endoscopic images. Med Image Comput Comput Assist Interv 2014;17(Pt 2):349-356. [Medline]
    129. Malpani A, Vedula SS, Chen CC, Hager GD. A study of crowdsourced segment-level surgical skill assessment using pairwise rankings. Int J Comput Assist Radiol Surg 2015 Sep;10(9):1435-1447. [CrossRef] [Medline]
    130. Manuvinakurike R, Velicer WF, Bickmore TW. Automated indexing of Internet stories for health behavior change: weight loss attitude pilot study. J Med Internet Res 2014 Dec 09;16(12):e285 [FREE Full text] [CrossRef] [Medline]
    131. Margolin AA, Bilal E, Huang E, Norman TC, Ottestad L, Mecham BH, et al. Systematic analysis of challenge-driven improvements in molecular prognostic models for breast cancer. Sci Transl Med 2013 Apr 17;5(181):181re1 [FREE Full text] [CrossRef] [Medline]
    132. Mathieu E, Barratt A, Carter SM, Jamtvedt G. Internet trials: participant experiences and perspectives. BMC Med Res Methodol 2012 Oct 23;12:162 [FREE Full text] [CrossRef] [Medline]
    133. Mavandadi S, Dimitrov S, Feng S, Yu F, Sikora U, Yaglidere O, et al. Distributed medical image analysis and diagnosis through crowd-sourced games: a malaria case study. PLoS One 2012 May;7(5):e37245 [FREE Full text] [CrossRef] [Medline]
    134. Mavandadi S, Dimitrov S, Feng S, Yu F, Yu R, Sikora U, et al. Crowd-sourced BioGames: managing the big data problem for next-generation lab-on-a-chip platforms. Lab Chip 2012 Oct 21;12(20):4102-4106 [FREE Full text] [CrossRef] [Medline]
    135. Mavandadi S, Feng S, Yu F, Dimitrov S, Yu R, Ozcan A. BioGames: a platform for crowd-sourced biomedical image analysis and telediagnosis. Games Health J 2012 Oct 01;1(5):373-376 [FREE Full text] [CrossRef] [Medline]
    136. McCoy AB, Wright A, Laxmisan A, Ottosen MJ, McCoy JA, Butten D, et al. Development and evaluation of a crowdsourcing methodology for knowledge base construction: identifying relationships between clinical problems and medications. J Am Med Inform Assoc 2012 Sep;19(5):713-718 [FREE Full text] [CrossRef] [Medline]
    137. McGregor TJ, Batis JC. A novel method for assessing caffeine dependence. J Caffeine Res 2016 Mar;6(1):26-33. [CrossRef]
    138. Yetisgen-Yildiz M, Solti I, Xia F. Using Amazon's Mechanical Turk for annotating medical named entities. AMIA Annu Symp Proc 2010;2010:1316 [FREE Full text] [Medline]
    139. Merchant RM, Griffis HM, Ha YP, Kilaru AS, Sellers AM, Hershey JC, et al. Hidden in plain sight: a crowdsourced public art contest to make automated external defibrillators more visible. Am J Public Health 2014 Dec;104(12):2306-2312. [CrossRef] [Medline]
    140. Meyer AN, Longhurst CA, Singh H. Crowdsourcing diagnosis for patients with undiagnosed illnesses: an evaluation of CrowdMed. J Med Internet Res 2016 Jan 14;18(1):e12 [FREE Full text] [CrossRef] [Medline]
    141. Michl GL, Katz JN, Losina E. Risk and risk perception of knee osteoarthritis in the US: a population-based study. Osteoarthritis Cartilage 2016 Apr;24(4):593-596 [FREE Full text] [CrossRef] [Medline]
    142. Miller-Graff LE, Scrafford K, Rice C. Conditional and indirect effects of age of first exposure on PTSD symptoms. Child Abuse Negl 2016 Jan;51:303-312. [CrossRef] [Medline]
    143. Miller JD, Gentile B, Wilson L, Campbell WK. Grandiose and vulnerable narcissism and the DSM-5 pathological personality trait model. J Pers Assess 2013 May;95(3):284-290. [CrossRef] [Medline]
    144. Mitry D, Peto T, Hayat S, Blows P, Morgan J, Khaw KT, et al. Crowdsourcing as a screening tool to detect clinical features of glaucomatous optic neuropathy from digital photography. PLoS One 2015;10(2):e0117401 [FREE Full text] [CrossRef] [Medline]
    145. Mitry D, Peto T, Hayat S, Morgan JE, Khaw K, Foster PJ. Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK Biobank Eye and Vision Consortium. PLoS One 2013;8(8):e71154 [FREE Full text] [CrossRef] [Medline]
    146. Mollá D, Santiago-Martínez ME. Creation of a corpus for evidence based medicine summarisation. Australas Med J 2012;5(9):503-506 [FREE Full text] [CrossRef] [Medline]
    147. Ma P, Businelle MS, Balis DS, Kendzor DE. The influence of perceived neighborhood disorder on smoking cessation among urban safety net hospital patients. Drug Alcohol Depend 2015 Nov 01;156:157-161. [CrossRef] [Medline]
    148. Morris RR, Schueller SM, Picard RW. Efficacy of a Web-based, crowdsourced peer-to-peer cognitive reappraisal platform for depression: randomized controlled trial. J Med Internet Res 2015;17(3):e72 [FREE Full text] [CrossRef] [Medline]
    149. Morse P, Sweeny K, Legg AM. A situational construal approach to healthcare experiences. Soc Sci Med 2015 Aug;138:170-178. [CrossRef] [Medline]
    150. Mortensen JM, Minty EP, Januszyk M, Sweeney TE, Rector AL, Noy NF, et al. Using the wisdom of the crowds to find critical errors in biomedical ontologies: a study of SNOMED CT. J Am Med Inform Assoc 2015 May;22(3):640-648 [FREE Full text] [CrossRef] [Medline]
    151. Mortensen JM, Musen MA, Noy NF. Crowdsourcing the verification of relationships in biomedical ontologies. AMIA Annu Symp Proc 2013;2013:1020-1029 [FREE Full text] [Medline]
    152. Muench F, Hayes M, Kuerbis A, Shao S. The independent relationship between trouble controlling Facebook use, time spent on the site and distress. J Behav Addict 2015 Sep;4(3):163-169 [FREE Full text] [CrossRef] [Medline]
    153. Nadorff MR, Nadorff DK, Germain A. Nightmares: under-reported, undetected, and therefore untreated. J Clin Sleep Med 2015 Jul 15;11(7):747-750 [FREE Full text] [CrossRef] [Medline]
    154. Nguyen TB, Wang S, Anugu V, Rose N, McKenna M, Petrick N, et al. Distributed human intelligence for colonic polyp classification in computer-aided detection for CT colonography. Radiology 2012 Mar;262(3):824-833 [FREE Full text] [CrossRef] [Medline]
    155. Norr AM, Albanese BJ, Oglesby ME, Allan NP, Schmidt NB. Anxiety sensitivity and intolerance of uncertainty as potential risk factors for cyberchondria. J Affect Disord 2015 Mar 15;174:64-69. [CrossRef] [Medline]
    156. Norr AM, Allan NP, Boffa JW, Raines AM, Schmidt NB. Validation of the Cyberchondria Severity Scale (CSS): replication and extension with bifactor modeling. J Anxiety Disord 2015 Apr;31:58-64. [CrossRef] [Medline]
    157. Norr AM, Oglesby ME, Raines AM, Macatee RJ, Allan NP, Schmidt NB. Relationships between cyberchondria and obsessive-compulsive symptom dimensions. Psychiatry Res 2015 Dec 15;230(2):441-446. [CrossRef] [Medline]
    158. Peabody J, Miller D, Lane B, Sarle R, Brachulis A, Linsell S, et al. Wisdom of the crowds: use of crowdsourcing to assess surgical skill of robot-assisted radical prostatectomy in a statewide surgical collaborative. J Urol 2015 Apr;193(4):e655-e656. [CrossRef]
    159. Folkestad L, Brodersen JB, Hallas P, Brabrand M. [Laypersons can seek help from their Facebook friends regarding medical diagnosis]. Ugeskr Laeger 2011 Dec 05;173(49):3174-3177. [Medline]
    160. Pharoah PD. Cell Slider: Using crowd sourcing for the scoring of molecular pathology. In: Proceedings of the 105th Annual Meeting of the American Association for Cancer Research. 2014 Sep 30 Presented at: 105th Annual Meeting of the American Association for Cancer Research; April 5-9, 2014; San Diego, CA. [CrossRef]
    161. Platt J, Kardia S. Public trust in health information sharing: implications for biobanking and electronic health record systems. J Pers Med 2015 Feb 03;5(1):3-21 [FREE Full text] [CrossRef] [Medline]
    162. Plenge RM, Greenberg JD, Mangravite LM, Derry JM, Stahl EA, Coenen MJ, International Rheumatoid Arthritis Consortium (INTERACT). Crowdsourcing genetic prediction of clinical utility in the Rheumatoid Arthritis Responder Challenge. Nat Genet 2013 Apr 26;45(5):468-469. [CrossRef] [Medline]
    163. Pollert GA, Kauffman AA, Veilleux JC. Symptoms of psychopathology within groups of eating-disordered, restrained eating, and unrestrained eating individuals. J Clin Psychol 2016 Jun;72(6):621-632. [CrossRef] [Medline]
    164. Powers MK, Boonjindasup A, Pinsky M, Dorsey P, Maddox M, Su LM, et al. Crowdsourcing assessment of surgeon dissection of renal artery and vein during robotic partial nephrectomy: a novel approach for quantitative assessment of surgical performance. J Endourol 2016 Apr;30(4):447-452. [CrossRef] [Medline]
    165. Price M, van Stolk-Cooke K. Examination of the interrelations between the factors of PTSD, major depression, and generalized anxiety disorder in a heterogeneous trauma-exposed sample using DSM 5 criteria. J Affect Disord 2015 Nov 01;186:149-155. [CrossRef] [Medline]
    166. Pugh J, Kahane G, Maslen H, Savulescu J. Lay attitudes toward deception in medicine: Theoretical considerations and empirical evidence. AJOB Empir Bioeth 2016 Jan 02;7(1):31-38 [FREE Full text] [CrossRef] [Medline]
    167. Quisenberry A, Franck C, Koffarnus MN, Bickel WK. Delay discounting in current, ex-, and non-smokers: interactions with gender. Drug Alcohol Depend 2015 Jan;146:e73-e74. [CrossRef]
    168. Merchant RM, Asch DA, Hershey JC, Griffis H, Hill S, Saynisch O, et al. A crowdsourcing, mobile media, challenge to locate automated external defibrillators. Circulation 2012;126:A57 [FREE Full text]
    169. Raines AM, Allan NP, Oglesby ME, Short NA, Schmidt NB. Examination of the relations between obsessive-compulsive symptom dimensions and fear and distress disorder symptoms. J Affect Disord 2015 Sep 01;183:253-257. [CrossRef] [Medline]
    170. Raines AM, Short NA, Sutton CA, Oglesby ME, Allan NP, Schmidt NB. Obsessive-compulsive symptom dimensions and insomnia: the mediating role of anxiety sensitivity cognitive concerns. Psychiatry Res 2015 Aug 30;228(3):368-372. [CrossRef] [Medline]
    171. Rass O, Pacek LR, Johnson PS, Johnson MW. Characterizing use patterns and perceptions of relative harm in dual users of electronic and tobacco cigarettes. Exp Clin Psychopharmacol 2015 Dec;23(6):494-503 [FREE Full text] [CrossRef] [Medline]
    172. Behrend TS, Sharek DJ, Meade AW, Wiebe EN. The viability of crowdsourcing for survey research. Behav Res Methods 2011 Sep;43(3):800-813. [CrossRef] [Medline]
    173. Reese ED, Pollert GA, Veilleux JC. Self-regulatory predictors of eating disorder symptoms: understanding the contributions of action control and willpower beliefs. Eat Behav 2016 Jan;20:64-69. [CrossRef] [Medline]
    174. Reidy DE, Berke DS, Gentile B, Zeichner A. Masculine discrepancy stress, substance use, assault and injury in a survey of US men. Inj Prev 2016 Oct;22(5):370-374. [CrossRef] [Medline]
    175. Reidy DE, Brookmeyer KA, Gentile B, Berke DS, Zeichner A. Gender role discrepancy stress, high-risk sexual behavior, and sexually transmitted disease. Arch Sex Behav 2016 Feb;45(2):459-465. [CrossRef] [Medline]
    176. Robillard JM, Roskams-Edris D, Kuzeljevic B, Illes J. Prevailing public perceptions of the ethics of gene therapy. Hum Gene Ther 2014 Aug;25(8):740-746. [CrossRef] [Medline]
    177. Rodrigue JR, Fleishman A, Vishnevsky T, Whiting J, Vella JP, Garrison K, et al. Development and validation of a questionnaire to assess fear of kidney failure following living donation. Transpl Int 2014 Jun;27(6):570-575 [FREE Full text] [CrossRef] [Medline]
    178. Santiago-Rivas M, Schnur JB, Jandorf L. Sun protection belief clusters: analysis of Amazon Mechanical Turk data. J Cancer Educ 2016 Dec;31(4):673-678. [CrossRef] [Medline]
    179. Schofield CA, Dea Moore C, Hall A, Coles ME. Understanding perceptions of anxiety disorders and their treatment. J Nerv Ment Dis 2016 Feb;204(2):116-122. [CrossRef] [Medline]
    180. Schulte EM, Avena NM, Gearhardt AN. Which foods may be addictive? The roles of processing, fat content, and glycemic load. PLoS One 2015 Feb 18;10(2):e0117959. [CrossRef]
    181. Schulte EM, Tuttle HM, Gearhardt AN. Belief in food addiction and obesity-related policy support. PLoS One 2016;11(1):e0147557 [FREE Full text] [CrossRef] [Medline]
    182. Seligowski AV, Orcutt HK. Support for the 7-factor hybrid model of PTSD in a community sample. Psychol Trauma 2016 Mar;8(2):218-221. [CrossRef] [Medline]
    183. Shaer O, Nov O, Okerlund J, Balestra M, Stowell E, Ascher L, et al. Informing the design of direct-to-consumer interactive personal genomics reports. J Med Internet Res 2015 Jun;17(6):e146 [FREE Full text] [CrossRef] [Medline]
    184. Shapiro RE, Lipton RB, Reiner PB. Factors influencing stigma towards persons with migraine. J Headache Pain 2014 Sep 18;15(Suppl 1):E36. [CrossRef]
    185. Siegel JT, Navarro MA, Thomson AL. The impact of overtly listing eligibility requirements on MTurk: An investigation involving organ donation, recruitment scripts, and feelings of elevation. Soc Sci Med 2015 Oct;142:256-260. [CrossRef] [Medline]
    186. Silva I, Behar J, Sameni R, Zhu T, Oster J, Clifford GD, et al. Noninvasive fetal ECG: the PhysioNet/Computing in Cardiology Challenge 2013. Comput Cardiol 2013 Mar;40:149-152. [Medline]
    187. Sims MH, Bigham J, Kautz H, Halterman MW. Crowdsourcing medical expertise in near real time. J Hosp Med 2014 Jul;9(7):451-456. [CrossRef] [Medline]
    188. Sims MH, Fagnano M, Halterman JS, Halterman MW. Provider impressions of the use of a mobile crowdsourcing app in medical practice. Health Informatics J 2016 Jun;22(2):221-231. [CrossRef] [Medline]
    189. Smolinski MS, Crawley AW, Baltrusaitis K, Chunara R, Olsen JM, Wójcik O, et al. Flu Near You: crowdsourced symptom reporting spanning 2 influenza seasons. Am J Public Health 2015 Oct;105(10):2124-2130. [CrossRef] [Medline]
    190. Sokol Y, Eisenheim E. The relationship between continuous identity disturbances, negative mood, and suicidal ideation. Prim Care Companion CNS Disord 2016 Jan;18(1) [FREE Full text] [CrossRef] [Medline]
    191. Strickland JC, Stoops WW. Perceptions of research risk and undue influence: implications for ethics of research conducted with cocaine users. Drug Alcohol Depend 2015 Nov 01;156:304-310. [CrossRef] [Medline]
    192. Stroh G, Rosell T, Dong F, Forster J. Early liver transplantation for patients with acute alcoholic hepatitis: public views and the effects on organ donation. Am J Transplant 2015 Jun;15(6):1598-1604 [FREE Full text] [CrossRef] [Medline]
    193. Su AI, Good BM, van Wijnen AJ. Gene Wiki reviews: marrying crowdsourcing with traditional peer review. Gene 2013 Dec 01;531(2):125. [CrossRef] [Medline]
    194. Syme ML, Cohn TJ, Barnack-Tavlaris J. A comparison of actual and perceived sexual risk among older adults. J Sex Res 2017 Feb;54(2):149-160. [CrossRef] [Medline]
    195. Tang W, Han L, Best J, Zhang Y, Mollan K, Kim J, et al. Crowdsourcing HIV test promotion videos: a noninferiority randomized controlled trial in China. Clin Infect Dis 2016 Jun 01;62(11):1436-1442 [FREE Full text] [CrossRef] [Medline]
    196. Taylor S, Conelea CA, McKay D, Crowe KB, Abramowitz JS. Sensory intolerance: latent structure and psychopathologic correlates. Compr Psychiatry 2014 Jul;55(5):1279-1284 [FREE Full text] [CrossRef] [Medline]
    197. IMPROVER project team (in alphabetical order), Boue S, Fields B, Hoeng J, Park J, Peitsch MC, Challenge Best Performers (in alphabetical order), et al. Enhancement of COPD biological networks using a web-based collaboration interface. F1000Res 2015 May;4:32 [FREE Full text] [CrossRef] [Medline]
    198. Thomas KB, Lund EM, Bradley AR. Composite trauma and mental health diagnosis as predictors of lifetime nonsuicidal self-injury history in an adult online sample. J Aggress Maltreat Trauma 2015 Jul 27;24(6):623-635. [CrossRef]
    199. Turner AM, Kirchhoff K, Capurro D. Using crowdsourcing technology for testing multilingual public health promotion materials. J Med Internet Res 2012 Jun 04;14(3):e79 [FREE Full text] [CrossRef] [Medline]
    200. Turner-McGrievy GM, Helander EE, Kaipainen K, Perez-Macias JM, Korhonen I. The use of crowdsourcing for dietary self-monitoring: crowdsourced ratings of food pictures are comparable to ratings by trained observers. J Am Med Inform Assoc 2015 Apr;22(e1):e112-e119. [CrossRef] [Medline]
    201. Van de Belt TH, Engelen LJL, Berben SA, Teerenstra S, Samsom M, Schoonhoven L. Internet and social media for health-related information and communication in health care: preferences of the Dutch general population. J Med Internet Res 2013 Oct;15(10):e220 [FREE Full text] [CrossRef] [Medline]
    202. VanderBroek L, Acker J, Palmer AA, de Wit H, MacKillop J. Interrelationships among parental family history of substance misuse, delay discounting, and personal substance use. Psychopharmacology (Berl) 2016 Jan;233(1):39-48 [FREE Full text] [CrossRef] [Medline]
    203. Vashisht R, Mondal AK, Jain A, Shah A, Vishnoi P, Priyadarshini P, OSDD Consortium, et al. Crowd sourcing a new paradigm for interactome driven drug target identification in Mycobacterium tuberculosis. PLoS One 2012 Jul;7(7):e39808 [FREE Full text] [CrossRef] [Medline]
    204. Volpi D, Sarhan MH, Ghotbi R, Navab N, Mateus D, Demirci S. Online tracking of interventional devices for endovascular aortic repair. Int J Comput Assist Radiol Surg 2015 Jun;10(6):773-781. [CrossRef] [Medline]
    205. Wagholikar KB, MacLaughlin KL, Kastner TM, Casey PM, Henry M, Greenes RA, et al. Formative evaluation of the accuracy of a clinical decision support system for cervical cancer screening. J Am Med Inform Assoc 2013;20(4):749-757 [FREE Full text] [CrossRef] [Medline]
    206. Warby SC, Wendt SL, Welinder P, Munk EG, Carrillo O, Sorensen HB, et al. Sleep-spindle detection: crowdsourcing and evaluating performance of experts, non-experts and automated methods. Nat Methods 2014 Apr;11(4):385-392 [FREE Full text] [CrossRef] [Medline]
    207. Wen X, Higgins ST, Xie C, Epstein LH. Improving public acceptability of using financial incentives for smoking cessation during pregnancy: a randomized controlled experiment. Nicotine Tob Res 2016 May;18(5):913-918. [CrossRef] [Medline]
    208. Zhang MW, Ward J, Ying JJ, Pan F, Ho RC. The alcohol tracker application: an initial evaluation of user preferences. BMJ Innov 2016 Jan;2(1):8-13 [FREE Full text] [CrossRef] [Medline]
    209. White LW, Kowalewski TM, Dockter RL, Comstock B, Hannaford B, Lendvay TS. Crowd-sourced assessment of technical skill: a valid method for discriminating basic robotic surgery skills. J Endourol 2015 Nov;29(11):1295-1301. [CrossRef] [Medline]
    210. White LW, Lendvay TS, Holst D, Borbely Y, Bekele A, Wright A. Using crowd-assessment to support surgical training in the developing world. J Am Coll Surg 2014 Oct;219(4):e40. [CrossRef]
    211. Wymbs BT, Dawson AE. Screening Amazon's Mechanical Turk for adults with ADHD. J Atten Disord 2015 Aug 05. [CrossRef] [Medline]
    212. Yang S, Wang SJ, Ji Y. An integrated dose-finding tool for phase I trials in oncology. Contemp Clin Trials 2015 Nov;45(Pt B):426-434. [CrossRef] [Medline]
    213. Yeh VM, Schnur JB, Margolies L, Montgomery GH. Dense breast tissue notification: impact on women's perceived risk, anxiety, and intentions for future breast cancer screening. J Am Coll Radiol 2015 Mar;12(3):261-266 [FREE Full text] [CrossRef] [Medline]
    214. Yin Z, Fabbri D, Rosenbloom ST, Malin B. A scalable framework to detect personal health mentions on Twitter. J Med Internet Res 2015 Jun 05;17(6):e138 [FREE Full text] [CrossRef] [Medline]
    215. Yu B, Willis M, Sun P, Wang J. Crowdsourcing participatory evaluation of medical pictograms using Amazon Mechanical Turk. J Med Internet Res 2013 Jun;15(6):e108 [FREE Full text] [CrossRef] [Medline]
    216. Zhai H, Lingren T, Deleger L, Li Q, Kaiser M, Stoutenborough L, et al. Web 2.0-based crowdsourcing for high-quality gold standard development in clinical natural language processing. J Med Internet Res 2013 Apr 02;15(4):e73 [FREE Full text] [CrossRef] [Medline]
    217. Chang T, Verma BA, Shull T, Moniz MH, Kohatsu L, Plegue MA, et al. Crowdsourcing and the accuracy of online information regarding weight gain in pregnancy: a descriptive study. J Med Internet Res 2016 Apr 07;18(4):e81 [FREE Full text] [CrossRef] [Medline]
    218. Juusola JL, Quisel TR, Foschini L, Ladapo JA. The impact of an online crowdsourcing diagnostic tool on health care utilization: a case study using a novel approach to retrospective claims analysis. J Med Internet Res 2016 Dec 01;18(6):e127 [FREE Full text] [CrossRef] [Medline]
    219. Zide M, Caswell K, Peterson E, Aberle DR, Bui AA, Arnold CW. Consumers' patient portal preferences and health literacy: a survey using crowdsourcing. JMIR Res Protoc 2016 Jun 08;5(2):e104. [CrossRef]
    220. Bardos J, Friedenthal J, Spiegelman J, Williams Z. Cloud based surveys to assess patient perceptions of health care: 1000 respondents in 3 days for US $300. JMIR Res Protoc 2016 Aug 23;5(3):e166 [FREE Full text] [CrossRef] [Medline]
    221. Bock BC, Lantini R, Thind H, Walaska K, Rosen RK, Fava JL, et al. The Mobile Phone Affinity Scale: enhancement and refinement. JMIR Mhealth Uhealth 2016 Dec 15;4(4):e134 [FREE Full text] [CrossRef] [Medline]
    222. Bateman DR, Brady E, Wilkerson D, Yi EH, Karanam Y, Callahan CM. Comparing crowdsourcing and friendsourcing: a social media-based feasibility study to support Alzheimer disease caregivers. JMIR Res Protoc 2017 Apr 10;6(4):e56 [FREE Full text] [CrossRef] [Medline]
    223. Aramaki E, Shikata S, Ayaya S, Kumagaya SI. Crowdsourced identification of possible allergy-associated factors: automated hypothesis generation and validation using crowdsourcing services. JMIR Res Protoc 2017 May 16;6(5):e83 [FREE Full text] [CrossRef] [Medline]
    224. Brady CJ, Mudie LI, Wang X, Guallar E, Friedman DS. Improving consensus scoring of crowdsourced data using the Rasch model: development and refinement of a diagnostic instrument. J Med Internet Res 2017 Jun 20;19(6):e222 [FREE Full text] [CrossRef] [Medline]
    225. DePalma MT, Rizzotti MC, Branneman M. Assessing diabetes-relevant data provided by undergraduate and crowdsourced web-based survey participants for honesty and accuracy. JMIR Diabetes 2017 Jul 12;2(2):e11. [CrossRef]
    226. Bartek MA, Truitt AR, Widmer-Rodriguez S, Tuia J, Bauer ZA, Comstock BA, et al. The promise and pitfalls of using crowdsourcing in research prioritization for back pain: cross-sectional surveys. J Med Internet Res 2017 Oct 06;19(10):e341 [FREE Full text] [CrossRef] [Medline]
    227. Prieto JT, Jara JH, Alvis JP, Furlan LR, Murray CT, Garcia J, et al. Will participatory syndromic surveillance work in Latin America? Piloting a mobile approach to crowdsource influenza-like illness data in Guatemala. JMIR Public Health Surveill 2017 Nov 14;3(4):e87 [FREE Full text] [CrossRef] [Medline]
    228. Cooper S, Khatib F, Treuille A, Barbero J, Lee J, Beenen M, et al. Predicting protein structures with a multiplayer online game. Nature 2010 Aug 05;466(7307):756-760 [FREE Full text] [CrossRef] [Medline]
    229. Adrover C, Bodnar T, Huang Z, Telenti A, Salathé M. Identifying adverse effects of HIV drug treatment and associated sentiments using Twitter. JMIR Public Health Surveill 2015 Jul 27;1(2):e7. [CrossRef]
    230. PatientsLikeMe. Live better, together!   URL: https://www.patientslikeme.com/?format=html [accessed 2018-04-05] [WebCite Cache]
    231. Morton CC. Innovating openly: researchers and patients turn to crowdsourcing to collaborate on clinical trials, drug discovery, and more. IEEE Pulse 2014;5(1):63-67. [CrossRef] [Medline]
    232. Saez-Rodriguez J, Costello JC, Friend SH, Kellen MR, Mangravite L, Meyer P, et al. Crowdsourcing biomedical research: leveraging communities as innovation engines. Nat Rev Genet 2016 Dec 15;17(8):470-486. [CrossRef] [Medline]
    233. Gosling SD, Vazire S, Srivastava S, John OP. Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. Am Psychol 2004 Mar;59(2):93-104. [CrossRef] [Medline]
    234. Williamson V. 2016 Feb. Brown Center Chalkboard: Can crowdsourcing be ethical?   URL: https://www.brookings.edu/blog/brown-center-chalkboard/2016/02/08/can-crowdsourcing-be-ethical/ [accessed 2018-04-05] [WebCite Cache]


    Abbreviations

    MTurk: Amazon Mechanical Turk


    Edited by G Eysenbach; submitted 02.11.17; peer-reviewed by M Swan, L Hirschman; comments to author 17.12.17; revised version received 10.02.18; accepted 14.03.18; published 15.05.18

    ©Perrine Créquit, Ghizlène Mansouri, Mehdi Benchoufi, Alexandre Vivot, Philippe Ravaud. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.05.2018.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.