Background: Scientific research is typically performed by expert individuals or groups who investigate potential solutions in a sequential manner. Given the current worldwide exponential increase in technical innovations, potential solutions for any new problem might already exist, even though they were developed to solve a different problem. Therefore, in crowdsourcing ideation, a research question is explained to a much larger group of individuals beyond the specialist community to obtain a multitude of diverse, outside-the-box solutions. These are then assessed in parallel by a group of experts for their capacity to solve the new problem.
The 2 key problems in brain tumor surgery are the difficulty of discerning the exact border between a tumor and the surrounding brain, and the difficulty of identifying the function of a specific area of the brain. Both problems could be solved by a method that visualizes the highly organized fiber tracts within the brain; the absence of fibers would reveal the tumor, whereas the spatial orientation of the tracts would reveal the area’s function. To raise awareness about our challenge of developing a means of intraoperative, real-time, noninvasive identification of fiber tracts and tumor borders to improve neurosurgical oncology, we turned to the crowd with a crowdsourcing ideation challenge.
Objective: Our objective was to evaluate the feasibility of a crowdsourcing ideation campaign for finding novel solutions to challenges in neuroscience. The purpose of this paper is to introduce our chosen crowdsourcing method and discuss it in the context of the current literature.
Methods: We ran a prize-based crowdsourcing ideation competition called HORAO on the commercial platform HeroX. Prize money previously collected through a crowdfunding campaign was offered as an incentive. Using a multistage approach, an expert jury first selected promising technical solutions based on broad, predefined criteria, coached the respective solvers in the second stage, and finally selected the winners in a conference setting. We performed a postchallenge web-based survey among the solvers crowd to find out about their backgrounds and demographics.
Results: Our web-based campaign reached more than 20,000 people (views). We received 45 proposals from 32 individuals and 7 teams, working in 26 countries on 4 continents. The postchallenge survey revealed that most of the submissions came from single solvers or teams working in engineering or the natural sciences, with additional submissions from other nonmedical fields. We engaged in further exchanges with 3 out of the 5 finalists and finally initiated a successful scientific collaboration with the winner of the challenge.
Conclusions: This open innovation competition is the first of its kind in medical technology research. A prize-based crowdsourcing ideation campaign is a promising strategy for raising awareness about a specific problem, finding innovative solutions, and establishing new scientific collaborations beyond strictly disciplinary domains.
Gliomas are the most common type of primary brain tumors [, ]. Surgical resection plays a central role in their management, and there is increasing evidence that the extent of tumor resection correlates well with overall and progression-free survival in patients with both high- and low-grade gliomas [ - ]. In recent decades, new techniques have become available that allow for more radical and safer brain tumor surgery. Intraoperative magnetic resonance imaging, ultrasound, and fluorescence guidance attempt to visualize tumor tissue [ - ]. Intraoperative monitoring helps to identify brain areas involved in motor and speech function [ - ]. However, each of these technologies has its drawbacks. Low-grade gliomas and the infiltration zones of high-grade gliomas remain difficult to discern. Most neurological functions cannot be investigated by intraoperative monitoring, and for speech mapping, the patient must be awake during surgery. None of these techniques provides real-time feedback about fiber tract location or tissue delineation [ , ]. The ability to see white matter tracts live during surgery would help to differentiate white matter from tumor tissue based on the presence of fibers. In addition, seeing the fibers would allow neurosurgeons to identify and spare crucial fiber tracts, such as the arcuate fasciculus, the corticospinal tract, and the optic radiation, on the basis of their spatial orientation, and to orient themselves using the direction of the visible fibers. Such a technology would need to be noninvasive, nontoxic, and able to provide information about fibers and their spatial orientation in real time.
Innovative, high-risk research in medicine is often subject to tight constraints placed on scientists, such as funding difficulties and a culture of private endeavor rather than reaching out to others. During the 20th century, innovations from the fields of chemistry, physiology, and physics revolutionized medicine. Technological advancements and interdisciplinary research have become indispensable in the quest for improvement in modern medicine [ ]. Investigator isolation, by contrast, impedes collaboration and thus hampers progress [ ].
The development of Web 2.0 technologies around the turn of the millennium enabled internet users to act as both consumers and contributors of content and to connect to each other independent of location. This opened up completely new ways of collaborating and paved the way toward exploiting the wisdom of the crowd for innovation and research. The concept of “crowdsourcing,” a portmanteau of “crowd” and “outsourcing” coined by Jeff Howe [ ] in 2006, relies on accomplishing a task by opening up its execution to the broad public [ , ]. The advantages of crowd participation have been exploited for centuries, starting in 1714, when the British Government offered £20,000 (US $24,642.40) to anyone who could find a way to calculate the longitudinal position of a ship. The problem was solved in 1730 by John Harrison, a carpenter and clockmaker, who presented the first sea clock (chronometer) [ ]. Three hundred years later, billions of people are connected via the internet, enlarging the crowd for crowdsourcing enormously. This not only opens up access to much more “crowd intelligence” but also enables networks and collaborations across geographic boundaries and across a plethora of research teams from a vast variety of scientific fields. The model takes advantage of the wisdom of the crowd and counteracts the silo mentality and secrecy traditionally associated with classical research and development [ ].
In health and medical research, crowdsourcing has evolved over the past few decades. Strategies that include the public or a specialist community are broadly applied to recruit patients, collect data, generate intellectual output, conduct evaluations, gather new ideas, or solve specific problems together [- ]. Crowdsourcing in the form of open innovation challenges was reported by the National Aeronautics and Space Administration (NASA) [ ] and the Obama administration [ ]. Independent of the type of crowdsourcing applied, the concept has been shown to save time and money, as well as spur innovation [ - ].
Motivated by these findings and convinced that the solution to our problem already existed beyond the community of medical professionals, we turned to the public to catalyze interdisciplinary research and development. In the search for an innovative solution to a longstanding technical dilemma, we launched the “HORAO—It Doesn’t Take a Brain Surgeon” challenge. Despite considerable evidence supporting the effectiveness of open innovation as an alternative method in health and medical research, to our knowledge, ours was the first open innovation challenge of its kind.
We designed a multiphase, prize-based, open innovation competition in collaboration with the commercial platform provider HeroX. The challenge page on HeroX served as a content hub throughout the challenge. It featured an explanatory video and a text-based description of the technology gap and its background. On HeroX, we published the judging scorecard and all formal and legal requirements. Moreover, the challenge hub included a chat room for discussions and questions for the sponsor team. All of the funds previously collected in a crowdfunding campaign were used for the crowdsourcing campaign [ ]. First, we paid the expenses for using the HeroX platform, the production of informative media content (eg, the video), and the organization of the HORAO conference, as well as the travel costs of the participants and the jury. The remaining US $50,000 served as the monetary incentive for putting forward existing solutions. We decided to divide this prize money among several finalists to increase the likelihood of any innovator winning a prize and to further motivate innovators to participate. The final split was US $35,000 for the winner, US $12,000 for the runner-up, and US $1000 each for the third to fifth places.
After 3 months of content creation (video and text), the challenge went live on April 23, 2018, with a prelaunch in the categories of engineering, health care, and technology. The aim of the prelaunch phase was to raise awareness about the ensuing competition and give solvers a first opportunity to evaluate the challenge. HeroX’s service included advertisements on Facebook (2 weeks) and Twitter (1 week) as channels for recruiting potential participants and a one-on-one outreach campaign (targeted outreach: 2108 emails). The target audience for the one-on-one outreach included individuals, companies, and organizations involved in medical imaging, medical technology, radiology, surgical technology, clinical engineering, neurological societies, imaging science, health science, and microscopy. Finally, HeroX published the HORAO challenge in its newsletter as part of its media service. The accompanying figure provides an overview of the challenge timeline.
The submission phase started on June 12, 2018, and ended on November 16, 2018 (after 22 weeks and 4 days). Proposals had to be presented using the official submission form and had to address the questions asked. The form had space for a technical report, with the option to embed a link to a video or website. The complete proposal had to be uploaded as a PDF to HeroX. Submissions needed to comply with all the terms of the challenge defined in the challenge-specific agreement, which specified, for example, that competitors retain all intellectual property rights to their technology. The challenge was open to all adult individuals or teams, with no specific qualifications required. We considered for further evaluation only submissions satisfying the criteria of the judging scorecard. We made the judging scorecard and the challenge guidelines publicly accessible from the very beginning. The judging scorecard narrowed down and specified the scope of the possible solution. The challenge guidelines and the challenge-specific agreement are reproduced in the appendices.
After termination of the submission phase, the challenge team launched an individual project website serving as a new content hub independent of HeroX. The sponsor team also published regular updates about the challenge on the hospital and departmental websites and via social media channels (Facebook, Twitter, and Instagram).
| Criterion | Description | Weight |
| --- | --- | --- |
| Detection of cerebral tissue | The solution discerns brain from tumor tissue | 20 |
| Detection of fiber tracts | The solution detects brain tissue in such a way that the spatial orientation of fiber tracts can be seen | 20 |
| Real-time detection | The time needed for visualization must be short (minutes) so as not to disrupt the flow of surgery | 20 |
| Size of solution | The solution must fit well into the operating theater (no larger than 2 cubic meters) | 10 |
| Noninvasiveness | The solution must not harm or remove the investigated tissue | 20 |
| Repetitiveness | The solution must be usable repeatedly at short intervals (minutes) | 10 |
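Read together, the scorecard amounts to a 100-point scheme in which a proposal can earn up to the listed weight per criterion. A minimal sketch of how such a scorecard could be applied (the criterion names and weights come from the scorecard above; the sample ratings are invented purely for illustration):

```python
# Judging scorecard as criterion -> maximum weight (values from the table above).
SCORECARD = {
    "Detection of cerebral tissue": 20,
    "Detection of fiber tracts": 20,
    "Real-time detection": 20,
    "Size of solution": 10,
    "Noninvasiveness": 20,
    "Repetitiveness": 10,
}

def score_proposal(ratings):
    """Sum the per-criterion points, capping each rating at its maximum weight."""
    return sum(min(ratings.get(criterion, 0), weight)
               for criterion, weight in SCORECARD.items())

# The weights total 100 points.
assert sum(SCORECARD.values()) == 100

# Hypothetical ratings for one proposal (illustrative only):
example = {
    "Detection of cerebral tissue": 15,
    "Detection of fiber tracts": 18,
    "Real-time detection": 10,
    "Size of solution": 10,
    "Noninvasiveness": 20,
    "Repetitiveness": 5,
}
print(score_proposal(example))  # → 78
```

Capping each rating at its weight mirrors the scorecard’s intent: no single criterion can contribute more than its predefined share of the 100 points.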
The evaluation phase consisted of the following 3 consecutive rounds: the preround, the judging round, and the finals. Each round had its own panel of judges.
In the preround evaluation, the sponsor team, consisting of 4 neurosurgeons, formed the panel of judges. The sponsor team performed a first evaluation of all proposals based on the judging scorecard criteria. The aim of the preround was to facilitate the work of the expert panel by rejecting proposals that did not meet the criteria and limiting the number of proposals to 10-15. The votes of at least 2 out of the 4 members of the sponsor team were required for the proposal to be chosen for the next round. The sponsor team gave feedback to those selected for the judging round about their submissions. Participants then had to resubmit the revised proposals within 2 weeks.
Judging Round Evaluation
An expert panel, consisting of 2 research and development directors from medical technology companies, 2 neurosurgeons, and 3 senior biomedical scientists, assessed the proposals in the judging round. Each member of the expert panel rated the proposals that passed the pre-evaluation by awarding points for each criterion (from 0 to the maximum weight defined in the judging scorecard). The 5 submissions with the highest scores entered the finals. Again, the sponsor team gave feedback about the selected proposals based on the jury’s assessment, and the finalists had the option to revise their submissions prior to the finals.
The finals took place at the HORAO conference on March 15, 2019, which was open to the public. For the finals, the expert panel from the judging round and the audience formed the judging board. Each finalist presented his or her proposal orally and then answered questions from the audience and the expert panel. The final score consisted of the points awarded by the expert panel in the judging round (50%), the score given for the presentation at the final conference by the expert panel (25%), and the score given for the presentation by the general audience (25%).
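The 50/25/25 weighting described above can be expressed as a simple weighted sum. A brief sketch (the weights are from the challenge rules; the numeric scores are hypothetical, as the actual jury scores were not published in this form):

```python
def final_score(judging_round, panel_presentation, audience_presentation):
    """Final score per the challenge rules: 50% judging-round score,
    25% expert-panel presentation score, 25% audience presentation score."""
    return (0.50 * judging_round
            + 0.25 * panel_presentation
            + 0.25 * audience_presentation)

# Hypothetical finalist scores on a 0-100 scale (illustrative only):
print(final_score(80, 70, 90))  # → 80.0
```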
We refrained from asking for background information about the solvers themselves to avoid discouraging those with a lower level of academic attainment from submitting solutions and to avoid selection bias during the judging round. We obtained this information after the finals using a web-based survey (SurveyMonkey) sent to the individuals and teams. We asked team leaders to forward the survey to their team members to capture information about as many of them as possible. The questions in the survey covered place of residence, type of employment and place of work, academic degrees obtained and field of education, number of prior challenges joined, and how the solver found out about the HORAO challenge. We used descriptive statistics to evaluate the diversity of the solvers crowd. In addition, HeroX analyzed the reach of advertising on social media channels (Facebook and Twitter) for challenge visibility as part of their service.
We did not use any health-related data for this work. Participants shared the information about educational background and demographics voluntarily. We treated all personal data disclosed with the utmost care. The project does not fall under the jurisdiction of the local ethics committee, so we did not need to obtain their approval.
General Crowd and Solvers
Preround Evaluation and Solvers Crowd
The challenge on HeroX attracted 20,680 views, and 274 individuals and 17 teams actively followed the challenge hub. The first and second advertisements published by HeroX on Facebook were displayed to 10,751 and 4466 users, respectively, and the advertisement on Twitter to 23,718 users. The 2 Facebook advertisements received 81 and 46 link clicks, respectively, and the Twitter advertisement received 109. Overall, 2108 individual emails were sent in the targeted outreach performed by HeroX.
A total of 45 proposals were submitted by 7 teams and 32 individuals. All members of the 7 teams, together with the 32 individuals, formed the solvers crowd of the challenge. The background survey was sent to 39 individuals, and feedback was obtained from 23 (58.9%). If we received no response, we searched the solver’s profile on HeroX or LinkedIn for information. In this way, we collected background information about 39 solvers. Four people were involved in more than one submission; for further analysis of the crowd, we counted them once per submission, resulting in a crowd of 45 solvers. A total of 4 women and 41 men aged between 16 and 75 years from 26 countries on 4 continents formed the solvers crowd. Most of the solvers were from Asia, North America, or Europe. A total of 37 solvers reported having a university degree (18 bachelor’s, 9 master’s, 9 PhDs, and 1 professorship), 1 reported no degree, and 7 did not provide any information about a degree. Solvers reported an educational background in engineering (13/45, 28.9%), the natural sciences (12/45, 26.7%), technology (4/45, 8.9%), or a nonrelated field such as finance, architecture, or other (16/45, 35.5%); 7 solvers did not provide any information about their educational background. Most of the solvers were employed either at a university (10/45, 22.2%) or in industry (14/45, 31.1%); others reported being self-employed (9/45, 20%). One solver was a student, 3 reported being freelancers or not employed, and 1 reported being retired. Three solvers reported being employed but not where, and 4 did not provide any information about their employment. The solvers worked in the following areas: natural sciences (10/45, 22.2%), technology (9/45, 20%), engineering (9/45, 20%), aerospace (2/45, 4.4%), innovation ideation (2/45, 4.4%), or a nonrelated area such as architecture, finance, or other (8/45, 17.8%); 5 solvers did not report their area of work.
Judging Round Evaluation and Semifinalists
The sponsor team selected 13 submissions from 9 individuals and 3 teams for the judging round, forming the semifinalists’ crowd. The crowd composition with regard to current employment was comparable with that of the solvers crowd. The educational backgrounds of the semifinalists, with the exception of 1 member, were in the field of natural sciences or engineering. All of the semifinalists were familiar with innovation challenges prior to HORAO. The semifinalists originated from Europe, North America, and Asia.
Final Evaluation and Finalists
The expert panel selected the proposals of 1 woman and 4 men. All reported having an academic degree of at least bachelor’s level in the field of natural sciences or engineering. Of the 5 finalists, 3 reported that they worked in an academic research group in the field of mathematical oncology, bioengineering, or technology. One reported being self-employed, and 1 worked in industry, both in the field of technology. The self-employed solver further reported having participated in about 100 challenges prior to HORAO, whereas the other 4 reported having participated in 1-4 prior challenges. The finalists came from Canada, the United States, the United Kingdom, Germany, and Spain. The table below provides an overview of the demographic and background characteristics of the 3 different crowds.
[Table: demographic and background characteristics of the 3 crowds. Columns: All (N=45), n; Semifinalists (n=13), n; Finalists (n=5), n. Row groups: Sex (unknown: n=0); Continent (unknown: n=0); Field of education (unknown: n=7); Field of work (unknown: n=5).]
a“Other” includes international studies, communication, consultancy, and law.
The 5 finalists proposed solutions based on multispectral time-resolved fluorescence spectroscopy, polarization-sensitive optical coherence tractography, machine-learned interpretation of red-green-blue images, polarized light based on Mueller polarimetry, and wide-field Mueller polarimetry based on Lu-Chipman decomposition. Of these, wide-field Mueller polarimetry based on Lu-Chipman decomposition received the highest score from both the conference audience and the expert jury and won our crowdsourcing ideation campaign. The solution uses a series of liquid crystals to polarize white light from a xenon light source and captures the polarization properties of the tissue in reflection configuration. Following the conference, we initiated an in-depth scientific collaboration between the sponsor team and the winning research team. For their preliminary results, the collaborators were awarded an industry grant. Following a series of ex vivo experiments on cadaveric animal brain tissue and on fresh human tumor tissue with a prototype of the Mueller polarization microscope [, ], we created a new interdisciplinary research unit comprising neurosurgeons, optical physicists, neuropathologists, and experts on artificial intelligence. The group recently launched a multiyear, in-depth clinical project, which has been awarded a Swiss National Science Foundation Sinergia grant (no. 205904). In the 3 years since the HORAO conference, we have already published a series of promising preclinical results [ - ].
With the crowdsourcing challenge HORAO, a technology-gap-type problem in neurosurgery was presented to the public for the first time, in the conviction that a solution already existed somewhere, albeit developed for a different use. The challenge proved successful, producing a handful of very innovative and promising proposals and leading to new scientific collaborations.
In recent years, simplified access for patients, participants, scientists, and biomedical staff through Web 2.0 has opened up a new world for collaborating on a variety of tasks. Consequently, various types of crowdsourcing have evolved in health care. A well-represented use case described in the recent literature concerns projects searching for new biomarkers, such as the Anti-PD-1 Response Challenge, the Prostate Cancer DREAM challenge, and the Multiple Myeloma DREAM challenge, to name but a few [, , ]. In those challenges, the initiators share data, usually from a large set of patients, on an open platform and mobilize groups of people with the same interests around the world to analyze the data. Leveraging worldwide expertise and the power of the mass has sped up the identification of novel biomarkers tremendously compared with classical research approaches. A completely different approach, evaluated by the Berlin Institute of Health, is the OPEN project [ ]. In response to the slow progress being made with artificial pancreas systems for people with diabetes, the patient community has taken the problem into its own hands. Under the hashtag #wearenotwaiting, patients and their families are building their own systems and making the algorithms publicly available (do-it-yourself artificial pancreas systems, OpenAPS) [ ]. The OPEN project examines what academia, industry, and individuals with diabetes can learn from each other by establishing empirical evidence of the impact of do-it-yourself artificial pancreas systems. The initiators of the OPEN project are convinced that such an interdisciplinary and collaborative approach will have a profound impact, not only on patients but also on the health care system and on society in general. With the rise of machine learning in medicine, labeled data sets are in high demand. Hence, crowdsourcing for data labeling has become popular. A recent example is the NuCLS study [ ].
This study used a crowdsourcing approach for nucleus classification, localization, and segmentation in hematoxylin- and eosin-stained slides of breast carcinomas using a preannotated data set elaborated in a previous crowdsourcing study [ ]. The organizers specifically addressed medical students and graduates in pathology by searching interest groups on social media (Facebook and LinkedIn) and assigning the tasks depending on experience. The mixed crowd of experts and undergraduates produced the final NuCLS data set containing more than 220,000 annotations of cell nuclei. Although it was successful, the initiators of the project pointed out that the context-dependency of data set curation makes transfer of the approach to other problems difficult. The crowdsourcing approach described here focuses on the ideation process and therefore differs considerably from the abovementioned crowdsourcing applications.
The review by Nguyen et al discusses the various methods of collective intelligence applied in clinical research and proposes a framework for implementing them across defined domains. In comparison with other reviews that address crowdsourcing in medicine more generally [ , , , ], Nguyen et al [ ] specifically addressed crowdsourcing that involves intellectual thinking on the part of the crowd and excluded other approaches such as data collection or the performance of simple tasks (eg, classifying images or transcribing data). Information on crowdsourcing-based ideation is still scarce or underreported in the literature [ ]. Only a pilot study by NASA [ ] stands out. Since we were not able to identify other comparable projects, we discuss the HORAO project in detail in relation to NASA’s pilot projects, using the framework proposed by Nguyen et al [ ].
HORAO was launched to overcome a technology gap in neuro-oncological surgery, specifically the inability to identify white matter tracts in real time during surgery. The rationale for turning to the public was that a solution already existed somewhere in another field of research, albeit initially developed to solve a different technological problem. The motivation to seek a solution that many research groups have so far been unable to find, however, goes beyond simple ideation. The challenge organizers, all neurosurgeons themselves, face the consequences of the lack of an appropriate technology almost daily, creating a strong personal desire to have a solution at hand. NASA faced a strategic challenge caused by a 45% reduction in its research and development budget in 2005. Formerly famous for its track record in research and innovation achieved by its own researchers, NASA decided to reach out to the crowd to solve space exploration problems, running a pilot program of challenges on InnoCentive (the NASA Innovation Pavilion). In NASA’s case, as in ours, the problem owners identified a technology gap and believed that solutions for closing the gap would be accessible through open innovation.
Participants, Recruitment, and Incentives
HORAO—like the NASA pilot challenges—was open to the public with no restrictions or specific requirements for solvers. In both projects, the monetary incentives were large enough to motivate solvers to apply known solutions without having to finance scientific investigations (average NASA: US $7500-$30,000; HORAO: CHF 1000-35,000 [US $1089.60-$38,136]). For recruitment and challenge execution, HORAO collaborated with HeroX, whereas NASA worked with InnoCentive. InnoCentive, launched in 2001, is the pioneer and longtime leader in its field. Its hallmark is the wide network of registered experts with various academic backgrounds (about 390,000 solvers, 60% with a master’s degree or higher) and its longstanding experience in crowdsourcing innovative research. HeroX was founded in 2013 with the intention of opening up access to the public, enabling them to participate in and contribute to innovation challenges (about 170,000 solvers). Both platform providers offer a range of services, from challenge conceptualization to pre-evaluation of submissions. The services offered by HeroX and InnoCentive are comparable. One of the major benefits for HORAO was the targeted outreach offered by HeroX, which sent announcement emails to potential participants specifically to raise their awareness of the challenge. Whereas InnoCentive charges a challenge fee of about CHF 75,000 (US $81,720), HeroX charges 18% of the prize money with the security of a 50% refund if no winning idea can be identified and of a 100% refund if no idea at all is submitted, making the platform attractive to first-time users.
The number of solvers attracted by the HORAO challenge was about half the number of followers recorded for the individual NASA pilot challenges (HORAO: n=220; NASA: n=419). Based on InnoCentive’s report of NASA’s pilot program, an average challenge usually attracts about 300 followers. What is interesting, however, is that the final number of submissions was about the same (HORAO: n=45; NASA: n=11-108). HORAO solvers came from 26 different countries, while NASA pilot challenge participants came from 5-20 different countries per challenge and from 30 different countries across all 7 pilot challenges. Most submissions came from the same continents: North America, Europe, and Asia. Although more than half (7/13, 53%) of the submissions that reached the judging round (semifinals) of the HORAO challenge came from North America, the 5 finalists each originated from a different country on 2 continents (Europe and North America). The NASA pilot challenge reports diversity of solvers with regard to education and expertise at the level of all followers, whereas HORAO evaluated participants from the solvers through to the finalists; thus, the numbers are not directly comparable. Moreover, different categorizations of the fields of expertise were used, which further complicates direct comparison. Solvers attracted by the HORAO challenge were educated or employed in the fields of engineering, the natural sciences, or technology, with engineering as the leading discipline. Likewise, for the NASA challenges, engineering was the major field of employment of the solvers, followed by the computer and physical sciences. Of the NASA solvers, 30% (147/490) reported having expertise within the challenge’s discipline.
Participants’ Contribution and Interaction
In these domains too, the crowdsourcing approach applied for HORAO matches the approach used for the NASA pilot challenges. Participants contributed by providing their ideas or solutions in a competitive manner, with repeated interactions with the challenge organizers and only minor interaction with each other. A limitation reported for the challenge design of the NASA pilot program was the lack of a user template for submitting a proposal. HORAO provided a predefined submission form with questions concerning the issues relevant to the evaluation.
Evaluation and Decision-making
Both the NASA pilot challenges and HORAO applied a stepwise evaluation system, with a first prescreening and consecutive evaluation round(s) conducted by an expert panel. In contrast to HORAO, in the NASA pilot challenges, the platform provider performed the prescreening. NASA itself reported this approach to be prone to inappropriate rating by the platform providers. The organizers of the NASA challenge resolved this issue by jointly defining clear rating criteria. The benefit of letting the platform providers do the pre-evaluation is that it lifts the burden of evaluating masses of low-quality proposals from the challenge owners, but it necessitates very accurately defined evaluation criteria. For HORAO, the judging scorecard was developed prior to the challenge, and the sponsor team performed a pre-evaluation, which prevented inappropriate rating and rejection of promising proposals.
In each case, pre-evaluation removed low-quality submissions and avoided the jury being overwhelmed in the consecutive evaluation round(s) by too many proposals, thus allowing them to focus on the promising solutions. For HORAO, an additional benefit of reducing the number of proposals passing the pre-evaluation stage was that it enabled an effective feedback system during the consecutive challenge phases.
Most of the potential difficulties listed by Nguyen et al did not apply to HORAO (integration of open innovation into traditional processes and protection of data privacy) or were avoided by the choice of challenge design (risk of unqualified solvers or low-quality proposals, inappropriate incentives, lack of a platform, unclear task description, and risk of dominant voices). Nevertheless, selective participation and bias in decision-making are important risks to consider. Although the challenge was open to the public, the targeted outreach by HeroX probably led to a partial selection of the participants. We do not consider this a major disadvantage, since it raised awareness among the crowd addressed without excluding anyone from participating. In a future crowdsourcing ideation campaign, we would again focus the targeted outreach on the scientific crowd, possibly by advertising the campaign at specific conferences. The provision of predefined criteria was intended to avoid bias in decision-making and evaluation. Nevertheless, some criteria were more objectively assessable than others, creating a possible source of bias. Overall, the sponsor team benefited greatly from working with an established commercial crowdsourcing company, especially as this was the team’s first such campaign.
Our strategy of crowdsourcing ideation relied on prize money as an incentive, to which the usual funding agencies, especially governmental ones, are unlikely to contribute. Collecting the prize money beforehand in a crowdfunding campaign, as was done for the HORAO project, requires time and effort.
Running the crowdsourcing campaign also involved time-consuming tasks, such as producing explanatory videos and press releases and responding daily to solvers' questions. When the challenge owners perform these tasks themselves, as the laypersons of the HORAO team did, the time and effort increase further because the essential skills must first be acquired. Alternatively, and as was done for the video in our project, some of these tasks can be outsourced, in which case they incur additional costs. The pre-evaluation performed by the sponsor team was another time-consuming task, which may limit the scalability of this approach for future campaigns.
The HORAO campaign was the first of its kind to crowdsource new ideas for contemporary problems in neurosurgery. The campaign succeeded in raising awareness of a longstanding neurosurgical problem: the lack of a means of intraoperative, real-time visualization of fibers to delineate tumor tissue from surrounding healthy tissue. It gave us access to a multitude of outside-the-box potential solutions, and the team of experts was able to rapidly assess them in parallel for their capacity to solve our problem. Ultimately, the crowdsourcing campaign led to the creation of a very successful interdisciplinary research unit, which is now funded by traditional governmental funds.
The authors would like to acknowledge Manuel Imboden and Philipp Adonie from Rise and Shine Films for the video sequences; Stefan Weber, Christoph Hauger, Joseph Zinter, Ruth Lyck, and Susanne Hager for serving as jury members; and Susan Kaplan for editorial assistance.
The data sets generated and analyzed during this study that are not displayed in this manuscript or its appendices are available from the corresponding author upon reasonable request.
Conflicts of Interest
Challenge guidelines. PDF File (Adobe PDF File), 653 KB
Challenge legal agreement. PDF File (Adobe PDF File), 49 KB
Submission form (empty). PDF File (Adobe PDF File), 127 KB
- Ostrom QT, Gittleman H, Stetson L, Virk SM, Barnholtz-Sloan JS. Epidemiology of gliomas. Cancer Treat Res 2015;163:1-14. [CrossRef] [Medline]
- Louis DN, Perry A, Reifenberger G, von Deimling A, Figarella-Branger D, Cavenee WK, et al. The 2016 World Health Organization classification of tumors of the central nervous system: a summary. Acta Neuropathol 2016;131(6):803-820. [CrossRef] [Medline]
- Scerrati M, Roselli R, Iacoangeli M, Pompucci A, Rossi GF. Prognostic factors in low grade (WHO grade II) gliomas of the cerebral hemispheres: the role of surgery. J Neurol Neurosurg Psychiatry 1996;61(3):291-296 [FREE Full text] [CrossRef] [Medline]
- Hervey-Jumper SL, Berger MS. Maximizing safe resection of low- and high-grade glioma. J Neurooncol 2016;130(2):269-282. [CrossRef] [Medline]
- Lacroix M, Abi-Said D, Fourney DR, Gokaslan ZL, Shi W, DeMonte F, et al. A multivariate analysis of 416 patients with glioblastoma multiforme: prognosis, extent of resection, and survival. J Neurosurg 2001;95(2):190-198. [CrossRef] [Medline]
- Schucht P, Beck J, Abu-Isa J, Andereggen L, Murek M, Seidel K, et al. Gross total resection rates in contemporary glioblastoma surgery: results of an institutional protocol combining 5-aminolevulinic acid intraoperative fluorescence imaging and brain mapping. Neurosurgery 2012;71(5):927-935; discussion 935-936. [CrossRef] [Medline]
- Senft C, Bink A, Franz K, Vatter H, Gasser T, Seifert V. Intraoperative MRI guidance and extent of resection in glioma surgery: a randomised, controlled trial. Lancet Oncol 2011;12(11):997-1003. [CrossRef] [Medline]
- Stummer W, Reulen HJ, Meinel T, Pichlmeier U, Schumacher W, Tonn JC, ALA-Glioma Study Group. Extent of resection and survival in glioblastoma multiforme: identification of and adjustment for bias. Neurosurgery 2008;62(3):564-576; discussion 564-576. [CrossRef] [Medline]
- De Witt Hamer PC, Robles SG, Zwinderman AH, Duffau H, Berger MS. Impact of intraoperative stimulation brain mapping on glioma surgery outcome: a meta-analysis. J Clin Oncol 2012;30(20):2559-2565. [CrossRef] [Medline]
- Schucht P, Seidel K, Jilch A, Beck J, Raabe A. A review of monopolar motor mapping and a comprehensive guide to continuous dynamic motor mapping for resection of motor eloquent brain tumors. Neurochirurgie 2017;63(3):175-180. [CrossRef] [Medline]
- Raabe A, Beck J, Schucht P, Seidel K. Continuous dynamic mapping of the corticospinal tract during surgery of motor eloquent brain tumors: evaluation of a new method. J Neurosurg 2014;120(5):1015-1024. [CrossRef] [Medline]
- D'Amico RS, Englander ZK, Canoll P, Bruce JN. Extent of resection in glioma-a review of the cutting edge. World Neurosurg 2017;103:538-549. [CrossRef] [Medline]
- Almekkawi AK, El Ahmadieh TY, Wu EM, Abunimer AM, Abi-Aad KR, Aoun SG, et al. The use of 5-aminolevulinic acid in low-grade glioma resection: a systematic review. Oper Neurosurg (Hagerstown) 2020;19(1):1-8. [CrossRef] [Medline]
- Johnston SC, Hauser SL. Transformative research. Ann Neurol 2008;63(5):A11-A13. [CrossRef] [Medline]
- Bronzino J. 1 - Biomedical engineering: a historical perspective. In: Enderle JD, Bronzino JD, Blanchard SM, editors. Introduction to Biomedical Engineering. 2nd ed. Boston, MA: Academic Press; 2005:1-29.
- Johnston SC, Hauser SL. Investigator balkanization. Ann Neurol 2008;64(3):A11-A12. [CrossRef] [Medline]
- Hudson-Smith A, Batty M, Crooks A, Milton R. Mapping for the masses: accessing Web 2.0 through crowdsourcing. Soc Sci Comput Rev 2009;27(4):524-538. [CrossRef]
- Howe J. The rise of crowdsourcing. Wired. 2006. URL: https://www.wired.com/2006/06/crowds/ [accessed 2023-03-28]
- Ranard BL, Ha YP, Meisel ZF, Asch DA, Hill SS, Becker LB, et al. Crowdsourcing--harnessing the masses to advance health and medicine, a systematic review. J Gen Intern Med 2014;29(1):187-203 [FREE Full text] [CrossRef] [Medline]
- Tucker JD, Day S, Tang W, Bayus B. Crowdsourcing in medical research: concepts and applications. PeerJ 2019 Apr 12;7:e6762 [FREE Full text] [CrossRef] [Medline]
- Sobel D. Longitude: The True Story of a Lone Genius Who Solved the Greatest Scientific Problem of His Time. New York, NY: Walker Publishing Company; 1995.
- Chesbrough H. Open Innovation: The Imperative for Creating and Profiting From Technology. Boston, MA: Harvard Business Press; 2006.
- Amgad M, Atteya LA, Hussein H, Mohammed KH, Hafiz E, Elsebaie MAT, et al. NuCLS: a scalable crowdsourcing approach and dataset for nucleus classification and segmentation in breast cancer. Gigascience 2022;11:giac037 [FREE Full text] [CrossRef] [Medline]
- Vincent BG, Szustakowski JD, Doshi P, Mason M, Guinney J, Carbone DP. Pursuing better biomarkers for immunotherapy response in cancer through a crowdsourced data challenge. JCO Precis Oncol 2021;5:51-54 [FREE Full text] [CrossRef] [Medline]
- Douzgou S, Pollalis YA, Vozikis A, Patrinos GP, Clayton-Smith J. Collaborative crowdsourcing for the diagnosis of rare genetic syndromes: the DYSCERNE experience. Public Health Genomics 2016;19(1):19-24. [CrossRef] [Medline]
- Brinkmann BH, Wagenaar J, Abbot D, Adkins P, Bosshard SC, Chen M, et al. Crowdsourcing reproducible seizure forecasting in human and canine epilepsy. Brain 2016;139(pt 6):1713-1722 [FREE Full text] [CrossRef] [Medline]
- Marshall TF, Alfano CM, Sleight AG, Moser RP, Zucker DS, Rice EL, et al. Consensus-building efforts to identify best tools for screening and assessment for supportive services in oncology. Disabil Rehabil 2020;42(15):2178-2185. [CrossRef] [Medline]
- Hilton LG, Coulter ID, Ryan GW, Hays RD. Comparing the recruitment of research participants with chronic low back pain using Amazon Mechanical Turk with the recruitment of patients from chiropractic clinics: a quasi-experimental study. J Manipulative Physiol Ther 2021;44(8):601-611 [FREE Full text] [CrossRef] [Medline]
- Tarca AL, Pataki BÁ, Romero R, Sirota M, Guan Y, Kutum R, DREAM Preterm Birth Prediction Challenge Consortium, et al. Crowdsourcing assessment of maternal blood multi-omics for predicting gestational age and preterm birth. Cell Rep Med 2021;2(6):100323 [FREE Full text] [CrossRef] [Medline]
- Koepnick B, Flatten J, Husain T, Ford A, Silva DA, Bick MJ, et al. De novo protein design by citizen scientists. Nature 2019;570(7761):390-394 [FREE Full text] [CrossRef] [Medline]
- Sonabend AM, Zacharia BE, Cloney MB, Sonabend A, Showers C, Ebiana V, et al. Defining glioblastoma resectability through the wisdom of the crowd: a proof-of-principle study. Neurosurgery 2017;80(4):590-601 [FREE Full text] [CrossRef] [Medline]
- Baldassano SN, Brinkmann BH, Ung H, Blevins T, Conrad EC, Leyde K, et al. Crowdsourcing seizure detection: algorithm development and validation on human implanted device recordings. Brain 2017;140(6):1680-1691 [FREE Full text] [CrossRef] [Medline]
- O'Donnell S, Lewis D, Marchante Fernández M, Wäldchen M, Cleal B, Skinner T, et al. Evidence on user-led innovation in diabetes technology (the OPEN project): protocol for a mixed methods study. JMIR Res Protoc 2019;8(11):e15368 [FREE Full text] [CrossRef] [Medline]
- Masselot C, Greshake Tzovaras B, Graham CLB, Finnegan G, Jeyaram R, Vitali I, et al. Implementing the co-immune open innovation program to address vaccination hesitancy and access to vaccines: retrospective study. J Particip Med 2022;14(1):e32125 [FREE Full text] [CrossRef] [Medline]
- Guinney J, Wang T, Laajala TD, Winner KK, Bare JC, Neto EC, Prostate Cancer Challenge DREAM Community. Prediction of overall survival for patients with metastatic castration-resistant prostate cancer: development of a prognostic model through a crowdsourced challenge with open clinical trial data. Lancet Oncol 2017;18(1):132-142 [FREE Full text] [CrossRef] [Medline]
- Mason MJ, Schinke C, Eng CLP, Towfic F, Gruber F, Dervan A, Multiple Myeloma DREAM Consortium, et al. Multiple myeloma DREAM challenge reveals epigenetic regulator PHF19 as marker of aggressive disease. Leukemia 2020;34(7):1866-1874 [FREE Full text] [CrossRef] [Medline]
- Lewis D. OpenAPS. 2015. URL: https://openaps.org/ [accessed 2023-03-28]
- Amgad M, Elfandy H, Hussein H, Atteya LA, Elsebaie MAT, Abo Elnasr LS, et al. Structured crowdsourcing enables convolutional segmentation of histology images. Bioinformatics 2019;35(18):3461-3467 [FREE Full text] [CrossRef] [Medline]
- Davis JR, Richard EE, Keeton K. Open innovation at NASA: a new business model for advancing human health and performance innovations. Res Technol Manag 2015;58(3):52-58. [CrossRef]
- Desouza KC. Challenge.gov: using competitions and awards to spur innovation. IBM Center for The Business of Government. 2012. URL: https://www.businessofgovernment.org/report/challengegov-using-competitions-and-awards-spur-innovation [accessed 2023-03-28]
- Créquit P, Mansouri G, Benchoufi M, Vivot A, Ravaud P. Mapping of crowdsourcing in health: systematic review. J Med Internet Res 2018;20(5):e187 [FREE Full text] [CrossRef] [Medline]
- Nguyen VT, Benchoufi M, Young B, Ghosn L, Ravaud P, Boutron I. A scoping review provided a framework for new ways of doing research through mobilizing collective intelligence. J Clin Epidemiol 2019;110:1-11 [FREE Full text] [CrossRef] [Medline]
- Arvaniti EN, Dima A, Stylios CD, Papadakis VG. A new step-by-step model for implementing open innovation. Sustainability 2022 May 16;14(10):6017. [CrossRef]
- HeroX. URL: https://www.herox.com/ [accessed 2023-03-31]
- Schucht P, Roccaro-Waldmeyer DM, Murek M, Zubak I, Goldberg J, Falk S, et al. Exploring novel funding strategies for innovative medical research: the HORAO crowdfunding campaign. J Med Internet Res 2020;22(11):e19715 [FREE Full text] [CrossRef] [Medline]
- SurveyMonkey. URL: https://www.surveymonkey.com/ [accessed 2023-03-31]
- Schucht P, Lee HR, Mezouar HM, Hewer E, Raabe A, Murek M, et al. Visualization of white matter fiber tracts of brain tissue sections with wide-field imaging mueller polarimetry. IEEE Trans Med Imaging 2020;39(12):4376-4382. [CrossRef]
- Novikova T, Pierangelo A, Schucht P, Meglinski I, Rodríguez-Núñez O, Lee HR. Mueller polarimetry of brain tissues. In: Ramella-Roman JC, Novikova T, editors. Polarized Light in Biomedical Imaging and Sensing: Clinical and Preclinical Applications. Cham: Springer International Publishing; 2023:205-229.
- Rodríguez-Núñez O, Novikova T. Polarimetric techniques for the structural studies and diagnosis of brain. Adv Opt Technol 2022;11(5-6):157-171. [CrossRef]
- McKinley R, Felger LA, Hewer E, Maragkou T, Murek M, Novikova T, et al. Machine learning for white matter fibre tract visualization in the human brain via mueller matrix polarimetric data. 2022 Presented at: SPIE Photonics Europe, Unconventional Optical Imaging III; May 20, 2022; Strasbourg, France. [CrossRef]
- Rodríguez-Núñez O, Schucht P, Lee HR, Mezouar MH, Hewer E, Raabe A, et al. Retardance map of brain white matter: a potential game changer for the intra-operative navigation during brain tumor surgery. 2021 Presented at: European Conference on Biomedical Optics, Translational Biophotonics: Diagnostics and Therapeutics; December 7, 2021; Munich. URL: https://tinyurl.com/mrypapzd [CrossRef]
- Rodríguez-Núñez O, Schucht P, Hewer E, Novikova T, Pierangelo A. Polarimetric visualization of healthy brain fiber tracts under adverse conditions: ex vivo studies. Biomed Opt Express 2021;12(10):6674-6685 [FREE Full text] [CrossRef] [Medline]
- Carter AJ, Donner A, Lee WH, Bountra C. Establishing a reliable framework for harnessing the creative power of the scientific crowd. PLoS Biol 2017;15(2):e2001387 [FREE Full text] [CrossRef] [Medline]
- InnoCentive. URL: http://www.innocentive.com/ [accessed 2023-03-31]
- An evaluation of the open innovation pilot program between NASA and InnoCentive, Inc. INNOCENTIVE. 2010. URL: https://www.nasa.gov/pdf/572344main_InnoCentive_NASA_PublicReport_2011-0422.pdf [accessed 2023-03-28]
NASA: National Aeronautics and Space Administration
Edited by T Leung, G Eysenbach; submitted 16.09.22; peer-reviewed by W Tang, A Azzam, A AL-Asadi; comments to author 20.12.22; revised version received 14.02.23; accepted 12.03.23; published 28.04.23.
Copyright
©Philippe Schucht, Andrea Maria Mathis, Michael Murek, Irena Zubak, Johannes Goldberg, Stephanie Falk, Andreas Raabe. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 28.04.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.