Published in Vol 20, No 3 (2018): March

Ethical Concerns of and Risk Mitigation Strategies for Crowdsourcing Contests and Innovation Challenges: Scoping Review


Review

1University of North Carolina Project-China, Guangzhou, China

2Social Entrepreneurship to Spur Health, Guangzhou, China

3Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, London, United Kingdom

4Institute of Global Health and Infectious Diseases, University of North Carolina, Chapel Hill, NC, United States

5Department of Public Health, Xi'an Jiaotong-Liverpool University, Suzhou, China

6Kenan-Flagler Business School, University of North Carolina, Chapel Hill, NC, United States

7Social Medicine Department, University of North Carolina, Chapel Hill, NC, United States

Corresponding Author:

Joseph D Tucker, MD, PhD

University of North Carolina Project-China

2 Lujing Road

Guangzhou, China

Phone: 86 13560294997

Email: jdtucker@med.unc.edu


Background: Crowdsourcing contests (also called innovation challenges, innovation contests, and inducement prize contests) can be used to solicit multisectoral feedback on health programs and design public health campaigns. They consist of organizing a steering committee, soliciting contributions, engaging the community, judging contributions, recognizing a subset of contributors, and sharing with the community.

Objective: This scoping review describes crowdsourcing contests by stage, examines ethical problems at each stage, and proposes potential ways of mitigating risk.

Methods: Our analysis was anchored in the specific example of a crowdsourcing contest that our team organized in China to solicit compelling 1-min videos promoting condom use. We used a scoping review to examine the existing ethical literature on crowdsourcing to help identify and frame ethical concerns at each stage.

Results: In crowdsourcing, a group of individuals solves a problem and then shares the solution with the public. Crowdsourcing contests provide an opportunity for community engagement at each stage: organizing, soliciting, promoting, judging, recognizing, and sharing. Crowdsourcing poses several ethical concerns: organizing—potential for excluding community voices; soliciting—potential for overly narrow participation; promoting—potential for divulging confidential information; judging—potential for biased evaluation; recognizing—potential for insufficient recognition of the finalist; and sharing—potential for the solution to not be implemented or widely disseminated.

Conclusions: Crowdsourcing contests can be effective and engaging public health tools but also introduce potential ethical problems. We present methods for the responsible conduct of crowdsourcing contests.

J Med Internet Res 2018;20(3):e75

doi:10.2196/jmir.8226


Introduction

Crowdsourcing refers to “the practice of obtaining information or services by soliciting input from a large number of people, typically via the internet and often without offering compensation” [1]. The term encompasses a wide range of practices that were originally developed to iteratively improve commercial products based on crowd input and to change the traditional relationship between a business and a client [2]. For example, the online encyclopedia Wikipedia allows anonymous volunteers to write, edit, and manage encyclopedia entries. Wikipedia has grown rapidly and now has 4.9 million articles edited by 70,000 active contributors [3]. Crowdsourcing is used in the government and nonprofit sectors to generate innovative concepts and designs [4]. Crowdsourcing can take a wide variety of forms, including online games [5], distributed health system platforms [6], and contests to solicit new ideas [4].

Our discussion of crowdsourcing will focus on contests, also called innovation challenges, innovation contests, and inducement prize contests. Crowdsourcing contests include prize-based open contests in which individuals or teams work alone and those in which individuals work together. Contests typically include the following stages: organizing a steering committee, soliciting contributions, promoting the contest, judging contributions by experts or the crowd, recognizing excellent contributions, and sharing contributions. In the past 10 years, contests have been used to promote public health [4]. Crowdsourcing contests have been used to develop health messages [7], inform health policy [8], and improve medical diagnostics [9]. These kinds of contests can increase community engagement [7,10], improve health [10,11], and save money [11].

However, crowdsourcing contests introduce a number of potential ethical concerns [12,13], including insufficient inclusiveness, overreliance on the internet, and failure to disseminate the solution widely. Identifying and responding to these shortcomings is important for establishing crowdsourcing as a force for the public good and as a useful public health tool. These concerns have received limited attention in the public health literature on crowdsourcing to date [2,4]. This paper describes crowdsourcing contests by stage, examines common ethical challenges, and provides guidance on implementing crowdsourcing contests ethically.


Methods

We conducted a scoping review [14] to synthesize literature on the ethical conduct of health-related crowdsourcing projects. This review includes applied and theoretical ethics literature related to crowdsourcing research and practice. Scoping reviews allow one to examine the literature in a structured way but differ from systematic reviews in methodology and content [15]. Our review focused on sources published between January 1, 2005 and July 1, 2017. We examined a wide range of anthropological, ethical, social science, and related literature on crowdsourcing to promote public health. We anchored this discussion in the example of a single crowdsourcing contest. In addition, we examined several crowdsourcing contest failures to understand concerns and potential ethical problems.

We identified studies using keyword searches in electronic databases, including MEDLINE (OVID interface, 1946 onwards) and Google Scholar, supplemented by expert opinion and Wikipedia. For database searches, we used phrases and synonymous variations of the following terms: crowdsourcing, innovation challenge, ethics, implementation ethics, and applied ethical analysis. We also identified studies by searching reference lists, hand-searching key journals identified from initial database inquiries, and reviewing unpublished conference abstracts. We prioritized studies that examined crowdsourcing contests in health contexts. Our search included studies that provided empirical or theoretical data on crowdsourcing contests in the past 12 years.


Results

Overview

Our scoping review data are organized according to the 6 stages of a crowdsourcing contest: organizing, soliciting, promoting, judging, recognizing, and sharing [7,10]. First, the contest organizers form a contest steering committee to articulate the purpose, values, and methods of the contest. Second, an open call for content (eg, concepts, images, videos, or other materials) is announced via in-person events and social media. This open call clarifies the goals and terms of the contest, the prize or incentive structure, and the nature of participation. The open call plays a key role in defining the crowd. Third, the crowd is iteratively engaged through feedback sessions, in-person events, and social media. Fourth, a group of judges evaluates each contribution based on prespecified criteria to determine finalists. In some cases, the judges are the crowd itself. The judging process aggregates crowd wisdom [16]. Fifth, contest finalists are announced and recognized through an incentive structure, with finalists and others awarded prizes according to their rank order. Sixth, the steering committee shares the finalist solution(s) with the community. After discussing each of these 6 stages, we review the literature on failures in crowdsourcing and discuss ethical principles of crowdsourcing contests.

Organizing a Steering Committee

The first step of a crowdsourcing contest is to establish a steering committee that will decide the structure and function of the contest. The steering committee powerfully shapes the contest and provides a set of norms, expectations, and deadlines. Contests are often divided into those that focus on engaging large numbers of community members and those that focus on producing a high-quality outcome [4]. The condom video contest in China focused on creating a high-quality video. Its steering committee was composed of youth, community health leaders, men who have sex with men, doctors, business leaders, and researchers. The group was organized by Sesh Global, an organization with experience in crowdsourcing contests. The steering committee met monthly to discuss the scope, rules, and promotion of the contest and used email and social media to discuss contest developments.

One potential ethical problem with organizing a steering committee is the possibility of excluding community members or voices that are important to the contest. Individuals from marginalized or vulnerable groups who lack a voice in decision making are often underrepresented on steering committees. This problem has important implications because the steering committee establishes the expectations and rules governing the entire process. For example, a contest focused on gay men and HIV ought to include gay men and people living with HIV. A committee lacking appropriate representation of key groups could undermine both the effectiveness of and trust in the contest.

One way to mitigate the risk of excluding important community voices is to have transparent criteria for selecting steering committee members. In addition, aligning the composition of the steering committee with the overall purpose of the contest could help ensure that community voices are represented. Given local power dynamics related to nonexpert advice, it may also be useful to have local, in-person meetings of the steering committee specifically to establish trust and align expectations.

Soliciting Contributions

Contest organizers design the open call soliciting contributions. The call for contributions is open so that anyone can contribute. Open calls can be made through social media, in person, or both. We define social media as websites or apps that allow users to create and share content or to engage in social networking [17]. Our condom video contest call for entries in China shows how language, format, and structure shape a crowd (Multimedia Appendix 1). The call for entries was distributed through social media and in-person events at local high schools, colleges, and community-based organizations. The choice of distribution channels encouraged young Chinese individuals to participate but allowed entries from anyone.

One potential problem with calls for entries is overreliance on social media announcements and insufficient attention to in-person events. Most private sector contests have solicited entries through social media calls [18], including several calls issued exclusively through social media [19,20]. Exclusively social media open calls raise two ethical problems. First, there is still a substantial digital divide between those who use social media and those who do not. Individuals who use social media tend to come from more developed regions or sectors and to have higher socioeconomic status than those who do not [21]. An exclusively social media call would not only constrain participation among some vulnerable groups but could also worsen entrenched social inequities. Second, among those who use social media, there is a further barrier to engaging sophisticated contest platforms, such as those built by Ideascale, a company that creates online platforms for crowdsourcing contests [22]. A wide range of these platforms have been developed to crowdsource tasks. However, individuals who have the skills, knowledge, and experience to participate in these online platforms are a subset of the crowd, skewing its composition and unfairly excluding those who lack these skills but are interested and could meaningfully contribute.

Careful attention to soliciting contributions can help address these problems. In-person contest promotion events are one mechanism to broaden access to contests and diversify the crowd. These events have been used in several health contests [7] and have been found to increase both participation and the quality of entries. In-person events could take the form of classroom didactics, interactive feedback sessions, or community-led events. Capacity building sessions [23] could help individuals learn to contribute on social media platforms. In addition to in-person contest promotion, ensuring multiple channels for contributing would be useful; contributions could be accepted by mail, in person, or by text message. Contests should be as inclusive as possible relative to the intended audience. The goal is not universal participation but providing an opportunity to participate to those who could reasonably contribute to the contest.

Promoting Crowd Engagement and Contributions

Following the open call, there is an iterative process of engagement between organizers and potential participants. Our condom contest organized in-person and social media engagement activities to promote submissions, integrating the two so that they complemented each other. Approximately three-quarters of those who submitted to the contest participated in at least one engagement activity. These activities established trust in the contest, built confidence in contributing, and established social norms about how to participate. Engagement activities avoided giving examples in order to decrease cognitive fixation and increase innovation [24,25]. However, this stage of crowdsourcing contests also raises potential ethical concerns, chiefly the disclosure of confidential information by contributors and the possibility of social media trolling.

Authentic engagement in a contest allows those who contribute to draw on their own unique talents, preferences, and local social context. However, this personal process introduces the risk that private information will be divulged, often unintentionally, as part of engagement and contributing. These concerns have been raised more generally in the crowdsourcing literature [26]. In our case, some condom videos included identifiable individuals. The contest organizers had clear guidelines establishing that all videos could be publicly viewed and that every individual appearing in a video had given permission to be included. This decreased the risk of unintended disclosure associated with viewing the videos. In addition, contest organizers may consider more stringent requirements, such as obtaining written consent before an individual’s photograph or other personal information is included in a contribution, or returning to finalists to confirm consent before a video or other material is disseminated.

In addition, social media trolling has been reported within crowdsourcing contests. The word troll comes from Scandinavian mythology, referring to small, evil creatures who disturb travelers [17]. Today the term “trolling” refers to individuals who (usually anonymously) harass, provoke, or insult others online [27]. Trolling has been reported in a range of social media contexts [28,29], including contests [30,31]. Although trolling may be largely protected in some countries by the right to free speech [32], organizers of crowdsourcing contests should make efforts to anticipate these harms and provide protection. This type of ethical concern can be addressed, at least in part, through online platform moderation and algorithms for detecting offensive words, as well as by informing participants of the potential risks of trolling during the consent process.
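To illustrate one such safeguard, the sketch below shows a minimal keyword-based screening filter of the kind a contest platform might run before human moderation. This is a hypothetical Python illustration, not the system used in the condom contest; the blocklist, function name, and example comments are placeholder assumptions.

```python
import re

# Hypothetical blocklist; a real platform would maintain a curated,
# regularly updated list and pair this filter with human moderators.
BLOCKLIST = {"troll_word_1", "troll_word_2"}

def flag_for_review(submission_text: str) -> bool:
    """Return True if the submission contains any blocklisted word."""
    words = re.findall(r"[\w']+", submission_text.lower())
    return any(word in BLOCKLIST for word in words)

# Example: hold flagged comments for moderator review instead of posting.
comments = ["Great video, very creative!", "you are a troll_word_1"]
held = [c for c in comments if flag_for_review(c)]
print(held)  # ['you are a troll_word_1']
```

Keyword filters are crude (they miss obfuscated spellings and can flag benign uses), which is why the approaches described above pair automated detection with human moderators and consent-stage warnings.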

Aggregating Crowd Wisdom: Judging Contributions

Once the crowd has engaged in the contest and submitted contributions, these contributions are judged. The condom contest videos were evaluated by a multisectoral group of local people living with HIV, youth, physicians, and public health experts. Local judging increased community ownership of the contest and increased the likelihood that local characteristics (eg, using the local dialect) would be incorporated. At the same time, judging raises a range of potential ethical issues, including how to fairly select judges. One common approach has been to involve the crowd in judging entries, cited as a cost-effective way to engage potential participants [33]. However, if the crowd is defined exclusively online, it is prone to the same problems described above, in addition to bias, inconsistent judging criteria, and favoring popular opinion [34]. Crowd evaluation may also lead to voting based on criteria inconsistent with the goals of the contest. For example, the British contest to name a government research vessel resulted in the entry “Boaty McBoatface” receiving 124,109 votes, more than four times as many as the next entry [35]. Organizers found themselves in the dilemma of accepting an absurd name or rejecting a crowdsourced outcome. They eventually compromised by using “Boaty McBoatface” to name a submersible carried by the research vessel, which was dubbed the Sir David Attenborough [36]. In addition, the contributor with the largest number of online followers may be more likely to receive votes in support of their entry. Empirical evidence from private sector contests confirms that, compared with expert judge evaluation, online crowd evaluation is biased toward individuals with larger social networks [33]. Two studies found that individuals who win crowd-judged prizes are less likely to sustain their engagement over time than individuals who win expert-judged prizes [33,37].

When a crowdsourcing contest has a relatively low number of entries (<100), a panel of expert judges could evaluate contributions. Judges would need to be selected in a fair way that is consistent with the mission and goals of the overall contest. Elements from “fair process” procedures, which emphasize transparency, justification of rationales, and opportunities to appeal decisions, could help in constituting the judging panel and could also guide the decisions it makes [38]. This could include reviewing the composition of the judge panel to help ensure that a broad range of judges is represented and to decrease reliance on social media. Several private sector contests demonstrate the feasibility of having a judging panel evaluate contributions [39].
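As a concrete illustration of judging against prespecified criteria, the following sketch averages each judge's criterion scores into a rank order. The criteria, scores, and names here are hypothetical assumptions for illustration, not the procedure used in the condom contest.

```python
from statistics import mean

# Hypothetical scores: judge -> entry -> criterion -> score (1-10).
# The prespecified criteria here are relevance, creativity, and clarity.
scores = {
    "judge_a": {"entry_1": {"relevance": 9, "creativity": 7, "clarity": 8},
                "entry_2": {"relevance": 6, "creativity": 9, "clarity": 7}},
    "judge_b": {"entry_1": {"relevance": 8, "creativity": 6, "clarity": 9},
                "entry_2": {"relevance": 7, "creativity": 8, "clarity": 6}},
}

def rank_entries(scores):
    """Rank entries by mean total score across all judges."""
    entries = next(iter(scores.values())).keys()
    totals = {
        entry: mean(sum(judge[entry].values()) for judge in scores.values())
        for entry in entries
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

print(rank_entries(scores))  # [('entry_1', 23.5), ('entry_2', 21.5)]
```

Publishing such a rubric in the call for entries, together with the fair process elements described above, would make the judging transparent and auditable.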

Celebrating Crowd Wisdom: Recognizing Contributions

Following the judging process, contributions can be recognized by prizes or incentives, acknowledgment, and retention of legal rights to products created. Incentive structures for crowdsourcing vary based on the goals and missions of the contest. Some contests have a single large prize [40], while others recognize a number of contributions [41]. The condom contest included individual prizes for the top three contributors as well as participation prizes. The top contributors were announced on social media as a further form of recognition. All contributors retained the rights to their videos (those who submitted videos could use them for any purpose), consistent with the goal of the contest to promote community agency.

An ethical challenge related to recognition in crowdsourcing contests is the potential for exploiting those who make substantial contributions. Insufficient recognition of those who contribute to contests has been noted in many online contest settings [42-44]. The condom contest decreased the likelihood of this exploitation because there were several formal and informal ways of recognizing participants, alongside retention of their legal rights. The contest also shared the finalist video online in several forums.

Appropriately recognizing contributions provides a way of addressing these concerns about crowd exploitation related to the incentive structure and acknowledgment. First, clearly stating during the consent process how contributions will be recognized can mitigate exploitation to some extent. Second, incentive structures with multiple prizes of different types promote a broad spectrum of participation; special prize categories that focus on participation rather than merit have been used in some contests [7]. Third, formally acknowledging and celebrating contributions are important. Several studies have shown that intrinsic benefits of participation (such as recognition and media attention) are more important than extrinsic benefits in the context of crowdsourcing [10]. Governance of ownership and permissible uses of finalist contributions may also minimize exploitation.

Sharing and Implementing the Solution With the Community

The final stage of a crowdsourcing contest is to share the solution more widely with the community that contributed. Henk van Ess has argued that crowdsourcing must give back to the public and share the solution more widely [45]. In this way, crowdsourcing reciprocates in a way commensurate with what the community contributed. Other crowdsourced research has suggested that perceptions of fairness are important for those contributing to crowdsourcing projects [46,47]. Our condom video contest provided finalist and participation prizes and then made the videos available on public platforms in China.

Limited sharing of the crowdsourced solution is an important ethical concern associated with crowdsourcing contests, and one that differentiates public-oriented contests from their private sector counterparts. Most private sector contests treat the finalist solutions as their own intellectual property; the terms of many private contests assign intellectual property rights to the organizers. Limited sharing could take the form of describing contests only in articles behind a paywall, inaccessible to nonsubscribers.

Clearly establishing a plan for sharing and prizes early in the process, as part of the call for entries, can mitigate the risk of insufficient sharing. While clear sharing expectations are important, there should also be sufficient flexibility for the steering committee to retain ultimate authority over final decisions. For example, the condom contest mandated sharing of the final video on regional, national, and international networks. The call for entries specified this plan, in addition to a plan to recognize excellent entries; the benchmark for “excellent” was decided by the steering committee.

Learning From Failures

Previous examples of crowdsourcing contests that were partially or incompletely effective provide guidance (Table 1). In 2013, the condom manufacturing company Durex invited the public to vote on which city in the world should receive a special condom rush delivery service [48]. By the end of the contest, the nonexistent city of “Batman” in Turkey had received the most votes, and contest organizers were faced with the ethical dilemma of blatantly rejecting the clear will of the crowd or endorsing a deliberately facetious winning entry. Such instances of crowd hijacking are not uncommon [49], and contest organizers should prepare for scenarios in which the crowd uses the contest platform to advance an agenda that deviates from that of the organizers. Given the unpredictability of the crowd, it is important for organizers to clearly explain their rights to prospective participants, including the right to deem certain kinds of entries inadmissible.

Past contests have also shown that successfully developing products through crowdsourcing is not formulaic and that breakthrough innovations are by no means guaranteed. The start-up company Quirky secured hundreds of millions of investment dollars to develop innovative household consumer products through open online contests [50]. However, despite deep financial resources and hundreds of thousands of contest contributors, Quirky failed to produce any radically innovative products and declared bankruptcy 6 years after its founding. One major problem was that Quirky innovators disagreed with the company in the late stages of business development [50]. Such miscommunication could be avoided by involving community members earlier in the development process.

Finally, a German contest solicited public input on a ban on circumcision. A political party decided to crowdsource local opinions on the topic, targeting the area of North Rhine-Westphalia. Despite the region's population of 18 million, the contest received only 20 submissions [51]. This underscores the importance of having a steering committee that plans in advance and understands community interests and willingness to take part in crowdsourcing contests.

Ethical Principles in Crowdsourcing Contests

We identified several general reviews that broadly considered ethical principles associated with crowdsourcing contests [12,13,52]. These highlighted theoretical concerns about privacy, accuracy of information, property, and accessibility in the context of computer science. However, this limited literature did not focus on health contests.

Table 1. Implementation ethics issues and potential solutions associated with crowdsourcing contests.
Contest stage: 1. Organizing
Implementation ethics issue: Lack of input from community voices or marginalized groups
Potential solution: Explicitly state criteria for selecting steering committee members to ensure adequate representation

Contest stage: 2. Soliciting
Implementation ethics issue: Online contests limit participation to a subset of internet-using individuals
Potential solution: In-person events to promote contests; multiple ways of receiving contributions
Implementation ethics issue: Social networking sites narrow participation in contests to a subset of social media-savvy individuals
Potential solution: Allow contributions via email, in person, cell phone, and other forms that do not require online access or social media

Contest stage: 3. Promoting
Implementation ethics issue: Public contributions may include confidential or private information
Potential solution: Clear contest guidelines that clarify whose permission has been obtained; potentially an enhanced consent process before dissemination
Implementation ethics issue: Social media platforms for contributing may introduce opportunities for online harassment
Potential solution: Social media moderators and algorithms for detection of explicit language

Contest stage: 4. Judging
Implementation ethics issue: Crowd evaluation may be biased in favor of online individuals with larger social networks
Potential solution: Form a local judge panel composed of key individuals representing different perspectives or backgrounds
Implementation ethics issue: Multiple ways of selecting judges
Potential solution: Establish guidelines for selecting judges and transparent procedures for evaluation and judging

Contest stage: 5. Recognizing
Implementation ethics issue: Single-prize contests, although often considered optimal, provide no recognition for most contributors
Potential solution: A multiple prize or incentive structure encourages a broad range of participation
Implementation ethics issue: Online contests may not sufficiently recognize contributions
Potential solution: In-person prize announcements

Contest stage: 6. Sharing
Implementation ethics issue: More is taken from the community than given back
Potential solution: Establish a formal mechanism to share or implement the solution more widely with the local community

Discussion

Principal Findings

Our review suggests that crowdsourcing contests for public health introduce several potential ethical concerns. These concerns can be categorized within the 6 stages of crowdsourcing contests—organizing, soliciting, promoting, judging, recognizing, and sharing. Our analysis suggests that these concerns can be minimized with appropriate planning and consultation. Our review expands the limited literature on crowdsourcing contests for public health [4] by focusing on ethics, examining an empirical case, and including a formal scoping review.

Our data suggest that several ethical concerns associated with crowdsourcing contests can be anticipated and avoided. For example, the awkward situation of having crowds decide on a trivial name such as Boaty McBoatface can be avoided by separating the solicitation of names from the selection of names eligible for a vote. Other risks can be mitigated through appropriate contest planning. For example, inviting a diverse group of local steering committee members can increase the likelihood that local community perspectives are represented. Other ethical concerns relate to the use of social media within contests (eg, privacy concerns) and have been well described elsewhere [53]. All of these concerns underscore the need for organizers of crowdsourcing contests to allow sufficient time for planning and designing a contest.

Our scoping review did not identify studies that articulated ethical principles of crowdsourcing contests for health. This may be because few studies have focused on using crowdsourcing contests to improve public health [4]. It may also reflect the diversity of crowdsourcing contest activities, which include research studies, community engagement programs, and communications strategies. However, our analysis suggests that there are several shared contest stages, each of which has potential ethical concerns.

Crowdsourcing contests have implications for public health research and policy. In terms of research, further empirical study on ethical problems associated with crowdsourcing contests is necessary. Such research could help to refine the method and increase the likelihood of crowdsourcing contests achieving their goals. This research could include qualitative studies of those participating and organizing crowdsourcing contests [54]. Such research would provide valuable input for a future ethical framework specific to crowdsourcing. In terms of policy, crowdsourcing contests could help to inform public health policy. The multisectoral, transparent, and open nature of contests establishes a strong foundation for policy making. For example, the World Health Organization (WHO) used a crowdsourcing contest [55] to solicit descriptions of hepatitis testing that were directly included in the 2017 WHO hepatitis testing guidelines [56]. Given the potential for contests to inform policy, more formal principles and ethical considerations associated with crowdsourcing contests may be useful.

Limitations

Our analysis has several limitations. First, most health contests have been single events and have yet to be serialized and formally incorporated into routine public health practice. Serial contests are likely to pose different ethical challenges and may be substantially different in terms of implementation. Second, health-focused crowdsourcing contests are relatively new, and while there are many examples of health-related contests, few have been formally evaluated using validated metrics. Our analysis focused on a single example. Further implementation research is needed to define the most efficient and responsible use of crowdsourcing contests. Third, there is variation in the extent to which contests are driven by crowd input. Some health projects involve the community at all stages [57], while others have intensive community input only at the start of the project [4].

Conclusions

Crowdsourcing contests may be a useful tool to develop inclusive public health programs but also pose ethical concerns at each stage. Unraveling these ethical concerns requires careful planning, consideration, and consultation. Our analysis provides several practical steps for the responsible conduct of crowdsourcing contests and identifies areas for future research.

Acknowledgments

The authors express their gratitude to Weiming Tang, Songyuan Tang, and Eric Juengst for helpful comments on a previous version of this manuscript. The condom contest research described in this methodology article was approved by the Guangdong Provincial STD Control Institutional Review Board (IRB; 114310-01) and the University of North Carolina Chapel Hill IRB (Study 15-1522). All individuals provided informed consent. All research data are available upon request, contingent upon the approval of the Chinese and American IRBs. This research was supported by research grants from the National Institute of Allergy and Infectious Diseases (1R01AI114310, 1R01AI108366).

Authors' Contributions

JT wrote the first draft of this manuscript. SP and GS revised the section on crowdsourcing failures. BB and AM provided guidance on the crowdsourcing evaluation section. SR contributed to the implementation ethics sections. All authors contributed to the writing of the manuscript and approved the final version.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Call for entries from the crowdsourcing contest.

PDF File (Adobe PDF File), 38KB

  1. Oxford English Dictionary. Discover the story of English   URL: http://www.oed.com/ [accessed 2018-02-15] [WebCite Cache]
  2. Brabham DC, Ribisl KM, Kirchner TR, Bernhardt JM. Crowdsourcing applications for public health. Am J Prev Med 2014 Feb;46(2):179-187. [CrossRef] [Medline]
  3. Wikipedia.   URL: https://en.wikipedia.org/wiki/Wikipedia [accessed 2017-06-11] [WebCite Cache]
  4. Pan S, Stein G, Bayus B, Tang W, Mathews A, Wang C, et al. Systematic review of innovation design contests for health: spurring innovation and mass engagement. BMJ Innov 2017 Sep 27;3(4):227-237. [CrossRef]
  5. Cooper S, Khatib F, Treuille A, Barbero J, Lee J, Beenen M, et al. Predicting protein structures with a multiplayer online game. Nature 2010 Aug 05;466(7307):756-760 [FREE Full text] [CrossRef] [Medline]
  6. Ringh M, Rosenqvist M, Hollenberg J, Jonsson M, Fredman D, Nordberg P, et al. Mobile-phone dispatch of laypersons for CPR in out-of-hospital cardiac arrest. N Engl J Med 2015 Jun 11;372(24):2316-2325. [CrossRef] [Medline]
  7. Zhang Y, Kim JA, Liu F, Tso LS, Tang W, Wei C, et al. Creative contributory contests to spur innovation in sexual health: 2 cases and a guide for implementation. Sex Transm Dis 2015 Nov;42(11):625-628 [FREE Full text] [CrossRef] [Medline]
  8. Hildebrand M, Ahumada C, Watson S. CrowdOutAIDS: crowdsourcing youth perspectives for action. Reprod Health Matters 2013 May;21(41):57-68. [CrossRef] [Medline]
  9. Wolf M, Krause J, Carney PA, Bogart A, Kurvers RH. Collective intelligence meets medical decision-making: the collective outperforms the best radiologist. PLoS One 2015;10(8):e0134269 [FREE Full text] [CrossRef] [Medline]
  10. Merchant RM, Griffis HM, Ha YP, Kilaru AS, Sellers AM, Hershey JC, et al. Hidden in plain sight: a crowdsourced public art contest to make automated external defibrillators more visible. Am J Public Health 2014 Dec;104(12):2306-2312. [CrossRef] [Medline]
  11. Tang W, Han L, Best J, Zhang Y, Mollan K, Kim J, et al. Crowdsourcing HIV testing: a pragmatic, non-inferiority randomized controlled trial in China. 2015 Presented at: IAS 2015; July, 2015; Vancouver. [CrossRef]
  12. Alqahtani BA, El-shoubaki RT, Noorwali FA, Allouh D, Hemalatha M. 2017. Legal and ethical issues of crowdsourcing   URL: http://www.ijcaonline.org/archives/volume167/number10/alqahtani-2017-ijca-914324.pdf [accessed 2018-01-09] [WebCite Cache]
  13. Schmidt FA. 2013. The good, the bad and the ugly   URL: http://florianalexanderschmidt.de/the-good-the-bad-and-the-ugly/ [accessed 2018-01-09] [WebCite Cache]
  14. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010 Sep 20;5:69 [FREE Full text] [CrossRef] [Medline]
  15. Mays N, Roberts E, Popay J. Synthesising research evidence. In: Fulop N, Allen P, Clarke A, Black N, editors. Studying the organisation and delivery of health services: research methods. London: Routledge; 2001:188-220.
  16. Surowiecki J. The wisdom of crowds : why the many are smarter than the few and how collective wisdom shapes business, economies, societies, and nations. New York: Doubleday; 2004.
  17. Stevenson A, editor. Oxford dictionary of English (3 ed). Oxford, England: Oxford University Press; 2014.
  18. Bullinger AC, Moeslein K. 2013 Apr. Innovation Contests - Where are we?   URL: http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1027&context=amcis2010
  19. Carpenter J. May the best analyst win. Science 2011 Feb 11;331(6018):698-699. [CrossRef] [Medline]
  20. Hussein A. Brokering knowledge in biosciences with InnoCentive. Interview by Semahat S. Demir. IEEE Eng Med Biol Mag 2003;22(4):26-27. [Medline]
  21. Norris P. Digital Divide: Civic Engagement, Information Poverty, and the Internet Worldwide. New York: Cambridge University Press; 2001.
  22. Ideascale.   URL: https://ideascale.com/ [accessed 2017-06-12] [WebCite Cache]
  23. Shapira N, Barak A, Gal I. Promoting older adults' well-being through internet training and use. Aging Ment Health 2007 Sep;11(5):477-484. [CrossRef] [Medline]
  24. Smith SM, Ward TB, Schumacher JS. Constraining effects of examples in a creative generation task. Mem Cognit 1993 Nov;21(6):837-845. [Medline]
  25. Marsh RL, Landau JD, Hicks JL. How examples may (and may not) constrain creativity. Mem Cognit 1996 Sep;24(5):669-680. [Medline]
  26. Durward D, Blohm I. Is there PAPA in crowd work? A literature review on ethical dimensions in crowdsourcing. 2016 Presented at: IEEE International Conference on Internet of People; 2016; Toulouse, France. [CrossRef]
  27. Bartow A. Internet defamation as profit center: the monetization of online harassment. Harv JL & Gender 2009;32(2):1-48 [FREE Full text]
  28. Shin J. Morality and internet behavior: a study of the internet troll and its relation with morality on the internet. Chesapeake, VA: Association for the Advancement of Computing in Education; 2008 Presented at: Society for Information Technology & Teacher Education International Conference; 2008; Nevada, Las Vegas.
  29. Bishop J. Representations of 'trolls' in mass media communication: a review of media-texts and moral panics relating to 'internet trolling'. IJWBC 2014;10(1):7-24. [CrossRef]
  30. Freischlad N. Jovoto. 2018. Something about trolls   URL: http://www.jovoto.com/blog/2010/03/something-about-trolls/ [WebCite Cache]
  31. Ekman O. Jovoto. 2018. To rate or not to rate   URL: http://www.jovoto.com/blog/2009/12/to-rate-or-not-to-rate/ [WebCite Cache]
  32. Diaz FL. 2016. Trolling and the first amendment: protecting internet speech in an era of cyberbullies and internet defamation   URL: http://illinoisjltp.com/journal/wp-content/uploads/2016/06/Diaz.pd [accessed 2018-02-16] [WebCite Cache]
  33. Chen L, Xu P, Liu D. New York: Elsevier; 2016. Experts versus the crowd: a comparison of selection mechanisms in crowdsourcing contests   URL: https://tinyurl.com/yamftjow
  34. Aral S. 2013 Dec 19. The Problem with online ratings   URL: https://sloanreview.mit.edu/article/the-problem-with-online-ratings-2/
  35. Ellis-Petersen H. The Guardian. 2016. Boaty McBoatface wins poll to name polar research vessel   URL: https:/​/www.​theguardian.com/​environment/​2016/​apr/​17/​boaty-mcboatface-wins-poll-to-name-polar-research-vessel [accessed 2018-02-20] [WebCite Cache]
  36. Knapton S. The Daily Telegraph. London; 2016. ‘BoatyMcBoatface’ to live on as yellow submarine, science minister Jo Johnson announces   URL: http:/​/www.​telegraph.co.uk/​science/​2016/​05/​06/​boatymcboatface-to-live-on-as-yellow-submarine-science-minister/​ [accessed 2018-02-20] [WebCite Cache]
  37. Yang J, Adamic L. Crowdsourcing and knowledge sharing: strategic user behavior on Taskcn. 2008 Presented at: 9th ACM Conference on Electronic Commerce; 2008; Chicago, USA p. 246-255. [CrossRef]
  38. Daniels N. Just Health Care. Cambridge, UK: Cambridge University Press; 1985.
  39. Issue Lab. And the winner is...: capturing the promise of philanthropic prizes   URL: https:/​/www.​issuelab.org/​resource/​and-the-winner-is-capturing-the-promise-of-philanthropic-prizes.​html [accessed 2017-06-11] [WebCite Cache]
  40. Moldovanu B, Sela A. The optimal allocation of prizes in contests. Am Econ Rev 2001;91(3):542-548. [CrossRef]
  41. Sisak D. Multiple-prize contests - the optimal allocation of prizes. J Econ Surv 2009;23(1):82-114. [CrossRef]
  42. Ross J, Irani L, Silberman S, Zaldivar A, Tomlinson B. Who are the crowdworkers? Shifting demographics in mechanical Turk. In: ACM CHI. 2010 Presented at: 28th International Conference on Human Factors in Computing Systems; 2010; Atlanta, Georgia. [CrossRef]
  43. Kittur A, Nickerson J, Bernstein M, Gerber E, Shaw A, Zimmerman J, et al. The future of crowd work. In: Proceedings of the 2013 conference on Computer supported cooperative work. 2013 Presented at: CSCW '13; 2013; San Antonio. [CrossRef]
  44. Graber MA, Graber A. Internet-based crowdsourcing and research ethics: the case for IRB review. J Med Ethics 2013 Feb;39(2):115-118. [CrossRef] [Medline]
  45. van Ess H. SlideShare. 2010. Crowdsourcing: How to find a crowd   URL: https://www.slideshare.net/searchbistro/harvesting-knowledge-how-to-crowdsource-in-2010 [accessed 2018-02-15] [WebCite Cache]
  46. Faullant R, Fueller J, Hutter K. Fair play: perceived fairness in crowdsourcing communities and its behavioral consequences. Acad Manage Proc 2013 Nov 25;2013(1):15433. [CrossRef]
  47. Mazzola E, Piazza M, Acur N. The Impact of Fairness on the Performance of Crowdsourcing: An Empirical Analysis of Two Intermediate Crowdsourcing Platforms. 2016 Presented at: EURAM Conference Proceedings; 1-4 June, 2016; Paris, France.
  48. Lutz A. Business Insider. New York: Business Insider; 2013. How Durex's Social Media Contest For Condoms Totally Backfired   URL: https://www.businessinsider.in/how-durexs-contest-totally-backfired-2013-6?r=US&IR=T
  49. Wilson M, Robson K, Botha E. Crowdsourcing in a time of empowered stakeholders: lessons from crowdsourcing campaigns. Bus Horiz 2017;60(2):247-253. [CrossRef]
  50. Fixson S, Marion TJ. 2016 Dec 15. A case study of crowdsourcing gone wrong   URL: https://hbr.org/2016/12/a-case-study-of-crowdsourcing-gone-wrong [accessed 2018-02-16] [WebCite Cache]
  51. Dahlander L, Piezunka H. 2017. Why some crowdsourcing efforts work and others don't   URL: https://hbr.org/2017/02/why-some-crowdsourcing-efforts-work-and-others-dont [accessed 2018-02-20] [WebCite Cache]
  52. Standing S, Standing C. The ethical use of crowdsourcing. Bus Ethics 2017 Oct 10;27(1):72-80. [CrossRef]
  53. Golder S, Ahmed S, Norman G, Booth A. Attitudes toward the ethics of research using social media: a systematic review. J Med Internet Res 2017 Jun 06;19(6):e195 [FREE Full text] [CrossRef] [Medline]
  54. Sugarman J, Sulmasy DP. Methods in Medical Ethics, 2nd Ed. Washington DC, US: Georgetown University Press; 2010.
  55. Tucker JD, Meyers K, Best J, Kaplan K, Pendse R, Fenton KA, et al. The HepTestContest: a global innovation contest to identify approaches to hepatitis B and C testing. BMC Infect Dis 2017 Nov 01;17(Suppl 1):701 [FREE Full text] [CrossRef] [Medline]
  56. World Health Organization. Geneva; 2017. WHO guidelines on hepatitis B and C testing   URL: http://apps.who.int/iris/bitstream/10665/254621/1/9789241549981-eng.pdf [accessed 2018-02-20] [WebCite Cache]
  57. SESH Study Group, Tucker JD. Crowdsourcing to promote HIV testing among MSM in China: study protocol for a stepped wedge randomized controlled trial. Trials 2017 Oct 02;18(1):447 [FREE Full text] [CrossRef] [Medline]


IRB: Institutional Review Board
WHO: World Health Organization


Edited by G Eysenbach; submitted 16.06.17; peer-reviewed by M Graber, WH Sun, J Jiang; comments to author 22.11.17; revised version received 09.01.18; accepted 09.01.18; published 09.03.18

Copyright

©Joseph D Tucker, Stephen W Pan, Allison Mathews, Gabriella Stein, Barry Bayus, Stuart Rennie. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.03.2018.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.