%0 Journal Article
%@ 1438-8871
%I JMIR Publications Inc.
%V 17
%N 12
%P e281
%T Assessing Pictograph Recognition: A Comparison of Crowdsourcing and Traditional Survey Approaches
%A Kuang,Jinqiu
%A Argo,Lauren
%A Stoddard,Greg
%A Bray,Bruce E
%A Zeng-Treitler,Qing
%+ Department of Biomedical Informatics, University of Utah, 421 Wakara Way, Suite 140, Salt Lake City, UT, 84108, United States, 1 801 581 4080, Jinqiu.kuang@utah.edu
%K crowdsourcing
%K patient discharge summaries
%K Amazon Mechanical Turk
%K pictograph recognition
%K cardiovascular
%D 2015
%7 17.12.2015
%9 Original Paper
%J J Med Internet Res
%G English
%X Background: Compared to traditional methods of participant recruitment, online crowdsourcing platforms provide a fast and low-cost alternative. Amazon Mechanical Turk (MTurk), a large and well-known crowdsourcing service, has developed into the leading platform for crowdsourcing recruitment. Objective: To explore the application of online crowdsourcing to health informatics research, specifically the testing of medical pictographs. Methods: A set of pictographs created for cardiovascular hospital discharge instructions was tested for recognition. This set of illustrations (n=486) was tested first through an in-person survey in a hospital setting (n=150) and then through an online survey of MTurk participants (n=150). We analyzed the results of both surveys to determine their comparability. Results: Both the demographics and the pictograph recognition rates of the online participants differed from those of the in-person participants. In a multivariable linear regression model comparing the 2 groups, the MTurk group scored significantly higher than the hospital sample after adjusting for demographic characteristics (adjusted mean difference 0.18, 95% CI 0.08-0.28, P<.001). The adjusted mean ratings were 2.95 (95% CI 2.89-3.02) for the in-person hospital sample and 3.14 (95% CI 3.07-3.20) for the online MTurk sample on a 4-point Likert scale (1=totally incorrect, 4=totally correct). Conclusions: The findings suggest that crowdsourcing is a viable complement to traditional in-person surveys, but it cannot replace them.
%M 26678085
%R 10.2196/jmir.4582
%U http://www.jmir.org/2015/12/e281/
%U https://doi.org/10.2196/jmir.4582
%U http://www.ncbi.nlm.nih.gov/pubmed/26678085