@Article{info:doi/10.2196/28368,
  author   = "Leong, Victoria and Raheel, Kausar and Sim, Jia Yi and Kacker, Kriti and Karlaftis, Vasilis M and Vassiliu, Chrysoula and Kalaivanan, Kastoori and Chen, S H Annabel and Robbins, Trevor W and Sahakian, Barbara J and Kourtzi, Zoe",
  title    = "A New Remote Guided Method for Supervised Web-Based Cognitive Testing to Ensure High-Quality Data: Development and Usability Study",
  journal  = "J Med Internet Res",
  year     = "2022",
  month    = "Jan",
  day      = "6",
  volume   = "24",
  number   = "1",
  pages    = "e28368",
  keywords = "web-based testing; neurocognitive assessment; COVID-19; executive functions; learning",
  abstract = "Background: The global COVID-19 pandemic has triggered a fundamental reexamination of how human psychological research can be conducted safely and robustly in a new era of digital working and physical distancing. Online web-based testing has risen to the forefront as a promising solution for the rapid mass collection of cognitive data without requiring human contact. However, a long-standing debate exists over the data quality and validity of web-based studies. This study examines the opportunities and challenges afforded by the societal shift toward web-based testing and highlights an urgent need to establish a standard data quality assurance framework for online studies. Objective: This study aims to develop and validate a new supervised online testing methodology, remote guided testing (RGT). Methods: A total of 85 healthy young adults were tested on 10 cognitive tasks assessing executive functioning (flexibility, memory, and inhibition) and learning. Tasks were administered either face-to-face in the laboratory (n=41) or online using remote guided testing (n=44) and delivered using identical web-based platforms (Cambridge Neuropsychological Test Automated Battery, Inquisit, and i-ABC). Data quality was assessed using detailed trial-level measures (missed trials, outlying and excluded responses, and response times) and overall task performance measures. Results: The results indicated that, across all data quality and performance measures, RGT data were statistically equivalent to in-person data collected in the laboratory (P>.40 for all comparisons). Moreover, RGT participants outperformed the laboratory group on measured verbal intelligence (P<.001), which could reflect test environment differences, including possible effects of mask-wearing on communication. Conclusions: These data suggest that the RGT methodology could help ameliorate concerns regarding online data quality, particularly for studies involving high-risk or rare cohorts, and offer an alternative for collecting high-quality human cognitive data without requiring in-person physical attendance.",
  issn     = "1438-8871",
  doi      = "10.2196/28368",
  url      = "https://www.jmir.org/2022/1/e28368",
  note     = "PMID: 34989691"
}