TY - JOUR AU - Unlu, Ozan AU - Pikcilingis, Aaron AU - Letourneau, Jonathan AU - Landman, Adam AU - Patel, Rajesh AU - Shenoy, S. Erica AU - Hashimoto, Dean AU - Kim, Marvel AU - Pellecer, Johnny AU - Zhang, Haipeng PY - 2024/7/25 TI - Implementation of a Web-Based Chatbot to Guide Hospital Employees in Returning to Work During the COVID-19 Pandemic: Development and Before-and-After Evaluation JO - JMIR Form Res SP - e43119 VL - 8 KW - chatbot KW - return to work KW - employee KW - health care personnel KW - COVID-19 KW - conversational agent KW - occupational health KW - support service KW - health care delivery KW - agile methodology KW - digital intervention KW - digital support KW - work policy KW - hospital staff N2 - Background: Throughout the COVID-19 pandemic, multiple policies and guidelines were issued and updated for health care personnel (HCP) for COVID-19 testing and returning to work after reporting symptoms, exposures, or infection. The high frequency of changes and complexity of the policies made it difficult for HCP to understand when they needed testing and were eligible to return to work (RTW), which increased calls to Occupational Health Services (OHS), creating a need for other tools to guide HCP. Chatbots have been used as novel tools to facilitate immediate responses to patients? and employees? queries about COVID-19, assess symptoms, and guide individuals to appropriate care resources. Objective: This study aims to describe the development of an RTW chatbot and report its impact on demand for OHS support services during the first Omicron variant surge. Methods: This study was conducted at Mass General Brigham, an integrated health care system with over 80,000 employees. The RTW chatbot was developed using an agile design methodology. We mapped the RTW policy into a unified flow diagram that included all required questions and recommendations, then built and tested the chatbot using the Microsoft Azure Healthbot Framework. Using chatbot data and OHS call data from December 10, 2021, to February 17, 2022, we compared OHS resource use before and after the deployment of the RTW chatbot, including the number of calls to the OHS hotline, wait times, call length, and time OHS hotline staff spent on the phone. We also assessed Centers for Disease Control and Prevention data for COVID-19 case trends during the study period. Results: In the 5 weeks post deployment, 5575 users used the RTW chatbot with a mean interaction time of 1 minute and 17 seconds. The highest engagement was on January 25, 2022, with 368 users, which was 2 weeks after the peak of the first Omicron surge in Massachusetts. Among users who completed all the chatbot questions, 461 (71.6%) met the RTW criteria. During the 10 weeks, the median (IQR) number of daily calls that OHS received before and after deployment of the chatbot were 633 (251-934) and 115 (62-167), respectively (U=163; P<.001). The median time from dialing the OHS phone number to hanging up decreased from 28 minutes and 22 seconds (IQR 25:14-31:05) to 6 minutes and 25 seconds (IQR 5:32-7:08) after chatbot deployment (U=169; P<.001). Over the 10 weeks, the median time OHS hotline staff spent on the phone declined from 3 hours and 11 minutes (IQR 2:32-4:15) per day to 47 (IQR 42-54) minutes (U=193; P<.001), saving approximately 16.8 hours per OHS staff member per week. 
Conclusions: Using the agile methodology, a chatbot can be rapidly designed and deployed for employees to efficiently receive guidance regarding RTW that complies with the complex and shifting RTW policies, which may reduce use of OHS resources. UR - https://formative.jmir.org/2024/1/e43119 UR - http://dx.doi.org/10.2196/43119 UR - http://www.ncbi.nlm.nih.gov/pubmed/ ID - info:doi/10.2196/43119 ER - TY - JOUR AU - Bragazzi, Luigi Nicola AU - Garbarino, Sergio PY - 2024/4/16 TI - Assessing the Accuracy of Generative Conversational Artificial Intelligence in Debunking Sleep Health Myths: Mixed Methods Comparative Study With Expert Analysis JO - JMIR Form Res SP - e55762 VL - 8 KW - sleep KW - sleep health KW - sleep-related disbeliefs KW - generative conversational artificial intelligence KW - chatbot KW - ChatGPT KW - misinformation KW - artificial intelligence KW - comparative study KW - expert analysis KW - adequate sleep KW - well-being KW - sleep trackers KW - sleep health education KW - sleep-related KW - chronic disease KW - healthcare cost KW - sleep timing KW - sleep duration KW - presleep behaviors KW - sleep experts KW - healthy behavior KW - public health KW - conversational agents N2 - Background: Adequate sleep is essential for maintaining individual and public health, positively affecting cognition and well-being, and reducing chronic disease risks. It plays a significant role in driving the economy, public safety, and managing health care costs. Digital tools, including websites, sleep trackers, and apps, are key in promoting sleep health education. Conversational artificial intelligence (AI) such as ChatGPT (OpenAI, Microsoft Corp) offers accessible, personalized advice on sleep health but raises concerns about potential misinformation. This underscores the importance of ensuring that AI-driven sleep health information is accurate, given its significant impact on individual and public health, and the spread of sleep-related myths. Objective: This study aims to examine ChatGPT's capability to debunk sleep-related disbeliefs. Methods: A mixed methods design was leveraged. ChatGPT categorized 20 sleep-related myths identified by 10 sleep experts and rated them in terms of falseness and public health significance, on a 5-point Likert scale. Sensitivity, positive predictive value, and interrater agreement were also calculated. A qualitative comparative analysis was also conducted. Results: ChatGPT labeled a significant portion (n=17, 85%) of the statements as "false" (n=9, 45%) or "generally false" (n=8, 40%), with varying accuracy across different domains. For instance, it correctly identified most myths about "sleep timing," "sleep duration," and "behaviors during sleep," while it had varying degrees of success with other categories such as "pre-sleep behaviors" and "brain function and sleep." ChatGPT's assessment of the degree of falseness and public health significance, on the 5-point Likert scale, revealed an average score of 3.45 (SD 0.87) and 3.15 (SD 0.99), respectively, indicating a good level of accuracy in identifying the falseness of statements and a good understanding of their impact on public health. The AI-based tool showed a sensitivity of 85% and a positive predictive value of 100%. Overall, this indicates that when ChatGPT labels a statement as false, it is highly reliable, but it may miss identifying some false statements. 
When comparing with expert ratings, high intraclass correlation coefficients (ICCs) between ChatGPT?s appraisals and expert opinions could be found, suggesting that the AI?s ratings were generally aligned with expert views on falseness (ICC=.83, P<.001) and public health significance (ICC=.79, P=.001) of sleep-related myths. Qualitatively, both ChatGPT and sleep experts refuted sleep-related misconceptions. However, ChatGPT adopted a more accessible style and provided a more generalized view, focusing on broad concepts, while experts sometimes used technical jargon, providing evidence-based explanations. Conclusions: ChatGPT-4 can accurately address sleep-related queries and debunk sleep-related myths, with a performance comparable to sleep experts, even if, given its limitations, the AI cannot completely replace expert opinions, especially in nuanced and complex fields such as sleep health, but can be a valuable complement in the dissemination of updated information and promotion of healthy behaviors. UR - https://formative.jmir.org/2024/1/e55762 UR - http://dx.doi.org/10.2196/55762 UR - http://www.ncbi.nlm.nih.gov/pubmed/38501898 ID - info:doi/10.2196/55762 ER - TY - JOUR AU - Arnold, Virginia AU - Purnat, D. Tina AU - Marten, Robert AU - Pattison, Andrew AU - Gouda, Hebe PY - 2024/3/21 TI - Chatbots and COVID-19: Taking Stock of the Lessons Learned JO - J Med Internet Res SP - e54840 VL - 26 KW - chatbots KW - COVID-19 KW - health KW - public health KW - pandemic KW - health care UR - https://www.jmir.org/2024/1/e54840 UR - http://dx.doi.org/10.2196/54840 UR - http://www.ncbi.nlm.nih.gov/pubmed/38512309 ID - info:doi/10.2196/54840 ER - TY - JOUR AU - Lim, Adrian Wendell AU - Custodio, Razel AU - Sunga, Monica AU - Amoranto, Jayne Abegail AU - Sarmiento, Francis Raymond PY - 2024/1/5 TI - General Characteristics and Design Taxonomy of Chatbots for COVID-19: Systematic Review JO - J Med Internet Res SP - e43112 VL - 26 KW - COVID-19 KW - health chatbot KW - conversational agent in health care KW - artificial intelligence KW - systematic review KW - mobile phone N2 - Background: A conversational agent powered by artificial intelligence, commonly known as a chatbot, is one of the most recent innovations used to provide information and services during the COVID-19 pandemic. However, the multitude of conversational agents explicitly designed during the COVID-19 pandemic calls for characterization and analysis using rigorous technological frameworks and extensive systematic reviews. Objective: This study aims to describe the general characteristics of COVID-19 chatbots and examine their system designs using a modified adapted design taxonomy framework. Methods: We conducted a systematic review of the general characteristics and design taxonomy of COVID-19 chatbots, with 56 studies included in the final analysis. This review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select papers published between March 2020 and April 2022 from various databases and search engines. Results: Results showed that most studies on COVID-19 chatbot design and development worldwide are implemented in Asia and Europe. Most chatbots are also accessible on websites, internet messaging apps, and Android devices. The COVID-19 chatbots are further classified according to their temporal profiles, appearance, intelligence, interaction, and context for system design trends. 
From the temporal profile perspective, almost half of the COVID-19 chatbots interact with users for several weeks for >1 time and can remember information from previous user interactions. From the appearance perspective, most COVID-19 chatbots assume the expert role, are task oriented, and have no visual or avatar representation. From the intelligence perspective, almost half of the COVID-19 chatbots are artificially intelligent and can respond to textual inputs and a set of rules. In addition, more than half of these chatbots operate on a structured flow and do not portray any socioemotional behavior. Most chatbots can also process external data and broadcast resources. Regarding their interaction with users, most COVID-19 chatbots are adaptive, can communicate through text, can react to user input, are not gamified, and do not require additional human support. From the context perspective, all COVID-19 chatbots are goal oriented, although most fall under the health care application domain and are designed to provide information to the user. Conclusions: The conceptualization, development, implementation, and use of COVID-19 chatbots emerged to mitigate the effects of a global pandemic in societies worldwide. This study summarized the current system design trends of COVID-19 chatbots based on 5 design perspectives, which may help developers conveniently choose a future-proof chatbot archetype that will meet the needs of the public in the face of growing demand for a better pandemic response. UR - https://www.jmir.org/2024/1/e43112 UR - http://dx.doi.org/10.2196/43112 UR - http://www.ncbi.nlm.nih.gov/pubmed/38064638 ID - info:doi/10.2196/43112 ER - TY - JOUR AU - Xue, Jia AU - Zhang, Bolun AU - Zhao, Yaxi AU - Zhang, Qiaoru AU - Zheng, Chengda AU - Jiang, Jielin AU - Li, Hanjia AU - Liu, Nian AU - Li, Ziqian AU - Fu, Weiying AU - Peng, Yingdong AU - Logan, Judith AU - Zhang, Jingwen AU - Xiang, Xiaoling PY - 2023/12/19 TI - Evaluation of the Current State of Chatbots for Digital Health: Scoping Review JO - J Med Internet Res SP - e47217 VL - 25 KW - artificial intelligence KW - chatbot KW - health KW - mental health KW - suicide KW - suicidal KW - conversational capacity KW - relational capacity KW - personalization KW - in-app reviews KW - experience KW - experiences KW - scoping KW - review methods KW - review methodology KW - chatbots KW - conversational agent KW - conversational agents N2 - Background: Chatbots have become ubiquitous in our daily lives, enabling natural language conversations with users through various modes of communication. Chatbots have the potential to play a significant role in promoting health and well-being. As the number of studies and available products related to chatbots continues to rise, there is a critical need to assess product features to enhance the design of chatbots that effectively promote health and behavioral change. Objective: This scoping review aims to provide a comprehensive assessment of the current state of health-related chatbots, including the chatbots? characteristics and features, user backgrounds, communication models, relational building capacity, personalization, interaction, responses to suicidal thoughts, and users? in-app experiences during chatbot use. Through this analysis, we seek to identify gaps in the current research, guide future directions, and enhance the design of health-focused chatbots. 
Methods: Following the scoping review methodology by Arksey and O'Malley and guided by the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) checklist, this study used a two-pronged approach to identify relevant chatbots: (1) searching the iOS and Android App Stores and (2) reviewing scientific literature through a search strategy designed by a librarian. Overall, 36 chatbots were selected based on predefined criteria from both sources. These chatbots were systematically evaluated using a comprehensive framework developed for this study, including chatbot characteristics, user backgrounds, building relational capacity, personalization, interaction models, responses to critical situations, and user experiences. Ten coauthors were responsible for downloading and testing the chatbots, coding their features, and evaluating their performance in simulated conversations. The testing of all chatbot apps was limited to their free-to-use features. Results: This review provides an overview of the diversity of health-related chatbots, encompassing categories such as mental health support, physical activity promotion, and behavior change interventions. Chatbots use text, animations, speech, images, and emojis for communication. The findings highlight variations in conversational capabilities, including empathy, humor, and personalization. Notably, concerns regarding safety, particularly in addressing suicidal thoughts, were evident. Approximately 44% (16/36) of the chatbots effectively addressed suicidal thoughts. User experiences and behavioral outcomes demonstrated the potential of chatbots in health interventions, but evidence remains limited. Conclusions: This scoping review underscores the significance of chatbots in health-related applications and offers insights into their features, functionalities, and user experiences. This study contributes to advancing the understanding of chatbots? role in digital health interventions, thus paving the way for more effective and user-centric health promotion strategies. This study informs future research directions, emphasizing the need for rigorous randomized control trials, standardized evaluation metrics, and user-centered design to unlock the full potential of chatbots in enhancing health and well-being. Future research should focus on addressing limitations, exploring real-world user experiences, and implementing robust data security and privacy measures. UR - https://www.jmir.org/2023/1/e47217 UR - http://dx.doi.org/10.2196/47217 UR - http://www.ncbi.nlm.nih.gov/pubmed/38113097 ID - info:doi/10.2196/47217 ER - TY - JOUR AU - Wang, Guoyong AU - Gao, Kai AU - Liu, Qianyang AU - Wu, Yuxin AU - Zhang, Kaijun AU - Zhou, Wei AU - Guo, Chunbao PY - 2023/12/14 TI - Potential and Limitations of ChatGPT 3.5 and 4.0 as a Source of COVID-19 Information: Comprehensive Comparative Analysis of Generative and Authoritative Information JO - J Med Internet Res SP - e49771 VL - 25 KW - ChatGPT 3.5 KW - ChatGPT 4.0 KW - artificial intelligence KW - AI KW - COVID-19 KW - pandemic KW - public health KW - information retrieval N2 - Background: The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has necessitated reliable and authoritative information for public guidance. The World Health Organization (WHO) has been a primary source of such information, disseminating it through a question and answer format on its official website. 
Concurrently, ChatGPT 3.5 and 4.0, a deep learning-based natural language generation system, has shown potential in generating diverse text types based on user input. Objective: This study evaluates the accuracy of COVID-19 information generated by ChatGPT 3.5 and 4.0, assessing its potential as a supplementary public information source during the pandemic. Methods: We extracted 487 COVID-19-related questions from the WHO's official website and used ChatGPT 3.5 and 4.0 to generate corresponding answers. These generated answers were then compared against the official WHO responses for evaluation. Two clinical experts scored the generated answers on a scale of 0-5 across 4 dimensions (accuracy, comprehensiveness, relevance, and clarity), with higher scores indicating better performance in each dimension. The WHO responses served as the reference for this assessment. Additionally, we used the BERT (Bidirectional Encoder Representations from Transformers) model to generate similarity scores (0-1) between the generated and official answers, providing a dual validation mechanism. Results: The mean (SD) scores for ChatGPT 3.5-generated answers were 3.47 (0.725) for accuracy, 3.89 (0.719) for comprehensiveness, 4.09 (0.787) for relevance, and 3.49 (0.809) for clarity. For ChatGPT 4.0, the mean (SD) scores were 4.15 (0.780), 4.47 (0.641), 4.56 (0.600), and 4.09 (0.698), respectively. All differences were statistically significant (P<.001), with ChatGPT 4.0 outperforming ChatGPT 3.5. The BERT model verification showed mean (SD) similarity scores of 0.83 (0.07) for ChatGPT 3.5 and 0.85 (0.07) for ChatGPT 4.0 compared with the official WHO answers. Conclusions: ChatGPT 3.5 and 4.0 can generate accurate and relevant COVID-19 information to a certain extent. However, compared with official WHO responses, gaps and deficiencies exist. Thus, users of ChatGPT 3.5 and 4.0 should also reference other reliable information sources to mitigate potential misinformation risks. Notably, ChatGPT 4.0 outperformed ChatGPT 3.5 across all evaluated dimensions, a finding corroborated by BERT model validation. UR - https://www.jmir.org/2023/1/e49771 UR - http://dx.doi.org/10.2196/49771 UR - http://www.ncbi.nlm.nih.gov/pubmed/38096014 ID - info:doi/10.2196/49771 ER - TY - JOUR AU - Singh, Akanksha AU - Schooley, Benjamin AU - Patel, Nitin PY - 2023/12/14 TI - Effects of User-Reported Risk Factors and Follow-Up Care Activities on Satisfaction With a COVID-19 Chatbot: Cross-Sectional Study JO - JMIR Mhealth Uhealth SP - e43105 VL - 11 KW - patient engagement KW - chatbot KW - population health KW - health recommender systems KW - conversational recommender systems KW - design factors KW - COVID-19 N2 - Background: The COVID-19 pandemic influenced many to consider methods to reduce human contact and ease the burden placed on health care workers. Conversational agents or chatbots are a set of technologies that may aid with these challenges. They may provide useful interactions for users, potentially reducing the health care worker burden while increasing user satisfaction. Research aims to understand these potential impacts of chatbots and conversational recommender systems and their associated design features. Objective: The objective of this study was to evaluate user perceptions of the helpfulness of an artificial intelligence chatbot that was offered free to the public in response to COVID-19. 
The chatbot engaged patients and provided educational information and the opportunity to report symptoms, understand personal risks, and receive referrals for care. Methods: A cross-sectional study design was used to analyze 82,222 chats collected from patients in South Carolina seeking services from the Prisma Health system. Chi-square tests and multinomial logistic regression analyses were conducted to assess the relationship between reported risk factors and perceived chat helpfulness using chats started between April 24, 2020, and April 21, 2022. Results: A total of 82,222 chat series were started with at least one question or response on record; 53,805 symptom checker questions with at least one COVID-19?related activity series were completed, with 5191 individuals clicking further to receive a virtual video visit and 2215 clicking further to make an appointment with a local physician. Patients who were aged >65 years (P<.001), reported comorbidities (P<.001), had been in contact with a person with COVID-19 in the last 14 days (P<.001), and responded to symptom checker questions that placed them at a higher risk of COVID-19 (P<.001) were 1.8 times more likely to report the chat as helpful than those who reported lower risk factors. Users who engaged with the chatbot to conduct a series of activities were more likely to find the chat helpful (P<.001), including seeking COVID-19 information (3.97-4.07 times), in-person appointments (2.46-1.99 times), telehealth appointments with a nearby provider (2.48-1.9 times), or vaccination (2.9-3.85 times) compared with those who did not perform any of these activities. Conclusions: Chatbots that are designed to target high-risk user groups and provide relevant actionable items may be perceived as a helpful approach to early contact with the health system for assessing communicable disease symptoms and follow-up care options at home before virtual or in-person contact with health care providers. The results identified and validated significant design factors for conversational recommender systems, including triangulating a high-risk target user population and providing relevant actionable items for users to choose from as part of user engagement. 
UR - https://mhealth.jmir.org/2023/1/e43105 UR - http://dx.doi.org/10.2196/43105 UR - http://www.ncbi.nlm.nih.gov/pubmed/38096007 ID - info:doi/10.2196/43105 ER - TY - JOUR AU - Loveys, Kate AU - Lloyd, Erica AU - Sagar, Mark AU - Broadbent, Elizabeth PY - 2023/12/5 TI - Development of a Virtual Human for Supporting Tobacco Cessation During the COVID-19 Pandemic JO - J Med Internet Res SP - e42310 VL - 25 KW - virtual human KW - conversational agent KW - tobacco cessation KW - eHealth KW - COVID-19 KW - public health KW - virtual health worker KW - smoking cessation KW - artificial intelligence KW - AI KW - chatbot KW - digital health intervention KW - web-based health KW - mobile phone UR - https://www.jmir.org/2023/1/e42310 UR - http://dx.doi.org/10.2196/42310 UR - http://www.ncbi.nlm.nih.gov/pubmed/38051571 ID - info:doi/10.2196/42310 ER - TY - JOUR AU - Lou, Pei AU - Fang, An AU - Zhao, Wanqing AU - Yao, Kuanda AU - Yang, Yusheng AU - Hu, Jiahui PY - 2023/10/20 TI - Potential Target Discovery and Drug Repurposing for Coronaviruses: Study Involving a Knowledge Graph-Based Approach JO - J Med Internet Res SP - e45225 VL - 25 KW - coronavirus KW - heterogeneous data integration KW - knowledge graph embedding KW - drug repurposing KW - interpretable prediction KW - COVID-19 N2 - Background: The global pandemics of severe acute respiratory syndrome, Middle East respiratory syndrome, and COVID-19 have caused unprecedented crises for public health. Coronaviruses are constantly evolving, and it is unknown which new coronavirus will emerge and when the next coronavirus will sweep across the world. Knowledge graphs are expected to help discover the pathogenicity and transmission mechanism of viruses. Objective: The aim of this study was to discover potential targets and candidate drugs to repurpose for coronaviruses through a knowledge graph-based approach. Methods: We propose a computational and evidence-based knowledge discovery approach to identify potential targets and candidate drugs for coronaviruses from biomedical literature and well-known knowledge bases. To organize the semantic triples extracted automatically from biomedical literature, a semantic conversion model was designed. The literature knowledge was associated and integrated with existing drug and gene knowledge through semantic mapping, and the coronavirus knowledge graph (CovKG) was constructed. We adopted both the knowledge graph embedding model and the semantic reasoning mechanism to discover unrecorded mechanisms of drug action as well as potential targets and drug candidates. Furthermore, we have provided evidence-based support with a scoring and backtracking mechanism. Results: The constructed CovKG contains 17,369,620 triples, of which 641,195 were extracted from biomedical literature, covering 13,065 concept unique identifiers, 209 semantic types, and 97 semantic relations of the Unified Medical Language System. Through multi-source knowledge integration, 475 drugs and 262 targets were mapped to existing knowledge, and 41 new drug mechanisms of action were found by semantic reasoning, which were not recorded in the existing knowledge base. Among the knowledge graph embedding models, TransR outperformed others (mean reciprocal rank=0.2510, Hits@10=0.3505). A total of 33 potential targets and 18 drug candidates were identified for coronaviruses. 
Among them, 7 novel drugs (ie, quinine, nelfinavir, ivermectin, asunaprevir, tylophorine, Artemisia annua extract, and resveratrol) and 3 highly ranked targets (ie, angiotensin converting enzyme 2, transmembrane serine protease 2, and M protein) were further discussed. Conclusions: We showed the effectiveness of a knowledge graph-based approach in potential target discovery and drug repurposing for coronaviruses. Our approach can be extended to other viruses or diseases for biomedical knowledge discovery and relevant applications. UR - https://www.jmir.org/2023/1/e45225 UR - http://dx.doi.org/10.2196/45225 UR - http://www.ncbi.nlm.nih.gov/pubmed/37862061 ID - info:doi/10.2196/45225 ER - TY - JOUR AU - Kang, Annie AU - Hetrick, Sarah AU - Cargo, Tania AU - Hopkins, Sarah AU - Ludin, Nicola AU - Bodmer, Sarah AU - Stevenson, Kiani AU - Holt-Quick, Chester AU - Stasiak, Karolina PY - 2023/10/12 TI - Exploring Young Adults' Views About Aroha, a Chatbot for Stress Associated With the COVID-19 Pandemic: Interview Study Among Students JO - JMIR Form Res SP - e44556 VL - 7 KW - chatbot KW - mental health KW - COVID-19 KW - young adults KW - acceptability KW - qualitative methods N2 - Background: In March 2020, New Zealand was plunged into its first nationwide lockdown to halt the spread of COVID-19. Our team rapidly adapted our existing chatbot platform to create Aroha, a well-being chatbot intended to address the stress experienced by young people aged 13 to 24 years in the early phase of the pandemic. Aroha was made available nationally within 2 weeks of the lockdown and continued to be available throughout 2020. Objective: In this study, we aimed to evaluate the acceptability and relevance of the chatbot format and Aroha's content in young adults and to identify areas for improvement. Methods: We conducted qualitative in-depth and semistructured interviews with young adults as well as in situ demonstrations of Aroha to elicit immediate feedback. Interviews were recorded, transcribed, and analyzed using thematic analysis assisted by NVivo (version 12; QSR International). Results: A total of 15 young adults (age in years: median 20; mean 20.07, SD 3.17; female students: n=13, 87%; male students: n=2, 13%; all tertiary students) were interviewed in person. Participants spoke of the challenges of living during the lockdown, including social isolation, loss of motivation, and the demands of remote work or study, although some were able to find silver linings. Aroha was well liked for sounding like a "real person" and peer with its friendly local "Kiwi" communication style, rather than an authoritative adult or counselor. The chatbot was praised for including content that went beyond traditional mental health advice. Participants particularly enjoyed the modules on gratitude, being active, anger management, job seeking, and how to deal with alcohol and drugs. Aroha was described as being more accessible than traditional mental health counseling and resources. It was an appealing option for those who did not want to talk to someone in person for fear of the stigma associated with mental health. However, participants disliked the software bugs. They also wanted a more sophisticated conversational interface where they could express themselves and "vent" in free text. There were several suggestions for making Aroha more relevant to a diverse range of users, including developing content on navigating relationships and diverse chatbot avatars. 
Conclusions: Chatbots are an acceptable format for scaling up the delivery of public mental health and well-being?enhancing strategies. We make the following recommendations for others interested in designing and rolling out mental health chatbots to better support young people: make the chatbot relatable to its target audience by working with them to develop an authentic and relevant communication style; consider including holistic health and lifestyle content beyond traditional ?mental health? support; and focus on developing features that make users feel heard, understood, and empowered. UR - https://formative.jmir.org/2023/1/e44556 UR - http://dx.doi.org/10.2196/44556 UR - http://www.ncbi.nlm.nih.gov/pubmed/37527545 ID - info:doi/10.2196/44556 ER - TY - JOUR AU - Rambaud, Kimberly AU - van Woerden, Simon AU - Palumbo, Leonardo AU - Salvi, Cristiana AU - Smallwood, Catherine AU - Rockenschaub, Gerald AU - Okoliyski, Michail AU - Marinova, Lora AU - Fomaidi, Galina AU - Djalalova, Malika AU - Faruqui, Nabiha AU - Melo Bianco, Viviane AU - Mosquera, Mario AU - Spasov, Ivaylo AU - Totskaya, Yekaterina PY - 2023/10/10 TI - Building a Chatbot in a Pandemic JO - J Med Internet Res SP - e42960 VL - 25 KW - COVID-19 KW - chatbots KW - evidence-based communication channels KW - conversational agent KW - user-centered KW - health promotion KW - digital health intervention KW - online health information KW - digital health tool KW - health communication UR - https://www.jmir.org/2023/1/e42960 UR - http://dx.doi.org/10.2196/42960 UR - http://www.ncbi.nlm.nih.gov/pubmed/37074958 ID - info:doi/10.2196/42960 ER - TY - JOUR AU - Andrews, Emma Nicole AU - Ireland, David AU - Vijayakumar, Pranavie AU - Burvill, Lyza AU - Hay, Elizabeth AU - Westerman, Daria AU - Rose, Tanya AU - Schlumpf, Mikaela AU - Strong, Jenny AU - Claus, Andrew PY - 2023/10/6 TI - Acceptability of a Pain History Assessment and Education Chatbot (Dolores) Across Age Groups in Populations With Chronic Pain: Development and Pilot Testing JO - JMIR Form Res SP - e47267 VL - 7 KW - chronic pain KW - education KW - neurophysiology KW - neuroscience KW - conversation agent KW - chatbot KW - age KW - young adult KW - adolescence KW - adolescent KW - pain KW - patient education KW - usability KW - acceptability KW - mobile health KW - mHealth KW - mobile app KW - health app KW - youth KW - mobile phone N2 - Background: The delivery of education on pain neuroscience and the evidence for different treatment approaches has become a key component of contemporary persistent pain management. Chatbots, or more formally conversation agents, are increasingly being used in health care settings due to their versatility in providing interactive and individualized approaches to both capture and deliver information. Research focused on the acceptability of diverse chatbot formats can assist in developing a better understanding of the educational needs of target populations. Objective: This study aims to detail the development and initial pilot testing of a multimodality pain education chatbot (Dolores) that can be used across different age groups and investigate whether acceptability and feedback were comparable across age groups following pilot testing. Methods: Following an initial design phase involving software engineers (n=2) and expert clinicians (n=6), a total of 60 individuals with chronic pain who attended an outpatient clinic at 1 of 2 pain centers in Australia were recruited for pilot testing. 
The 60 individuals consisted of 20 (33%) adolescents (aged 10-18 years), 20 (33%) young adults (aged 19-35 years), and 20 (33%) adults (aged >35 years) with persistent pain. Participants spent 20 to 30 minutes completing interactive chatbot activities that enabled the Dolores app to gather a pain history and provide education about pain and pain treatments. After the chatbot activities, participants completed a custom-made feedback questionnaire measuring the acceptability constructs pertaining to health education chatbots. To determine the effect of age group on the acceptability ratings and feedback provided, a series of binomial logistic regression models and cumulative odds ordinal logistic regression models with proportional odds were generated. Results: Overall, acceptability was high for the following constructs: engagement, perceived value, usability, accuracy, responsiveness, adoption intention, esthetics, and overall quality. The effect of age group on all acceptability ratings was small and not statistically significant. An analysis of open-ended question responses revealed that major frustrations with the app were related to Dolores' speech, which was explored further through a comparative analysis. With respect to providing negative feedback about Dolores' speech, a logistic regression model showed that the effect of age group was statistically significant (χ²₂=11.7; P=.003) and explained 27.1% of the variance (Nagelkerke R2). Adults and young adults were less likely to comment on Dolores' speech compared with adolescent participants (odds ratio 0.20, 95% CI 0.05-0.84 and odds ratio 0.05, 95% CI 0.01-0.43, respectively). Comments were related to both speech rate (too slow) and quality (unpleasant and robotic). Conclusions: This study provides support for the acceptability of pain history and education chatbots across different age groups. Chatbot acceptability for adolescent cohorts may be improved by enabling the self-selection of speech characteristics such as rate and personable tone. UR - https://formative.jmir.org/2023/1/e47267 UR - http://dx.doi.org/10.2196/47267 UR - http://www.ncbi.nlm.nih.gov/pubmed/37801342 ID - info:doi/10.2196/47267 ER - TY - JOUR AU - Passanante, Aly AU - Pertwee, Ed AU - Lin, Leesa AU - Lee, Yoonsup Kristi AU - Wu, T. Joseph AU - Larson, J. Heidi PY - 2023/10/3 TI - Conversational AI and Vaccine Communication: Systematic Review of the Evidence JO - J Med Internet Res SP - e42758 VL - 25 KW - chatbots KW - artificial intelligence KW - conversational AI KW - vaccine communication KW - vaccine hesitancy KW - conversational agent KW - COVID-19 KW - vaccine information KW - health information N2 - Background: Since the mid-2010s, use of conversational artificial intelligence (AI; chatbots) in health care has expanded significantly, especially in the context of increased burdens on health systems and restrictions on in-person consultations with health care providers during the COVID-19 pandemic. One emerging use for conversational AI is to capture evolving questions and communicate information about vaccines and vaccination. Objective: The objective of this systematic review was to examine documented uses and evidence on the effectiveness of conversational AI for vaccine communication. Methods: This systematic review was conducted following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. 
PubMed, Web of Science, PsycINFO, MEDLINE, Scopus, CINAHL Complete, Cochrane Library, Embase, Epistemonikos, Global Health, Global Index Medicus, Academic Search Complete, and the University of London library database were searched for papers on the use of conversational AI for vaccine communication. The inclusion criteria were studies that included (1) documented instances of conversational AI being used for the purpose of vaccine communication and (2) evaluation data on the impact and effectiveness of the intervention. Results: After duplicates were removed, the review identified 496 unique records, which were then screened by title and abstract, of which 38 were identified for full-text review. Seven fit the inclusion criteria and were assessed and summarized in the findings of this review. Overall, vaccine chatbots deployed to date have been relatively simple in their design and have mainly been used to provide factual information to users in response to their questions about vaccines. Additionally, chatbots have been used for vaccination scheduling, appointment reminders, debunking misinformation, and, in some cases, for vaccine counseling and persuasion. Available evidence suggests that chatbots can have a positive effect on vaccine attitudes; however, studies were typically exploratory in nature, and some lacked a control group or had very small sample sizes. Conclusions: The review found evidence of potential benefits from conversational AI for vaccine communication. Factors that may contribute to the effectiveness of vaccine chatbots include their ability to provide credible and personalized information in real time, the familiarity and accessibility of the chatbot platform, and the extent to which interactions with the chatbot feel ?natural? to users. However, evaluations have focused on the short-term, direct effects of chatbots on their users. The potential longer-term and societal impacts of conversational AI have yet to be analyzed. In addition, existing studies do not adequately address how ethics apply in the field of conversational AI around vaccines. In a context where further digitalization of vaccine communication can be anticipated, additional high-quality research will be required across all these areas. 
UR - https://www.jmir.org/2023/1/e42758 UR - http://dx.doi.org/10.2196/42758 UR - http://www.ncbi.nlm.nih.gov/pubmed/37788057 ID - info:doi/10.2196/42758 ER - TY - JOUR AU - Fournier-Tombs, Eleonore AU - McHardy, Juliette PY - 2023/7/26 TI - A Medical Ethics Framework for Conversational Artificial Intelligence JO - J Med Internet Res SP - e43068 VL - 25 KW - chatbot KW - medicine KW - ethics KW - AI ethics KW - AI policy KW - conversational agent KW - COVID-19 KW - risk KW - medical ethics KW - privacy KW - data governance KW - artificial intelligence UR - https://www.jmir.org/2023/1/e43068 UR - http://dx.doi.org/10.2196/43068 UR - http://www.ncbi.nlm.nih.gov/pubmed/37224277 ID - info:doi/10.2196/43068 ER - TY - JOUR AU - Mesko, Bertalan PY - 2023/6/22 TI - The ChatGPT (Generative Artificial Intelligence) Revolution Has Made Artificial Intelligence Approachable for Medical Professionals JO - J Med Internet Res SP - e48392 VL - 25 KW - artificial intelligence KW - digital health KW - future KW - technology KW - ChatGPT KW - medical practice KW - large language model KW - language model KW - generative KW - conversational agent KW - conversation agents KW - chatbot KW - generated text KW - computer generated KW - medical education KW - continuing education KW - professional development KW - curriculum KW - curricula UR - https://www.jmir.org/2023/1/e48392 UR - http://dx.doi.org/10.2196/48392 UR - http://www.ncbi.nlm.nih.gov/pubmed/37347508 ID - info:doi/10.2196/48392 ER - TY - JOUR AU - Phiri, Millie AU - Munoriyarwa, Allen PY - 2023/6/14 TI - Health Chatbots in Africa: Scoping Review JO - J Med Internet Res SP - e35573 VL - 25 KW - chatbots KW - health KW - Africa KW - technology KW - artificial intelligence KW - chatbot KW - health promotion KW - health database KW - World Health Organization KW - WHO KW - rural area KW - epidemiology KW - vulnerable population KW - health sector KW - Cochrane database N2 - Background: This scoping review explores and summarizes the existing literature on the use of chatbots to support and promote health in Africa. Objective: The primary aim was to learn where, and under what circumstances, chatbots have been used effectively for health in Africa; how chatbots have been developed to the best effect; and how they have been evaluated by looking at literature published between 2017 and 2022. A secondary aim was to identify potential lessons and best practices for other chatbots. The review also aimed to highlight directions for future research on the use of chatbots for health in Africa. Methods: Using the 2005 Arksey and O'Malley framework, we used a Boolean search to broadly search literature published between January 2017 and July 2022. Literature between June 2021 and July 2022 was identified using Google Scholar, EBSCO information services (which includes the African HealthLine, PubMed, MEDLINE, PsycInfo, Cochrane, Embase, Scopus, and Web of Science databases), and other internet sources (including gray literature). The inclusion criteria were literature about health chatbots in Africa published in journals, conference papers, opinion, or white papers. Results: In all, 212 records were screened, and 12 articles met the inclusion criteria. Results were analyzed according to the themes they covered. The themes identified included the purpose of the chatbot as either providing an educational or information-sharing service or providing a counselling service. Accessibility as a result of either technical restrictions or language restrictions was also noted. 
Other themes that were identified included the need for the consideration of trust, privacy and ethics, and evaluation. Conclusions: The findings demonstrate that current data are insufficient to show whether chatbots are effectively supporting health in the region. However, the review does reveal insights into popular chatbots and the need to make them accessible through language considerations, platform choice, and user trust, as well as the importance of robust evaluation frameworks to assess their impact. The review also provides recommendations on the direction of future research. UR - https://www.jmir.org/2023/1/e35573 UR - http://dx.doi.org/10.2196/35573 UR - http://www.ncbi.nlm.nih.gov/pubmed/35584083 ID - info:doi/10.2196/35573 ER - TY - JOUR AU - Jackson-Triche, Maga AU - Vetal, Don AU - Turner, Eva-Marie AU - Dahiya, Priya AU - Mangurian, Christina PY - 2023/6/8 TI - Meeting the Behavioral Health Needs of Health Care Workers During COVID-19 by Leveraging Chatbot Technology: Development and Usability Study JO - J Med Internet Res SP - e40635 VL - 25 KW - chatbot technology KW - health care workers KW - mental health equity KW - COVID-19 KW - mental health chatbot KW - behavioral health treatment KW - mental health screening KW - telehealth KW - psychoeducation KW - employee support N2 - Background: During the COVID-19 pandemic, health care systems were faced with the urgent need to implement strategies to address the behavioral health needs of health care workers. A primary concern of any large health care system is developing an easy-to-access, streamlined system of triage and support despite limited behavioral health resources. Objective: This study provides a detailed description of the design and implementation of a chatbot program designed to triage and facilitate access to behavioral health assessment and treatment for the workforce of a large academic medical center. The University of California, San Francisco (UCSF) Faculty, Staff, and Trainee Coping and Resiliency Program (UCSF Cope) aimed to provide timely access to a live telehealth navigator for triage and live telehealth assessment and treatment, curated web-based self-management tools, and nontreatment support groups for those experiencing stress related to their unique roles. Methods: In a public-private partnership, the UCSF Cope team built a chatbot to triage employees based on behavioral health needs. The chatbot is an algorithm-based, automated, and interactive artificial intelligence conversational tool that uses natural language understanding to engage users by presenting a series of questions with simple multiple-choice answers. The goal of each chatbot session was to guide users to services that were appropriate for their needs. Designers developed a chatbot data dashboard to identify and follow trends directly through the chatbot. Regarding other program elements, website user data were collected monthly and participant satisfaction was gathered for each nontreatment support group. Results: The UCSF Cope chatbot was rapidly developed and launched on April 20, 2020. As of May 31, 2022, a total of 10.88% (3785/34,790) of employees accessed the technology. Among those reporting any form of psychological distress, 39.7% (708/1783) of employees requested in-person services, including those who had an existing provider. UCSF employees responded positively to all program elements. 
As of May 31, 2022, the UCSF Cope website had 615,334 unique users, with 66,585 unique views of webinars and 601,471 unique views of video shorts. All units across UCSF were reached by UCSF Cope staff for special interventions, with >40 units requesting these services. Town halls were particularly well received, with >80% of attendees reporting the experience as helpful. Conclusions: UCSF Cope used chatbot technology to incorporate individualized behavioral health triage, assessment, treatment, and general emotional support for an entire employee base (N=34,790). This level of triage for a population of this size would not have been possible without the use of chatbot technology. The UCSF Cope model has the potential to be scaled, adapted, and implemented across both academically and nonacademically affiliated medical settings. UR - https://www.jmir.org/2023/1/e40635 UR - http://dx.doi.org/10.2196/40635 UR - http://www.ncbi.nlm.nih.gov/pubmed/37146178 ID - info:doi/10.2196/40635 ER - TY - JOUR AU - Islam, Ashraful AU - Chaudhry, Moalla Beenish PY - 2023/6/8 TI - Design Validation of a Relational Agent by COVID-19 Patients: Mixed Methods Study JO - JMIR Hum Factors SP - e42740 VL - 10 KW - COVID-19 KW - relational agent KW - mHealth KW - design validation KW - health care KW - chatbot KW - digital health intervention KW - health care professional KW - heuristic KW - health promotion KW - mental well-being KW - design validation survey KW - self-isolation N2 - Background: Relational agents (RAs) have shown effectiveness in various health interventions with and without doctors and hospital facilities. In situations such as a pandemic like the COVID-19 pandemic when health care professionals (HCPs) and facilities are unable to cope with increased demands, RAs may play a major role in ameliorating the situation. However, they have not been well explored in this domain. Objective: This study aimed to design a prototypical RA in collaboration with COVID-19 patients and HCPs and test it with the potential users, for its ability to deliver services during a pandemic. Methods: The RA was designed and developed in collaboration with people with COVID-19 (n=21) and 2 groups of HCPs (n=19 and n=16, respectively) to aid COVID-19 patients at various stages by performing 4 main tasks: testing guidance, support during self-isolation, handling emergency situations, and promoting postrecovery mental well-being. A design validation survey was conducted with 98 individuals to evaluate the usability of the prototype using the System Usability Scale (SUS), and the participants provided feedback on the design. In addition, the RA?s usefulness and acceptability were rated by the participants using Likert scales. Results: In the design validation survey, the prototypical RA received an average SUS score of 58.82. Moreover, 90% (88/98) of participants perceived it to be helpful, and 69% (68/98) of participants accepted it as a viable alternative to HCPs. The prototypical RA received favorable feedback from the participants, and they were inclined to accept it as an alternative to HCPs in non-life-threatening scenarios despite the usability rating falling below the acceptable threshold. Conclusions: Based on participants? feedback, we recommend further development of the RA with improved automation and emotional support, ability to provide information, tracking, and specific recommendations. 
UR - https://humanfactors.jmir.org/2023/1/e42740 UR - http://dx.doi.org/10.2196/42740 UR - http://www.ncbi.nlm.nih.gov/pubmed/36350760 ID - info:doi/10.2196/42740 ER - TY - JOUR AU - Nehme, Mayssam AU - Schneider, Franck AU - Perrin, Anne AU - Sum Yu, Wing AU - Schmitt, Simon AU - Violot, Guillemette AU - Ducrot, Aurelie AU - Tissandier, Frederique AU - Posfay-Barbe, Klara AU - Guessous, Idris PY - 2023/6/5 TI - The Development of a Chatbot Technology to Disseminate Post-COVID-19 Information: Descriptive Implementation Study JO - J Med Internet Res SP - e43113 VL - 25 KW - COVID-19 KW - post-COVID-19 KW - long COVID KW - PASC KW - postacute sequelae of SARS-CoV-2 KW - chatbot KW - medical technology KW - online platform KW - information KW - communication KW - dissemination KW - disease management KW - conversational agent KW - digital surveillance KW - pediatric KW - children KW - caregiver N2 - Background: Post-COVID-19, or long COVID, has now affected millions of individuals, resulting in fatigue, neurocognitive symptoms, and an impact on daily life. The uncertainty of knowledge around this condition, including its overall prevalence, pathophysiology, and management, along with the growing numbers of affected individuals, has created an essential need for information and disease management. This has become even more critical in a time of abundant online misinformation and potential misleading of patients and health care professionals. Objective: The RAFAEL platform is an ecosystem created to address the information about and management of post-COVID-19, integrating online information, webinars, and chatbot technology to answer a large number of individuals in a time- and resource-limited setting. This paper describes the development and deployment of the RAFAEL platform and chatbot in addressing post-COVID-19 in children and adults. Methods: The RAFAEL study took place in Geneva, Switzerland. The RAFAEL platform and chatbot were made available online, and all users were considered participants of this study. The development phase started in December 2020 and included developing the concept, the backend, and the frontend, as well as beta testing. The specific strategy behind the RAFAEL chatbot balanced an accessible interactive approach with medical safety, aiming to relay correct and verified information for the management of post-COVID-19. Development was followed by deployment with the establishment of partnerships and communication strategies in the French-speaking world. The use of the chatbot and the answers provided were continuously monitored by community moderators and health care professionals, creating a safe fallback for users. Results: To date, the RAFAEL chatbot has had 30,488 interactions, with a 79.6% (6417/8061) matching rate and a 73.2% (n=1795) positive feedback rate out of the 2451 users who provided feedback. Overall, 5807 unique users interacted with the chatbot, with 5.1 interactions per user, on average, and 8061 stories triggered. The use of the RAFAEL chatbot and platform was additionally driven by the monthly thematic webinars as well as communication campaigns, with an average of 250 participants at each webinar. User queries included questions about post-COVID-19 symptoms (n=5612, 69.2%), of which fatigue was the most predominant query (n=1255, 22.4%) in symptoms-related stories. Additional queries included questions about consultations (n=598, 7.4%), treatment (n=527, 6.5%), and general information (n=510, 6.3%). 
Conclusions: The RAFAEL chatbot is, to the best of our knowledge, the first chatbot developed to address post-COVID-19 in children and adults. Its innovation lies in the use of a scalable tool to disseminate verified information in a time- and resource-limited environment. Additionally, the use of machine learning could help professionals gain knowledge about a new condition, while concomitantly addressing patients' concerns. Lessons learned from the RAFAEL chatbot will further encourage a participative approach to learning and could potentially be applied to other chronic conditions. UR - https://www.jmir.org/2023/1/e43113 UR - http://dx.doi.org/10.2196/43113 UR - http://www.ncbi.nlm.nih.gov/pubmed/37195688 ID - info:doi/10.2196/43113 ER - TY - JOUR AU - Han, Jing AU - Montagna, Marco AU - Grammenos, Andreas AU - Xia, Tong AU - Bondareva, Erika AU - Siegele-Brown, Chloë AU - Chauhan, Jagmohan AU - Dang, Ting AU - Spathis, Dimitris AU - Floto, Andres R. AU - Cicuta, Pietro AU - Mascolo, Cecilia PY - 2023/5/9 TI - Evaluating Listening Performance for COVID-19 Detection by Clinicians and Machine Learning: Comparative Study JO - J Med Internet Res SP - e44804 VL - 25 KW - audio analysis KW - COVID-19 detection KW - deep learning KW - respiratory disease diagnosis KW - mobile health KW - detection KW - clinicians KW - machine learning KW - respiratory diagnosis KW - clinical decisions KW - respiratory N2 - Background: To date, performance comparisons between men and machines have been carried out in many health domains. Yet machine learning (ML) models and human performance comparisons in audio-based respiratory diagnosis remain largely unexplored. Objective: The primary objective of this study was to compare human clinicians and an ML model in predicting COVID-19 from respiratory sound recordings. Methods: In this study, we compared human clinicians and an ML model in predicting COVID-19 from respiratory sound recordings. Prediction performance on 24 audio samples (12 tested positive) made by 36 clinicians with experience in treating COVID-19 or other respiratory illnesses was compared with predictions made by an ML model trained on 1162 samples. Each sample consisted of voice, cough, and breathing sound recordings from 1 subject, and the length of each sample was around 20 seconds. We also investigated whether combining the predictions of the model and human experts could further enhance the performance in terms of both accuracy and confidence. Results: The ML model outperformed the clinicians, yielding a sensitivity of 0.75 and a specificity of 0.83, whereas the best performance achieved by the clinicians was 0.67 in terms of sensitivity and 0.75 in terms of specificity. Integrating the clinicians' and the model's predictions, however, could enhance performance further, achieving a sensitivity of 0.83 and a specificity of 0.92. Conclusions: Our findings suggest that the clinicians and the ML model could make better clinical decisions via a cooperative approach and achieve higher confidence in audio-based respiratory diagnosis. 
UR - https://www.jmir.org/2023/1/e44804 UR - http://dx.doi.org/10.2196/44804 UR - http://www.ncbi.nlm.nih.gov/pubmed/37126593 ID - info:doi/10.2196/44804 ER - TY - JOUR AU - Trzebiński, Wojciech AU - Claessens, Toni AU - Buhmann, Jeska AU - De Waele, Aurélie AU - Hendrickx, Greet AU - Van Damme, Pierre AU - Daelemans, Walter AU - Poels, Karolien PY - 2023/5/8 TI - The Effects of Expressing Empathy/Autonomy Support Using a COVID-19 Vaccination Chatbot: Experimental Study in a Sample of Belgian Adults JO - JMIR Form Res SP - e41148 VL - 7 KW - COVID-19 KW - vaccinations KW - chatbot KW - empathy KW - autonomy support KW - perceived user autonomy KW - chatbot patronage intention KW - vaccination intention KW - conversational agent KW - public health KW - digital health intervention KW - health promotion N2 - Background: Chatbots are increasingly used to support COVID-19 vaccination programs. Their persuasiveness may depend on the conversation-related context. Objective: This study aims to investigate the moderating role of the conversation quality and chatbot expertise cues in the effects of expressing empathy/autonomy support using COVID-19 vaccination chatbots. Methods: This experiment with 196 Dutch-speaking adults living in Belgium, who engaged in a conversation with a chatbot providing vaccination information, used a 2 (empathy/autonomy support expression: present vs absent) × 2 (chatbot expertise cues: expert endorser vs layperson endorser) between-subject design. Chatbot conversation quality was assessed through actual conversation logs. Perceived user autonomy (PUA), chatbot patronage intention (CPI), and vaccination intention shift (VIS) were measured after the conversation, coded from 1 to 5 (PUA, CPI) and from -5 to 5 (VIS). Results: There was a negative interaction effect of chatbot empathy/autonomy support expression and conversation fallback (CF; the percentage of chatbot answers "I do not understand" in a conversation) on PUA (PROCESS macro, model 1, B=-3.358, SE 1.235, t186=2.718, P=.007). Specifically, empathy/autonomy support expression had a more negative effect on PUA when the CF was higher (conditional effect of empathy/autonomy support expression at the CF level of +1SD: B=-.405, SE 0.158, t186=2.564, P=.011; conditional effects nonsignificant for the mean level: B=-0.103, SE 0.113, t186=0.914, P=.36; conditional effects nonsignificant for the -1SD level: B=0.031, SE=0.123, t186=0.252, P=.80). Moreover, an indirect effect of empathy/autonomy support expression on CPI via PUA was more negative when CF was higher (PROCESS macro, model 7, 5000 bootstrap samples, moderated mediation index=-3.676, BootSE 1.614, 95% CI -6.697 to -0.102; conditional indirect effect at the CF level of +1SD: B=-0.443, BootSE 0.202, 95% CI -0.809 to -0.005; conditional indirect effects nonsignificant for the mean level: B=-0.113, BootSE 0.124, 95% CI -0.346 to 0.137; conditional indirect effects nonsignificant for the -1SD level: B=0.034, BootSE 0.132, 95% CI -0.224 to 0.305). Indirect effects of empathy/autonomy support expression on VIS via PUA were marginally more negative when CF was higher. No effects of chatbot expertise cues were found. Conclusions: The findings suggest that expressing empathy/autonomy support using a chatbot may harm its evaluation and persuasiveness when the chatbot fails to answer its users' questions. The paper adds to the literature on vaccination chatbots by exploring the conditional effects of chatbot empathy/autonomy support expression. 
The results will guide policy makers and chatbot developers dealing with vaccination promotion in designing the way chatbots express their empathy and support for user autonomy. UR - https://formative.jmir.org/2023/1/e41148 UR - http://dx.doi.org/10.2196/41148 UR - http://www.ncbi.nlm.nih.gov/pubmed/37074978 ID - info:doi/10.2196/41148 ER - TY - JOUR AU - Boggiss, Anna AU - Consedine, Nathan AU - Hopkins, Sarah AU - Silvester, Connor AU - Jefferies, Craig AU - Hofman, Paul AU - Serlachius, Anna PY - 2023/5/5 TI - Improving the Well-being of Adolescents With Type 1 Diabetes During the COVID-19 Pandemic: Qualitative Study Exploring Acceptability and Clinical Usability of a Self-compassion Chatbot JO - JMIR Diabetes SP - e40641 VL - 8 KW - self-compassion KW - chatbot KW - conversational agent KW - artificial intelligence KW - adolescence KW - type 1 diabetes KW - mental health KW - digital health KW - psychosocial interventions KW - COVID-19 KW - mobile phone N2 - Background: Before the COVID-19 pandemic, adolescents with type 1 diabetes (T1D) had already experienced far greater rates of psychological distress than their peers. With the pandemic further challenging mental health and increasing the barriers to maintaining optimal diabetes self-management, it is vital that this population has access to remotely deliverable, evidence-based interventions to improve psychological and diabetes outcomes. Chatbots, defined as digital conversational agents, offer these unique advantages, as well as the ability to engage in empathetic and personalized conversations 24-7. Building on previous work developing a self-compassion program for adolescents with T1D, a self-compassion chatbot (COMPASS) was developed for adolescents with T1D to address these concerns. However, the acceptability and potential clinical usability of a chatbot to deliver self-compassion coping tools to adolescents with T1D remained unknown. Objective: This qualitative study was designed to evaluate the acceptability and potential clinical utility of COMPASS among adolescents aged 12 to 16 years with T1D and diabetes health care professionals. Methods: Potential adolescent participants were recruited from previous participant lists and via web-based and in-clinic study flyers, whereas health care professionals were recruited via clinic emails and from diabetes research special interest groups. Qualitative Zoom (Zoom Video Communications, Inc) interviews exploring views on COMPASS were conducted with 19 adolescents (in 4 focus groups) and 11 diabetes health care professionals (in 2 focus groups and 6 individual interviews) from March 2022 to April 2022. Transcripts were analyzed using directed content analysis to examine the features and content of greatest importance to both groups. Results: Adolescents were broadly representative of the youth population living with T1D in Aotearoa (11/19, 58% female; 13/19, 68% Aotearoa New Zealand European; and 2/19, 11% Māori). Health care professionals represented a range of disciplines, including diabetes nurse specialists (3/11, 27%), health psychologists (3/11, 27%), dieticians (3/11, 27%), and endocrinologists (2/11, 18%).
The findings offer insight into what adolescents with T1D and their health care professionals see as the shared advantages of COMPASS and desired future additions, such as personalization (mentioned by all 19 adolescents), self-management support (mentioned by 13/19, 68% of adolescents), clinical utility (mentioned by all 11 health care professionals), and breadth and flexibility of tools (mentioned by 10/11, 91% of health care professionals). Conclusions: Early data suggest that COMPASS is acceptable, is relevant to common difficulties, and has clinical utility during the COVID-19 pandemic. However, shared desired features among both groups, including problem-solving and integration with diabetes technology to support self-management; creating a safe peer-to-peer sense of community; and broadening the representation of cultures, lived experience stories, and diabetes challenges, could further improve the potential of the chatbot. On the basis of these findings, COMPASS is currently being improved to be tested in a feasibility study. UR - https://diabetes.jmir.org/2023/1/e40641 UR - http://dx.doi.org/10.2196/40641 UR - http://www.ncbi.nlm.nih.gov/pubmed/36939680 ID - info:doi/10.2196/40641 ER - TY - JOUR AU - Wang, Ruohan AU - Lv, Honghao AU - Lu, Zhangli AU - Huang, Xiaoyan AU - Wu, Haiteng AU - Xiong, Junjie AU - Yang, Geng PY - 2023/4/20 TI - A Medical Assistive Robot for Telehealth Care During the COVID-19 Pandemic: Development and Usability Study in an Isolation Ward JO - JMIR Hum Factors SP - e42870 VL - 10 KW - COVID-19 KW - MAR KW - telehealth care KW - video chat system KW - mental health care N2 - Background: The COVID-19 pandemic is affecting the mental and emotional well-being of patients, family members, and health care workers. Patients in the isolation ward may have psychological problems due to long-term hospitalization, the development of the epidemic, and the inability to see their families. A medical assistive robot (MAR), acting as an intermediary of communication, can be deployed to address these mental pressures. Objective: CareDo, a MAR with telepresence and teleoperation functions, was developed in this work for remote health care. The aim of this study was to investigate its practical performance in the isolation ward during the pandemic. Methods: Two systems were integrated into the CareDo robot. For the telepresence system, a web real-time communications solution is used for the multiuser chat system and a convolutional neural network is used for expression recognition. For the teleoperation system, an incremental motion mapping method is used for operating the robot remotely. A clinical trial of this system was conducted at First Affiliated Hospital, Zhejiang University. Results: During the clinical trials, tasks such as video chatting, emotion detection, and medical supplies delivery were performed via the CareDo robot. Seven voice commands were set for performing system wakeup, video chatting, and system exiting. Durations from 1 to 3 seconds of common commands were set to improve voice command detection. The facial expression was recorded 152 times for a patient in 1 day for the psychological intervention. The recognition accuracy reached 95% and 92.8% for happy and neutral expressions, respectively. Conclusions: Patients and health care workers can use this MAR in the isolation ward for telehealth care during the COVID-19 pandemic. 
This can be a useful approach to break the chains of virus transmission and can also be an effective way to conduct remote psychological intervention. UR - https://humanfactors.jmir.org/2023/1/e42870 UR - http://dx.doi.org/10.2196/42870 UR - http://www.ncbi.nlm.nih.gov/pubmed/36634269 ID - info:doi/10.2196/42870 ER - TY - JOUR AU - Monteiro, Goldnadel Maristela AU - Pantani, Daniela AU - Pinsky, Ilana AU - Hernandes Rocha, Augusto Thiago PY - 2023/4/6 TI - Using the Pan American Health Organization Digital Conversational Agent to Educate the Public on Alcohol Use and Health: Preliminary Analysis JO - JMIR Form Res SP - e43165 VL - 7 KW - alcohol use KW - alcohol risk assessment KW - digital health worker KW - artificial intelligence KW - health literacy KW - digital health KW - chatbot KW - misinformation KW - online health information KW - digital health education KW - health risk KW - COVID-19 N2 - Background: There is widespread misinformation about the effects of alcohol consumption on health, which was amplified during the COVID-19 pandemic through social media and internet channels. Chatbots and conversational agents became an important piece of the World Health Organization (WHO) response during the COVID-19 pandemic to quickly disseminate evidence-based information related to COVID-19 and tobacco to the public. The Pan American Health Organization (PAHO) seized the opportunity to develop a conversational agent to talk about alcohol-related topics and therefore complement traditional forms of health education that have been promoted in the past. Objective: This study aimed to develop and deploy a digital conversational agent to interact with an unlimited number of users anonymously, 24 hours a day, about alcohol topics, including ways to reduce risks from drinking, that is accessible in several languages, at no cost, and through various devices. Methods: The content development was based on the latest scientific evidence on the impacts of alcohol on health, social norms about drinking, and data from the WHO and PAHO. The agent itself was developed through a nonexclusive license agreement with a private company (Soul Machines) and included Google Digital Flow ES as the natural language processing software and Amazon Web Services for cloud services. Another company was contracted to program all the conversations, following the technical advice of PAHO staff. Results: The conversational agent was named Pahola, and it was deployed on November 19, 2021, through the PAHO website after a launch event with high publicity. No identifiable data were used and all interactions were anonymous, and therefore, this was not considered research with human subjects. Pahola speaks in English, Spanish, and Portuguese and interacts anonymously with a potentially infinite number of users through various digital devices. Users were required to accept the terms and conditions to enable access to their camera and microphone to interact with Pahola. Pahola attracted good attention from the media and reached 1.6 million people, leading to 236,000 clicks on its landing page, mostly through mobile devices. Only 1532 users had a conversation after clicking to talk to Pahola. The average time users spent talking to Pahola was 5 minutes. Major dropouts were observed in different steps of the conversation flow. Some questions asked by users were not anticipated during programming and could not be answered. 
Conclusions: Our findings showed several limitations to using a conversational agent for alcohol education to the general public. Improvements are needed to expand the content to make it more meaningful and engaging to the public. The potential of chatbots to educate the public on alcohol-related topics seems enormous but requires a long-term investment of resources and research to be useful and reach many more people. UR - https://formative.jmir.org/2023/1/e43165 UR - http://dx.doi.org/10.2196/43165 UR - http://www.ncbi.nlm.nih.gov/pubmed/36961920 ID - info:doi/10.2196/43165 ER - TY - JOUR AU - Chagas, Azevedo Bruno AU - Pagano, Silvina Adriana AU - Prates, Oliveira Raquel AU - Praes, Cordeiro Elisa AU - Ferreguetti, Kícila AU - Vaz, Helena AU - Reis, Nogueira Zilma Silveira AU - Ribeiro, Bonisson Leonardo AU - Ribeiro, Pinho Antonio Luiz AU - Pedroso, Marques Thais AU - Beleigoli, Alline AU - Oliveira, Alves Clara Rodrigues AU - Marcolino, Soriano Milena PY - 2023/4/3 TI - Evaluating User Experience With a Chatbot Designed as a Public Health Response to the COVID-19 Pandemic in Brazil: Mixed Methods Study JO - JMIR Hum Factors SP - e43135 VL - 10 KW - user experience KW - chatbots KW - telehealth KW - COVID-19 KW - human-computer interaction KW - HCI KW - empirical studies in human-computer interaction KW - empirical studies in HCI KW - health care information systems N2 - Background: The potential of chatbots for screening and monitoring COVID-19 was envisioned since the outbreak of the disease. Chatbots can help disseminate up-to-date and trustworthy information, promote healthy social behavior, and support the provision of health care services safely and at scale. In this scenario and in view of its far-reaching postpandemic impact, it is important to evaluate user experience with this kind of application. Objective: We aimed to evaluate the quality of user experience with a COVID-19 chatbot designed by a large telehealth service in Brazil, focusing on the usability of real users and the exploration of strengths and shortcomings of the chatbot, as revealed in reports by participants in simulated scenarios. Methods: We examined a chatbot developed by a multidisciplinary team and used it as a component within the workflow of a local public health care service. The chatbot had 2 core functionalities: assisting web-based screening of COVID-19 symptom severity and providing evidence-based information to the population. From October 2020 to January 2021, we conducted a mixed methods approach and performed a 2-fold evaluation of user experience with our chatbot by following 2 methods: a posttask usability Likert-scale survey presented to all users after concluding their interaction with the bot and an interview with volunteer participants who engaged in a simulated interaction with the bot guided by the interviewer. Results: Usability assessment with 63 users revealed very good scores for chatbot usefulness (4.57), likelihood of being recommended (4.48), ease of use (4.44), and user satisfaction (4.38). Interviews with 15 volunteers provided insights into the strengths and shortcomings of our bot. Comments on the positive aspects and problems reported by users were analyzed in terms of recurrent themes. 
We identified 6 positive aspects and 15 issues organized in 2 categories: usability of the chatbot and health support offered by it; the former refers to how users can interact with the chatbot, and the latter to the chatbot's goal of supporting people during the pandemic through the screening process and through informative educational content. We found 6 themes accounting for what people liked most about our chatbot and why they found it useful: 3 themes pertaining to the usability domain and 3 themes regarding health support. Our findings also identified 15 types of problems producing a negative impact on users: 10 of them related to the usability of the chatbot and 5 related to the health support it provides. Conclusions: Our results indicate that users had an overall positive experience with the chatbot and found the health support relevant. Nonetheless, qualitative evaluation of the chatbot indicated challenges and directions to be pursued in improving not only our COVID-19 chatbot but also health chatbots in general. UR - https://humanfactors.jmir.org/2023/1/e43135 UR - http://dx.doi.org/10.2196/43135 UR - http://www.ncbi.nlm.nih.gov/pubmed/36634267 ID - info:doi/10.2196/43135 ER - TY - JOUR AU - Shah, B. Ami AU - Oyegun, Eghosa AU - Hampton, Brett William AU - Neri, Antonio AU - Maddox, Nicole AU - Raso, Danielle AU - Sandhu, Paramjit AU - Patel, Anita AU - Koonin, M. Lisa AU - Lee, Leslie AU - Roper, Lauren AU - Whitfield, Geoffrey AU - Siegel, A. David AU - Koumans, H. Emily PY - 2023/3/10 TI - Engagement With the Centers for Disease Control and Prevention Coronavirus Self-Checker and Guidance Provided to Users in the United States From March 23, 2020, to April 19, 2021: Thematic and Trend Analysis JO - J Med Internet Res SP - e39054 VL - 25 KW - COVID-19 KW - automated symptom checker KW - Self-Checker KW - triage KW - medical care KW - online information seeking KW - clinical assessment tool N2 - Background: In 2020, at the onset of the COVID-19 pandemic, the United States experienced surges in health care needs, which challenged capacity throughout the health care system. Stay-at-home orders in many jurisdictions, cancellation of elective procedures, and closures of outpatient medical offices disrupted patient access to care. To inform symptomatic persons about when to seek care and potentially help alleviate the burden on the health care system, the Centers for Disease Control and Prevention (CDC) and partners developed the CDC Coronavirus Self-Checker ("Self-Checker"). This interactive tool assists individuals seeking information about COVID-19 to determine the appropriate level of care by asking demographic, clinical, and nonclinical questions during an online "conversation." Objective: This paper describes user characteristics, trends in use, and recommendations delivered by the Self-Checker between March 23, 2020, and April 19, 2021, for pursuing appropriate levels of medical care depending on the severity of user symptoms. Methods: User characteristics and trends in completed conversations that resulted in a care message were analyzed. Care messages delivered by the Self-Checker were manually classified into three overarching conversation themes: (1) seek care immediately; (2) take no action, or stay home and self-monitor; and (3) conversation redirected. Trends in 7-day averages of conversations and COVID-19 cases were examined with development and marketing milestones that potentially impacted Self-Checker user engagement.
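The manual classification of care messages into three overarching themes described in the Shah et al entry above can be illustrated with a small rule-based sketch. The keyword rules and example messages below are invented for illustration; they are not the Self-Checker's actual care messages or classification logic.

```python
# Hypothetical sketch of binning chatbot care messages into the three
# overarching themes named above. Keywords and example messages are invented;
# this is not the CDC Self-Checker's actual content or logic.
from collections import Counter

def classify_care_message(message: str) -> str:
    text = message.lower()
    if any(kw in text for kw in ("call 911", "seek care immediately", "emergency")):
        return "seek care immediately"
    if any(kw in text for kw in ("stay home", "self-monitor", "no action")):
        return "take no action, or stay home and self-monitor"
    return "conversation redirected"

example_messages = [
    "Based on your symptoms, please seek care immediately.",
    "You can stay home and self-monitor your symptoms.",
    "Please see our testing resources page for more information.",
]

theme_counts = Counter(classify_care_message(m) for m in example_messages)
total = sum(theme_counts.values())
for theme, count in theme_counts.items():
    print(f"{theme}: {count} ({count / total:.1%})")
```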
Results: Among 16,718,667 completed conversations, the Self-Checker delivered recommendations for 69.27% (n=11,580,738) of all conversations to "take no action, or stay home and self-monitor"; 28.8% (n=4,822,138) of conversations to "seek care immediately"; and 1.89% (n=315,791) of conversations were redirected to other resources without providing any care advice. Among 6.8 million conversations initiated for self-reported sick individuals without life-threatening symptoms, 59.21% resulted in a recommendation to "take no action, or stay home and self-monitor." Nearly all individuals (99.8%) who were not sick were also advised to "take no action, or stay home and self-monitor." Conclusions: The majority of Self-Checker conversations resulted in advice to take no action, or stay home and self-monitor. This guidance may have reduced patient volume on the medical system; however, future studies evaluating patients' satisfaction, intention to follow the care advice received, course of action, and care modality pursued could clarify the impact of the Self-Checker and similar tools during future public health emergencies. UR - https://www.jmir.org/2023/1/e39054 UR - http://dx.doi.org/10.2196/39054 UR - http://www.ncbi.nlm.nih.gov/pubmed/36745776 ID - info:doi/10.2196/39054 ER - TY - JOUR AU - Elkin, A. Javier AU - McDowell, Michelle AU - Yau, Brian AU - Machiri, Varaidzo Sandra AU - Pal, Shanthi AU - Briand, Sylvie AU - Muneene, Derrick AU - Nguyen, Tim AU - Purnat, D. Tina PY - 2023/3/8 TI - The Good Talk! A Serious Game to Boost People's Competence to Have Open Conversations About COVID-19: Protocol for a Randomized Controlled Trial JO - JMIR Res Protoc SP - e40753 VL - 12 KW - vaccine hesitancy KW - communication KW - serious game KW - conversation skills KW - self-efficacy KW - behavioral intentions KW - COVID-19 N2 - Background: Vaccine hesitancy is one of the many factors impeding efforts to control the COVID-19 pandemic. Exacerbated by the COVID-19 infodemic, misinformation has undermined public trust in vaccination, led to greater polarization, and resulted in a high social cost where close social relationships have experienced conflict or disagreements about the public health response. Objective: The purpose of this paper is to describe the theory behind the development of a digital behavioral science intervention, The Good Talk!, designed to target vaccine-hesitant individuals through their close contacts (eg, family, friends, and colleagues) and to describe the methodology of a research study to evaluate its efficacy. Methods: The Good Talk! uses an educational serious game approach to boost the skills and competences of vaccine advocates to have open conversations about COVID-19 with their close contacts who are vaccine hesitant. The game teaches vaccine advocates evidence-based open conversation skills to help them speak with individuals who have opposing points of view or who may ascribe to nonscientifically supported beliefs while retaining trust, identifying common ground, and fostering acceptance and respect of divergent views. The game is currently under development and will be available on the web, free to access for participants worldwide, and accompanied by a promotional campaign to recruit participants through social media channels. This protocol describes the methodology for a randomized controlled trial that will compare participants who play The Good Talk! game with a control group that plays the widely known noneducational game Tetris.
The study will evaluate a participant's open conversation skills, self-efficacy, and behavioral intentions to have an open conversation with a vaccine-hesitant individual both before and after game play. Results: Recruitment will commence in early 2023 and will cease once 450 participants complete the study (225 per group). The primary outcome is improvement in open conversation skills. Secondary outcomes are self-efficacy and behavioral intentions to have an open conversation with a vaccine-hesitant individual. Exploratory analyses will examine the effect of the game on implementation intentions as well as potential covariates or subgroup differences based on sociodemographic information or previous experiences with COVID-19 vaccination conversations. Conclusions: The outcome of the project is to promote more open conversations regarding COVID-19 vaccination. We hope that our approach will encourage more governments and public health experts to engage in their mission to reach their citizens directly with digital health solutions and to consider such interventions as an important tool in infodemic management. International Registered Report Identifier (IRRID): PRR1-10.2196/40753 UR - https://www.researchprotocols.org/2023/1/e40753 UR - http://dx.doi.org/10.2196/40753 UR - http://www.ncbi.nlm.nih.gov/pubmed/36884269 ID - info:doi/10.2196/40753 ER - TY - JOUR AU - Weeks, Rose AU - Sangha, Pooja AU - Cooper, Lyra AU - Sedoc, João AU - White, Sydney AU - Gretz, Shai AU - Toledo, Assaf AU - Lahav, Dan AU - Hartner, Anna-Maria AU - Martin, M. Nina AU - Lee, Hyoung Jae AU - Slonim, Noam AU - Bar-Zeev, Naor PY - 2023/1/30 TI - Usability and Credibility of a COVID-19 Vaccine Chatbot for Young Adults and Health Workers in the United States: Formative Mixed Methods Study JO - JMIR Hum Factors SP - e40533 VL - 10 KW - COVID-19 KW - chatbot development KW - risk communication KW - vaccine hesitancy KW - conversational agent KW - health information KW - chatbot KW - natural language processing KW - usability KW - user feedback N2 - Background: The COVID-19 pandemic raised novel challenges in communicating reliable, continually changing health information to a broad and sometimes skeptical public, particularly around COVID-19 vaccines, which, despite being comprehensively studied, were the subject of viral misinformation. Chatbots are a promising technology to reach and engage populations during the pandemic. To inform and communicate effectively with users, chatbots must be highly usable and credible. Objective: We sought to understand how young adults and health workers in the United States assessed the usability and credibility of a web-based chatbot called Vira, created by the Johns Hopkins Bloomberg School of Public Health and IBM Research using natural language processing technology. Using a mixed method approach, we sought to rapidly improve Vira's user experience to support vaccine decision-making during the peak of the COVID-19 pandemic. Methods: We recruited racially and ethnically diverse young people and health workers, with both groups from urban areas of the United States. We used the validated Chatbot Usability Questionnaire to understand the tool's navigation, precision, and persona. We also conducted 11 interviews with health workers and young people to understand the user experience, whether they perceived the chatbot as confidential and trustworthy, and how they would use the chatbot. We coded and categorized emerging themes to understand the determining factors for participants'
assessment of chatbot usability and credibility. Results: In all, 58 participants completed a web-based usability questionnaire and 11 completed in-depth interviews. Most questionnaire respondents said the chatbot was "easy to navigate" (51/58, 88%) and "very easy to use" (50/58, 86%), and many (45/58, 78%) said its responses were relevant. The mean Chatbot Usability Questionnaire score was 70.2 (SD 12.1) and scores ranged from 40.6 to 95.3. Interview participants felt the chatbot achieved high usability due to its strong functionality, performance, and perceived confidentiality and that the chatbot could attain high credibility with a redesign of its cartoonish visual persona. Young people said they would use the chatbot to discuss vaccination with hesitant friends or family members, whereas health workers used or anticipated using the chatbot to support community outreach, save time, and stay up to date. Conclusions: This formative study conducted during the pandemic's peak provided user feedback for an iterative redesign of Vira. Using a mixed method approach provided multidimensional feedback, identifying how the chatbot worked well (being easy to use, answering questions appropriately, and using credible branding) while offering tangible steps to improve the product's visual design. Future studies should evaluate how chatbots support personal health decision-making, particularly in the context of a public health emergency, and whether such outreach tools can reduce staff burnout. Randomized studies should also be conducted to measure how chatbots countering health misinformation affect user knowledge, attitudes, and behavior. UR - https://humanfactors.jmir.org/2023/1/e40533 UR - http://dx.doi.org/10.2196/40533 UR - http://www.ncbi.nlm.nih.gov/pubmed/36409300 ID - info:doi/10.2196/40533 ER - TY - JOUR AU - Chin, Hyojin AU - Lima, Gabriel AU - Shin, Mingi AU - Zhunis, Assem AU - Cha, Chiyoung AU - Choi, Junghoi AU - Cha, Meeyoung PY - 2023/1/27 TI - User-Chatbot Conversations During the COVID-19 Pandemic: Study Based on Topic Modeling and Sentiment Analysis JO - J Med Internet Res SP - e40922 VL - 25 KW - chatbot KW - COVID-19 KW - topic modeling KW - sentiment analysis KW - infodemiology KW - discourse KW - public perception KW - public health KW - infoveillance KW - conversational agent KW - global health KW - health information N2 - Background: Chatbots have become a promising tool to support public health initiatives. Despite their potential, little research has examined how individuals interacted with chatbots during the COVID-19 pandemic. Understanding user-chatbot interactions is crucial for developing services that can respond to people's needs during a global health emergency. Objective: This study examined the COVID-19 pandemic-related topics online users discussed with a commercially available social chatbot and compared the sentiment expressed by users from 5 culturally different countries. Methods: We analyzed 19,782 conversation utterances related to COVID-19 covering 5 countries (the United States, the United Kingdom, Canada, Malaysia, and the Philippines) between 2020 and 2021, from SimSimi, one of the world's largest open-domain social chatbots. We identified chat topics using natural language processing methods and analyzed their emotional sentiments. Additionally, we compared the topic and sentiment variations in the COVID-19-related chats across countries.
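The Chin et al entry above describes identifying chat topics with natural language processing and scoring their sentiment. The study's actual pipeline is not detailed in the abstract, so the sketch below uses scikit-learn's latent Dirichlet allocation and a crude keyword-based sentiment score on toy utterances as stand-ins.

```python
# Minimal sketch of topic modeling plus a crude lexicon-based sentiment score
# on toy chatbot utterances. scikit-learn's LDA and the tiny word lists are
# stand-ins, not the study's actual methods or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

utterances = [
    "how do i wear a mask properly",
    "the lockdown makes me feel lonely and worried",
    "how many covid cases are there today",
    "i washed my hands and stayed home",
    "tell me a joke i am bored in quarantine",
    "when will the vaccine be available",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(utterances)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
    print(f"topic {k}: {', '.join(top_terms)}")

# Very rough sentiment: positive minus negative keyword counts per utterance.
positive, negative = {"joke", "washed", "available"}, {"lonely", "worried", "bored"}
for text in utterances:
    tokens = set(text.split())
    score = len(tokens & positive) - len(tokens & negative)
    print(f"{score:+d}  {text}")
```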
Results: Our analysis identified 18 emerging topics, which could be categorized into the following 5 overarching themes: "Questions on COVID-19 asked to the chatbot" (30.6%), "Preventive behaviors" (25.3%), "Outbreak of COVID-19" (16.4%), "Physical and psychological impact of COVID-19" (16.0%), and "People and life in the pandemic" (11.7%). Our data indicated that people considered chatbots as a source of information about the pandemic, for example, by asking health-related questions. Users turned to SimSimi for conversation and emotional messages when offline social interactions became limited during the lockdown period. Users were more likely to express negative sentiments when conversing about topics related to masks, lockdowns, case counts, and their worries about the pandemic. In contrast, small talk with the chatbot was largely accompanied by positive sentiment. We also found cultural differences, with negative words being used more often by users in the United States than by those in Asia when talking about COVID-19. Conclusions: Based on the analysis of user-chatbot interactions on a live platform, this work provides insights into people's informational and emotional needs during a global health crisis. Users sought health-related information and shared emotional messages with the chatbot, indicating the potential use of chatbots to provide accurate health information and emotional support. Future research can look into different support strategies that align with the direction of public health policy. UR - https://www.jmir.org/2023/1/e40922 UR - http://dx.doi.org/10.2196/40922 UR - http://www.ncbi.nlm.nih.gov/pubmed/36596214 ID - info:doi/10.2196/40922 ER - TY - JOUR AU - Sinha, Chaitali AU - Meheli, Saha AU - Kadaba, Madhura PY - 2023/1/26 TI - Understanding Digital Mental Health Needs and Usage With an Artificial Intelligence-Led Mental Health App (Wysa) During the COVID-19 Pandemic: Retrospective Analysis JO - JMIR Form Res SP - e41913 VL - 7 KW - digital mental health KW - COVID-19 KW - engagement KW - retention KW - perceived needs KW - pandemic waves KW - chatbot KW - conversational agent KW - mental health app KW - mobile health KW - digital health intervention N2 - Background: There has been a surge in mental health concerns during the COVID-19 pandemic, which has prompted the increased use of digital platforms. However, little is known about the mental health needs and behaviors of the global population during the pandemic. This study aims to fill this knowledge gap through the analysis of real-world data collected from users of a digital mental health app (Wysa) regarding their engagement patterns and behaviors, as shown by their usage of the service. Objective: This study aims to (1) examine the relationship between mental health distress, digital health uptake, and COVID-19 case numbers; (2) evaluate engagement patterns with the app during the study period; and (3) examine the efficacy of the app in improving mental health outcomes for its users during the pandemic. Methods: This study used a retrospective observational design. During the COVID-19 pandemic, the app's installations and emotional utterances were measured from March 2020 to October 2021 for the United Kingdom, the United States of America, and India and were mapped against COVID-19 case numbers and their peaks. The engagement of the users from this period (N=4541) with the Wysa app was compared to that of equivalent samples of users from a pre-COVID-19 period (1000 iterations).
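The engagement comparison just described in the Sinha et al entry rests on a Mann-Whitney U test between two user samples. A minimal sketch is below; the simulated per-user engagement counts and the rank-biserial effect size are illustrative assumptions, not the study's data or its exact effect size metric.

```python
# Minimal sketch of a Mann-Whitney U comparison of engagement between two
# user samples, using simulated per-user session counts (not the study's data).
# The rank-biserial correlation is shown as one common effect size choice.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
pre_covid = rng.poisson(lam=8, size=500)      # simulated sessions per user, pre-pandemic sample
during_covid = rng.poisson(lam=11, size=500)  # simulated sessions per user, pandemic sample

u_stat, p_value = mannwhitneyu(during_covid, pre_covid, alternative="two-sided")
rank_biserial = 1 - (2 * u_stat) / (len(pre_covid) * len(during_covid))
print(f"U={u_stat:.0f}, P={p_value:.4f}, rank-biserial r={rank_biserial:.2f}")
```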
The efficacy was assessed for users who completed pre-post assessments for symptoms of depression (n=2061) and anxiety (n=1995) on the Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder-7 (GAD-7) test measures, respectively. Results: Our findings demonstrate a significant positive correlation between the increase in the number of installs of the Wysa mental health app and the peaks of COVID-19 case numbers in the United Kingdom (P=.02) and India (P<.001). Findings indicate that users (N=4541) during the COVID period had a significantly higher engagement than the samples from the pre-COVID period, with a medium to large effect size for 80% of these 1000 iterative samples, as observed on the Mann-Whitney test. The PHQ-9 and GAD-7 pre-post assessments indicated statistically significant improvement, with medium effect sizes (0.57 for the PHQ-9 and 0.56 for the GAD-7). Conclusions: This study demonstrates that emotional distress increased substantially during the pandemic, prompting the increased uptake of an artificial intelligence-led mental health app (Wysa), and also offers evidence that the Wysa app could support its users and its usage could result in a significant reduction in symptoms of anxiety and depression. This study also highlights the importance of contextualizing interventions and suggests that digital health interventions can provide large populations with scalable and evidence-based support for mental health care. UR - https://formative.jmir.org/2023/1/e41913 UR - http://dx.doi.org/10.2196/41913 UR - http://www.ncbi.nlm.nih.gov/pubmed/36540052 ID - info:doi/10.2196/41913 ER - TY - JOUR AU - Perez-Ramos, G. Jose AU - Leon-Thomas, Mariela AU - Smith, L. Sabrina AU - Silverman, Laura AU - Perez-Torres, Claudia AU - Hall, C. Wyatte AU - Iadarola, Suzannah PY - 2023/1/25 TI - COVID-19 Vaccine Equity and Access: Case Study for Health Care Chatbots JO - JMIR Form Res SP - e39045 VL - 7 KW - mHealth KW - ICT KW - Information and Communication Technology KW - community KW - chatbot KW - COVID-19 KW - health equity KW - mobile health KW - health outcome KW - health disparity KW - minority population KW - health care gap KW - chatbot tool KW - user experience KW - chatbot development KW - health information N2 - Background: Disparities in COVID-19 information and vaccine access have emerged during the pandemic. Individuals from historically excluded communities (eg, Black and Latin American) experience disproportionately negative health outcomes related to COVID-19. Community gaps in COVID-19 education, social, and health care services (including vaccines) should be prioritized as a critical effort to end the pandemic. Misinformation created by the politicization of COVID-19 and related public health measures has magnified the pandemic's challenges, including access to health care, vaccination and testing efforts, as well as personal protective equipment. Information and Communication Technology (ICT) has been demonstrated to reduce the gaps of marginalization in education and access among communities. Chatbots are an increasingly present example of ICTs, particularly in health care and in relation to the COVID-19 pandemic. Objective: This project aimed to (1) follow an inclusive and theoretically driven design process to develop and test a COVID-19 information ICT bilingual (English and Spanish) chatbot tool named "Ana" and (2) characterize and evaluate user experiences of these innovative technologies.
Methods: Ana was developed following a multitheoretical framework, and the project team comprised public health experts, behavioral scientists, community members, and a medical team. A total of 7 iterations of β chatbots were tested, and a total of 22 β testers participated in this process. Content was curated primarily to provide users with factual answers to common questions about COVID-19. To ensure relevance of the content, topics were driven by community concerns and questions, as ascertained through research. Ana's repository of educational content was based on national and international organizations as well as interdisciplinary experts. In the context of this development and pilot project, we identified an evaluation framework to explore reach, engagement, and satisfaction. Results: A total of 626 community members used Ana from August 2021 to March 2022. Among those participants, 346 used the English version, with an average of 43 users per month; and 280 participants used the Spanish version, with an average of 40 users monthly. Across all users, 63.87% (n=221) of English users and 22.14% (n=62) of Spanish users returned to use Ana at least once; 18.49% (n=64) among the English version users and 18.57% (n=52) among the Spanish version users reported their ranking. Positive ranking comprised the "smiley" and "loved" emojis, and negative ranking comprised the "neutral," "sad," and "mad" emojis. When comparing negative and positive experiences, the latter was higher across Ana's platforms (English: n=41, 64.06%; Spanish: n=41, 77.35%) versus the former (English: n=23, 35.93%; Spanish: n=12, 22.64%). Conclusions: This pilot project demonstrated the feasibility and capacity of an innovative ICT to share COVID-19 information within diverse communities. Creating a chatbot like Ana with bilingual content contributed to an equitable approach to address the lack of accessible COVID-19-related information. UR - https://formative.jmir.org/2023/1/e39045 UR - http://dx.doi.org/10.2196/39045 UR - http://www.ncbi.nlm.nih.gov/pubmed/36630649 ID - info:doi/10.2196/39045 ER - TY - JOUR AU - Wlasak, Wendy AU - Zwanenburg, Paul Sander AU - Paton, Chris PY - 2023/1/25 TI - Supporting Autonomous Motivation for Physical Activity With Chatbots During the COVID-19 Pandemic: Factorial Experiment JO - JMIR Form Res SP - e38500 VL - 7 KW - autonomous motivation KW - chatbots KW - self-determination theory KW - physical activity KW - factorial experiment KW - mobile phone KW - COVID-19 N2 - Background: Although physical activity can mitigate disease trajectories and improve and sustain mental health, many people have become less physically active during the COVID-19 pandemic. Personal information technology, such as activity trackers and chatbots, can technically converse with people and possibly enhance their autonomous motivation to engage in physical activity. The literature on behavior change techniques (BCTs) and self-determination theory (SDT) contains promising insights that can be leveraged in the design of these technologies; however, it remains unclear how this can be achieved. Objective: This study aimed to evaluate the feasibility of a chatbot system that improves the user's autonomous motivation for walking based on BCTs and SDT. First, we aimed to develop and evaluate various versions of a chatbot system based on promising BCTs. Second, we aimed to evaluate whether the use of the system improves the autonomous motivation for walking and the associated factors of need satisfaction.
Third, we explored the support for the theoretical mechanism and effectiveness of various BCT implementations. Methods: We developed a chatbot system using the mobile apps Telegram (Telegram Messenger Inc) and Google Fit (Google LLC). We implemented 12 versions of this system, which differed in 3 BCTs: goal setting, experimenting, and action planning. We then conducted a feasibility study with 102 participants who used this system over the course of 3 weeks, by conversing with a chatbot and completing questionnaires, capturing their perceived app support, need satisfaction, physical activity levels, and motivation. Results: The use of the chatbot systems was satisfactory, and on average, its users reported increases in autonomous motivation for walking. The dropout rate was low. Although approximately half of the participants indicated that they would have preferred to interact with a human instead of the chatbot, 46.1% (47/102) of the participants stated that the chatbot helped them become more active, and 42.2% (43/102) of the participants decided to continue using the chatbot for an additional week. Furthermore, the majority thought that a more advanced chatbot could be very helpful. The motivation was associated with the satisfaction of the needs of competence and autonomy, and need satisfaction, in turn, was associated with the perceived system support, providing support for SDT underpinnings. However, no substantial differences were found across different BCT implementations. Conclusions: The results provide evidence that chatbot systems are a feasible means to increase autonomous motivation for physical activity. We found support for SDT as a basis for the design, laying a foundation for larger studies to confirm the effectiveness of the selected BCTs within chatbot systems, explore a wider range of BCTs, and help the development of guidelines for the design of interactive technology that helps users achieve long-term health benefits. UR - https://formative.jmir.org/2023/1/e38500 UR - http://dx.doi.org/10.2196/38500 UR - http://www.ncbi.nlm.nih.gov/pubmed/36512402 ID - info:doi/10.2196/38500 ER - TY - JOUR AU - White, K. Becky AU - Martin, Annegret AU - White, Angus James PY - 2022/12/27 TI - User Experience of COVID-19 Chatbots: Scoping Review JO - J Med Internet Res SP - e35903 VL - 24 IS - 12 KW - COVID-19 KW - chatbot KW - engagement KW - user experience KW - pandemic KW - global health KW - digital health KW - health information N2 - Background: The COVID-19 pandemic has had global impacts and caused some health systems to experience substantial pressure. The need for accurate health information has been felt widely. Chatbots have great potential to reach people with authoritative information, and a number of chatbots have been quickly developed to disseminate information about COVID-19. However, little is known about user experiences of and perspectives on these tools. Objective: This study aimed to describe what is known about the user experience and user uptake of COVID-19 chatbots. Methods: A scoping review was carried out in June 2021 using keywords to cover the literature concerning chatbots, user engagement, and COVID-19. The search strategy included databases covering health, communication, marketing, and the COVID-19 pandemic specifically, including MEDLINE Ovid, Embase, CINAHL, ACM Digital Library, Emerald, and EBSCO. 
Studies that assessed the design, marketing, and user features of COVID-19 chatbots or those that explored user perspectives and experience were included. We excluded papers that were not related to COVID-19; did not include any reporting on user perspectives, experience, or the general use of chatbot features or marketing; or where a version was not available in English. The authors independently screened results for inclusion, using both backward and forward citation checking of the included papers. A thematic analysis was carried out with the included papers. Results: A total of 517 papers were sourced from the literature, and 10 were included in the final review. Our scoping review identified a number of factors impacting adoption and engagement, including content, trust, digital ability, and acceptability. The papers included discussions about chatbots developed for COVID-19 screening and general COVID-19 information, as well as studies investigating user perceptions and opinions on COVID-19 chatbots. Conclusions: The COVID-19 pandemic presented a unique and specific challenge for digital health interventions. Design and implementation were required at a rapid speed as digital health service adoption accelerated globally. Chatbots for COVID-19 have been developed quickly as the pandemic has challenged health systems. There is a need for more comprehensive and routine reporting of factors impacting adoption and engagement. This paper has shown both the potential of chatbots to reach users in an emergency and the need to better understand how users engage and what they want. UR - https://www.jmir.org/2022/12/e35903 UR - http://dx.doi.org/10.2196/35903 UR - http://www.ncbi.nlm.nih.gov/pubmed/36520624 ID - info:doi/10.2196/35903 ER - TY - JOUR AU - He, Yuhao AU - Yang, Li AU - Zhu, Xiaokun AU - Wu, Bin AU - Zhang, Shuo AU - Qian, Chunlian AU - Tian, Tian PY - 2022/11/21 TI - Mental Health Chatbot for Young Adults With Depressive Symptoms During the COVID-19 Pandemic: Single-Blind, Three-Arm Randomized Controlled Trial JO - J Med Internet Res SP - e40719 VL - 24 IS - 11 KW - chatbot KW - conversational agent KW - depression KW - mental health KW - mHealth KW - digital medicine KW - randomized controlled trial KW - evaluation KW - cognitive behavioral therapy KW - young adult KW - youth KW - health service KW - mobile health KW - COVID-19 N2 - Background: Depression has a high prevalence among young adults, especially during the COVID-19 pandemic. However, mental health services remain scarce and underutilized worldwide. Mental health chatbots are a novel digital technology to provide fully automated interventions for depressive symptoms. Objective: The purpose of this study was to test the clinical effectiveness and nonclinical performance of a cognitive behavioral therapy (CBT)-based mental health chatbot (XiaoE) for young adults with depressive symptoms during the COVID-19 pandemic. Methods: In a single-blind, 3-arm randomized controlled trial, participants manifesting depressive symptoms recruited from a Chinese university were randomly assigned to a mental health chatbot (XiaoE; n=49), an e-book (n=49), or a general chatbot (Xiaoai; n=50) group in a ratio of 1:1:1. Participants received a 1-week intervention. The primary outcome was the reduction of depressive symptoms according to the 9-item Patient Health Questionnaire (PHQ-9) at 1 week later (T1) and 1 month later (T2).
Both intention-to-treat and per-protocol analyses were conducted under analysis of covariance models adjusting for baseline data. Controlled multiple imputation and δ-based sensitivity analysis were performed for missing data. The secondary outcomes were the level of working alliance measured using the Working Alliance Questionnaire (WAQ), usability measured using the Usability Metric for User Experience-LITE (UMUX-LITE), and acceptability measured using the Acceptability Scale (AS). Results: Participants were on average 18.78 years old, and 37.2% (55/148) were female. The mean baseline PHQ-9 score was 10.02 (SD 3.18; range 2-19). Intention-to-treat analysis revealed lower PHQ-9 scores among participants in the XiaoE group compared with participants in the e-book group and Xiaoai group at both T1 (F2,136=17.011; P<.001; d=0.51) and T2 (F2,136=5.477; P=.005; d=0.31). Better working alliance (WAQ; F2,145=3.407; P=.04) and acceptability (AS; F2,145=4.322; P=.02) were discovered with XiaoE, while no significant difference among arms was found for usability (UMUX-LITE; F2,145=0.968; P=.38). Conclusions: A CBT-based chatbot is a feasible and engaging digital therapeutic approach that allows easy accessibility and self-guided mental health assistance for young adults with depressive symptoms. A systematic evaluation of nonclinical metrics for a mental health chatbot has been established in this study. In the future, focus on both clinical outcomes and nonclinical metrics is necessary to explore the mechanism by which mental health chatbots work on patients. Further evidence is required to confirm the long-term effectiveness of the mental health chatbot via trials replicated with a longer dose, as well as exploration of its stronger efficacy in comparison with other active controls. Trial Registration: Chinese Clinical Trial Registry ChiCTR2100052532; http://www.chictr.org.cn/showproj.aspx?proj=135744 UR - https://www.jmir.org/2022/11/e40719 UR - http://dx.doi.org/10.2196/40719 UR - http://www.ncbi.nlm.nih.gov/pubmed/36355633 ID - info:doi/10.2196/40719 ER - TY - JOUR AU - Ludin, Nicola AU - Holt-Quick, Chester AU - Hopkins, Sarah AU - Stasiak, Karolina AU - Hetrick, Sarah AU - Warren, Jim AU - Cargo, Tania PY - 2022/11/4 TI - A Chatbot to Support Young People During the COVID-19 Pandemic in New Zealand: Evaluation of the Real-World Rollout of an Open Trial JO - J Med Internet Res SP - e38743 VL - 24 IS - 11 KW - COVID-19 KW - youth KW - chatbots KW - adolescent mental health KW - dialog-based intervention KW - digital mental health N2 - Background: The number of young people in New Zealand (Aotearoa) who experience mental health challenges is increasing. As those in Aotearoa went into the initial COVID-19 lockdown, an ongoing digital mental health project was adapted and underwent rapid content authoring to create the Aroha chatbot. This dynamic digital support was designed with and for young people to help manage pandemic-related worry. Objective: Aroha was developed to provide practical evidence-based tools for anxiety management using cognitive behavioral therapy and positive psychology. The chatbot included practical ideas to maintain social and cultural connection, and to stay active and well. Methods: Stay-at-home orders under Aotearoa's lockdown commenced on March 20, 2020. By leveraging previously developed chatbot technology and broader existing online trial infrastructure, the Aroha chatbot was launched promptly on April 7, 2020.
Dissemination of the chatbot for an open trial was via a URL, and feedback on the experience of the lockdown and the experience of Aroha was gathered via online questionnaires and a focus group, and from community members. Results: In the 2 weeks following the launch of the chatbot, there were 393 registrations, and 238 users logged into the chatbot, of whom 127 were in the target age range (13-24 years). Feedback guided iterative and responsive content authoring to suit the dynamic situation and motivated engineering to dynamically detect and react to a range of conversational intents. Conclusions: The experience of the implementation of the Aroha chatbot highlights the feasibility of providing timely event-specific digital mental health support and the technology requirements for a flexible and enabling chatbot architectural framework. UR - https://www.jmir.org/2022/11/e38743 UR - http://dx.doi.org/10.2196/38743 UR - http://www.ncbi.nlm.nih.gov/pubmed/36219754 ID - info:doi/10.2196/38743 ER - TY - JOUR AU - Pithpornchaiyakul, Samerchit AU - Naorungroj, Supawadee AU - Pupong, Kittiwara AU - Hunsrisakhun, Jaranya PY - 2022/10/21 TI - Using a Chatbot as an Alternative Approach for In-Person Toothbrushing Training During the COVID-19 Pandemic: Comparative Study JO - J Med Internet Res SP - e39218 VL - 24 IS - 10 KW - mHealth KW - tele-dentistry KW - digital health KW - chatbot KW - conversional agents KW - oral hygiene KW - oral health behaviors KW - protection motivation theory KW - young children KW - caregiver KW - in-person toothbrushing training KW - COVID-19 N2 - Background: It is recommended that caregivers receive oral health education and in-person training to improve toothbrushing for young children. To strengthen oral health education before COVID-19, the 21-Day FunDee chatbot with in-person toothbrushing training for caregivers was used. During the pandemic, practical experience was difficult to implement. Therefore, the 30-Day FunDee chatbot was created to extend the coverage of chatbots from 21 days to 30 days by incorporating more videos on toothbrushing demonstrations and dialogue. This was a secondary data comparison of 2 chatbots in similar rural areas of Pattani province: Maikan district (Study I) and Maelan district (Study II). Objective: This study aimed to evaluate the effectiveness and usability of 2 chatbots, 21-Day FunDee (Study I) and 30-Day FunDee (Study II), based on the protection motivation theory (PMT). This study explored the feasibility of using the 30-Day FunDee chatbot to increase toothbrushing behaviors for caregivers in oral hygiene care for children aged 6 months to 36 months without in-person training during the COVID-19 pandemic. Methods: A pre-post design was used in both studies. The effectiveness was evaluated among caregivers in terms of oral hygiene practices, knowledge, and oral health care perceptions based on PMT. In Study I, participants received in-person training and a 21-day chatbot course during October 2018 to February 2019. In Study II, participants received only daily chatbot programming for 30 days during December 2021 to February 2022. Data were gathered at baseline of each study and at 30 days and 60 days after the start of Study I and Study II, respectively. After completing their interventions, the chatbot's usability was assessed using open-ended questions. Study I evaluated the plaque score, whereas Study II included an in-depth interview. 
The 2 studies were compared to determine the feasibility of using the 30-Day FunDee chatbot as an alternative to in-person training. Results: There were 71 pairs of participants: 37 in Study I and 34 in Study II. Both chatbots significantly improved overall knowledge (Study I: P<.001; Study II: P=.001), overall oral health care perceptions based on PMT (Study I: P<.001; Study II: P<.001), and toothbrushing for children by caregivers (Study I: P=.02; Study II: P=.04). Only Study I had statistically significant differences in toothbrushing at least twice a day (P=.002) and perceived vulnerability (P=.003). The highest overall chatbot satisfaction was 9.2 (SD 0.9) in Study I and 8.6 (SD 1.2) in Study II. In Study I, plaque levels differed significantly (P<.001). Conclusions: This was the first study using a chatbot in oral health education. We established the effectiveness and usability of 2 chatbot programs for promoting oral hygiene care of young children by caregivers. The 30-Day FunDee chatbot showed the possibility of improving toothbrushing skills without requiring in-person training. Trial Registration: Thai Clinical Trials Registry TCTR20191223005; http://www.thaiclinicaltrials.org/show/TCTR20191223005 and TCTR20210927004; https://www.thaiclinicaltrials.org/show/TCTR20210927004 UR - https://www.jmir.org/2022/10/e39218 UR - http://dx.doi.org/10.2196/39218 UR - http://www.ncbi.nlm.nih.gov/pubmed/36179147 ID - info:doi/10.2196/39218 ER - TY - JOUR AU - Goonesekera, Yenushka AU - Donkin, Liesje PY - 2022/10/20 TI - A Cognitive Behavioral Therapy Chatbot (Otis) for Health Anxiety Management: Mixed Methods Pilot Study JO - JMIR Form Res SP - e37877 VL - 6 IS - 10 KW - health anxiety KW - conversational agent KW - illness anxiety disorder KW - COVID-19 KW - iCBT KW - user experience KW - anthropomorphism N2 - Background: An increase in health anxiety was observed during the COVID-19 pandemic. However, due to physical distancing restrictions and a strained mental health system, people were unable to access support to manage health anxiety. Chatbots are emerging as an interactive means to deliver psychological interventions in a scalable manner and provide an opportunity for novel therapy delivery to large groups of people including those who might struggle to access traditional therapies. Objective: The aim of this mixed methods pilot study was to investigate the feasibility, acceptability, engagement, and effectiveness of a cognitive behavioral therapy (CBT)-based chatbot (Otis) as an early health anxiety management intervention for adults in New Zealand during the COVID-19 pandemic. Methods: Users were asked to complete a 14-day program run by Otis, a primarily decision tree-based chatbot on Facebook Messenger. Health anxiety, general anxiety, intolerance of uncertainty, personal well-being, and quality of life were measured preintervention, postintervention, and at a 12-week follow-up. Paired samples t tests and 1-way ANOVAs were conducted to investigate the associated changes in the outcomes over time. Semistructured interviews and written responses in the self-report questionnaires and Facebook Messenger were thematically analyzed. Results: The trial was completed by 29 participants who provided outcome measures at both postintervention and follow-up. Although an average decrease in health anxiety did not reach significance at postintervention (P=.55) or follow-up (P=.08), qualitative analysis demonstrated that participants perceived benefiting from the intervention.
Significant improvement in general anxiety, personal well-being, and quality of life was associated with the use of Otis at postintervention and follow-up. Anthropomorphism, Otis' appearance, and delivery of content facilitated the use of Otis. Technical difficulties and high performance and effort expectancy were, in contrast, barriers to acceptance and engagement of Otis. Conclusions: Otis may be a feasible, acceptable, and engaging means of delivering CBT to improve anxiety management, quality of life, and personal well-being but might not significantly reduce health anxiety. UR - https://formative.jmir.org/2022/10/e37877 UR - http://dx.doi.org/10.2196/37877 UR - http://www.ncbi.nlm.nih.gov/pubmed/36150049 ID - info:doi/10.2196/37877 ER - TY - JOUR AU - Daniel, Thomas AU - de Chevigny, Alix AU - Champrigaud, Adeline AU - Valette, Julie AU - Sitbon, Marine AU - Jardin, Meryam AU - Chevalier, Delphine AU - Renet, Sophie PY - 2022/10/11 TI - Answering Hospital Caregivers' Questions at Any Time: Proof-of-Concept Study of an Artificial Intelligence-Based Chatbot in a French Hospital JO - JMIR Hum Factors SP - e39102 VL - 9 IS - 4 KW - chatbot KW - artificial intelligence KW - pharmacy KW - hospital KW - health care KW - drugs KW - medication KW - information quality KW - health information KW - caregiver KW - healthcare staff KW - digital health tool KW - COVID-19 KW - information technology N2 - Background: Access to accurate information in health care is a key point for caregivers to avoid medication errors, especially with the reorganization of staff and drug circuits during health crises such as the COVID-19 pandemic. It is, therefore, the role of the hospital pharmacy to answer caregivers' questions. Some may require the expertise of a pharmacist, some should be answered by pharmacy technicians, but others are simple and redundant, and automated responses may be provided. Objective: We aimed at developing and implementing a chatbot to answer questions from hospital caregivers about drugs and pharmacy organization 24 hours a day and to evaluate this tool. Methods: The ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model was used by a multiprofessional team composed of 3 hospital pharmacists, 2 members of the Innovation and Transformation Department, and the IT service provider. Based on an analysis of the caregivers' needs about drugs and pharmacy organization, we designed and developed a chatbot. The tool was then evaluated before its implementation into the hospital intranet. Its relevance and conversations with testers were monitored via the IT provider's back office. Results: Needs analysis with 5 hospital pharmacists and 33 caregivers from 5 health services allowed us to identify 7 themes about drugs and pharmacy organization (such as opening hours and specific prescriptions). After a year of chatbot design and development, the test version obtained good evaluation scores: its speed was rated 8.2 out of 10, usability 8.1 out of 10, and appearance 7.5 out of 10. Testers were generally satisfied (70%) and were hoping for the content to be enhanced. Conclusions: The chatbot seems to be a relevant tool for hospital caregivers, helping them obtain reliable and verified information they need on drugs and pharmacy organization. In the context of significant mobility of nursing staff during the health crisis due to the COVID-19 pandemic, the chatbot could be a suitable tool for transmitting relevant information related to drug circuits or specific procedures.
To our knowledge, this is the first time that such a tool has been designed for caregivers. Its development continued through tests conducted with other users, such as pharmacy technicians, and through the integration of additional data before implementation at the 2 hospital sites. UR - https://humanfactors.jmir.org/2022/4/e39102 UR - http://dx.doi.org/10.2196/39102 UR - http://www.ncbi.nlm.nih.gov/pubmed/35930555 ID - info:doi/10.2196/39102 ER - TY - JOUR AU - Wilson, Lee AU - Marasoiu, Mariana PY - 2022/10/5 TI - The Development and Use of Chatbots in Public Health: Scoping Review JO - JMIR Hum Factors SP - e35882 VL - 9 IS - 4 KW - chatbots KW - conversational agents KW - public health KW - evidence KW - scoping review KW - health care system KW - chatbot development KW - digital health KW - mental health KW - health technology KW - COVID-19 KW - pandemic KW - chatbot application N2 - Background: Chatbots are computer programs that present a conversation-like interface through which people can access information and services. The COVID-19 pandemic has driven a substantial increase in the use of chatbots to support and complement traditional health care systems. However, despite the uptake in their use, evidence to support the development and deployment of chatbots in public health remains limited. Recent reviews have focused on the use of chatbots during the COVID-19 pandemic and the use of conversational agents in health care more generally. This paper complements that research and addresses a gap in the literature by assessing the breadth and scope of research evidence for the use of chatbots across the domain of public health. Objective: This scoping review had 3 main objectives: (1) to identify the application domains in public health in which there is the most evidence for the development and use of chatbots; (2) to identify the types of chatbots being deployed in these domains; and (3) to ascertain the methods and methodologies by which chatbots are being evaluated in public health applications. This paper also explored the implications for future research on the development and deployment of chatbots in public health in light of the analysis of the evidence for their use. Methods: Following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines for scoping reviews, relevant studies were identified through searches conducted in the MEDLINE, PubMed, Scopus, Cochrane Central Register of Controlled Trials, IEEE Xplore, ACM Digital Library, and Open Grey databases from mid-June to August 2021. Studies were included if they used or evaluated chatbots for prevention or intervention and if the evidence showed a demonstrable health impact. Results: Of the 1506 studies identified, 32 were included in the review. The results show a substantial increase in interest in chatbots in the past few years, shortly before the pandemic. Half (16/32, 50%) of the included research evaluated chatbots applied to mental health or COVID-19. The studies suggest promise in the application of chatbots, especially for easily automated and repetitive tasks, but overall, the evidence for the efficacy of chatbots for prevention and intervention across all domains is limited at present. Conclusions: More research is needed to fully understand the effectiveness of using chatbots in public health.
Concerns about the clinical, legal, and ethical aspects of the use of chatbots for health care are well founded, given the speed with which they have been adopted in practice. Future research on their use should address these concerns through the development of expertise and best practices specific to public health, including a greater focus on user experience. UR - https://humanfactors.jmir.org/2022/4/e35882 UR - http://dx.doi.org/10.2196/35882 UR - http://www.ncbi.nlm.nih.gov/pubmed/36197708 ID - info:doi/10.2196/35882 ER - TY - JOUR AU - Luk, Tsun Tzu AU - Lui, Tung Judy Hiu AU - Wang, Ping Man PY - 2022/10/4 TI - Efficacy, Usability, and Acceptability of a Chatbot for Promoting COVID-19 Vaccination in Unvaccinated or Booster-Hesitant Young Adults: Pre-Post Pilot Study JO - J Med Internet Res SP - e39063 VL - 24 IS - 10 KW - COVID-19 KW - coronavirus KW - vaccine KW - immunization KW - booster KW - vaccine hesitancy KW - chatbot KW - conversational agent KW - virtual assistant KW - Chinese KW - young adult KW - youth KW - health promotion KW - health intervention KW - chatbot usability KW - pandemic KW - booster hesitancy KW - web-based survey KW - students KW - university students N2 - Background: COVID-19 vaccines are highly effective in preventing severe disease and death but are underused. Interventions to address COVID-19 vaccine hesitancy are paramount to reducing the burden of COVID-19. Objective: We aimed to evaluate the preliminary efficacy, usability, and acceptability of a chatbot for promoting COVID-19 vaccination and to examine the factors associated with COVID-19 vaccine hesitancy. Methods: In November 2021, we conducted a pre-post pilot study to evaluate "Vac Chat, Fact Check," a web-based chatbot for promoting COVID-19 vaccination. We conducted a web-based survey (N=290) on COVID-19 vaccination at a university in Hong Kong. A subset of 46 participants who were either unvaccinated (n=22) or vaccinated but hesitant to receive boosters (n=24) was selected and given access to the chatbot for a 7-day trial period. The chatbot provided information about COVID-19 vaccination (eg, efficacy and common side effects), debunked common myths about the vaccine, and included a decision aid for selecting vaccine platforms (inactivated and mRNA vaccines). The main efficacy outcome was the change in the COVID-19 Vaccine Hesitancy Scale (VHS) score (range 9-45) from preintervention (web-based survey) to postintervention (immediately posttrial). Other efficacy outcomes included changes in the intention to vaccinate or receive boosters and the willingness to encourage others to vaccinate, on a scale from 1 (not at all) to 5 (very). Usability was assessed with the System Usability Scale (range 0-100). Linear regression was used to examine the factors associated with COVID-19 VHS scores in all survey respondents. Results: The mean (SD) age of all survey respondents was 21.4 (6.3) years, and 61% (177/290) of respondents were female. Higher eHealth literacy (B=-0.26; P<.001) and perceived danger of COVID-19 (B=-0.17; P=.009) were associated with lower COVID-19 vaccine hesitancy, adjusting for age, sex, chronic disease status, previous flu vaccination, and perceived susceptibility to COVID-19. The main efficacy outcome, the COVID-19 VHS score, decreased significantly from 28.6 (preintervention) to 24.5 (postintervention), with a mean difference of -4.2 (P<.001) and an effect size (Cohen d) of 0.94.
The intention to vaccinate increased from 3.0 to 3.9 (P<.001) in unvaccinated participants, whereas the intention to receive boosters increased from 1.9 to 2.8 (P<.001) in booster-hesitant participants. Willingness to encourage others to vaccinate increased from 2.7 to 3.0 (P=.04). At postintervention, the median (IQR) System Usability Scale score was 72.5 (65-77.5), whereas the median (IQR) recommendation score was 7 (6-8) on a scale from 0 to 10. In a post hoc 4-month follow-up, 82% (18/22) of initially unvaccinated participants reported having received the COVID-19 vaccine, whereas 29% (7/24) of booster-hesitant participants received boosters. Conclusions: This pilot study provided initial evidence to support the efficacy, usability, and acceptability of a chatbot for promoting COVID-19 vaccination in young adults who were unvaccinated or booster-hesitant. UR - https://www.jmir.org/2022/10/e39063 UR - http://dx.doi.org/10.2196/39063 UR - http://www.ncbi.nlm.nih.gov/pubmed/36179132 ID - info:doi/10.2196/39063 ER - TY - JOUR AU - Whittaker, Robyn AU - Dobson, Rosie AU - Garner, Katie PY - 2022/9/26 TI - Chatbots for Smoking Cessation: Scoping Review JO - J Med Internet Res SP - e35556 VL - 24 IS - 9 KW - chatbot KW - conversational agent KW - COVID-19 KW - smoking cessation N2 - Background: Despite significant progress in reducing tobacco use over the past 2 decades, tobacco still kills over 8 million people every year. Digital interventions, such as text messaging, have been found to help people quit smoking. Chatbots, or conversational agents, are new digital tools that mimic instantaneous human conversation and therefore could extend the effectiveness of text messaging. Objective: This scoping review aims to assess the extent of research in the chatbot literature for smoking cessation and provide recommendations for future research in this area. Methods: Relevant studies were identified through searches conducted in Embase, MEDLINE, APA PsycINFO, Google Scholar, and Scopus, as well as additional searches on JMIR, Cochrane Library, Lancet Digital Health, and Digital Medicine. Studies were considered if they were conducted with tobacco smokers, were conducted between 2000 and 2021, were available in English, and included a chatbot intervention. Results: Of 323 studies identified, 10 studies were included in the review (3 framework articles, 1 study protocol, 2 pilot studies, 2 trials, and 2 randomized controlled trials). Most studies noted some benefits related to smoking cessation and participant engagement; however, outcome measures varied considerably. The quality of the studies overall was low, with methodological issues and low follow-up rates. Conclusions: More research is needed to make a firm conclusion about the efficacy of chatbots for smoking cessation. Researchers need to provide more in-depth descriptions of chatbot functionality, mode of delivery, and theoretical underpinnings. Consistency in language and terminology would also assist in reviews of what approaches work across the field. 
UR - https://www.jmir.org/2022/9/e35556 UR - http://dx.doi.org/10.2196/35556 UR - http://www.ncbi.nlm.nih.gov/pubmed/36095295 ID - info:doi/10.2196/35556 ER - TY - JOUR AU - Shan, Yi AU - Ji, Meng AU - Xie, Wenxiu AU - Zhang, Xiaomin AU - Qian, Xiaobo AU - Li, Rongying AU - Hao, Tianyong PY - 2022/6/9 TI - Use of Health Care Chatbots Among Young People in China During the Omicron Wave of COVID-19: Evaluation of the User Experience of and Satisfaction With the Technology JO - JMIR Hum Factors SP - e36831 VL - 9 IS - 2 KW - health care chatbots KW - COVID-19 KW - user experience KW - user satisfaction KW - theory of consumption values KW - chatbots KW - adolescent KW - youth KW - digital health KW - health care KW - omicron wave KW - omicron KW - health care system KW - conversational agent N2 - Background: Long before the outbreak of COVID-19, chatbots had been playing an increasingly crucial role and gaining growing popularity in health care. In the current omicron waves of the pandemic, when even the most resilient health care systems are increasingly overburdened, these conversational agents (CA) are being turned to as preferred alternatives for health care information. For many people, especially adolescents and the middle-aged, mobile phones are the most favored source of information. As a result, it is more important than ever to investigate the user experience of and satisfaction with chatbots on mobile phones. Objective: The objective of this study was twofold: (1) informed by Deneche and Warren's evaluation framework, Zhu et al's measures of variables, and the theory of consumption values (TCV), we designed a new assessment model for evaluating the user experience of and satisfaction with chatbots on mobile phones, and (2) we aimed to validate the newly developed model and use it to gain an understanding of the user experience of and satisfaction with popular health care chatbots available to young people aged 17-35 years in southeast China for self-diagnosis and for acquiring information about COVID-19 and the virus variants currently spreading. Methods: First, to assess user experience and satisfaction, we established an assessment model based on the relevant literature and TCV. Second, the chatbots were prescreened and selected for investigation. Subsequently, 413 informants were recruited from Nantong University, China. This was followed by a questionnaire survey soliciting the participants' experience of and satisfaction with the selected health care chatbots via wenjuanxing, an online questionnaire survey platform. Finally, quantitative and qualitative analyses were conducted to determine the informants' perceptions. Results: The data collected were highly reliable (Cronbach α=.986) and valid: communalities=0.632-0.823, Kaiser-Meyer-Olkin (KMO)=0.980, and percentage of cumulative variance (rotated)=75.257% (P<.001). The findings of this study suggest a considerable positive impact of functional, epistemic, emotional, social, and conditional values on the participants' overall user experience and satisfaction, and a positive correlation between these values and user experience and satisfaction (Pearson correlation P<.001). The functional values (mean 1.762, SD 0.630) and epistemic values (mean 1.834, SD 0.654) of the selected chatbots were relatively more important contributors to the students'
positive experience and overall satisfaction than the emotional values (mean 1.993, SD 0.683), conditional values (mean 1.995, SD 0.718), and social values (mean 1.998, SD 0.696). All the participants (n=413, 100%) had a positive experience and were thus satisfied with the selected health care chatbots. The 5 grade categories of participants showed different degrees of user experience and satisfaction: Seniors (mean 1.853, SD 0.108) were the most receptive to health care chatbots for COVID-19 self-diagnosis and information, and second-year graduate candidates (mean 2.069, SD 0.133) were the least receptive; freshmen (mean 1.883, SD 0.114) and juniors (mean 1.925, SD 0.087) felt slightly more positive than sophomores (mean 1.989, SD 0.092) and first-year graduate candidates (mean 1.992, SD 0.116) when engaged in conversations with the chatbots. In addition, female informants (mean 1.931, SD 0.098) showed a relatively more receptive attitude toward the selected chatbots than male respondents (mean 1.999, SD 0.051). Conclusions: This study investigated the use of health care chatbots among young people (aged 17-35 years) in China, focusing on their user experience and satisfaction as examined through the new assessment framework. The findings show that the 5 domains in the new assessment model all have a positive impact on the participants' user experience and satisfaction. In this paper, we examined the usability of health care chatbots as well as actual chatbots used for other purposes, enriching the literature on the subject. This study also provides practical implications for designers and developers as well as for governments of all countries, especially during the critical period of the omicron waves of COVID-19 and in future public health crises. UR - https://humanfactors.jmir.org/2022/2/e36831 UR - http://dx.doi.org/10.2196/36831 UR - http://www.ncbi.nlm.nih.gov/pubmed/35576058 ID - info:doi/10.2196/36831 ER -