Original Paper
Abstract
Background: Chatbots have become a promising tool to support public health initiatives. Despite their potential, little research has examined how individuals interacted with chatbots during the COVID-19 pandemic. Understanding user-chatbot interactions is crucial for developing services that can respond to people’s needs during a global health emergency.
Objective: This study examined the COVID-19 pandemic–related topics online users discussed with a commercially available social chatbot and compared the sentiment expressed by users from 5 culturally different countries.
Methods: We analyzed 19,782 conversation utterances related to COVID-19 covering 5 countries (the United States, the United Kingdom, Canada, Malaysia, and the Philippines) between 2020 and 2021, from SimSimi, one of the world’s largest open-domain social chatbots. We identified chat topics using natural language processing methods and analyzed their emotional sentiments. Additionally, we compared the topic and sentiment variations in the COVID-19–related chats across countries.
Results: Our analysis identified 18 emerging topics, which could be categorized into the following 5 overarching themes: “Questions on COVID-19 asked to the chatbot” (30.6%), “Preventive behaviors” (25.3%), “Outbreak of COVID-19” (16.4%), “Physical and psychological impact of COVID-19” (16.0%), and “People and life in the pandemic” (11.7%). Our data indicated that people considered chatbots as a source of information about the pandemic, for example, by asking health-related questions. Users turned to SimSimi for conversation and emotional messages when offline social interactions became limited during the lockdown period. Users were more likely to express negative sentiments when conversing about topics related to masks, lockdowns, case counts, and their worries about the pandemic. In contrast, small talk with the chatbot was largely accompanied by positive sentiment. We also found cultural differences, with negative words being used more often by users in the United States than by those in Asia when talking about COVID-19.
Conclusions: Based on the analysis of user-chatbot interactions on a live platform, this work provides insights into people’s informational and emotional needs during a global health crisis. Users sought health-related information and shared emotional messages with the chatbot, indicating the potential use of chatbots to provide accurate health information and emotional support. Future research can look into different support strategies that align with the direction of public health policy.
doi:10.2196/40922
Introduction
Background
Digital platform usage has increased tremendously during the COVID-19 pandemic. As individuals seek information and guidelines about the novel disease, the internet has become the most significant channel for receiving breaking news and updates. Since social media platforms can provide rich information to raise public awareness and inform people about outbreak locations, these platforms help provide insights into practical issues concerning infectious disease outbreaks [ ]. These digital channels can be used to reveal public attitudes and behaviors during a pandemic to better support crisis communication [ ].

Chatbots are an emerging digital platform that has received considerable attention from public health initiatives during the novel coronavirus (COVID-19) pandemic [ ]. Chatbots are systems that interact with users via text or voice using natural language [ ]. They have been widely used for amusement (eg, small talk) and business (eg, customer service and sales) [ ]. Many early commercial chatbots use pattern matching to find the most appropriate response to a user’s input. Such matching can be regarded as the simplest form of predicting user intent and conversation context [ ]. Newer chatbots generate responses through large neural networks, called large language models, which are pretrained on billion-scale conversation data sets [ ]. Recent chatbots try to personalize the user experience by adopting long-term memories and remembering past conversations. Some even build a topic set that can be expanded via a web search, which lets people talk to chatbots about a wide range of new topics that the chatbots were not trained on [ ].

Chatbots have several advantages over human interaction partners or other online services. For example, chatbots operate at a low cost and are available around the clock to handle user queries [ , , ]. Furthermore, they can provide more concise and less overwhelming information than the lengthy lists of results offered by social media or search engines [ ]. These advantages have prompted health organizations and humanitarian organizations to use chatbots to assess health risks [ ]. During COVID-19, chatbots have been used by the World Health Organization (WHO) [ ] and the US Centers for Disease Control and Prevention (CDC) [ ].

How users engage with health information may differ depending on the digital platform [ ]; thus, studies have analyzed the idiosyncrasies of each platform. In particular, prior studies on pandemics examined online conversations on social media sites, such as Twitter, Facebook, and Reddit [ - ]. In comparison, studies on chatbots are scarce. Studies on COVID-19 have been limited to literature reviews on existing chatbots and how they could be adapted for the pandemic [ , , ] and a study on how to use artificial intelligence (AI) techniques to build a COVID-19 chatbot [ ].

This study offers a complementary perspective and examines user interactions with a commercial social chatbot during the first 2 years of the COVID-19 pandemic. According to earlier research [ ], people are more likely to open up to a virtual agent than to a human agent, so chat logs offer a natural setting for “social listening” of the public’s needs and concerns. We hence anticipate that our chatbot data will reveal users’ real emotional needs during the pandemic.

We analyzed chat logs from SimSimi [ ], one of the world’s largest and longest-running social chatbot services. The service was launched in 2002 and has served over 400 million cumulative user conversations in 81 languages, with up to 200 million chat utterances being served daily. We examined 19,782 user-chatbot conversation utterances containing COVID-19 keywords and analyzed which topics users brought up in their chats. We also analyzed user interactions by observing each topic’s sentiment using the Linguistic Inquiry and Word Count (LIWC) dictionary and investigating how sentiment differed between users in the United States, Malaysia, and the Philippines.

Compared to previous work studying COVID-19 discourse on the internet, our research found that users are more likely to share negative emotions and personal stories with a chatbot than on social media. People’s tendency to seek informational and emotional support on an open-domain chatbot platform demonstrates the potential for chatbot-assisted public health support. Our research on how chatbots are used during a health emergency provides several insights for policymakers and public health officials to better prepare for the next global health crisis.
Related Research
The COVID-19 pandemic has made people more reliant on digital platforms to receive information and share their opinions and emotions. Research has been conducted to identify patterns of COVID-19 discourse in diverse data sources, communities, and locations. While some studies analyzed academic papers [ ] and news articles [ , ], most research focused on social media platforms like Reddit [ , ], Facebook [ ], and Twitter [ - , ]. Some studies investigated particular languages, such as Chinese [ ], French [ ], Portuguese [ ], and German [ ], as well as specific locations, such as North America [ ] and Asia [ ]. Extensive literature on Twitter discovered that social media could support people’s crisis communication and provide insights into the public discourse and attitudes toward COVID-19 [ , , , , ]. Two common approaches to these research questions have been topic modeling and sentiment analysis, which have been used to explore prominent issues and users’ opinions on social media [ , ].

However, no study has investigated the general public’s attitudes based on real data on an emerging medium, that is, chatbots. According to a prior study, users interact differently depending on the platform type [ ]. Therefore, it is anticipated that user interactions with chatbots will differ from those on social media.

Because of their anonymity, chatbots are regarded as a “safe space” in which users can discuss sensitive topics without fear of being judged [ ], making them particularly beneficial in areas where public stigma is prevalent, for example, in discussions involving depression and mental health [ , ]. One such stigmatized subject is COVID-19 [ ].

Furthermore, some chatbot users perceive that bots can be helpful when they cannot receive social support from others [ ]. Chatbots are also cost-efficient; they can simultaneously communicate with millions of people in different local languages while educating the public about COVID-19 [ , ]. Moreover, chatbots generate more succinct responses than social media or search engines and can promptly respond to user questions around the clock [ ].

Despite the potential of chatbots to fulfill online users’ information needs during the pandemic, user-chatbot interaction data are yet to be analyzed. Previous research has reviewed existing COVID-19–related chatbots [ , , ]. In the health domain, the literature has focused on the role of chatbots in promoting mental health [ , ], responding to telemedicine needs [ , ], or answering questions about COVID-19 [ , ]. Some studies have proposed design frameworks for developing chatbots that provide information related to COVID-19 [ , ]. However, no study has looked into how people interact with live chatbots in the real world, which is the subject of this paper.

The work most similar to ours is a study by Schubel et al [ ], which analyzed chatbot data and investigated variations by population subgroups, such as race, age, and gender. However, the chatbot’s role was limited to the functions of “symptom screener” and “learning module,” and the research only explored the frequency of use of these 2 menus by user groups.

Methods
Data
Data Source
We had access to the chat data set of SimSimi, an open-domain chatbot. Like other commercial chatbots (eg, Replika, Anima, and Kuki AI [ - ]), SimSimi is a social chatbot whose main objective is to engage in small talk with users, rather than to provide information. In contrast, many existing chatbots for COVID-19 are tailored to provide specific information [ , ]. Given the growing literature [ ] suggesting that people may be more willing to discuss sensitive topics with chatbots and virtual agents than with humans, we wanted to investigate what role a social chatbot might serve during a catastrophic event such as the COVID-19 pandemic. The findings presented below refer to users’ interactions with a social chatbot, that is, SimSimi, and might not be generalizable to other conversational agents.

To generate a chat response, SimSimi searches its database, which contains more than 114.6 million user-chatbot conversations. Based on text similarity and internal context embedding, the most similar and appropriate chat response to a user’s input utterance is selected. SimSimi is a global service with millions of users from all over the world. Its massive user data set across numerous nations allows us to study actual COVID-19–related public discourse in the wild.
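The selection step described above can be sketched as follows. This is only a minimal illustration using bag-of-words cosine similarity; SimSimi’s actual similarity and context-embedding logic is proprietary, and the stored (input, response) pairs here are invented.

```python
# Illustrative retrieval-based response selection (not SimSimi's actual logic):
# pick the stored response whose paired input is most similar to the user input.
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term-frequency vector for a lowercased utterance."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_response(user_input, pairs):
    """Return the response paired with the stored input most similar to user_input."""
    query = vectorize(user_input)
    return max(pairs, key=lambda p: cosine(query, vectorize(p[0])))[1]

# Invented (input, response) pairs standing in for the conversation database
pairs = [
    ("how are you", "I'm fine, thanks for asking!"),
    ("what is covid", "COVID-19 is a disease caused by the SARS-CoV-2 virus."),
    ("tell me a joke", "Why did the chatbot cross the road?"),
]
print(select_response("do you know what covid is", pairs))
```

A production system would replace the bag-of-words vectors with learned embeddings and incorporate context from preceding turns, as the text describes.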
Data Set
We selected the top 5 countries with the highest number of user-chatbot conversations in English: the United States, the United Kingdom, Canada, Malaysia, and the Philippines. Geolocation information was obtained at the time users installed the SimSimi app, based on the time zone settings of their devices. SimSimi maps this time zone information to a specific country according to the tz database [ ]. Only 1.75% of all SimSimi conversations lack a country code, for various reasons such as a less common time zone failing to map to a country. Our analysis is based on the data that contained country information (approximately 98.25%). Users who live in a country different from the one where they installed the app may not have accurate country information.

We queried the SimSimi database for utterances sent between 2020 and 2021 from the countries listed above that mentioned popular COVID-19–related keywords.
The table below shows the keywords that were used for data collection. The keywords were gathered from the Early AI-supported Response with Social Listening (EARS) tool of the WHO, a platform that summarizes real-time information from social media discourse [ ].

Keywords used | Period | Countries (number of conversation blocks)
covid, corona, pandemic, virus, lockdown, social distancing, pneumonia, cough, fever, vaccine, sarscov, mask, wuhan, omicron, delta, fauci, fatigue, symptom, immune, medical equipment, mrna, moderna, patient, cdc, infection | January 1, 2020, to December 31, 2021 | United States (n=1566), Malaysia (n=2485), Philippines (n=2080), United Kingdom (n=318), and Canada (n=145)
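A minimal sketch of how this keyword-based selection and the “conversation block” aggregation described in the next paragraph might look. The token-level matching rule and the trimmed keyword set are assumptions for illustration; the study’s actual matching rules are not specified.

```python
# Illustrative keyword filtering and conversation-block construction.
# KEYWORDS is a subset of the study's list; the matching rule is an assumption.
KEYWORDS = {"covid", "corona", "pandemic", "virus", "lockdown", "vaccine", "mask"}

def matches_keyword(utterance):
    """True if any query keyword appears as a token in the utterance."""
    tokens = utterance.lower().split()
    return any(k in tokens for k in KEYWORDS)

def build_blocks(utterances):
    """Join each keyword-matching utterance with its previous and next utterance."""
    blocks = []
    for i, utt in enumerate(utterances):
        if matches_keyword(utt):
            start, end = max(0, i - 1), min(len(utterances), i + 2)
            blocks.append(" ".join(utterances[start:end]))
    return blocks

chat = ["hi", "do you know about covid", "yes I heard of it", "bye"]
print(build_blocks(chat))  # one block: the match joined with its neighbors
```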
To reduce the impact of short messages (such as “hi” or “how are you”) and capture meaningful contextual information from the data, we aggregated messages identified using the keywords above with their previous and subsequent utterances. This collection of statements was referred to as a “conversation block.” Because the data contained messages in various languages, we manually filtered the conversation blocks and kept only those whose content was at least two-thirds in English.

Using Google’s Jigsaw Perspective application programming interface (API), we then eliminated conversation data that were sexually explicit and harmful [ ]. Removing profanity may affect data processing, especially concerning negative sentiments, as swearing can be associated with negative emotions such as anger [ ]. Some literature has shown that chatbot conversations can be sexually explicit and contain heavy usage of offensive words such as swear words [ ]. However, for this research, our goal was to examine nontoxic user conversations about COVID-19 to gain health-related insights, so we removed toxic conversations.

Concerning the threshold used to remove toxic conversations, we selected 20% based on a qualitative assessment of the Jigsaw Perspective API. We tried varying thresholds, including more conservative choices. For example, a threshold of 10% discarded 27% of all user utterances, limiting our ability to study full conversations about COVID-19. After trying various thresholds, we found 20% to be a reasonable option, which removed 13% of all user messages. We also eliminated data containing words in the Offensive Profane Word List [ ]. The average number of words per aggregated conversation block was 12.11 (SD 8.22), with a positively skewed distribution that followed a power law. For 2020 and 2021, we gathered data from 5292 and 1302 conversation blocks, respectively.

Privacy and Ethical Considerations
Through the Terms and Conditions [ ], SimSimi Inc was granted a nonexclusive, transferable, relicensable, royalty-free, worldwide license (the “intellectual property license”) to use intellectual property content posted by SimSimi users or associated with SimSimi services. SimSimi does not collect or store personally identifiable information such as name, gender, age, or address. Users may reveal personally identifiable information during a chat conversation, and SimSimi has internal logic to protect user privacy (eg, replacing consecutive numbers with strings of ***). Hence, the data provided to researchers did not contain any consecutive numbers that might be phone numbers or social security numbers.

In addition to obtaining user consent to access the data via the Terms and Conditions, the researchers and SimSimi took the following precautions to protect users from any possible harm. First, the corresponding author’s institutional review board (KAIST IRB-22-181) reviewed this entire study, from data collection to experimental design, and waived user informed consent for the study. Second, the researchers did not aggregate data in a way that would reveal the user’s identity; the analysis was limited to a subset of the chat content that contained the targeted keywords. Third, following normative rules for mass public health surveillance research on social media [ ], the researchers avoided directly quoting entire conversation blocks. Fourth, SimSimi Inc publishes novel findings from aggregate data via its research blog, allowing users to gain insights into the service and informing users of data usage [ ]. Finally, our research was conducted in collaboration with health care professionals in order to adopt a high standard of data privacy.

Analysis
Topic Modeling: Latent Dirichlet Allocation
The Latent Dirichlet Allocation (LDA) model was used to determine the topics that users discussed. LDA is a statistical natural language processing (NLP) model designed to extract latent topics from a collection of documents. Several hyperparameters can affect the LDA results, including the number of topics and the parameters of the 2 Dirichlet distributions: the document-topic distribution (α) and the topic-word distribution (η). Using the evaluation metrics of topic coherence and perplexity, we assessed the model’s quality and selected the hyperparameters. For topic modeling, we used the Gensim Python library [ ] and selected 18 topics (α=.1; η=0.1), which showed the best topic coherence based on a grid search.

After training the LDA model, we extracted the predicted topic composition of each conversation block and matched it with the topic having the largest probability. LDA does not assign a single topic to a specific document but rather a topic distribution, and topics are not as homogeneous as the labels may suggest. However, given that our unit of analysis is small, that is, 3 consecutive utterances termed a conversation block, each block is more likely to be on a single topic. A similar assumption has been made in previous literature [ , ]. The first author manually examined each topic’s top 30 unigrams and a word cloud visualization to arrive at an appropriate topic label. Two other authors reviewed the labeling, and the group discussed the labels until agreement was reached.

Sentiment Analysis
We employed the LIWC dictionary to capture the sentiment expressed by users when interacting with the chatbot. The LIWC software enables objective and quantitative text analysis by counting words in psychologically important language categories [ ]. We investigated people’s positive and negative sentiments expressed in their COVID-19–related chats. We limited this study to positive and negative sentiments because neutral sentiment is hard to capture [ ]. For this analysis, we assessed 5 ratios derived from the LIWC affect category: positive and negative emotions, as well as the 3 subcategories of negative emotions (anxiety, anger, and sadness). We then examined differences in sentiments by topic and country, focusing on the United States, Malaysia, and the Philippines, which had the most data.

Results
Topic Analysis
The LDA topic modeling identified a total of 18 topics, which we qualitatively categorized into the following 5 common themes: (1) outbreak of COVID-19, (2) preventive behaviors, (3) physical and psychological impact of COVID-19, (4) people and life in the pandemic, and (5) questions on COVID-19 asked to the chatbot.
The table below lists the topics under each theme and how frequently they appear throughout the data. In the following subsections, we discuss findings for each theme.

Theme and topic | Conversation blocks (N=6594), n (%)
Outbreak of COVID-19 | 1084 (16.4%)
1. Spread of COVID-19 | 821 (12.4%)
2. Wuhan | 263 (4.0%)
Preventive behaviors | 1665 (25.3%)
3. Masks | 354 (5.4%)
4. Avoidance | 324 (4.9%)
5. Vaccines | 265 (4.0%)
6. Lockdowns | 361 (5.5%)
7. Social distancing | 361 (5.5%)
Physical and psychological impact of COVID-19 | 1052 (16.0%)
8. Fear | 375 (5.7%)
9. Deaths | 310 (4.7%)
10. Symptoms | 367 (5.6%)
People and life in the pandemic | 777 (11.7%)
11. Wishes | 146 (2.2%)
12. Small talk | 385 (5.8%)
13. Impact of COVID-19 on individuals | 246 (3.7%)
Questions on COVID-19 asked to the chatbot | 2016 (30.6%)
14. Current situation | 338 (5.1%)
15. Awareness | 676 (10.3%)
16. Quarantine tips | 290 (4.4%)
17. Opinions | 384 (5.8%)
18. Case and death counts | 328 (5.0%)
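As noted in the Methods section, each conversation block is assigned the topic with the largest LDA probability, and the per-topic counts above are tallies of these argmax assignments. A minimal sketch of that tallying step; the (topic_id, probability) distributions below are invented for illustration, not real model output.

```python
# Assign each block to its highest-probability topic and tally the counts.
# The distributions below are invented illustrations, not real LDA output.
from collections import Counter

def assign_topics(distributions):
    """Argmax topic id for each block's [(topic_id, prob), ...] distribution."""
    return [max(dist, key=lambda tp: tp[1])[0] for dist in distributions]

blocks = [
    [(1, 0.7), (3, 0.2), (8, 0.1)],
    [(3, 0.55), (12, 0.45)],
    [(1, 0.9), (15, 0.1)],
]
assignments = assign_topics(blocks)
counts = Counter(assignments)
for topic, n in sorted(counts.items()):
    print(f"Topic {topic}: {n} ({100 * n / len(assignments):.1f}%)")
```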
Theme 1: Outbreak of COVID-19
The first theme included 2 topics focusing on the COVID-19 outbreak. The first topic (“Spread of COVID-19,” #1) covered conversations about the spread of the virus and how the outbreak was disseminating across the world; this topic was also the most prevalent in our data set (covering 12.4% of the user-chatbot conversations). The second topic (“Wuhan,” #2) centered on the Chinese city of Wuhan. These chats discussed the situation in Wuhan, which was the first place hit hard by the virus. Some users referred to COVID-19 as the “Wuhan virus” (eg, “When will Wuhan virus end?”).
Theme 2: Preventive Behavior
The second most common theme addressed preventive behaviors that users adopted during the pandemic. Each of the 5 topics we identified focused on a distinct behavior. The first topic (“Masks,” #3) covered discussions about masks, their function in preventing outbreaks, and the inconvenience of wearing them constantly (eg, “I hate wearing a mask all the time”). Users’ desire to avoid contracting the virus was covered in the second topic (“Avoidance,” #4). The third topic (“Vaccines,” #5) was about vaccines and users’ (un)willingness to undergo vaccination. Chats categorized under this topic also included conversations about vaccine scheduling and distribution (eg, “When can I get a vaccine?”). The fourth topic (“Lockdowns,” #6) dealt with lockdowns and the inconveniences and boredom they brought about (eg, “I don't want to just stay at home..i'm bored”). Social distancing was the final topic (“Social distancing,” #7), which focused on the value of social distancing and effective methods for achieving it (eg, “people have to practice social distancing”).
Theme 3: Physical and Psychological Impact of COVID-19
Discussions on this theme touched on COVID-19’s negative psychological and physical effects. Three topics were identified. The first topic (“Fear,” #8) focused on user concerns and fears related to the coronavirus. Some users discussed their anxiety and worries regarding the pandemic (eg, “I’m scared about the pandemic”). COVID-19–related deaths were the focus of the second topic (“Deaths,” #9), which covered conversations about the death toll and included mourning over victims (eg, “My country many people died because of the virus”). The final topic (“Symptoms,” #10) concerned COVID-19 symptoms. Users talked about COVID-19 symptoms or asked whether the coronavirus was to blame (eg, “hi SimSimi, I got a fever 3 days ago. I’m not really fine”).
Theme 4: People and Life in the Pandemic
The fourth theme emerged from conversations centered on personal experiences with the pandemic. Three topics made up this theme. Messages wishing for the end of the COVID-19 pandemic and everyone’s safety were included in the first topic (“Wishes,” #11). Interestingly, many users treated the chatbot as a social being, wishing that the chatbot was well and healthy during the pandemic (eg, “Hope you are safe during this pandemic”). The second topic (“Small talk,” #12) comprised small talk and discussions of the users’ day-to-day activities during the pandemic (eg, “How are you holding up during like pandemic?”). The last topic (“Impact of COVID-19 on individuals,” #13) included stories about how the pandemic affected those in the users’ immediate vicinity. Users discussed how the pandemic had affected them and their families.
Theme 5: Questions on COVID-19 Asked to the Chatbot
The last theme covered conversations in which users asked about COVID-19. Although the chatbot was not designed for sharing specific information, users frequently inquired about the virus, as shown by the fact that this theme was predominant in our data set. This theme was composed of 5 topics. The first topic (“Current situation,” #14) covered users’ questions about local and global COVID-19 situations. The second topic (“Awareness,” #15) included user inquiries about COVID-19, mainly whether SimSimi was aware of the virus. Questions about dealing with quarantine comprised the third topic (“Quarantine tips,” #16), which included conversations in which users asked what quarantine means, how they should deal with it, and how others (including SimSimi) were doing during quarantine. The fourth topic (“Opinions,” #17) included conversations in which users asked the chatbot’s opinion about COVID-19 and the pandemic. The fifth topic (“Case and death counts,” #18) was related to questions about the number of positive cases and deaths caused by the virus in specific locations.
Sentiment Analysis
Sentiment Analysis by Topic
We used sentiment analysis to investigate how users discussed the pandemic with the chatbot.
We computed the average percentage of positive and negative words in chats by topic according to the LIWC dictionary, as well as the average percentage of words related to anger, anxiety, and sadness. We performed Friedman tests, which are nonparametric tests similar to repeated-measures ANOVA, to explore whether each emotion was expressed to a different extent in each topic.
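The LIWC scores underlying this analysis are percentages of a block’s words that fall into a given category. A sketch with toy word lists follows; the tiny POSITIVE and NEGATIVE sets are stand-ins, as the real LIWC dictionary is proprietary and far larger.

```python
# LIWC-style category percentage: share of a block's words in a category list.
# The tiny POSITIVE/NEGATIVE sets are stand-ins for the actual LIWC categories.
POSITIVE = {"good", "great", "love", "hope", "safe", "happy"}
NEGATIVE = {"bad", "scared", "hate", "worried", "sad", "died"}

def category_percentage(text, category):
    """Percentage of whitespace-separated words that belong to the category."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 100.0 * sum(1 for w in words if w in category) / len(words)

block = "i am scared about the pandemic but i hope we stay safe"
print(category_percentage(block, NEGATIVE))  # 1 hit ("scared") out of 12 words
print(category_percentage(block, POSITIVE))  # 2 hits ("hope", "safe")
```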
User-chatbot conversations concerning lockdowns (Topic 6), fear (Topic 8), and small talk (Topic 12) exhibited the largest percentage of sentiment words. On the other hand, words conveying positive or negative feelings were less likely to be used in questions (Theme 5, Topics 14-18) or in chats about the COVID-19 outbreak (Theme 1, Topics 1 and 2); the negative and positive percentages for these themes were relatively lower than those of the other themes.

We observed significant differences between the percentages of positive and negative words in 6 topics. Users referred to Wuhan (Topic 2; Z=262; P<.05) and discussed masks (Topic 3; Z=756; P<.001) with more negative words than positive words. Similarly, more negative words were used when discussing COVID-19 lockdowns (Topic 6; Z=945.5; P<.001) and sharing fear about the disease (Topic 8; Z=3767; P<.001). In contrast, positive words were more frequently used in small talk (Topic 12; Z=9321; P<.001) and in reminiscing about the impact of COVID-19 on individuals (Topic 13; Z=1816; P<.05). Finally, when asking about case and death counts, users employed a greater proportion of negative words (Topic 18; Z=1065.5; P<.05).
When delving deeper into specific negative emotions, we saw that conversations about masks (Topic 3) and fear caused by the pandemic (Topic 8) were more emotionally charged than other topics, as identified by spikes in the anger, anxiety, and sadness percentages. This result is similar to the sentiment analysis above. Users’ word usage for 5 topics varied significantly in terms of their expressions of anger, anxiety, and sadness. When discussing masks, people were more likely to express anger (Topic 3; χ²₂=48.918; P<.001), whereas people expressed anxiety and sadness more frequently when discussing social distancing (Topic 7; χ²₂=7.1034; P<.05). Users more frequently used words associated with anxiety and sadness when talking about the fear-related physical and psychological impact of COVID-19 (Topic 8; χ²₂=21.777; P<.001). Questions about SimSimi’s awareness of the pandemic were more closely associated with anxiety (Topic 15; χ²₂=14.244; P<.001), whereas requests for quarantine tips were followed by anger and sadness (Topic 16; χ²₂=6.8667; P<.05).

Sentiment Analysis by Country
Finally, we examined variations in emotional expressions among users in the United States, the Philippines, and Malaysia. The number of COVID-19–related conversation blocks (ie, 3 consecutive utterances) from users in the United Kingdom and Canada (n=318 and n=145, respectively) was small compared to that of the studied countries (United States, n=1566; Malaysia, n=2485; Philippines, n=2080). When sample sizes are unequal, unequal variances can be a problem for countries with small data sizes, which impacts statistical power [ ]. Therefore, we only present sentiment analysis results for countries with at least 1500 samples. Sentiment analysis results for all 5 countries (including the United Kingdom and Canada) are reported separately.

For each sentiment (ie, positive and negative) and emotion (ie, anger, anxiety, and sadness) category, we conducted Kruskal-Wallis tests to identify the degree of country-wise differences (see the table below). Pairwise Wilcoxon tests with Bonferroni correction were performed on results significant at the .05 level to determine which geographical regions were more likely to express the studied emotions and sentiments.

The table below shows significant differences in the level of negativity, with users in the United States being substantially more negative than users in the Philippines (P<.05). Similar to the negative sentiment, we noticed a key difference in the positive sentiment. Users in the Philippines were not only less negative but also used more positive words compared to users in the United States (P<.01). No other pairwise comparison was significant. Overall, users in the United States appeared to be the most negative and the least positive in terms of word usage, whereas users in the Philippines showed the opposite trend (users in Malaysia showed the opposite trend to a lesser extent).
Sentiment/emotion | Malaysia (n=2485), mean (SD) | Philippines (n=2080), mean (SD) | United States (n=1566), mean (SD) | Kruskal-Wallis chi-square (df) | P value |
Negative | 1.35 (3.95) | 1.10 (3.41) | 1.52 (4.51) | 7.08 (2) | <.05 |
Positive | 1.13 (3.81) | 1.27 (3.88) | 0.96 (3.68) | 10.59 (2) | <.01 |
Anger | 0.21 (1.57) | 0.19 (1.57) | 0.30 (1.86) | 9.80 (2) | <.01 |
Anxiety | 0.32 (1.82) | 0.25 (1.54) | 0.34 (1.72) | 5.50 (2) | .06 |
Sadness | 0.25 (1.84) | 0.24 (1.62) | 0.21 (1.78) | 0.80 (2) | .67 |
The table above also presents a similar trend for anger, with users in the United States employing anger-related words more frequently than users in the Philippines (P<.05) and Malaysia (P<.05). There was no significant difference in the expression of anxiety and sadness across countries (all P>.05). In summary, users in the United States appeared to be the most negative and angry in their word choice. In contrast, users in the Philippines used positive words the most (users in Malaysia used positive words to a lesser extent).
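The country-wise comparisons above rest on the Kruskal-Wallis H statistic. A pure-Python sketch follows; it handles tied values by averaging ranks but omits the tie-correction factor for brevity, and the per-block sentiment percentages are invented (in practice a statistics package would be used). The pairwise follow-up described in the text then multiplies each Wilcoxon P value by the number of comparisons for the Bonferroni correction.

```python
# Pure-Python Kruskal-Wallis H statistic (tie-correction factor omitted).
def kruskal_h(*groups):
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n = len(pooled)
    rank_of = [0.0] * n
    i = 0
    while i < n:  # assign average ranks to runs of tied values
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            rank_of[k] = (i + j + 1) / 2.0  # mean of the 1-based ranks i+1..j
        i = j
    rank_sums = [0.0] * len(groups)
    for (_, gi), r in zip(pooled, rank_of):
        rank_sums[gi] += r
    return 12.0 / (n * (n + 1)) * sum(
        rs * rs / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n + 1)

# Invented per-block negative-sentiment percentages for three countries
us = [1.5, 4.5, 0.0, 2.2, 3.1]
ph = [1.1, 0.0, 0.5, 0.9, 0.3]
my = [1.3, 0.2, 0.0, 1.8, 0.7]
print(kruskal_h(us, ph, my))
```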
We also provide an analysis of country differences by topic. Fear-related conversations (Topic 8), which involved users’ expressions of fear related to the physical and psychological impact of COVID-19, had particularly high levels of negative sentiment among users in the United States. “Confirmed cases and deaths” (Topic 18) was another topic on which users in the United States were particularly negative compared to users in Malaysia or the Philippines.

Topics around “Preventive behaviors” (Theme 2, Topics 3-7) also showed distinct patterns for users in the United States. Mask-related chats (Topic 3) were primarily framed through a negative lens, while other conversations (ie, topics like “Vaccines,” “Quarantine,” and “Symptoms”) displayed a less positive sentiment in this same group of users. Users in the Philippines exhibited an exceptionally high positive sentiment for the themes “People and life in the pandemic” (Theme 4) and “Questions on COVID-19 asked to the chatbot” (Theme 5).
Discussion
Principal Findings
In this study, we investigated how people interacted with virtual social agents during the COVID-19 pandemic by analyzing 19,782 conversation utterances from a free commercial chatbot, SimSimi. Using an NLP approach, we discovered 18 major topics and grouped them into 5 main themes. We then examined user sentiment on specific topics and investigated differences between countries.
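The topic discovery step uses standard topic modeling (LDA appears in the paper's abbreviations, and Gensim among its references). To make the idea concrete, here is a deliberately tiny collapsed Gibbs sampler for LDA in pure Python — a pedagogical sketch, not the study's actual pipeline, which would use an optimized library implementation:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.
    docs: list of token lists; k: number of topics. Illustrative only."""
    rng = random.Random(seed)
    v = len({w for d in docs for w in d})          # vocabulary size
    dz = [[0] * k for _ in docs]                   # doc-topic counts
    zw = [defaultdict(int) for _ in range(k)]      # topic-word counts
    zt = [0] * k                                   # topic totals
    z = []                                         # topic assignment per token
    for di, d in enumerate(docs):                  # random initialization
        zd = []
        for w in d:
            t = rng.randrange(k)
            zd.append(t)
            dz[di][t] += 1; zw[t][w] += 1; zt[t] += 1
        z.append(zd)
    for _ in range(iters):                         # Gibbs sweeps
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]                      # remove current assignment
                dz[di][t] -= 1; zw[t][w] -= 1; zt[t] -= 1
                # Resample proportional to (doc-topic) * (topic-word) weights.
                weights = [(dz[di][tt] + alpha) * (zw[tt][w] + beta) / (zt[tt] + v * beta)
                           for tt in range(k)]
                t = rng.choices(range(k), weights=weights)[0]
                z[di][wi] = t                      # add new assignment back
                dz[di][t] += 1; zw[t][w] += 1; zt[t] += 1
    return dz, zw
```

After sampling, the normalized rows of `dz` give per-document topic mixtures and `zw` gives each topic's characteristic words; in the study, 3-utterance conversation blocks play the role of documents and each block is labeled with its dominant topic.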
Our main finding is that many online users turned to the chatbot for COVID-19 information, despite the chatbot not being developed for this purpose. Our data set's most prevalent theme ("Questions on COVID-19 asked to the chatbot," Theme 5) covered conversations in which users inquired about the virus, how it spreads throughout the world, and how to deal with regulations (eg, lockdowns). Interestingly, online users treated the chatbot as an information source, seeking its advice on how to handle the pandemic.
When talking about COVID-19, users interacting with SimSimi also expected it to play specific social roles. Users wished for the safety and well-being of the chatbot, indicating that they might anthropomorphize such conversational agents ("People and life in the pandemic," Theme 4). By sharing their struggles with the pandemic, being unable to see peers during lockdowns, or losing their jobs to the recession, users tended to treat the chatbot as a social being ("Physical and psychological impact of COVID-19," Theme 3). Our results align with the Computers Are Social Actors (CASA) paradigm, which states that users may apply social norms when interacting with computers and other virtual agents, such as chatbots [
]. Our results suggest that people rely on chatbots for emotional and informational support when they cannot receive it from others.

Themes such as "Outbreak of COVID-19," "Preventive behaviors," and "Physical and psychological impact of COVID-19" correspond to those found in earlier research on Twitter [
, , ], indicating that chatbot users discuss the origin, spread, and prevention of the coronavirus much like social media users do. Users also mentioned personal symptoms and shared their fear of the disease and possible complications. According to a study on Twitter, social media users frequently share links to notify or warn their online contacts and followers about risks [ ]. Such link-sharing behavior is rare in conversations with SimSimi; instead, chatbot users mainly express their feelings and thoughts. This distinction between social media and chatbot interactions most likely results from the chatbot's one-to-one interaction pattern (ie, involving 2 parties), as opposed to the one-to-many interactions provided by social media platforms (ie, where a single user can reach many other users).

Our research also showed that users were more likely to express negative emotions when using social chatbots than on Twitter. All of the topics in the "Preventive behaviors" and "Physical and psychological impact of COVID-19" themes, except for the "Avoidance" and "Vaccines" topics, were more negative than positive. In contrast, earlier research found that the sentiments of COVID-19 topics on Twitter were either positive or neutral [
]. In particular, researchers found that the sentiment expressed on topics surrounding fear and masks was neutral or close to positive. In contrast, our study found that negative sentiments outweighed positive ones in both topics, with the "Fear" topic being associated with anxiety and sadness, and mask-related conversations being related to anger. This result suggests that people are more open to expressing negative emotions with a chatbot than on social media. We conjecture this is because virtual agents offer anonymity and privacy, allowing users to express themselves without fear of being judged [ ]. Other studies have also found that people are more open to discussing sensitive topics with virtual agents than with humans [ ]. These properties of chatbots suggest the potential for widespread AI-supported psychological support in times of crisis. For example, freely writing about one's emotions and feelings (eg, expressive writing) has long been used as a treatment regimen to decrease depressive symptoms [ ], and it has been shown to be effective in increasing COVID-19 resilience [ ].

Regarding the sentiment analysis by country, we found that users in the United States shared the most negative and least positive sentiments, whereas users in the Philippines showed the opposite pattern. The attitudes of users in the United States toward "Preventive behaviors" topics were comparatively more negative than those of users in other countries. This finding aligns with a previous Twitter study that reported that users in the United States had strong negative feelings toward masks, quarantine, and social distancing. Additionally, we found that users in the United States were more negative when discussing the "Fear" and "Case and death counts" topics. These findings could be explained by the United States leading the world in coronavirus-related fatalities [
].

Implications
Our findings have several implications for chatbot designers, health care professionals, and policymakers. They suggest that health organizations and chatbot platforms could actively collaborate to better address pandemic-related queries and deliver accurate information to the general public. As identified previously [
], chatbots can deliver more concise information to users than social media or search engines. Designers can exploit this functionality to better inform users of critical and urgent health information. At the same time, there is a concern that, as chatbots learn new information from users' inputs, they could become a source of misinformation. In the past, major social platforms, such as Facebook, Instagram, YouTube, and Twitter, addressed misinformation by attaching warning labels, removing content, banning accounts, and connecting users with information from reputable sources [ ]. Chatbots can adopt a similar strategy by providing links to trusted information sources, such as local CDCs and the WHO, thereby satisfying users' needs while disseminating accurate information.

However, adapting current chatbot strategies might not be enough to meet users' needs. Most commercial chatbots are built for casual conversation, and the utterances they produce are based on the text corpora they were trained on. SimSimi is one such example, generating responses based on its existing data; it cannot easily handle new topics like COVID-19. While some newer chatbots employ search functionalities, such as Facebook's Blender Bot 2.0 [
], they still suffer from various limitations [ ]. For instance, Blender Bot 2.0 "still makes mistakes, like contradiction or repetition, and can hallucinate knowledge" [ ]. More work is needed to address the limitations of large language models [ ], including their ability to generate potentially harmful language and reinforce stereotypes [ , ]. In high-stakes scenarios like the COVID-19 pandemic, chatbots can be tailored to overcome these limitations and help users access crucial information rapidly.

Users' tendency to view chatbots as social partners must also be considered when designing chatbots that satisfy information needs. Our study suggests the importance of adding social functionalities to chatbots alongside offering critical public health information. As in previous studies on conversational agents [
], users responded socially to the chatbot. For instance, users prayed for SimSimi's safety and talked about their struggles due to COVID-19. Along with providing accurate information, social interaction will continue to be a key component of chatbots.

We also found that users were more likely to disclose their negative emotions about COVID-19 to a chatbot than on Twitter. People's tendency to open up when interacting with chatbots points to the possibility that these systems could develop into alternative tools for managing the depression and stress brought on by the pandemic, especially when offline interactions are limited by lockdowns.
Finally, our research shows how chatbot data, like social media data, can be used to examine public perceptions and sentiments during a health crisis. People use the chatbot to express positive and negative emotions and to search for information, as on any other digital platform. Rather than tracking individuals' moods, the aggregate emotional state of users seen in chat conversations could serve as a barometer of the psychological impact of social upheaval. Chatbots could also gather early feedback on policies that affect people's everyday lives, such as mask wearing.
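Operationally, such a barometer amounts to aggregating per-utterance sentiment scores over time. A minimal sketch, assuming a hypothetical record format of (date, score) pairs rather than any schema from the study:

```python
from collections import defaultdict
from statistics import mean

def daily_sentiment(records):
    """Average per-utterance sentiment scores by day.
    records: iterable of (date_str, score) pairs -- a hypothetical schema,
    where score could be, eg, a negative-word percentage."""
    by_day = defaultdict(list)
    for day, score in records:
        by_day[day].append(score)
    return {day: mean(scores) for day, scores in sorted(by_day.items())}

daily_sentiment([("2020-03-01", 2.0), ("2020-03-01", 4.0), ("2020-03-02", 1.0)])
# → {"2020-03-01": 3.0, "2020-03-02": 1.0}
```

Because only group-level averages are retained, this kind of aggregation also avoids tracking any individual user's mood over time.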
Limitations and Future Work
Our study has several limitations. First, the data in this study are limited to users who accessed and used a particular chatbot. Given that the primary user group of the chatbot service studied is young adults [
], our findings may not be generalizable to other populations and age groups. Our study also focused on a particular social chatbot, and its findings might not generalize to other types of conversational agents.

Moreover, most research on chatbots for COVID-19, including this study, has been limited to English data. In particular, we analyzed English conversations from countries whose native language is not English (eg, Malaysia and the Philippines). Our results might thus be specific to users who are proficient in English and may not be generalizable to all users and citizens in Malaysia and the Philippines. Future studies could therefore consider multilingual topic modeling and sentiment analysis.
Furthermore, due to the limitations of existing computational approaches, we focused on positive and negative sentiments rather than neutral ones in this study. However, studying neutral sentiments could be an interesting future line of work.
Finally, we assumed that each conversation block of 3 consecutive utterances could be classified into a single topic, consistent with conventional topic modeling. However, users' conversations with chatbots may cut across various subjects at the same time. Future work could explore how coronavirus-related topics intersect by using multiclass labels.
Conclusions
This study was the first to analyze chatbot-user conversations from a live system over 2 years in the context of the COVID-19 pandemic. We examined user conversations about COVID-19 with a commercial social chatbot using NLP tools and found 18 topics, which we categorized into 5 overarching themes. Our findings demonstrate how people used the chatbot to seek information about the global health crisis, despite the chatbot not being designed as a resource for such information. Users' expectations that chatbots simultaneously play particular social roles, such as engaging in small talk, show that these conversational agents may need to be designed with various social functions. We also examined the sentiments users expressed when discussing these topics, finding that people were more likely to engage in emotional conversations with a chatbot than on social media.
Chatbot data can be used to better understand people's emotional and informational needs during crises like the COVID-19 pandemic. Instead of creating responses based on pretrained text, virtual social agent designers can consider the informational needs of people looking for the latest and most accurate information. Initiatives like Facebook’s Blender Bot 2.0 [
] can search the internet for information not covered during training, and such search functionality will be useful in meeting users' information needs. Additionally, given that people tend to be more open with chatbots when sharing negative emotions [ ], chatbots can play an increasing role in meeting the emotional needs of users and helping alleviate depressive moods.

Acknowledgments
HC, GL, MS, AZ, and MC were supported by the Institute for Basic Science, Republic of Korea (IBS-R029-C2). CC was supported by a National Research Foundation of Korea grant funded by the Ministry of Science and ICT (Information and Communications Technology) (2021R1A2C2008166). JC was funded by the Technology Development Program funded by the Ministry of SMEs (small and medium-sized enterprises) and Startups in 2022 (S3051210). The authors thank SimSimi for providing data for research purposes.
Conflicts of Interest
None declared.
The average percentage of positive; negative; and anger-, anxiety-, and sadness-related words across 5 countries in COVID-19–related conversations, according to the Linguistic Inquiry and Word Count dictionary. Error bars indicate standard errors. CA: Canada; GB: United Kingdom; MY: Malaysia; PH: Philippines; US: United States.
PNG File, 32 KB
The average percentage of positive and negative words in COVID-19–related chat sessions by topic and country, according to the Linguistic Inquiry and Word Count dictionary. Error bars indicate standard errors.
PNG File, 197 KB

References
- Boon-Itt S, Skunkan Y. Public Perception of the COVID-19 Pandemic on Twitter: Sentiment Analysis and Topic Modeling Study. JMIR Public Health Surveill 2020 Nov 11;6(4):e21978 [FREE Full text] [CrossRef] [Medline]
- Abd-Alrazaq A, Alhuwail D, Househ M, Hamdi M, Shah Z. Top Concerns of Tweeters During the COVID-19 Pandemic: Infoveillance Study. J Med Internet Res 2020 Apr 21;22(4):e19016 [FREE Full text] [CrossRef] [Medline]
- Schubel LC, Wesley DB, Booker E, Lock J, Ratwani RM. Population subgroup differences in the use of a COVID-19 chatbot. NPJ Digit Med 2021 Feb 19;4(1):30 [FREE Full text] [CrossRef] [Medline]
- Brandtzæg PB, Skjuve M, Dysthe KK, Følstad A. When the Social Becomes Non-Human: Young People's Perception of Social Support in Chatbots. In: CHI '21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021 Presented at: 2021 CHI Conference on Human Factors in Computing Systems; May 8-13, 2021; Yokohama, Japan p. 1-13. [CrossRef]
- Chin H, Molefi LW, Yi MY. Empathy Is All You Need: How a Conversational Agent Should Respond to Verbal Abuse. In: CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 2020 Presented at: 2020 CHI Conference on Human Factors in Computing Systems; April 25-30, 2020; Honolulu, HI, USA p. 1-13. [CrossRef]
- Shum H, He X, Li D. From Eliza to XiaoIce: challenges and opportunities with social chatbots. Frontiers Inf Technol Electronic Eng 2018 Jan 8;19(1):10-26. [CrossRef]
- A state-of-the-art open source chatbot. Facebook AI. URL: https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/ [accessed 2022-07-06]
- Siedlikowski S, Noël LP, Moynihan SA, Robin M. Chloe for COVID-19: Evolution of an Intelligent Conversational Agent to Address Infodemic Management Needs During the COVID-19 Pandemic. J Med Internet Res 2021 Sep 21;23(9):e27283 [FREE Full text] [CrossRef] [Medline]
- Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med 2020;3:65 [FREE Full text] [CrossRef] [Medline]
- WHO launches a chatbot on Facebook Messenger to combat COVID-19 misinformation. World Health Organization. URL: https://www.who.int/news-room/feature-stories/detail/who-launches-a-chatbot-powered-facebook-messenger-to-combat-covid-19-misinformation [accessed 2022-06-29]
- COVID-19 Testing: What You Need to Know. CDC. URL: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/testing.html [accessed 2022-06-29]
- Cinelli M, Quattrociocchi W, Galeazzi A, Valensise CM, Brugnoli E, Schmidt AL, et al. The COVID-19 social media infodemic. Sci Rep 2020 Oct 06;10(1):16598 [FREE Full text] [CrossRef] [Medline]
- Patel J, Desai H, Okhowat A. The Role of the Canadian Media During the Initial Response to the COVID-19 Pandemic: A Topic Modelling Approach Using Canadian Broadcasting Corporation News Articles. JMIR Infodemiology 2021;1(1):e25242. [CrossRef] [Medline]
- Whitfield C, Liu Y, Anwar M. Surveillance of COVID-19 pandemic using social media: a reddit study in North Carolina. In: BCB '21: Proceedings of the 12th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics. 2021 Presented at: 12th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics; August 1-4, 2021; Gainesville, FL, USA p. 1-8. [CrossRef]
- Zhunis A, Lima G, Song H, Han J, Cha M. Emotion Bubbles: Emotional Composition of Online Discourse Before and After the COVID-19 Outbreak. In: WWW '22: Proceedings of the ACM Web Conference 2022. 2022 Presented at: ACM Web Conference 2022; April 25-29, 2022; Virtual Event, Lyon France p. 2603-2613. [CrossRef]
- Jang H, Rempel E, Roth D, Carenini G, Janjua NZ. Tracking COVID-19 Discourse on Twitter in North America: Infodemiology Study Using Topic Modeling and Aspect-Based Sentiment Analysis. J Med Internet Res 2021 Feb 10;23(2):e25431 [FREE Full text] [CrossRef] [Medline]
- Hussain A, Tahir A, Hussain Z, Sheikh Z, Gogate M, Dashtipour K, et al. Artificial Intelligence-Enabled Analysis of Public Attitudes on Facebook and Twitter Toward COVID-19 Vaccines in the United Kingdom and the United States: Observational Study. J Med Internet Res 2021 Apr 05;23(4):e26627 [FREE Full text] [CrossRef] [Medline]
- Almalki M, Azeez F. Health Chatbots for Fighting COVID-19: a Scoping Review. Acta Inform Med 2020 Dec;28(4):241-247 [FREE Full text] [CrossRef] [Medline]
- Amiri P, Karahanna E. Chatbot use cases in the Covid-19 public health response. J Am Med Inform Assoc 2022 Apr 13;29(5):1000-1010 [FREE Full text] [CrossRef] [Medline]
- Gratch J, Lucas GM, King AA, Morency LP. It's only a computer: the impact of human-agent interaction in clinical interviews. In: AAMAS '14: Proceedings of the 2014 International Conference on Autonomous Agents and Multi-agent Systems. 2014 Presented at: 2014 International Conference on Autonomous Agents and Multi-agent Systems; May 5-9, 2014; Paris, France p. 85-92. [CrossRef]
- SimSimi. URL: https://simsimi.com/ [accessed 2022-06-29]
- Älgå A, Eriksson O, Nordberg M. Analysis of Scientific Publications During the Early Phase of the COVID-19 Pandemic: Topic Modeling Study. J Med Internet Res 2020 Nov 10;22(11):e21559 [FREE Full text] [CrossRef] [Medline]
- Gozzi N, Tizzani M, Starnini M, Ciulla F, Paolotti D, Panisson A, et al. Collective Response to Media Coverage of the COVID-19 Pandemic on Reddit and Wikipedia: Mixed-Methods Analysis. J Med Internet Res 2020 Oct 12;22(10):e21597 [FREE Full text] [CrossRef] [Medline]
- Park S, Han S, Kim J, Molaie MM, Vu HD, Singh K, et al. COVID-19 Discourse on Twitter in Four Asian Countries: Case Study of Risk Communication. J Med Internet Res 2021 Mar 16;23(3):e23272 [FREE Full text] [CrossRef] [Medline]
- Schück S, Foulquié P, Mebarki A, Faviez C, Khadhar M, Texier N, et al. Concerns Discussed on Chinese and French Social Media During the COVID-19 Lockdown: Comparative Infodemiology Study Based on Topic Modeling. JMIR Form Res 2021 Apr 05;5(4):e23593 [FREE Full text] [CrossRef] [Medline]
- de Melo T, Figueiredo CMS. Comparing News Articles and Tweets About COVID-19 in Brazil: Sentiment Analysis and Topic Modeling Approach. JMIR Public Health Surveill 2021 Feb 10;7(2):e24585 [FREE Full text] [CrossRef] [Medline]
- Drescher LS, Roosen J, Aue K, Dressel K, Schär W, Götz A. The Spread of COVID-19 Crisis Communication by German Public Authorities and Experts on Twitter: Quantitative Content Analysis. JMIR Public Health Surveill 2021 Dec 22;7(12):e31834 [FREE Full text] [CrossRef] [Medline]
- Xue J, Chen J, Hu R, Chen C, Zheng C, Su Y, et al. Twitter Discussions and Emotions About the COVID-19 Pandemic: Machine Learning Approach. J Med Internet Res 2020 Nov 25;22(11):e20550 [FREE Full text] [CrossRef] [Medline]
- Kothari A, Walker K, Burns K. #CoronaVirus and public health: the role of social media in sharing health information. OIR 2022 Feb 21;46(7):1293-1312. [CrossRef]
- Ta V, Griffith C, Boatfield C, Wang X, Civitello M, Bader H, et al. User Experiences of Social Support From Companion Chatbots in Everyday Contexts: Thematic Analysis. J Med Internet Res 2020 Mar 06;22(3):e16235 [FREE Full text] [CrossRef] [Medline]
- Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using Psychological Artificial Intelligence (Tess) to Relieve Symptoms of Depression and Anxiety: Randomized Controlled Trial. JMIR Ment Health 2018 Dec 13;5(4):e64 [FREE Full text] [CrossRef] [Medline]
- Fitzpatrick KK, Darcy A, Vierhile M. Delivering Cognitive Behavior Therapy to Young Adults With Symptoms of Depression and Anxiety Using a Fully Automated Conversational Agent (Woebot): A Randomized Controlled Trial. JMIR Ment Health 2017 Jun 06;4(2):e19 [FREE Full text] [CrossRef] [Medline]
- Budhwani H, Sun R. Creating COVID-19 Stigma by Referencing the Novel Coronavirus as the "Chinese virus" on Twitter: Quantitative Analysis of Social Media Data. J Med Internet Res 2020 May 06;22(5):e19301 [FREE Full text] [CrossRef] [Medline]
- Raji VPH, Maheswari PU. COVID-19 Lockdown in India: An Experimental Study on Promoting Mental Wellness Using a Chatbot during the Coronavirus. International Journal of Mental Health Promotion 2022;24(2):189-205. [CrossRef]
- Tanoue Y, Nomura S, Yoneoka D, Kawashima T, Eguchi A, Shi S, et al. Mental health of family, friends, and co-workers of COVID-19 patients in Japan. Psychiatry Res 2020 Sep;291:113067 [FREE Full text] [CrossRef] [Medline]
- Bharti U, Bajaj D, Batra H, Lalit S, Lalit S, Gangwani A. Medbot: Conversational Artificial Intelligence Powered Chatbot for Delivering Tele-Health after COVID-19. 2020 Presented at: 5th International Conference on Communication and Electronics Systems (ICCES); June 10-12, 2020; Coimbatore, India p. 870-875. [CrossRef]
- Smith AC, Thomas E, Snoswell CL, Haydon H, Mehrotra A, Clemensen J, et al. Telehealth for global emergencies: Implications for coronavirus disease 2019 (COVID-19). J Telemed Telecare 2020 Mar 20;26(5):309-313. [CrossRef]
- Amer E, Hazem A, Farouk O, Louca A, Mohamed Y, Ashraf M. A Proposed Chatbot Framework for COVID-19. 2021 Presented at: 2021 International Mobile, Intelligent, and Ubiquitous Computing Conference (MIUCC); May 26-27, 2021; Cairo, Egypt. [CrossRef]
- Altay S, Hacquin A, Chevallier C, Mercier H. Information delivered by a chatbot has a positive impact on COVID-19 vaccines attitudes and intentions. J Exp Psychol Appl 2021 Oct 28:400. [CrossRef] [Medline]
- Li Y, Grandison T, Silveyra P, Douraghy A, Guan X, Kieselbach T, et al. Jennifer for COVID-19: An NLP-Powered Chatbot Built for the People and by the People to Combat Misinformation. In: Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020. 2020 Presented at: 1st Workshop on NLP for COVID-19 at ACL 2020; July 2020; Online URL: https://aclanthology.org/2020.nlpcovid19-acl.9/
- Replika. URL: https://replika.com/ [accessed 2022-08-31]
- Amina. URL: https://myanima.ai/ [accessed 2022-08-31]
- Kuki AI. URL: https://www.kuki.ai/ [accessed 2022-08-31]
- List of tz database time zones. Wikipedia. URL: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones [accessed 2022-09-02]
- Early AI-supported Response with Social Listening. WHO EARS. URL: https://www.who-ears.com/#/ [accessed 2022-06-29]
- Perspective API. URL: https://perspectiveapi.com/ [accessed 2022-06-29]
- Vingerhoets AJJM, Bylsma LM, de Vlam C. Swearing: A biopsychosocial perspective. Psihologijske Teme 2013;22(2):287-304 [FREE Full text]
- Hill J, Randolph Ford W, Farreras I. Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior 2015 Aug;49:245-250 [FREE Full text] [CrossRef]
- Offensive/Profane Word List. CMU School of Computer Science. URL: https://www.cs.cmu.edu/~biglou/resources/bad-words.txt [accessed 2022-06-29]
- SimSimi Policy on Rights and Responsibilities. SimSimi Team Blog. URL: http://blog.simsimi.com/2017/04/simsimi-terms-and-conditions.html [accessed 2022-09-02]
- Conway M. Ethical issues in using Twitter for public health surveillance and research: developing a taxonomy of ethical concepts from the research literature. J Med Internet Res 2014 Dec 22;16(12):e290 [FREE Full text] [CrossRef] [Medline]
- SimSimi Blog. NAVER. URL: https://blog.naver.com/simsimi_official [accessed 2022-11-20]
- Gensim: topic modeling for humans. Radim Řehůřek. URL: https://radimrehurek.com/gensim/ [accessed 2022-06-29]
- Lyu JC, Han EL, Luli GK. COVID-19 Vaccine-Related Discussion on Twitter: Topic Modeling and Sentiment Analysis. J Med Internet Res 2021 Jun 29;23(6):e24435 [FREE Full text] [CrossRef] [Medline]
- Miller M, Banerjee T, Muppalla R, Romine W, Sheth A. What Are People Tweeting About Zika? An Exploratory Study Concerning Its Symptoms, Treatment, Transmission, and Prevention. JMIR Public Health Surveill 2017 Jun 19;3(2):e38 [FREE Full text] [CrossRef] [Medline]
- LIWC-22. LIWC. URL: https://www.liwc.app/ [accessed 2022-06-29]
- Ribeiro F, Araújo M, Gonçalves P, André Gonçalves M, Benevenuto F. SentiBench - a benchmark comparison of state-of-the-practice sentiment analysis methods. EPJ Data Sci 2016 Jul 7;5(1):1-29 [FREE Full text] [CrossRef]
- Rusticus SA, Lovato CY. Impact of Sample Size and Variability on the Power and Type I Error Rates of Equivalence Tests: A Simulation Study. Practical Assessment, Research, and Evaluation 2014;19:Article 11 [FREE Full text] [CrossRef]
- Reeves B, Nass C. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge, UK: Cambridge University Press; 1996.
- Chandrasekaran R, Mehta V, Valkunde T, Moustakas E. Topics, Trends, and Sentiments of Tweets About the COVID-19 Pandemic: Temporal Infoveillance Study. J Med Internet Res 2020 Oct 23;22(10):e22624 [FREE Full text] [CrossRef] [Medline]
- Lucas GM, Gratch J, King A, Morency L. It’s only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior 2014 Aug;37:94-100. [CrossRef]
- McKinney F. Free writing as therapy. Psychotherapy: Theory, Research & Practice 1976;13(2):183-187. [CrossRef]
- Bechard E, Evans J, Cho E, Lin Y, Kozhumam A, Jones J, et al. Feasibility, acceptability, and potential effectiveness of an online expressive writing intervention for COVID-19 resilience. Complement Ther Clin Pract 2021 Nov;45:101460 [FREE Full text] [CrossRef] [Medline]
- WHO Coronavirus (COVID-19) Dashboard. World Health Organization. URL: https://covid19.who.int/table [accessed 2022-06-30]
- Krishnan N, Gu J, Tromble R, Abroms LC. Research note: Examining how various social media platforms have responded to COVID-19 misinformation. Harvard Kennedy School Misinformation Review 2021;2(6):1-25. [CrossRef]
- Blender Bot 2.0: An open source chatbot that builds long-term memory and searches the internet. Facebook AI. URL: https://ai.facebook.com/blog/blender-bot-2-an-open-source-chatbot-that-builds-long-term-memory-and-searches-the-internet/ [accessed 2022-06-30]
- Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021 Presented at: 2021 ACM Conference on Fairness, Accountability, and Transparency; March 3-10, 2021; Virtual Event Canada p. 610-623. [CrossRef]
Abbreviations
AI: artificial intelligence |
API: application programming interface |
CDC: Centers for Disease Control and Prevention |
LDA: Latent Dirichlet Allocation |
LIWC: Linguistic Inquiry and Word Count |
NLP: natural language processing |
WHO: World Health Organization |
Edited by T Leung, V Arnold, H Gouda; submitted 09.07.22; peer-reviewed by Z Butt, K Roberts, K Dashtipour; comments to author 17.08.22; revised version received 06.09.22; accepted 22.12.22; published 27.01.23
Copyright©Hyojin Chin, Gabriel Lima, Mingi Shin, Assem Zhunis, Chiyoung Cha, Junghoi Choi, Meeyoung Cha. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 27.01.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.