Original Paper
1Department of Educational Psychology and Counseling, National Tsing Hua University, Hsinchu, Taiwan
2Institute of Learning Sciences and Technologies, National Tsing Hua University, Hsinchu, Taiwan
*all authors contributed equally
Corresponding Author:
Hsin-Yu Kuo, MA
Department of Educational Psychology and Counseling
National Tsing Hua University
101, Section 2, Kuang-Fu Road
Hsinchu, 300044
Taiwan
Phone: 886 3 571 5131 ext 34510
Email: kuohy@mx.nthu.edu.tw
Abstract
Background: Health misinformation undermines responses to health crises, and social media amplifies the problem. Although organizations work to correct misinformation, challenges persist for reasons such as the difficulty of disseminating corrections effectively and the overwhelming volume of information. At the same time, social media offers valuable interactive data, enabling researchers to analyze user engagement with health misinformation corrections and refine content design strategies.
Objective: This study aimed to identify the attributes of correction posts and user engagement and to investigate (1) the trend of user engagement with health misinformation correction during 3 years of the COVID-19 pandemic; (2) the relationship between post attributes and user engagement in sharing and reactions; and (3) whether the content of user comments, serving as additional information attached to a post, affects user engagement in sharing and reactions.
Methods: Data were collected from the Facebook pages of a fact-checking organization and a health agency from January 2020 to December 2022. A total of 1424 posts and 67,378 corresponding comments were analyzed. The posts were manually annotated by developing a research framework based on the fuzzy-trace theory, categorizing information into “gist” and “verbatim” representations. Three types of gist representations were examined: risk (risks associated with misinformation), awareness (awareness of misinformation), and value (value in health promotion). Furthermore, 3 types of verbatim representations were identified: numeric (numeric and statistical bases for correction), authority (authority from experts, scholars, or institutions), and facts (facts with varying levels of detail). The basic metrics of user engagement included shares, reactions, and comments as the primary dependent variables. Moreover, this study examined user comments and classified engagement as cognitive (knowledge-based, critical, and bias-based) or emotional (positive, negative, and neutral). Statistical analyses were performed to explore the impact of post attributes on user engagement.
Results: On the basis of the results of the regression analysis, risk (β=.07; P=.001), awareness (β=.09; P<.001), and facts (β=.14; P<.001) predicted higher shares; awareness (β=.07; P=.001) and facts (β=.24; P<.001) increased reactions; and awareness (β=.06; P=.005), numeric representations (β=.06; P=.02), and facts (β=.19; P<.001) increased comments. All 3 gist representations significantly predicted shares (risk: β=.08; P<.001, awareness: β=.08; P<.001, and value: β=.06; P<.001) and reactions (risk: β=.04; P=.007, awareness: β=.06; P<.001, and value: β=.05; P<.001) when considering comment content. In addition, comments with bias-based engagement (β=–.11; P=.001) negatively predicted shares. Generally, posts providing gist attributes, especially awareness of misinformation, were beneficial for user engagement in misinformation correction.
Conclusions: This study enriches the theoretical understanding of the relationship between post attributes and user engagement within web-based communication efforts to correct health misinformation. These findings provide a foundation for designing more effective content approaches to combat misinformation and strengthen public health communication.
doi:10.2196/65631
Keywords
Introduction
Background
Health misinformation negatively impacts individuals’ and society’s adaptive response to a health crisis [1,2]. As a communication tool, social media has worsened the problem by speeding up the spread of misinformation and strengthening people’s existing beliefs [3-5]. Conversely, social media platforms can be effectively used for health communication to disseminate accurate information by correcting misinformation and engaging users by providing verified health advice [6-9]. Many organizations devote efforts to correcting misinformation and releasing corrections on social media. However, the efficacy of correction is sometimes limited for various reasons, such as people feeling uncomfortable with the knowledge gap left when misinformation is retracted [10]. Moreover, in an increasingly complex health information environment, people are flooded with knowledge they may not be able to use correctly [11]. In this context, the conditions and message designs that make misinformation correction an effective countermeasure have been examined in experimental studies [6,12,13]. As an interactive digital environment, social media also allows researchers to access data on user responses to health communication, obtaining results from different perspectives [7-9,14].
In this study, we investigate how to enhance the effectiveness of misinformation correction based on real-world data. We selected 2 representative Facebook (Meta Platforms, Inc) fan pages in Taiwan that provide rich health misinformation corrections in post form. Users engage with the content provided by these 2 organizations and with other users’ comments. In general, correction is a remedy that offers accurate information to counteract misinformation presented in media [10]. On the basis of the observed data, this study defines health misinformation correction as content intended to counter health-related misinformation. This includes clarifications of health misinformation, fact-checking that covers the verification process, and content that addresses potential misinformation without directly correcting a specific claim (examples are provided in Table S1 in Multimedia Appendix 1).
In this study, user engagement refers to the responses users have when they see posts, particularly those explicitly displayed on social media. These metrics are important for examining the impact of post attributes in many studies [8,9,14]. This study explored user engagement in 2 dimensions. The first is based on the metrics Facebook provides (sharing, reacting, and commenting), which are the main focus of the predictions in this research. The second examines the content users create when they comment. We assume that users comment with different intentions, which are reflected in the comment content and presented as additional information accessible to other users. This study therefore proposes a framework to classify the engagement categories in comments and examines whether this content affects the other 2 metrics: sharing and reacting.
In the following sections, we introduce prior work on health communication on social media, explain the research gap, and draw on the perspective of fuzzy-trace theory (FTT) to identify post attributes and the relationship between post attributes and user engagement in this study.
Prior Work
In previous studies, the acceptance of misinformation correction across different contexts and content types has been examined through experiments. The variables often examined include simple versus detailed or factual elaboration [12,13,18]; correction sources [6,12,13,19]; and information content such as quality, intensity, or emphasis [18,20,21].
Another approach in the context of social media involves collecting data from these platforms, labeling them manually or automatically, and analyzing them. A study using FTT as a framework explored news reports about measles and vaccines, treating the opinions of vaccination proponents or opponents as the gist. In that study, data were coded to predict the likelihood of news sharing on Facebook; the regression model included other variables, such as statistics and stories, but only the gist significantly predicted news sharing [14]. Health communication efforts increased significantly during the COVID-19 pandemic, allowing for analysis through publicly available data. A study of 1816 posts from hospitals in Taiwan during the first 4 months of the COVID-19 pandemic found that specific social media strategies significantly increased user engagement, indicating the potential to improve health literacy among health care organizations [7]. Such studies have also observed that users inevitably share misinformation through comments when discussing health issues on social media. A study analyzed 1534 comments from 2 Facebook groups discussing mental health symptoms between January 2019 and December 2021. It found that 26.1% of the comments contained medically inaccurate information, particularly regarding complementary and alternative medicine (60.3%) [22]. By observing long-term trends, it is also possible to track changes in the quality of health communication and user attitudes. An analysis of 12,553 COVID-19 vaccine fact-checking posts and 122,362 comments from January 2020 to March 2022 showed increased analytical thinking and confidence in the posts, alongside decreased tentativeness in the comments, showing an evolution in public sentiment over time [8]. Finally, a study analyzed 914 fact-checking news articles to understand how source transparency, contextual information, and vividness influence user engagement. The study found that these factors significantly affected user responses, especially likes and angry reactions, emphasizing the importance of content style and depth in capturing public attention [9]. These studies underline the critical role of social media in shaping public health narratives and emphasize the necessity of designing engaging yet accurate content during health crises.
Research Gap
Regarding the research gap, few health communication studies incorporate both data and theory, and finding a theory that fits the data is not easy. To explore the effect of misinformation correction, the primary theories considered include inoculation theory [23], the elaboration likelihood model [24], the heuristic-systematic model [25], and FTT [15], all of which offer different interpretations from a cognitive perspective on how misinformation correction can be effectively achieved. The FTT emphasizes the importance of conveying the essence of a message, focusing on its core meaning rather than merely summarizing or simplifying it, and highlights the need to avoid overwhelming people with excessive quantitative details [26]. Moreover, FTT has robust cognitive neuroscience foundations, making it suitable for scientific communication and compatible with nonpersuasive fact-checking methods that prioritize reference-based corrections. Therefore, this study adopted FTT for analysis. The method of conveying essential meaning is called gist communication [17].
In addition, we proposed a framework for classifying user-generated comment content, as previous studies have rarely examined this aspect. We have included them as independent variables in the prediction model to explore whether the FTT’s gist communication remains effective even when different comments are present as additional post information.
The FTT and Health Misinformation Correction
Overview
The FTT explains how we process and remember information through 2 types of cognitive representation: “gist” and “verbatim.” Gist captures the essential, top-down meaning, including causal relations and key concepts [27,28]. Verbatim captures details such as exact words and elements [15,16]. This theory was originally developed to study how individuals shape their memory and engage in reasoning and decision-making [15]. The gist is more than a mental shortcut; it is a distilled way of presenting objective facts to help individuals make health and medical decisions and navigate life [29].
According to the FTT, information containing the gist is likely to evoke motivationally relevant values that may be shared online, whether true or not [30]. This explains why people easily share misinformation that offers a clear core message and causal explanation [27]. This mechanism can be applied in fact-checking or correction as an intervention. FTT explains that providing a brief explanation has more lasting benefits than merely offering a false label [31], but offering decontextualized detailed information may be counterproductive [18]. The critical factor may be the inclusion of the gist. The gist of information triggers emotions and evokes core values [17]. Furthermore, when the content provides gist cues that connect with people’s inner motivations and values, it is more likely to catch attention and be shared [30].
Health interventions should emphasize conveying the gist of information with an organized approach to highlight essential points [32]. The next question is to identify the gist of health misinformation correction. Different gist representations exist in diverse contexts. In a study on physician-patient communication, the gist of the effectiveness and risks of medications was extracted from communication records to examine its relationship with patient satisfaction [16]. On the basis of the FTT and important concepts discussed in misinformation research, we proposed 3 gist representations: risks associated with misinformation, awareness of misinformation, and value in health promotion. In addition, we proposed 3 verbatim representations: numeric and statistical information as a reference for correction; authority from named experts, scholars, or official institutions used as references; and facts presented with varying levels of detail. These 6 representations, abbreviated as risk, awareness, value, numeric, authority, and facts, serve as attributes of misinformation correction posts.
Risk: Risks Associated With Misinformation
During health crises, it is important to communicate risks accurately and effectively [32,33], as risk perception significantly correlates with adopting preventive health behaviors [34]. The risks associated with misinformation in this study included harm to personal and public health, social panic, and legal risks of spreading misinformation.
Awareness: Awareness of Misinformation
Public awareness plays a crucial role in improving resilience to misinformation, including awareness of exposure to misinformation and the ability to identify it [35]. Moreover, providing awareness and warnings beforehand usually has a beneficial impact [23]. In this study, awareness of misinformation involved practical information to remind and help individuals recognize and verify misinformation.
Value: Value in Health Promotion
Promoting health and well-being is a primary goal of health communication [36]. When correcting misinformation, it is recommended not only to mention what is false but also to provide alternative accurate information and encourage self-affirmation [10]. Conveying public trust and confidence is vital to combat anxiety-induced acceptance of misinformation [37]. Therefore, this study identified value in health promotion as the third gist, representing the core values held in responding to health crises, including health literacy and a positive attitude.
Numeric: Numeric and Statistical Information as a Reference for Correction
Scientific communication often includes numbers and statistics as objective data. People’s numeracy levels can affect their interpretations of risks, probabilities, and numerical results, leading to potential misunderstandings [38]. This study examined the use of numerical information to correct misinformation. Indeed, prior content analysis studies have considered statistical data as an attribute or characteristic of posts [9,14].
Authority: Authority From Named Experts, Scholars, or Official Institutions Used as References
Experts, scholars, and professional organizations are trusted sources of knowledge for authoritative communication, and health care professionals can correct health misinformation on social media [39,40]. Fact-checking organizations consult experts to improve the credibility of misinformation corrections. Thus, this study also investigated the effects of referencing authority.
Facts: Facts Presented With Varying Levels of Detail
People’s attention span is generally limited on social media, and the effects of detailed factual information should be evaluated. The outcomes of detailed factual elaboration and simple rebuttal have been discussed in prior research [12,41]. In this study, fact representations refer to detailed and scientific elaborations with evidence for correcting misinformation. These can be categorized into three levels: (1) those that do not target specific misinformation; (2) simple corrections involving straightforward explanations; and (3) detailed corrections encompassing comprehensive information, typically with a complete structure, including background, reasons for errors, and alternative explanations.
User Engagement on Social Media
Social media offers benefits for health communication, such as improving the accessibility of health information and allowing users to generate and exchange information, but the quality and reliability of the shared information must be monitored [42]. The FTT suggests that information with gist cues supports reasoning and decision-making and that users are more likely to share content when the cues connect with their motivations or values. Facebook shares are commonly used as a metric, indicating users’ affirmation of and commitment to a post [43]. In addition to sharing, users can also engage through reactions (including likes and 6 emoticons) and comments on social media platforms. A study used the common-sense model of illness self-regulation, which describes humans’ cognitive and emotional systems, to examine which attributes of illness representations increase user engagement on social media [44]; that is, shares, reactions, and comments were metrics driven by cognitive and emotional factors.
This study defined the basic metrics of user engagement as follows:
- Shares—user engagement in sharing, measured by share count
- Reactions—user engagement in reacting, measured by summing up clicks of likes and 6 other emoticons
- Comments—user engagement in commenting, measured by comment count
Metrics such as shares, reactions, and comments are commonly used in health communication studies to identify factors that increase user engagement [44-46]. However, user engagement is not always positive, especially in comments, which often contain errors, biases, and uncivil behavior [22,47,48]. The phenomenon of echo chambers on social media refers to selective exposure and confirmation bias, which limit diverse perspectives and foster polarized groups [3]. A study distinguished between 2 belief systems among users: scientific and conspiracy [49]. Some studies have suggested that user-generated comments influence other users’ perceptions of posts. For example, it was found that the perceived quality of journalistic content being commented on decreased due to a lack of reasoning and incivility in user comments [47]. When exposed to critical comments about a fake news article, people are less likely to share or comment positively on the article [50]. A study reported that positive comments influenced people’s perceptions of a company’s reputation in a crisis [51]. However, well-informed individuals often remain quiet on certain issues, whereas misinformed individuals tend to be more vocal online [52].
This study categorized comments into cognitive and emotional engagement [44]. For cognitive engagement, we were concerned with the conspiracy belief group and its counterpart, the scientific group [49]. We treated engagement potentially driven by conspiracy beliefs as bias-based engagement and identified 2 types of engagement driven by scientific beliefs: one focused on knowledge itself, called knowledge-based engagement, and the other focused on misinformation issues, called critical engagement. Furthermore, we observed user engagement driven by emotions, which can be divided into positive, negative, and neutral emotional engagement according to the positive-negative valence framework [53]. We classified each comment into 1 of 6 categories, which are further defined as follows:
- Knowledge-based engagement. This study observed users’ knowledge-related behaviors on social media driven by scientific beliefs, which include providing additional information, clarifying facts, and asking questions.
- Critical engagement. Developing critical thinking skills and pragmatic skepticism is crucial for citizens to combat misinformation [35]. Unlike knowledge-based engagement, which focuses on knowledge itself, critical engagement identified in this study focused on discussing misinformation by offering warnings or explanations.
- Bias-based engagement. Bias-based engagement refers to users disseminating biased content driven by conspiracy beliefs, including misinformation, incorrect inferences, logical fallacies, or conspiracy theories, potentially undermining information reliability and impeding evidence-based discourse.
- Positive emotional engagement. Positive emotional engagement is characterized by users expressing positive emotions, such as joy, happiness, and excitement.
- Negative emotional engagement. Negative emotional engagement is characterized by users expressing negative emotions, such as anger, fear, sadness, or frustration.
- Neutral emotional engagement. Neutral emotional engagement describes a user’s subdued or indifferent emotional response to content or events.
Research Questions
User attitudes toward correction may change over time [8], and the misinformation contained in comments is also noteworthy [22]. Therefore, this study first investigated whether user engagement with health misinformation correction changed over the 3 years studied. Second, according to the FTT, providing the gist as cues can effectively and directly convey messages to users [17]. Accordingly, we hypothesized that providing gist representations would help users grasp the main aspects of health misinformation issues. We also identified some important verbatim representations as post attributes to examine their relationship with user engagement. Finally, we considered whether the content of comments affects users’ choice to share and react to a post. On the basis of the earlier discussion, the research questions (RQs) posed by this study are as follows:
- How has user engagement toward health misinformation correction changed during the COVID-19 pandemic? (RQ1)
- Can the gist and verbatim representations identified in this study as post attributes predict user engagement in shares, reactions, and comments? (RQ2)
- On the basis of RQ2, does the inclusion of user-generated comment content from engagement in commenting affect engagement in sharing and reactions? (RQ3)
Methods
Corpus and Sample Period
This study focused on health misinformation correction, drawing on 2 representative platforms, the Taiwan Fact-Checking Center (TFC) and the Ministry of Health and Welfare (MOHW), which provide rich data, including fact-checking posts, posts clarifying health-related events or policies, and user feedback. Data were collected from the Facebook pages of the TFC, which was established in 2018 and has >160,000 followers on Facebook, and the MOHW, which has a significant following of approximately 1.52 million Facebook users, accounting for 6.45% of the total population of Taiwan. We collected 3769 posts from the TFC and 5103 posts from the MOHW from January 2020 to December 2022. The TFC targets various misinformation issues, such as health, politics, economics, and celebrities. By contrast, the post content of the MOHW focuses on public health, such as health policy announcements, and the correction of health misinformation accounts for only a small portion. To ensure our focus was on health misinformation correction, we manually reviewed each post to identify those that involved both health and misinformation correction. For TFC posts, the reference keywords included health, pandemic, disease, food safety, folk remedies, vaccine, and health care. For MOHW posts, the reference keywords included misinformation, false information, clarification, and fake news. Because relying solely on keywords may miss relevant posts or capture irrelevant ones, we primarily relied on manual review. After the filtering process, we retained 34.89% (1315/3769) of the posts from the TFC and 2.14% (109/5103) of the posts from the MOHW. Corresponding to the posts from the TFC and MOHW, 26,458 comments (including 913 self-replies by the TFC manager) and 40,920 comments (including 97 self-replies by the MOHW manager) were downloaded, respectively. Finally, the dataset consisted of 1424 posts and 67,378 comments from both platforms.
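To make the filtering step concrete, the following is a minimal sketch, not the authors' code, of how keyword-based pre-filtering of scraped posts could precede the manual review described above. The file name, column names, and the English keyword list are illustrative assumptions; the actual reference keywords were Chinese terms, and every post was still reviewed manually.

```python
# Hypothetical pre-filtering sketch; flags candidate posts for manual review.
import pandas as pd

TFC_KEYWORDS = ["health", "pandemic", "disease", "food safety",
                "folk remedies", "vaccine", "health care"]  # illustrative English stand-ins

def flag_candidates(posts: pd.DataFrame, keywords: list[str]) -> pd.DataFrame:
    """Flag posts whose text mentions any reference keyword."""
    pattern = "|".join(keywords)
    posts = posts.copy()
    posts["keyword_hit"] = posts["message"].fillna("").str.contains(pattern, case=False)
    return posts

if __name__ == "__main__":
    tfc_posts = pd.read_csv("tfc_posts_2020_2022.csv")   # hypothetical scraped export
    flagged = flag_candidates(tfc_posts, TFC_KEYWORDS)
    # Flagged posts are only candidates; inclusion was decided by manual review.
    flagged.to_csv("tfc_posts_flagged_for_review.csv", index=False)
```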
Research Process
Overview
This study included 3 phases, as shown in Figure 1: (1) obtaining the research data and organizing them into a dataset, (2) processing the post and comment data, and (3) integrating the data and conducting statistical analyses. First, after defining the data sources, we used web scraping techniques to acquire the necessary posts and relevant metrics and to obtain the content of comments. Subsequently, to convert unstructured data into structured data for further analysis, posts were annotated manually, except for 1 indicator (readability), which was computed automatically, whereas the large volume of comment data was annotated automatically using GPT-3.5 (OpenAI). Finally, statistical analyses were conducted. Detailed information regarding data annotation is provided in the following sections.

Annotation and Transformation Data for Posts
Regarding the posts, we categorized them into five main themes based on the emphasized content: (1) understanding and controlling diseases; (2) vaccine safety and policies; (3) health care, food safety, and home remedies; (4) livelihood, economy, and information security; and (5) misinformation awareness and identification. Livelihood, economy, and information security refers to daily life aspects impacted by the COVID-19 pandemic, such as subsidy applications. Two coders manually coded this component (Cohen κ coefficient=0.672). In addition, as control variables, this study considered the length of posts, their readability, whether they were from official platforms, whether they adopted a formal tone, and whether they used images.
Due to the lack of Chinese readability tools, this study created a difficult-word dictionary based on 2 resources: the “Chinese 8000-Word List” levels 3 to 5 [54] and the “National Academy for Educational Research Third Level Seven-Word List” level 7 [55]. In addition, the word frequency table of this dataset was examined, and 340 words, such as reagents and nucleic acid, were added to the dictionary. The difficult-word dictionary included 11,138 words. Then, we used the Dale-Chall readability formula to compute the scores. The Dale-Chall readability score is 0.1579 × [(number of difficult words / total number of words) × 100] + 0.0496 × (total number of words / total number of sentences) [56,57]. The original score ranges from 0 to 10, with higher scores indicating greater difficulty in reading. Furthermore, the additive inverse of this score was calculated for intuitive interpretation, with higher scores indicating easier readability.
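A minimal sketch of the computation described above follows; it is not the authors' code, and Chinese word and sentence segmentation is assumed to be handled elsewhere (eg, with a tokenizer) so that each post arrives already split into words and sentences.

```python
# Raw Dale-Chall score as given in the formula above; higher means harder to read.
def dale_chall(words: list[str], n_sentences: int, difficult_words: set[str]) -> float:
    n_words = len(words)
    n_difficult = sum(1 for w in words if w in difficult_words)
    pct_difficult = (n_difficult / n_words) * 100
    return 0.1579 * pct_difficult + 0.0496 * (n_words / n_sentences)

# The study used the additive inverse so that higher values indicate easier reading.
def readability(words: list[str], n_sentences: int, difficult_words: set[str]) -> float:
    return -dale_chall(words, n_sentences, difficult_words)
```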
Whether a post adopted a formal tone was determined by manual annotation (Fleiss κ=0.521). Because determining whether the tone is formal can be subjective, we gradually reached a consensus throughout the process. This included discussions on the use of common Chinese modal particles and slang. At least 2 annotators annotated each post, and after comparing the annotation results, we discussed and resolved any inconsistencies individually.
The last control variable was whether an image was used. The data we obtained through web scraping included information about attachments, such as images, links, or videos. Uploaded images were prioritized for display in Facebook posts. If no image was uploaded but the content included a link, a preview image from the linked page was displayed. Similarly, a thumbnail image was shown when a video was uploaded in a post. When an image was used as an attachment, it was typically designed. We did not evaluate the content of the images. Of the 1424 posts, 1310 (91.99%) had uploaded images as attachments (ranging from 1 to 15 images), 109 (7.65%) displayed a preview image from a link, 4 (0.28%) had an uploaded video, and 1 (0.07%) had no attachment. We assigned a dummy code of 1 to the 1310 (91.99%) posts with a designed image as an attachment and 0 to the others.
This study focused on examining 6 representations: risk, awareness, value, numeric, authority, and facts, which serve as attributes of misinformation correction posts identified on the basis of the FTT and related literature. Please refer to Tables S2 and S3 in Multimedia Appendix 1 for the coding schemes and examples.
Annotation Data for Comments
Regarding the comments, we classified engagement in comments into 6 categories based on a literature review and empirical data observation. Three categories were related to cognition: knowledge-based, critical, and bias-based engagement. In addition, 3 categories were related to emotion: positive, negative, and neutral. Each comment was assigned 1 of these 6 constructs.
GPT-3.5 has been trained on sufficiently diverse multilingual data and can perform cross-lingual transfer learning, using knowledge from source languages to make inferences in target languages even with limited target-language data [58]. The comment content was mainly in Chinese, and the average length of a comment was 26.40 (SD 52.70) words. We assigned each comment to a single category, although a short comment can, in principle, touch on multiple categories. GPT-3.5 was instructed to first determine whether a comment reflected cognitive engagement; if not, it assessed emotional engagement. A comment was classified as None if neither aspect could be determined from its content. Examples of the prompts and outputs are provided in Tables S4 and S5 in Multimedia Appendix 1.
GPT-3.5 has some issues with instability and hallucinations. Regarding annotation instability, we set the temperature parameter to 0.5; higher values of this parameter increase the randomness of the output. We found that setting the temperature too low could lead to conservative responses, with comments frequently being labeled as unidentifiable. In this study, approximately 100 comments were used for testing to refine the prompt and adjust the temperature parameter. We required GPT-3.5 to provide reasons during annotation so that we could examine potential causes of annotation errors, and we incorporated more specific definitions into the prompt. Next, we performed automated annotation of all data. To ensure the reliability of the GPT-3.5 annotation, we randomly selected 1000 comments for manual annotation. The Cohen κ coefficient for the manual annotations was 0.519 (moderate agreement). After discussion, a consensus on the annotations was reached. The consistency between the manual and GPT-3.5 annotations was then calculated, yielding a Cohen κ coefficient of 0.648 (substantial agreement). Then, to mitigate potential biases introduced by GPT-3.5, we systematically compared manual annotations with GPT-3.5 annotations and checked the comments for keywords that GPT-3.5 may not recognize. We revised 4233 comment annotations in this process. Finally, the data for the 6 categories of comments were summed and linked to their corresponding posts.
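The following is a minimal sketch of the automated annotation step, assuming the OpenAI Python SDK (version ≥1.0) and the chat completions endpoint; it is not the authors' prompt. The system prompt, category wording, and example comment below are illustrative; the actual prompts and output formats are given in Tables S4 and S5 in Multimedia Appendix 1.

```python
# Hypothetical annotation sketch using GPT-3.5 at temperature 0.5.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You annotate Facebook comments about health misinformation correction. "
    "First decide whether the comment shows cognitive engagement "
    "(knowledge-based, critical, or bias-based); if not, decide its emotional "
    "engagement (positive, negative, or neutral). If neither can be determined, "
    "answer 'None'. Return the category and a brief reason."
)

def annotate(comment: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.5,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content

# Example comment: "This claim has already been debunked; please stop sharing it."
print(annotate("這個說法早就被闢謠了，請大家不要再轉傳。"))
```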
Data Analysis
We initially used descriptive statistics to provide an overview of the data and answer RQ1. The mean share count was 79.12 (SD 293.14; range 0-4259), with 57 posts that were never shared. The mean reaction count was 914.35 (SD 2582.93; range 13-34,403). The mean comment count, excluding self-replies by the platform managers, was 46.61 (SD 212.82; range 0-5092), with 198 posts without comments. The comments made by the platform managers (1010/67,378, 1.5%) supplied links to the full information or expressed positive feedback in response to other comments; these comments were excluded because we aimed to understand engagement from regular users. After examining the data distribution with Q-Q plots, we found that the share, reaction, and comment counts were nonnormally distributed. In this situation, it may be appropriate to transform the dependent variable using a natural logarithm in a linear regression model [9,59] or to use Poisson or negative binomial regression models [44,56]. We chose to use natural logarithms of the dependent variables as ln(shares+1), ln(reactions), and ln(comments+1). To improve the interpretability of the model, this study also transformed some independent variables using the natural logarithm, including ln(post length) and ln(engagement categories in comments+1). This study considered post length, readability, formal language use, and images as control variables based on prior literature [56,60]. For the regression models of the 3 dependent variables, observations with studentized residuals outside ±3 were considered outliers and excluded. Then, regression analysis was performed to confirm the theory’s applicability and answer the RQs.
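The following is an illustrative sketch, not the authors' code, of the modeling steps just described: natural-log transformation of the skewed counts, an ordinary least squares regression with statsmodels, and exclusion of observations whose studentized residuals fall outside ±3. Column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

posts = pd.read_csv("annotated_posts.csv")  # hypothetical annotated dataset

# Log-transform the dependent variable and selected independent variables.
posts["ln_shares"] = np.log(posts["shares"] + 1)
posts["ln_length"] = np.log(posts["post_length"])

formula = ("ln_shares ~ ln_length + readability + official + formal_tone + image"
           " + risk + awareness + value + numeric + authority + facts")

model = smf.ols(formula, data=posts).fit()

# Drop outliers (|studentized residual| > 3) and refit.
influence = model.get_influence()
keep = np.abs(influence.resid_studentized_internal) <= 3
final_model = smf.ols(formula, data=posts[keep]).fit()
print(final_model.summary())
```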
Ethical Considerations
The study was approved by the National Tsing Hua University Research Ethics Committee (11308HT139). This study used publicly available data and processed and stored the data in compliance with the Personal Data Protection Act. As the data were publicly accessible, informed consent was not required.
Results
Overview
To answer RQ1 regarding user engagement trends with misinformation correction during the COVID-19 pandemic, we aggregated 3 years of data and analyzed various types of user engagement. For RQ2, we performed hierarchical multiple regression to examine the relationship between the identified attributes and user engagement in sharing and reactions. For RQ3, we considered the content in comments as additional information on the post, incorporating 6 categories of user engagement in commenting as independent variables in the models for RQ2.
Trends Related to Post Content and User Engagement During the COVID-19 Pandemic
First, this study organized the data to understand the data profile. We merged the MOHW and TFC datasets to obtain an overview of the situation. We categorized the data by themes based on a quarterly distribution, as shown in Figure 2, in which the distribution of misinformation correction themes aligned with the development trajectory of the COVID-19 pandemic. The peak of misinformation correction occurred in the second quarter of 2021, coinciding with Taiwan’s level 3 alert and a vaccine shortage. The most common theme was health care, food safety, and home remedies, followed by understanding and controlling diseases and vaccine safety and policies.

To answer RQ1, we conducted a 1-way ANOVA to analyze user engagement over 3 years, and the results are shown in Table 1. Through homogeneity tests, these counts were found to be heterogeneous, and Welch ANOVA and Games-Howell post hoc comparisons were performed. We found that sharing (Welch F2, 805.887=31.082; P<.001) and reactions (Welch F2, 802.838=40.620; P<.001) declined significantly over time, while comments (Welch F2, 909.388=11.253; P<.001) remained high in the first 2 years, before dropping in the third year. Under the general trends, we further examined the 3-year change patterns of comments based on user engagement categories. The patterns of the knowledge-based engagement (Welch F2, 931.301=19.470; P<.001) and 3 emotional engagements (positive emotional: Welch F2, 741.713=22.636; P<.001; negative emotional: Welch F2, 845.244=11.857; P<.001; neutral emotional: Welch F2, 849.284=21.055; P<.001) in commenting were similar, remaining consistent in the first and second years but significantly declining in the third year. Critical engagement declined significantly over time (Welch F2, 785.780=32.996; P<.001), while bias-based engagement peaked in the second year and then significantly declined in the third year (Welch F2, 922.113=3.120; P=.045).
Table 1. User engagement with misinformation correction posts by year.

| User engagement | 2020 (n=430), mean (SD) | 2021 (n=580), mean (SD) | 2022 (n=414), mean (SD) | Levene statistic | P value | Welch statistic | P value |
|---|---|---|---|---|---|---|---|
| Share | 192.65 (496.70) | 38.78 (104.67) | 17.72 (70.42) | 85.546 | <.001 | 31.082 | <.001 |
| Reaction | 1871.12 (3799.06) | 664.91 (2001.51) | 270.06 (960.27) | 60.753 | <.001 | 40.620 | <.001 |
| Comment | 54.28 (131.03) | 59.26 (302.90) | 20.91 (92.98) | 10.115 | <.001 | 11.253 | <.001 |
| Knowledge-based engagement in commenting | 7.04 (13.23) | 5.42 (22.07) | 2.16 (9.98) | 11.194 | <.001 | 19.470 | <.001 |
| Critical engagement in commenting | 6.80 (13.83) | 3.80 (13.32) | 1.51 (4.39) | 29.049 | <.001 | 32.996 | <.001 |
| Bias-based engagement in commenting | 11.44 (40.04) | 25.65 (152.21) | 9.00 (44.68) | 13.045 | <.001 | 3.120 | .045 |
| Positive emotional engagement in commenting | 6.87 (20.27) | 4.18 (17.30) | 1.16 (4.67) | 22.438 | <.001 | 22.636 | <.001 |
| Negative emotional engagement in commenting | 11.44 (33.40) | 11.71 (63.18) | 3.67 (17.16) | 10.159 | <.001 | 11.857 | <.001 |
| Neutral emotional engagement in commenting | 5.00 (11.62) | 3.22 (12.38) | 1.27 (5.20) | 16.948 | <.001 | 21.055 | <.001 |

The Levene statistic tests homogeneity of variances; the Welch statistic is the robust test of equality of means. Mean differences between groups (Games-Howell post hoc tests) are summarized in the text.
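For readers who wish to reproduce this type of yearly comparison, the following is a minimal sketch using standard robust ANOVA tooling. It assumes a pandas data frame with one row per post, a year column, and raw engagement counts; the file and column names (eg, correction_posts.csv, shares) are illustrative, not the study's actual variable names.

```python
# A minimal sketch of the yearly comparison: Levene test, Welch ANOVA,
# and Games-Howell post hoc tests. Column names are illustrative.
import pandas as pd
import pingouin as pg
from scipy import stats

posts = pd.read_csv("correction_posts.csv")  # hypothetical file name

# Levene test: heterogeneous variances across years motivate the Welch ANOVA
groups = [g["shares"].to_numpy() for _, g in posts.groupby("year")]
levene_stat, levene_p = stats.levene(*groups)

# Welch ANOVA across the 3 years (robust to unequal variances)
welch = pg.welch_anova(data=posts, dv="shares", between="year")

# Games-Howell pairwise comparisons between years
posthoc = pg.pairwise_gameshowell(data=posts, dv="shares", between="year")

print(levene_stat, levene_p)
print(welch[["F", "p-unc"]])
print(posthoc[["A", "B", "diff", "pval"]])
```

The same calls can be repeated for reactions, comments, and each comment-engagement category.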
We further explored the relative proportions of the different categories of user engagement in commenting, as shown in Figure 3. In the first year of the COVID-19 pandemic, engagement across categories was fairly balanced, with cognitive and emotional engagement each accounting for approximately half. In the overall 2020 data, the sum of knowledge-based and critical engagement comments exceeded that of bias-based engagement, suggesting that science-based and bias-based beliefs could counterbalance each other. However, bias-based engagement became predominant in the second and third years on both platforms. In the overall data, bias-based engagement accounted for 23.55% (4920/20,893) of comments in 2020, 47.51% (14,876/31,309) in 2021, and 47.95% (3726/7770) in 2022. Specifically, in the MOHW data, bias-based engagement was 32.97% (2776/8419) in 2020, rose to 52.39% (11,976/22,858) in 2021, and remained at 52.57% (2662/5064) in 2022. In the TFC data, bias-based engagement was 17.19% (2144/12,474) in 2020, 34.32% (2900/8451) in 2021, and 39.32% (1064/2706) in 2022. Thus, discussions were dominated by bias-based engagement in the later stages of the COVID-19 pandemic. Comparing platforms, the TFC platform had relatively more knowledge-based and critical engagement, whereas the MOHW platform had relatively more positive emotional engagement in the first year.

The objective user engagement metrics (shares, reactions, and comments) showed that interest in health-related misinformation decreased significantly in the third year. However, in terms of the number of COVID-19 deaths, the pandemic was most severe in Taiwan in its third year, with 7 deaths in 2020, 843 in 2021, and 14,636 in 2022 [61].
Post Attributes and User Engagement
In the annotated dataset, of the 1424 posts, 261 (18.33%) contained risk, 868 (60.96%) contained awareness, 527 (37.01%) contained value, 163 (11.45%) contained numeric information, and 768 (53.93%) contained authority. In addition, of the 1424 posts, 202 (14.19%) did not target specific misinformation, 505 (35.46%) provided a simple correction, and 717 (50.35%) offered a detailed correction. Subsequently, concerning RQ2, we focused on the relationship between post attributes and 3 user engagement metrics: shares, reactions, and comments, as shown in Table 2. Model 1 controlled for background variables including post length (transformed using the natural logarithm), official status (converted to a dummy variable, where 1 represents the official platform [MOHW] and 0 represents the nonofficial platform [TFC]), readability, formal tone (where 1 indicates formal and 0 indicates informal), and image (where 1 indicates the attachment of at least 1 image and 0 indicates none). The dependent variables were also transformed using the natural logarithm. These background variables contributed significantly to the regression models for shares (R2=0.334; F5,1412=141.380; P<.001), reactions (R2=0.407; F5,1404=192.994; P<.001), and comments (R2=0.328; F5,1414=137.794; P<.001). All background variables were predictive of these engagement metrics, except for post length, which did not significantly predict comment count.
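As a minimal illustration of the variable preparation described above, the sketch below log-transforms the skewed count variables and dummy codes the categorical background variables. The data frame and column names (eg, posts, platform, tone, n_images) are hypothetical stand-ins for the study's actual variables, and log1p is shown only as one way to handle zero counts.

```python
# A minimal data-preparation sketch for the regression models;
# all names are illustrative, not the study's actual variables.
import numpy as np
import pandas as pd

posts = pd.read_csv("correction_posts.csv")  # hypothetical file name

# Natural-log transform of skewed counts (log1p guards against zeros;
# the study describes the transform only as a natural logarithm)
for col in ["shares", "reactions", "comments", "post_length"]:
    posts[f"ln_{col}"] = np.log1p(posts[col])

# Dummy coding of categorical background variables
posts["official"] = (posts["platform"] == "MOHW").astype(int)  # 1 = MOHW, 0 = TFC
posts["formal"] = (posts["tone"] == "formal").astype(int)      # 1 = formal tone
posts["image"] = (posts["n_images"] > 0).astype(int)           # 1 = at least 1 image
```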
Table 2. Hierarchical regression of post attributes predicting shares, reactions, and comments (models 1 and 2).

| Predictors | Share^a: B^b (SE^c) | β^d | P value | VIF^e | Reaction^a: B (SE) | β | P value | VIF | Comment^a: B (SE) | β | P value | VIF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Model 1 |  |  |  |  |  |  |  |  |  |  |  |  |
| Constant | 3.82 (0.36) | —^g | <.001 | — | 6.19 (0.28) | — | <.001 | — | 1.76 (0.36) | — | <.001 | — |
| Post length^a | 0.11 (0.05) | .07 | .03 | 2.08 | 0.11 (0.04) | .08 | .005 | 2.07 | 0.09 (0.05) | .06 | .08 | 2.09 |
| Official platform^f | 2.95 (0.14) | .49 | <.001 | 1.15 | 2.80 (0.11) | .56 | <.001 | 1.15 | 3.00 (0.14) | .51 | <.001 | 1.15 |
| Readability | 0.39 (0.03) | .27 | <.001 | 1.22 | 0.30 (0.03) | .25 | <.001 | 1.21 | 0.16 (0.03) | .11 | <.001 | 1.22 |
| Formal tone^f | 0.53 (0.11) | .15 | <.001 | 1.95 | 0.54 (0.08) | .19 | <.001 | 1.94 | 0.77 (0.10) | .22 | <.001 | 1.95 |
| Image^f | 0.43 (0.13) | .07 | .001 | 1.09 | 0.24 (0.10) | .05 | .02 | 1.09 | 0.27 (0.13) | .05 | .04 | 1.09 |
| Model 2 |  |  |  |  |  |  |  |  |  |  |  |  |
| Constant | 4.40 (0.43) | — | <.001 | — | 7.00 (0.34) | — | <.001 | — | 2.56 (0.43) | — | <.001 | — |
| Post length^a | –0.04 (0.06) | –.02 | .56 | 3.21 | –0.05 (0.05) | –.04 | .29 | 3.20 | –0.07 (0.06) | –.04 | .26 | 3.21 |
| Official platform^f | 2.88 (0.14) | .48 | <.001 | 1.26 | 2.84 (0.11) | .57 | <.001 | 1.26 | 3.11 (0.14) | .52 | <.001 | 1.26 |
| Readability | 0.40 (0.03) | .27 | <.001 | 1.26 | 0.31 (0.03) | .26 | <.001 | 1.26 | 0.17 (0.03) | .12 | <.001 | 1.27 |
| Formal tone^f | 0.45 (0.11) | .13 | <.001 | 2.08 | 0.42 (0.08) | .15 | <.001 | 2.07 | 0.63 (0.11) | .18 | <.001 | 2.09 |
| Image^f | 0.21 (0.14) | .04 | .13 | 1.25 | –0.04 (0.11) | –.01 | .73 | 1.26 | 0.00 (0.14) | .00 | .99 | 1.25 |
| Risk^f | 0.30 (0.09) | .07 | .001 | 1.11 | 0.10 (0.07) | .03 | .15 | 1.11 | –0.07 (0.09) | –.02 | .44 | 1.11 |
| Awareness^f | 0.30 (0.07) | .09 | <.001 | 1.14 | 0.20 (0.06) | .07 | .001 | 1.14 | 0.21 (0.07) | .06 | .005 | 1.14 |
| Value^f | 0.14 (0.07) | .04 | .051 | 1.09 | 0.08 (0.06) | .03 | .17 | 1.08 | –0.08 (0.07) | –.02 | .28 | 1.09 |
| Numeric^f | 0.15 (0.11) | .03 | .18 | 1.10 | 0.13 (0.09) | .03 | .13 | 1.10 | 0.27 (0.11) | .06 | .02 | 1.10 |
| Authority^f | 0.10 (0.10) | .03 | .31 | 2.02 | –0.02 (0.08) | –.01 | .74 | 2.04 | –0.02 (0.10) | –.01 | .84 | 2.02 |
| Facts | 0.30 (0.08) | .14 | <.001 | 2.71 | 0.44 (0.06) | .24 | <.001 | 2.73 | 0.42 (0.08) | .19 | <.001 | 2.72 |
^a Transformed using the natural logarithm.
^b Unstandardized regression coefficient.
^c SE for unstandardized regression coefficient.
^d Standardized regression coefficient.
^e VIF: variance inflation factor.
^f Converted into a dummy variable.
^g Not applicable.
Model 2 incorporated the gist and verbatim representations, and the R2 change was significant for all 3 models. In the model predicting shares (R2=0.353; F11,1406=69.808; P<.001; ΔR2=0.020), risk (β=.07; P=.001), awareness (β=.09; P<.001), and facts (β=.14; P<.001) were significant predictors. When predicting reactions (R2=0.435; F11,1398=97.292; P<.001; ΔR2=0.028), awareness (β=.07; P=.001) and facts (β=.24; P<.001) were significant factors. Similarly, when predicting comments (R2=0.351; F11,1408=69.158; P<.001; ΔR2=0.023), awareness (β=.06; P=.005), numeric (β=.06; P=.02), and facts (β=.19; P<.001) played significant roles. These findings suggest that 2 of the gist representations, risk and awareness, can enhance sharing, but only awareness was also predictive of reactions and comments. Moreover, in the verbatim dimension, facts positively influenced all 3 metrics, indicating that detailed clarifications enhanced shares, reactions, and comments.
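A minimal sketch of this 2-step hierarchical regression and the ΔR2 test is shown below, using the prepared variables from the earlier data-preparation sketch; the attribute column names (risk, awareness, value, numeric, authority, facts) are illustrative placeholders, and standardized betas would additionally require z-scored predictors.

```python
# A minimal sketch of the hierarchical regression (models 1 and 2) for the
# share outcome; variable names are illustrative, not the study's actual ones.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

model1 = smf.ols(
    "ln_shares ~ ln_post_length + official + readability + formal + image",
    data=posts,
).fit()

model2 = smf.ols(
    "ln_shares ~ ln_post_length + official + readability + formal + image"
    " + risk + awareness + value + numeric + authority + facts",
    data=posts,
).fit()

delta_r2 = model2.rsquared - model1.rsquared  # R2 change when attributes are added
r2_change_test = anova_lm(model1, model2)     # nested-model F test for that change

print(model2.summary())
print(delta_r2)
print(r2_change_test)
```

The same pair of models can be fit with ln_reactions and ln_comments as the outcome to mirror the other two columns of Table 2.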
Post Attributes and User Engagement: Considering Comment Content
For RQ3, we incorporated comments generated by the 6 categories of user engagement into model 3, as presented in Table 3. To ensure the validity of our findings, we used the variance inflation factor to diagnose multicollinearity, which was within acceptable limits (<10); a minimal computational sketch of this check is shown below. In the model predicting shares (R2=0.668; F17,1400=165.904; P<.001; ΔR2=0.315), the 3 gist representations (risk: β=.08; P<.001, awareness: β=.08; P<.001, and value: β=.06; P<.001) were significant predictors, whereas facts (β=.02; P=.36) no longer predicted shares. When considering user engagement in commenting, in the cognitive aspect, bias-based engagement (β=–.11; P=.001) negatively predicted shares, whereas knowledge-based (β=.28; P<.001) and critical (β=.15; P<.001) engagement positively predicted shares. In the emotional aspect, positive (β=.29; P<.001) and negative (β=.09; P=.02) emotional engagement both predicted shares, whereas neutral emotional engagement did not (β=.05; P=.15). Similarly, in the model predicting reactions (R2=0.772; F17,1392=277.743; P<.001; ΔR2=0.337), the 3 gist representations (risk: β=.04; P=.007, awareness: β=.06; P<.001, and value: β=.05; P<.001) significantly predicted the reaction count, and facts (β=.12; P<.001) also predicted it. All comment categories, except for bias-based engagement (β=–.04; P=.15), significantly predicted reactions (knowledge-based: β=.19; P<.001, critical: β=.14; P<.001, positive: β=.26; P<.001, negative: β=.15; P<.001, and neutral: β=.08; P=.002).
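The following is a minimal sketch of the multicollinearity screen for model 3; the predictor names are illustrative placeholders for the study's variables.

```python
# A minimal sketch of the variance inflation factor (VIF) screen for model 3;
# predictor names are illustrative, not the study's actual variables.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = [
    "ln_post_length", "official", "readability", "formal", "image",
    "risk", "awareness", "value", "numeric", "authority", "facts",
    "ln_knowledge", "ln_critical", "ln_bias",
    "ln_positive_emotion", "ln_negative_emotion", "ln_neutral_emotion",
]
X = sm.add_constant(posts[predictors])  # include an intercept column

vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.drop("const"))  # values below 10 were treated as acceptable
```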
Table 3. Hierarchical regression of post attributes and comment-based engagement predicting shares and reactions (model 3).

| Predictors | Share^a: B^b (SE^c) | β^d | P value | VIF^e | Reaction^a: B (SE) | β | P value | VIF |
|---|---|---|---|---|---|---|---|---|
| Constant | 2.34 (0.32) | —^f | <.001 | — | 5.26 (0.22) | — | <.001 | — |
| Post length^a | 0.08 (0.05) | .05 | .07 | 3.27 | 0.05 (0.03) | .03 | .13 | 3.27 |
| Official platform^g | 0.72 (0.14) | .12 | <.001 | 2.18 | 0.91 (0.09) | .18 | <.001 | 2.18 |
| Readability | 0.24 (0.03) | .17 | <.001 | 1.32 | 0.18 (0.02) | .15 | <.001 | 1.31 |
| Formal tone^g | 0.02 (0.08) | .01 | .76 | 2.14 | 0.03 (0.05) | .01 | .58 | 2.13 |
| Image^g | 0.23 (0.10) | .04 | .02 | 1.25 | –0.02 (0.07) | .00 | .79 | 1.26 |
| Risk^g | 0.31 (0.07) | .08 | <.001 | 1.11 | 0.12 (0.05) | .04 | .007 | 1.11 |
| Awareness^g | 0.25 (0.05) | .08 | <.001 | 1.16 | 0.15 (0.04) | .06 | <.001 | 1.15 |
| Value^g | 0.19 (0.05) | .06 | <.001 | 1.11 | 0.15 (0.04) | .05 | <.001 | 1.10 |
| Numeric^g | 0.01 (0.08) | .00 | .92 | 1.11 | 0.00 (0.06) | .00 | .93 | 1.11 |
| Authority^g | 0.07 (0.07) | .02 | .28 | 2.02 | –0.04 (0.05) | –.01 | .46 | 2.05 |
| Facts | 0.05 (0.06) | .02 | .36 | 2.78 | 0.22 (0.04) | .12 | <.001 | 2.80 |
| Knowledge-based engagement^a | 0.42 (0.04) | .28 | <.001 | 3.67 | 0.24 (0.03) | .19 | <.001 | 3.68 |
| Critical engagement^a | 0.23 (0.05) | .15 | <.001 | 3.77 | 0.18 (0.03) | .14 | <.001 | 3.78 |
| Bias-based engagement^a | –0.13 (0.04) | –.11 | .001 | 4.29 | –0.04 (0.03) | –.04 | .15 | 4.26 |
| Positive emotional engagement^a | 0.45 (0.05) | .29 | <.001 | 4.36 | 0.34 (0.03) | .26 | <.001 | 4.35 |
| Negative emotional engagement^a | 0.12 (0.05) | .09 | .02 | 6.93 | 0.17 (0.04) | .15 | <.001 | 6.84 |
| Neutral emotional engagement^a | 0.08 (0.05) | .05 | .15 | 4.26 | 0.11 (0.04) | .08 | .002 | 4.27 |
^a Transformed using the natural logarithm.
^b Unstandardized regression coefficient.
^c SE for unstandardized regression coefficient.
^d Standardized regression coefficient.
^e VIF: variance inflation factor.
^f Not applicable.
^g Converted into a dummy variable.
Discussion
Principal Findings
First, in response to RQ1, all engagement metrics unsurprisingly decreased over time; it is noteworthy, however, that the proportion of bias-based engagement in comments increased on both platforms. At the outbreak of the COVID-19 pandemic, despite the potential for informational chaos, the comment categories remained relatively balanced and were not dominated by any single category. The increase in bias-based engagement in the second year was likely due to the first COVID-19 community transmission in Taiwan and the rise of COVID-19 vaccine–related issues. This result aligns with findings from previous studies showing that comments often contain inaccuracies and are predominantly made by uninformed users [22,52].
We then compared our findings with those of similar studies exploring the relationship between post attributes and user engagement. Regarding background variables, the effect of readability in model 1 aligned with previous research [56], which explains the influence of readability on social media engagement through processing fluency. Posts using a formal tone also showed higher user engagement, consistent with the literature [60], in which formality serves as a heuristic for credibility and importance. However, we did not control for the topics or timing associated with a formal tone; more severe events may be more likely to prompt formal wording. Posts containing images also positively affected user engagement, a finding supported and explained in the literature by the concept of media type richness [59] or an approach-motivated perspective [44].
Drawing on the FTT and the misinformation literature, we identified 3 key gist representations and tested their effectiveness. The theory suggests that a message with a clear core and causal explanation is more likely to resonate with people, leading to behaviors such as sharing [30]. Moreover, this study suggests that gist representations provide cues that increase the probability of users finding the information relatable or practical. Risk may involve terms such as danger or harm; awareness often describes tips and tricks and is relatively neutral; and value sometimes refers to reassurance and tends to use more positive terms. However, only awareness was a significant predictor in every model. When the content of comments was considered, all gist representations significantly predicted reactions, indicating that risk and value remain effective within social media posts that include comments. In addition, value was the only gist representation that predicted comments negatively, although not significantly; whether value can further reduce bias-based engagement remains to be examined in future studies. Overall, providing gist information increased shares, reactions, and comments, consistent with previous research showing that gist information enhances article sharing [14]. Awareness of misinformation was the most crucial factor in this study, in line with a previous study that emphasized awareness of misinformation as the foundational concept of citizen resilience to misinformation [35].
Regarding the verbatim dimension, we found that numeric information did not significantly predict shares or reactions but did predict comments. Statistics also failed to predict sharing in another study based on the FTT [14]. According to discussions related to the FTT, individuals’ numeracy influences how numeric information is processed [38], and users might engage in discussion based on different perceptions of numeric information. However, in a study that treated statistics as an element of source transparency, it was one of the predictors of shares and comments, suggesting that numerical information in news articles may increase the audience’s perceived accuracy [9]. Another study indicated that participants engaged more with corrections that lacked statistical information because the simpler material was easier to assess for credibility, and they invested less cognitive effort when the information included statistical evidence [20]. We offer a different perspective through the FTT, suggesting that numeric information did not significantly affect user engagement in sharing and reacting because these forms of engagement rely more on top-down meaning, whereas numeric information increased commenting, which may rely on detailed information and higher cognitive engagement. Furthermore, this study classified authority as a verbatim representation because the names of experts or institutions were cited as detailed references; nevertheless, the attribute still conveyed a sense of credibility. The concept is similar to source transparency, with slight differences: a previous study showed that citing official reports increased shares and comments and that references to laws and news articles also increased comments [9]. In this study, this variable did not significantly predict any metric, possibly because both platforms were already trustworthy sources. Finally, facts significantly predicted all 3 metrics in model 2, indicating that posts with detailed and complete content generated higher user engagement, which is consistent with past experimental results [12,13,18]. Detailed corrections may contain alternative interpretations and contextual information that support understanding and discussion, thereby generating user engagement.
Finally, in model 3, we considered that readers might refer to comment content when deciding whether to share or react. An increase in knowledge-based and critical engagement comments, which reflect scientific beliefs, led to higher shares and reactions. Conversely, comments reflecting bias-based engagement negatively predicted shares, suggesting that these comments, much like uncivil comments [47], may diminish the quality of discussion threads and reduce share counts. Comments with positive and negative emotional engagement both predicted shares and reactions. Although comment categories predicted shares and reactions, all 3 gist representations still played an important role, indicating that gist communication can remain effective.
In summary, this study defined post attributes based on the FTT and considered a more diversified array of user engagement types to explore their relationship with post attributes. The main theoretical contribution of our study is that it presents a framework applied to misinformation issues validated using real social media data. It examined user engagement in commenting to better reflect the content users are most likely to encounter. The main practical contribution is the introduction of gist representations—risk, awareness, and value—that serve as simple but important cues when publishing correction messages, helping users grasp and address misinformation issues and encouraging content sharing.
Implications, Limitations, and Future Directions
The FTT emphasizes the mutually supportive relationship between cognition and social values and recommends using reasoned approaches, rather than emotional appeals, to guide decision-making and emotional responses when countering misinformation [17]. Building on this foundation, we identified 3 rational gist representations that connect with social values. Among these, awareness of misinformation proved to be a highly effective gist when comment content was not considered; in our data, this concept included elements such as increasing awareness of misinformation and fraud, encouraging verification of sources, promoting cautious sharing, and identifying the characteristics and sources of misinformation. Mentioning the risks associated with misinformation can increase share counts; the data for this study referenced health risks, social panic, legal risks, and information security risks. Mentioning the value of health promotion is effective when comment content is considered; the data for this study referenced health promotion and health literacy, a positive attitude toward COVID-19 pandemic prevention, media or eHealth literacy, public trust, psychological well-being, and social confidence. These attributes can serve as guidelines for designing content in misinformation correction and science communication.
This study had some limitations. First, it examined data from 3 years of the COVID-19 pandemic, during which attention to health misinformation was heightened. Other health crises occurred before and after the COVID-19 pandemic, and whether inferences based on these results remain valid in those contexts requires further examination. Second, we used GPT-3.5 to code comments efficiently, followed by a manual review. Cultural characteristics may have caused GPT-3.5 to misinterpret specific comments; for example, the colors blue and green sometimes represent political parties in Taiwan, and GPT-3.5 may not always interpret this accurately from context, leading to some classification inaccuracies. Nevertheless, GPT-3.5 offers several advantages over previous machine learning methods, such as cross-language capabilities and emoji interpretation. Third, authority, one of the independent variables in this study, carries implications for credibility in other research traditions: in the heuristic-systematic model, it plays an intuitive role, and in the elaboration likelihood model, it is considered a peripheral cue suggesting the credibility of a message. We cannot confirm whether these dual roles cause construct confusion or whether the platform itself symbolizes knowledge authority. Authority did not significantly predict the outcomes in models 2 and 3; however, this does not imply that it is unimportant. Finally, we considered whether an image was attached but did not examine its content or whether the images contained gist messages. Images attract user attention more quickly and should be considered in future research.
In addition to addressing the abovementioned limitations, future research could consider simplifying the 6 comment categories we used. For instance, knowledge-based and critical engagement could be combined, as well as positive and neutral emotional engagement. In addition, some comments directly responded to the content design of certain corrections. We observed that some posts were criticized for lacking focus, while others were praised for being practical. If these comments can be accurately extracted and systematically analyzed, it would also help assess whether this type of gist communication is well received. We found that posts containing numerical information, such as vaccine adverse reaction rates, received more comments, as users may interpret and engage more with rates and numbers. This area requires more systematic analysis and effective framing of numeric information in correction design to prevent misinterpretation on social media, enhancing the application of FTT. Finally, this study used regression analysis to explore the impact of 6 attributes on metrics; however, a single post often contains multiple attributes, and it may be possible to identify an optimal combination. Future studies could further explore this area.
Conclusions
This study enriches the theoretical understanding of the relationship between user engagement and the effectiveness of web-based communication strategies for conveying corrections of health misinformation. It provides evidence-based guidance for web content that delivers health misinformation corrections, with the FTT explaining user engagement with specific post attributes. The theoretical contribution strengthens the link between the FTT and multiple engagement metrics beyond sharing and frames post attributes in terms of gist and verbatim dimensions. In addition, when comment content is incorporated, gist communication effectively increases shares and reactions. Finally, these findings provide a foundation for designing more effective content strategies to combat health misinformation, ultimately contributing to more resilient public health communication.
Acknowledgments
The authors are grateful for the funding support from Academia Sinica in Taiwan (AS-HLGC-111-03). The authors acknowledge the contributions of the 3 human annotators. This study used a generative artificial intelligence tool, GPT-3.5, for large language model–based annotation of comment data.
Data Availability
The datasets generated and analyzed during this study are available from the corresponding author on reasonable request.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Examples of health misinformation correction, coding schemes, prompts for GPT-3.5, and generated output examples.
DOCX File, 33 KB
References
- Ecker UK, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, et al. The psychological drivers of misinformation belief and its resistance to correction. Nat Rev Psychol. Jan 12, 2022;1(1):13-29. [CrossRef]
- Gallotti R, Valle F, Castaldo N, Sacco P, De Domenico M. Assessing the risks of 'infodemics' in response to COVID-19 epidemics. Nat Hum Behav. Dec 2020;4(12):1285-1293. [CrossRef] [Medline]
- Cinelli M, De Francisci Morales G, Galeazzi A, Quattrociocchi W, Starnini M. The echo chamber effect on social media. Proc Natl Acad Sci U S A. Mar 02, 2021;118(9):e2023301118. [FREE Full text] [CrossRef] [Medline]
- Domenico GD, Sit J, Ishizaka A, Nunan D. Fake news, social media and marketing: a systematic review. J Bus Res. Jan 2021;124:329-341. [CrossRef]
- Yoon HY, You KH, Kwon JH, Kim JS, Rha SY, Chang YJ, et al. Understanding the social mechanism of cancer misinformation spread on YouTube and lessons learned: infodemiological study. J Med Internet Res. Nov 14, 2022;24(11):e39571. [FREE Full text] [CrossRef] [Medline]
- Bode L, Vraga EK. See something, say something: correction of global health misinformation on social media. Health Commun. Sep 2018;33(9):1131-1140. [CrossRef] [Medline]
- Chu WM, Shieh GJ, Wu SL, Sheu WH. Use of Facebook by academic medical centers in Taiwan during the COVID-19 pandemic: observational study. J Med Internet Res. Nov 20, 2020;22(11):e21501. [FREE Full text] [CrossRef] [Medline]
- Xue H, Gong X, Stevens H. COVID-19 vaccine fact-checking posts on Facebook: observational study. J Med Internet Res. Jun 21, 2022;24(6):e38423. [FREE Full text] [CrossRef] [Medline]
- Kim HS, Suh YJ, Kim EM, Chong E, Hong H, Song B, et al. Fact-checking and audience engagement: a study of content analysis and audience behavioral data of fact-checking coverage from news media. Digital Journalism. Feb 08, 2022;10(5):781-800. [CrossRef]
- Lewandowsky S, Ecker UK, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: continued influence and successful debiasing. Psychol Sci Public Interest. Dec 17, 2012;13(3):106-131. [CrossRef] [Medline]
- Koh HK, Rudd RE. The arc of health literacy. JAMA. 2015;314(12):1225-1226. [CrossRef] [Medline]
- van der Meer TG, Jin Y. Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source. Health Commun. May 2020;35(5):560-575. [CrossRef] [Medline]
- Zeng HK, Lo SY, Li SS. Credibility of misinformation source moderates the effectiveness of corrective messages on social media. Public Underst Sci. Jul 31, 2024;33(5):587-603. [CrossRef] [Medline]
- Broniatowski DA, Hilyard KM, Dredze M. Effective vaccine communication during the disneyland measles outbreak. Vaccine. Jun 14, 2016;34(28):3225-3228. [FREE Full text] [CrossRef] [Medline]
- Reyna VF, Brainerd CJ. Fuzzy-trace theory: an interim synthesis. Learn Individ Differ. Jan 1995;7(1):1-75. [CrossRef]
- Blalock SJ, DeVellis RF, Chewning B, Sleath BL, Reyna VF. Gist and verbatim communication concerning medication risks/benefits. Patient Educ Couns. Jun 2016;99(6):988-994. [FREE Full text] [CrossRef] [Medline]
- Reyna VF. A scientific theory of gist communication and misinformation resistance, with implications for health, education, and policy. Proc Natl Acad Sci U S A. Apr 13, 2021;118(15):e1912441117. [FREE Full text] [CrossRef] [Medline]
- Martel C, Mosleh M, Rand DG. You’re definitely wrong, maybe: correction style has minimal effect on corrections of misinformation online. Media Commun. Feb 03, 2021;9(1):120-133. [CrossRef]
- Wang Y. Debunking misinformation about genetically modified food safety on social media: can heuristic cues mitigate biased assimilation? Sci Commun. Jun 18, 2021;43(4):460-485. [CrossRef]
- Song Y, Wang S, Xu Q. Fighting misinformation on social media: effects of evidence type and presentation mode. Health Educ Res. May 24, 2022;37(3):185-198. [FREE Full text] [CrossRef] [Medline]
- Sui Y, Zhang B. Determinants of the perceived credibility of rebuttals concerning health misinformation. Int J Environ Res Public Health. Feb 02, 2021;18(3):1345. [FREE Full text] [CrossRef] [Medline]
- Bizzotto N, Schulz PJ, de Bruijn GJ. The "Loci" of misinformation and its correction in peer- and expert-led online communities for mental health: content analysis. J Med Internet Res. Sep 18, 2023;25:e44656. [FREE Full text] [CrossRef] [Medline]
- McGuire WJ. Resistance to persuasion conferred by active and passive prior refutation of the same and alternative counterarguments. J Abnorm Soc Psychol. Sep 1961;63(2):326-332. [CrossRef]
- Petty RE, Cacioppo JT. The elaboration likelihood model of persuasion. In: Berkowitz L, editor. Advances in Experimental Social Psychology. New York, NY. Academic Press; 1986:123-205.
- Chaiken S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J Pers Soc Psychol. Nov 1980;39(5):752-766. [CrossRef]
- Edelson SM, Reyna VF. Who makes the decision, how, and why: a fuzzy-trace theory approach. Med Decis Making. Aug 26, 2024;44(6):614-616. [CrossRef] [Medline]
- Broniatowski DA, Reyna VF. To illuminate and motivate: a fuzzy-trace model of the spread of information online. Comput Math Organ Theory. Dec 2020;26:431-464. [FREE Full text] [CrossRef] [Medline]
- Reyna VF, Brainerd CJ. Dual processes in decision making and developmental neuroscience: a fuzzy-trace model. Dev Rev. Sep 2011;31(2-3):180-206. [FREE Full text] [CrossRef] [Medline]
- Reyna VF, Edelson S, Hayes B, Garavito D. Supporting health and medical decision making: findings and insights from fuzzy-trace theory. Med Decis Making. Aug 23, 2022;42(6):741-754. [FREE Full text] [CrossRef] [Medline]
- Broniatowski DA, Hosseini P, Porter EV, Wood TJ. The role of mental representation in sharing misinformation online. J Exp Psychol Appl. Jul 25, 2024. [CrossRef] [Medline]
- Ecker UK, O'Reilly Z, Reid JS, Chang EP. The effectiveness of short-format refutational fact-checks. Br J Psychol. Feb 2020;111(1):36-54. [FREE Full text] [CrossRef] [Medline]
- Blalock SJ, Reyna VF. Using fuzzy-trace theory to understand and improve health judgments, decisions, and behaviors: a literature review. Health Psychol. Aug 2016;35(8):781-792. [FREE Full text] [CrossRef] [Medline]
- Reyna VF. Risk perception and communication in vaccination decisions: a fuzzy-trace theory approach. Vaccine. May 28, 2012;30(25):3790-3797. [FREE Full text] [CrossRef] [Medline]
- Dryhurst S, Schneider CR, Kerr J, Freeman AL, Recchia G, van der Bles AM, et al. Risk perceptions of COVID-19 around the world. J Risk Res. May 05, 2020;23(7-8):994-1006. [CrossRef]
- Rodríguez-Pérez C, Canel MJ. Exploring European citizens’ resilience to misinformation: media legitimacy and media trust as predictive variables. Media Commun. Apr 28, 2023;11(2):30-41. [CrossRef]
- Rimal RN, Lapinski MK. Why health communication is important in public health. Bull World Health Organ. Apr 2009;87(4):247. [FREE Full text] [CrossRef] [Medline]
- Pan W, Liu D, Fang J. An examination of factors contributing to the acceptance of online health misinformation. Front Psychol. 2021;12:630268. [FREE Full text] [CrossRef] [Medline]
- Reyna VF, Brainerd CJ. Numeracy, gist, literal thinking and the value of nothing in decision making. Nat Rev Psychol. May 18, 2023;2(7):1-19. [FREE Full text] [CrossRef] [Medline]
- Looi JC, Allison S, Bastiampillai T, Maguire PA. Clinical update on managing media exposure and misinformation during COVID-19: recommendations for governments and healthcare professionals. Australas Psychiatry. Feb 08, 2021;29(1):22-25. [CrossRef] [Medline]
- Bautista JR, Zhang Y, Gwizdka J. Healthcare professionals' acts of correcting health misinformation on social media. Int J Med Inform. Apr 2021;148:104375. [CrossRef] [Medline]
- Chan MP, Jones CR, Hall Jamieson K, Albarracín D. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. Nov 2017;28(11):1531-1546. [FREE Full text] [CrossRef] [Medline]
- Moorhead SA, Hazlett DE, Harrison L, Carroll JK, Irwin A, Hoving C. A new dimension of health care: systematic review of the uses, benefits, and limitations of social media for health communication. J Med Internet Res. Apr 23, 2013;15(4):e85. [FREE Full text] [CrossRef] [Medline]
- Kim C, Yang SU. Like, comment, and share on Facebook: how each behavior differs from the other. Public Relat Rev. Jun 2017;43(2):441-449. [CrossRef]
- Rus HM, Cameron LD. Health communication in social media: message features predicting user engagement on diabetes-related Facebook pages. Ann Behav Med. Oct 2016;50(5):678-689. [CrossRef] [Medline]
- Jiang S, Tay J, Ngien A, Basnyat I. Social media health promotion and audience engagement: the roles of information dissemination, organization-audience interaction, and action confidence building. Health Commun. Apr 25, 2024;39(1):4-14. [CrossRef] [Medline]
- Bhattacharya S, Srinivasan P, Polgreen P. Social media engagement analysis of U.S. Federal health agencies on Facebook. BMC Med Inform Decis Mak. Apr 21, 2017;17(1):49. [FREE Full text] [CrossRef] [Medline]
- Weber P, Prochazka F, Schweiger W. Why user comments affect the perceived quality of journalistic content: the role of judgment processes. J Media Psychol. Jan 2019;31(1):24-34. [CrossRef]
- Mehmet M, Heffernan T, Algie J, Forouhandeh B. Harnessing social listening to explore consumer cognitive bias: implications for upstream social marketing. J Soc Mark. Oct 02, 2021;11(4):575-596. [CrossRef]
- Brugnoli E, Cinelli M, Quattrociocchi W, Scala A. Recursive patterns in online echo chambers. Sci Rep. Dec 27, 2019;9(1):20118. [FREE Full text] [CrossRef] [Medline]
- Colliander J. “This is fake news”: investigating the role of conformity to other users’ views when commenting on and spreading disinformation in social media. Comput Human Behav. Aug 2019;97:202-215. [CrossRef]
- Hong S, Cameron GT. Will comments change your opinion? The persuasion effects of online comments and heuristic cues in crisis communication. J Conting Crisis Manag. Nov 24, 2017;26(1):173-182. [CrossRef]
- Bode L, Vraga EK. Correction experiences on social media during COVID-19. Soc Media Soc. Apr 12, 2021;7(2):205630512110088. [CrossRef]
- Russell JA, Barrett LF. Core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant. J Pers Soc Psychol. May 1999;76(5):805-819. [CrossRef] [Medline]
- Chinese 8,000-word list. National Chinese Quiz Promotion Working Committee. URL: https://tocfl.edu.tw/tocfl/index.php/exam/download [accessed 2025-01-21]
- Third level seven-word list. National Academy for Educational Research. URL: https://coct.naer.edu.tw/file/files/TBCL%E8%A9%9E%E8%AA%9E%E8%A1%A8_14420.pdf [accessed 2025-01-21]
- Pancer E, Chandler V, Poole M, Noseworthy TJ. How readability shapes social media engagement. J Consum Psychol. Nov 07, 2018;29(2):262-270. [CrossRef]
- Dale E, Chall JS. A formula for predicting readability: instructions. Educ Res Bull. 1948;27(2):37-54. [FREE Full text]
- Feng K, Huang L, Wang K, Wei W, Zhang R. Prompt-based learning framework for zero-shot cross-lingual text classification. Eng Appl Artif Intell. Jul 2024;133:108481. [CrossRef]
- Sabate F, Berbegal-Mirabent J, Cañabate A, Lebherz PR. Factors influencing popularity of branded content in Facebook fan pages. Eur Manag J. Dec 2014;32(6):1001-1011. [CrossRef]
- Linos E, Lasky-Fink J, Larkin C, Moore L, Kirkman E. The formality effect. Nat Hum Behav. Feb 23, 2024;8(2):300-310. [CrossRef] [Medline]
- Statistics table of area, age, sex of death cases - COVID-19 (by month of death). Center for Diseases Control, Ministry of Health and Welfare. URL: https://data.cdc.gov.tw/en/dataset/death-date-statistics-cases-19cov-2 [accessed 2025-01-21]
Abbreviations
FTT: fuzzy-trace theory
MOHW: Ministry of Health and Welfare
RQ: research question
TFC: Taiwan Fact-Checking Center
Edited by A Mavragani; submitted 21.08.24; peer-reviewed by JE Ibarra-Esquer, SB Park, L Shang; comments to author 01.10.24; revised version received 05.11.24; accepted 27.11.24; published 23.01.25.
Copyright©Hsin-Yu Kuo, Su-Yen Chen. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 23.01.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.