Review
Abstract
Background: Misinformation in health and health care contexts threatens public health by undermining initiatives, spreading dangerous behaviors, and influencing decision-making. Given its reach on online platforms and social media, there is growing demand for interventions addressing misinformation. Literature highlights the importance of theoretical underpinnings (frameworks and models) to guide the development of educational interventions targeting both the features of misinformation and the human traits that increase susceptibility.
Objective: This review examines literature on online interventions targeting health misinformation to mitigate adverse public health impacts. It explores intervention types, population demographics, susceptibility-related human attributes, and misinformation characteristics addressed. It also identifies the theoretical underpinnings used and gaps in the literature.
Methods: The review followed a methodological framework and adhered to PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. A search strategy combining Medical Subject Headings (MeSH) and keywords was used to search 5 databases for studies published between 2018 and 2024. Identified studies underwent deduplication, title and abstract screening using predefined eligibility criteria, full-text screening, and data extraction.
Results: The initial search yielded 513 citations; 30 (5.8%) studies were included after screening. Of these, 19 (63%) focused on COVID-19 misinformation, 11 (37%) on other health contexts, and 1 (3%) addressed misinformation conceptually. Regarding intervention type, 22 (73%) used educational courses, 7 (23%) employed counter speech, and 1 (3%) used an inoculation game, with some overlap. Sixteen (53%) interventions targeted characteristics of misinformation, categorized as content and presentation tactics, cognitive and psychological biases, social and cultural influences, and dissemination strategies. Seven (23%) interventions focused on specific demographics, while 14 (47%) addressed human attributes that heighten susceptibility. These attributes were grouped into knowledge and processing, emotional and psychological factors, and trust and social dynamics. Theoretical underpinnings guided intervention development in 23 (77%) studies, often overlapping in categories including inoculation and correction, education and cognition, motivation and emotion, behavior and persuasion, trust and belief, and learning design.
Conclusions: Online interventions targeting health misinformation often share outcome goals and use overlapping strategies such as educational courses, counter speech, and inoculation games. Many adopt multifaceted approaches to address misinformation’s complexity. However, gaps remain in tailoring interventions to misinformation characteristics that could improve specificity and impact. Few studies focus on human attributes contributing to belief in and spread of misinformation, particularly among vulnerable groups. While theoretical models are commonly cited, clearer reporting and stronger connections to intervention design are needed. Collaboration among intervention developers, theorists, and psychologists is recommended to enhance future interventions.
International Registered Report Identifier (IRRID): RR2-10.31219/osf.io/mfujb
doi:10.2196/69618
Keywords
Introduction
Background
Misinformation is defined as “false information that is spread, regardless of whether there is intent to mislead” []. Misinformation in health and health care contexts has emerged as a significant and growing concern, particularly in the digital age where the spread of information is rapid and often unchecked. According to the World Health Organization (WHO), health misinformation is a global threat, and due to its rapid online dissemination, it undermines public health initiatives and propagates dangerous health-related behaviors []. This issue has become increasingly pronounced during public health crises, such as the COVID-19 pandemic, when individuals are more likely to seek information online []. The WHO further asserts that, in times of a public health crisis, the risks and consequences resulting from misinformation can be equivalent to those of the health crisis itself, primarily due to confusion and misinformed decision-making [].
Both historically and in the present, health misinformation has been a growing cause for concern for public health outcomes; for example, misinformation about vaccination conspiracies and alleged links to autism has negatively affected vaccine uptake and contributed to localized measles outbreaks []. During the Ebola outbreaks, misinformation suggesting that the disease was not real led to reduced acceptance of formal health care services, including vaccination and hospital attendance, thereby facilitating further spread of the virus []. More recently, COVID-19 misinformation placed an increased burden on public health, leading individuals to disregard social distancing rules and mask-wearing precautions and instead rely on non–evidence-based treatments [].
Many factors can cause a person to fall prey to misinformation. Misinformation often possesses intrinsic characteristics, specific qualities, or features embedded into the content that increase the audience’s belief in it and their tendency to disseminate it further. These characteristics include, but are not limited to, emotional appeal, repetition, use of fake experts, and vagueness []. Research has shown that individuals are also prone to misinformation due to human attributes, namely natural cognitive tendencies, knowledge gaps, and psychological shortcuts, which misinformation exploits, thereby impacting their belief systems and decision-making processes [].
Misinformation is disseminated through a multitude of channels, including social media, online communication groups, traditional media, and word of mouth. With the increasing use of the internet and social media, the speed of dissemination has risen due to the underlying architecture that enables coordinated efforts or viral patterns to drive spread []. Misinformation’s reach is faster and broader on social media, given its sensationalist and emotional nature, which increases individuals’ susceptibility and likelihood of uptake.
Health misinformation shares some attributes with other categories of misinformation. Certain attributes are ubiquitous and must be targeted in all intervention designs. These attributes include those stemming from scientific skepticism, influenced by human attributes such as political ideology and sociocultural world views; repetition and amplification online; and the spread of false conspiracies and their impacts on decision-making []. However, other characteristics and human susceptibility attributes are unique to health misinformation and must be considered in intervention development. These include the lack of trust in health professionals, government agencies, and pharmaceutical companies; the polarizing use of emotions; and religious framing [].
Interventions to address health misinformation include peer-to-peer debunking, seminars, educational courses about news literacy, and tools to identify fake news []. A subcategory of these is online interventions, which have a broad and diverse reach compared to conventional or face-to-face approaches, allowing for the rapid dissemination of information to all populations, including those that typically do not engage with traditional health care systems. With the growing use of the internet for health-related information, online interventions are particularly beneficial in addressing the root causes of misinformation []. Given the widespread nature of misinformation in digital spaces, online interventions are adaptable resources that have been used to deliver real-time corrections based on news alerts, provide health updates based on public health agency announcements, implement algorithmic corrections, and promote digital health literacy through educational courses in an environment familiar to users [].
Despite the growing body of research exploring factors that contribute to the spread and acceptance of misinformation in health and health care contexts, there remains a knowledge gap in how to design interventions to combat these factors. Research has demonstrated that individuals are prone to misinformation due to intrinsic factors (eg, cognitive biases) or extrinsic factors (eg, social influences and cultural norms), ultimately impacting belief systems and decision-making processes. Misinformation also has characteristics that enhance its spread and impact, such as emotional appeal, oversimplification, and false authority. Despite increasing efforts to design interventions targeting health misinformation, many do not adequately discuss or address the characteristics of misinformation that put audiences at risk of believing and disseminating it; for example, an online before-and-after intervention study by Duarte et al [] focused on fact-checking claims circulating in Brazil about the health benefits of coconut oil consumption. While the intervention addressed factual accuracy, it did not address intrinsic features of the misinformation, such as the use of false experts, celebrity endorsements, repetition, or cultural beliefs.
Similarly, other studies do not discuss how their interventions address the human attributes that make it easy for individuals to be targeted by misinformation; for example, an intervention by Pennycook et al [] assessed how misinformation is more believable and easier to disseminate when presented on social media platforms but did not address the human factors that make individuals susceptible to it, such as cultural norms, literacy levels, or system 1 thinking. Moreover, these interventions often lack theoretical guidance in their design and implementation; as noted by Kreuter et al [], theoretical frameworks and models should serve as fundamental guiding elements to develop and implement successful interventions. Without a deeper understanding of these shortcomings, there is a risk that future interventions will continue to fall short in their effectiveness against the rapidly evolving threats posed by health misinformation.
This scoping review aims to synthesize an understanding of the characteristics used in the development of online interventions to address misinformation in health and health care contexts. We have mapped existing literature to understand the theoretical underpinnings that guide the development of interventions targeting both the characteristics of misinformation and the human attributes that make audiences susceptible to it. This scoping review will be useful for health care professionals, policy makers, and public health organizations, as it will provide insights into effective strategies for countering misinformation. By understanding the theoretical underpinnings and characteristics that drive both misinformation and susceptibility to it, stakeholders can design effective interventions that reach diverse populations, particularly those who rely on the internet for health-related information.
Rationale
Current literature and reviews predominantly focus on interventions addressing misinformation related to specific health issues, such as COVID-19, autism, vaccine hesitancy, and human papillomavirus [,]. However, there is a lack of reviews assessing interventions targeting misinformation broadly across diverse health contexts. Given the widespread impact of misinformation, a holistic approach is needed to guide the development of interventions addressing this root dilemma in health communication. While other scoping reviews have focused on interventions targeting misinformation on social media platforms [,], these represent only a subset of online misinformation. This scoping review aims to explore the full range of interventions addressing misinformation across all online platforms and contexts, providing a more comprehensive understanding of their impact. Although extensive theoretical and psychological analyses have explored how misinformation is formed, perceived, and perpetuated, particularly in relation to cognitive biases and social media dynamics, there is a notable absence of holistic reviews examining the scope of implemented interventions []. Consequently, there is insufficient insight into how theoretical frameworks, characteristics of misinformation, and demographic factors influence the development of these interventions. This scoping review aims to address these gaps by mapping the range of interventions targeting health misinformation and charting the integration of theoretical underpinnings and human attributes into their design.
Our rationale behind choosing a scoping review methodology is to examine the breadth of literature on this topic, understand the themes and theoretical underpinnings (including frameworks and models) used to develop online interventions, and enable conceptual mapping []. As our objective is to identify concepts and frameworks guiding intervention development rather than to assess study quality or outcome efficacy, a scoping review is the appropriate methodology.
Objectives
This scoping review aims to identify existing literature on online interventions implemented to address health misinformation. The review will collate the theoretical underpinnings used in the development and design of interventions and examine how they address both the characteristics of misinformation and the human attributes of target populations, with the aim of preventing or mitigating the detrimental consequences of misinformation. This review addresses the following research questions:
- What online interventions have been implemented to address misinformation in health and health care contexts to mitigate adverse effects on public health outcomes? (1.1) What types of interventions have been implemented? (1.2) What are the demographics of the target populations? (1.3) What human attributes are targeted in the intervention design? (1.4) What characteristics, if any, of misinformation do the interventions address?
- What are the theoretical underpinnings used to develop and design these interventions?
- What gaps in the literature are identified through this review?
Methods
Scoping Review Methodology
This scoping review followed the five-stage methodological framework provided by Arksey and O’Malley []: (1) identifying the research question; (2) identifying relevant studies; (3) study selection; (4) charting the data; and (5) collating, summarizing, and reporting the results. We report our results using the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. An in-depth protocol detailing the search strategies, study selection, and screening was published in 2024 [].
Information Sources, Search, Selection of Sources of Evidence, and Eligibility Criteria
The search strategy began with a preliminary scan of Google Scholar to identify relevant concepts in both gray and peer-reviewed literature, after which a detailed list of keywords and Medical Subject Headings (MeSH) terms was developed with the support of a library technical expert (). A search strategy was then created to target 3 key concepts: “health,” “misinformation,” and “online intervention.” This strategy was first tested on PubMed and refined using benchmark studies. Search strings were generated using the keywords, Boolean operators, and limiters to ensure sensitivity, precision, and relevance. These search strings were then implemented for an electronic bibliographic search in 5 databases: PubMed, MEDLINE, PsycInfo, Scopus, and the Cochrane Library. An initial search was completed at the time of protocol development [] in June 2023; however, to ensure comprehensiveness and relevance, the search was repeated in June 2024.
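As an illustration of this approach, a search string combining the 3 key concepts with Boolean operators might take the following form in PubMed. This is a simplified, hypothetical sketch; the specific terms shown are assumptions for illustration, and the full database-specific strategies are provided in the multimedia appendix:

```
("health"[MeSH Terms] OR "health" OR "health care" OR "public health")
AND ("misinformation" OR "disinformation" OR "infodemic" OR "fake news")
AND ("online intervention" OR "web-based intervention" OR "digital intervention")
```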
The search included English-language studies published between 2018 and 2024. This time frame was used to maintain temporal relevance to the recent trends and topography of misinformation and interventions in health and health care contexts. By 2018, online platforms, including social media, had gained significant traction as sources of health information, marking a turning point for studying misinformation trends []. Prepandemic trends were relevant to the review, as misinformation (eg, antivaccine sentiments and medical skepticism) was already increasing before the pandemic []. In addition, artificial intelligence–driven tools and online algorithms were rapidly evolving during this period, exponentially contributing to the misinformation infodemic, which required targeted interventions []. To capture a baseline of how misinformation interventions evolved before, during, and after the acute phase of the pandemic across different health contexts, we selected a publication window from 2018 to 2024.
All relevant citations from the databases were uploaded to Zotero (version 6.0.2; Corporation for Digital Scholarship), where duplicates were removed. The deduplicated citations were then exported to the Rayyan platform (Rayyan Systems Inc) for title and abstract screening by two independent reviewers. Articles were excluded if they were undetected duplicates or did not meet the eligibility criteria (). Any discrepancies were resolved through discussion and with input from a third reviewer to achieve consensus. To be included in full-text screening and data extraction, the citations had to meet the predefined eligibility criteria.
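For readers unfamiliar with the deduplication step, the following minimal Python (pandas) sketch shows the kind of normalization-based matching that reference managers such as Zotero perform; the records, titles, and DOIs below are hypothetical placeholders, not data from this review:

```python
import pandas as pd

# Hypothetical citation records; in the review itself, deduplication was done in Zotero.
citations = pd.DataFrame({
    "title": [
        "Fighting misinformation on social media",
        "Fighting Misinformation on Social Media ",  # near-duplicate with case/whitespace noise
        "Addressing myths and vaccine hesitancy",
    ],
    "doi": ["10.1000/example.1", "10.1000/example.1", None],  # placeholder DOIs
})

# Normalize titles (case, whitespace) so near-identical records match.
citations["title_norm"] = citations["title"].str.strip().str.lower()
deduplicated = citations.drop_duplicates(subset="title_norm")
print(len(citations) - len(deduplicated), "duplicate(s) removed")
```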
Inclusion criteria
- Interventions presented, delivered, and disseminated on online platforms
- Studies directly addressing misinformation in health and health care contexts
- Interventional studies with an experimental study design and methodology
- Papers published in English
Exclusion criteria
- Studies using offline, face-to-face, or in-person forms of intervention presentation or delivery
- Studies addressing misinformation unrelated to health (eg, climate change or politics)
- Noninterventional, nonexperimental studies (eg, observational or theoretical studies)
- Papers published in languages other than English
Peer-reviewed journal articles, published conference papers, and work-in-progress papers were eligible for data extraction. Systematic and scoping reviews identified in the search were excluded.
Data-Charting Process
After title and abstract screening, two reviewers independently screened the full-text papers for inclusion based on the predefined criteria. Any discrepancies were addressed through discussion and consensus, with input from a third reviewer. This process yielded 30 papers. Data were extracted from each paper by one reviewer, with a second reviewer independently extracting data from a subset of the papers (6/30, 20%) for alignment. Data extraction was performed using Microsoft Forms () and included the following: intervention type, intervention content, mode of delivery, demographics of the target populations, human attributes targeted, theoretical underpinnings used to develop the intervention, and characteristics of the misinformation targeted.
The types of intervention were categorized into 5 groups based on preliminary research and screening ( [-]). This categorization was represented as a drop-down menu in the extraction form.
- Educational courses: a structured and systematic approach designed to impart knowledge, skills, attitudes, or behaviors to a specific target audience, typically in the format of written or video learning modules or courses, fact-checking training, peer-driven learning and teaching, or integrated discussion platforms; aimed at improving understanding, changing perceptions, or promoting specific actions [,]
- Inoculation games: game-based delivery of inoculation, that is, exposure to “weakened” doses of misinformation, along with guidance on identification, correction, and information, to “prebunk” or preemptively educate individuals against future encounters with misinformation; with elements of gamification, including, but not limited to, debate, simulation, identification, narrative, and cognitive bias training games []
- Counter speech: a strategy used to combat misinformation by directly addressing and refuting it, involving providing accurate information, promoting critical thinking, and offering alternative perspectives to undermine the credibility and influence of misinformation; examples include, but are not limited to, myth debunking, fact checking, expert testimonials, personal stories, humor, satire, call to action, questioning source credibility, encouraging critical thinking, and highlighting consequences []
- Online intervention distribution blockers: technology-based tools designed to prevent or limit the spread of specific types of content (namely misinformation) online; examples include, but are not limited, to advertisement blockers, content filters, social media blockers, email spam blockers, content-blocking domain name system servers, and comment moderation tools []
- Other: interventions that do not fit into the aforementioned categories, including, but not limited to, those based on artificial intelligence, virtual reality, augmented reality, and online community-based discussion networks
Synthesis of Results
Charted data were exported to Microsoft Excel for preliminary refining and cleaning in preparation for further analysis. For numerical and binary data, results were tabulated using Excel formulas to calculate frequency counts and percentages.
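As a minimal sketch of this tabulation step, the following Python (pandas) snippet reproduces the kind of frequency counts and percentages described above. The review itself used Excel formulas, so the column name and category values here are hypothetical placeholders rather than the actual extraction form fields:

```python
import pandas as pd

# Hypothetical charted data; "intervention_type" stands in for an
# actual extraction form field.
charted = pd.DataFrame({
    "intervention_type": [
        "Educational course", "Educational course", "Counter speech",
        "Inoculation game", "Educational course", "Counter speech",
    ]
})

# Frequency counts and percentages, mirroring the Excel tabulation.
counts = charted["intervention_type"].value_counts()
percentages = (counts / len(charted) * 100).round(1)
summary = pd.DataFrame({"n": counts, "%": percentages})
print(summary)
```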
The data were also qualitatively analyzed to identify themes that would support the differentiation and categorization of long, descriptive-type responses. MAXQDA (version 24.4.0; VERBI Software GmbH) was used for thematic qualitative analysis, paralleling the methodology formulated by Braun and Clarke []. Long responses from the data-charting table were exported and systematically analyzed by assigning word codes to recurring ideas, concepts, and themes. The reviewers grouped common themes based on the research questions into categories, named these categories with assistance from generative artificial intelligence (ChatGPT with GPT-4, OpenAI []), and exported them to Excel for frequency analysis. As recommended by Braun and Clarke [], a flexible approach incorporating both deductive and inductive coding was used for thematic analysis.
Results
Selection of Sources of Evidence
Searches across the 5 databases yielded 513 citations; after the removal of duplicates, 418 (81.5%) articles underwent title and abstract screening. Of these 418 articles, 52 (12.4%) were deemed eligible for combined full-text screening and data extraction. After full-text screening, 30 (58%) of the 52 articles were included, and 22 (42%) were excluded. The PRISMA-ScR flow diagram is shown in . Reasons for exclusion were as follows: (1) not interventional (21/22, 95%) and (2) not online (1/22, 5%). A summary of study features, intervention types, and targeted misinformation characteristics is presented in [,,-].
presents a list of the included studies along with their titles.

| Study | Study title |
| Geana et al [] | “A friendly conversation.” Developing an eHealth intervention to increase COVID-19 testing and vaccination literacy among women with criminal and legal system involvement |
| Powell et al [] | A web-based public health intervention for addressing vaccine misinformation: protocol for analyzing learner engagement impacts on the hesitancy to vaccinate |
| Scales et al [] | Addressing antivaccine sentiment on public social media forums through web-based conversations based on motivational interviewing techniques: observational study |
| Steffens et al [] | Addressing myths and vaccine hesitancy: a randomised trial |
| Del Pozo et al [] | Can touch this: training to correct police officer beliefs about overdose from incidental contact with fentanyl |
| Domgaard and Park [] | Combating misinformation: the effects of infographics in verifying false vaccine news |
| Ecker et al [] | Correcting vaccine misinformation: a failure to replicate familiarity or fear-driven backfire effects |
| Winters et al [] | Debunking highly prevalent health misinformation using audio dramas delivered by WhatsApp: evidence from a randomised controlled trial in Sierra Leone |
| Lu et al [] | Does public fear that bats spread COVID-19 jeopardize bat conservation? |
| Alsaad and AlDossary [] | Educational video intervention to improve health misinformation identification on WhatsApp among Saudi Arabian population: pre-post intervention study |
| Ugarte and Young [] | Effects of an online community peer-support intervention on COVID-19 vaccine misinformation among essential workers: mixed-methods analysis |
| Iles et al [] | Effects of narrative messages on key COVID-19 protective responses: findings from a randomized online experiment |
| Abroms et al [] | Empathic engagement with the COVID-19 vaccine hesitant in private Facebook groups: a randomized trial |
| Koban [] | Empathic engagement with the vaccine hesitant in online spaces |
| Beleites et al [] | Evaluating the impact of short animated videos on COVID-19 vaccine hesitancy: an online randomized controlled trial |
| Paynter et al [] | Evaluation of a template for countering misinformation—real-world autism treatment myth debunking |
| Pennycook et al [] | Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention |
| Ma et al [] | Fighting COVID-19 misinformation through an online game based on the inoculation theory: analyzing the mediating effects of perceived threat and persuasion knowledge |
| Gavin et al [] | Fighting the spread of COVID-19 misinformation in Kyrgyzstan, India, and the United States: how replicable are accuracy nudge interventions? |
| Sundstrom et al [] | HPV vaccination champions: evaluating a technology-mediated intervention for parents |
| Jiang et al [] | Inoculation works and health advocacy backfires: building resistance to COVID-19 vaccine misinformation in a low political trust context |
| Agley et al [] | Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: randomized controlled trial |
| van Stekelenburg et al [] | Investigating and improving the accuracy of US citizens’ beliefs about the COVID-19 pandemic: longitudinal survey study |
| Duarte et al [] | Misinformation in nutrition through the case of coconut oil: an online before-and-after study |
| MacFarlane et al [] | Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing |
| van der Meer and Jin [] | Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source |
| Piltch-Loeb et al [] | Testing the efficacy of attitudinal inoculation videos to enhance COVID-19 vaccine acceptance: quasi-experimental intervention trial |
| Vandormael et al [] | The effect of a wordless, animated, social media video intervention on COVID-19 prevention: online randomized controlled trial |
| Vraga et al [] | The effects of a news literacy video and real-time corrections to video misinformation related to sunscreen and skin cancer |
| Tanemura and Chiba [] | The usefulness of a checklist approach-based confirmation scheme in identifying unreliable COVID-19-related health information: a case study in Japan |
Characteristics of Sources of Evidence
Of the 30 studies published between 2018 and 2024, 3 (10%) were published between 2018 and 2020, 11 (37%) in 2021, 6 (20%) in 2022, 8 (27%) in 2023, and 2 (7%) in 2024.
Geographic variation was evident. Of the 30 studies, 2 (7%) were conducted in >1 country. Overall, 20 (67%) were conducted in the United States [,,,,-,-], 2 (7%) in Australia [,], 2 (7%) in the United Kingdom [,], and 1 (3%) each in China [], Brazil [], Sierra Leone [], Hong Kong [], India [], Kyrgyzstan [], Japan [], Saudi Arabia [], Germany [], Spain [], Mexico [], and the United Arab Emirates [].
Study Designs
All included studies were interventional in nature. The most common design was the randomized controlled trial (20/30, 67%) [,,-,,,-,-,,]. Of the 30 studies, 1 (3%) described its design as a quasi-experimental intervention trial []. In addition, 2 (7%) of the 30 studies used single-arm posttest-only designs [,], and 7 (23%) used single-arm pretest-posttest designs [,,,,,,].
Types of Interventions
Of the 30 included studies, 22 (73%) used educational courses [,,,-,,,-,-]; among these, 2 (9%) incorporated counter speech in ways that could not be considered mutually exclusive from the educational component [,]. Counter speech alone was used in 7 (23%) of the 30 studies [,-,,,], while 1 (3%) study used an inoculation game as the intervention [].
Content and Delivery
Misinformation Targeted
All studies targeted misinformation in health and health care contexts. Of the 30 studies, 19 (63%) addressed COVID-19–related misinformation [,-,-,,,-,,,]; 11 (37%) addressed other health misinformation, including claims about sunscreen and skin cancer, non–COVID-19 vaccines, drug overdose, malaria, typhoid, human papillomavirus, and autism [,-,,,,,]; and 1 (3%) targeted misinformation in general, focusing on techniques for identifying misinformation on social media [].
Digital Content
The intervention content primarily included text and multimedia elements such as audio, videos, images, and graphics. Of the 30 interventions, 9 (30%) used multimedia only [,,,,,,,,]; 9 (30%) used text only [,,,,,,,,]; and 12 (40%) used >1 type of digital content, such as combinations of text, multimedia, infographics, focus groups, and online discussion platforms [,,,,,,-,,].
Mode of Delivery
Websites were used for delivery in 22 (73%) of the 30 studies [,,,,,,,,,-,-]. Of these 22 studies, 1 (5%) used an online course development platform [], 1 (5%) used a web-based eHealth application [], and 15 (68%) used web-based or online questionnaire or survey platforms (eg, Qualtrics and Cross Marketing Inc) [,,,,,,,,,-,,]. Of the 22 studies, 1 (5%) used Zoom [], 1 (5%) used emailing platforms alongside social media [], and 7 (32%) used social media [,,,,,,]. Of these 7 interventions, 5 (71%) used Facebook [,,,,], 1 (14%) used WhatsApp [], and 1 (14%) used WeChat [].
Targeted Characteristics of Misinformation
A range of characteristics of misinformation was addressed across the studies. Of the 30 studies, 16 (53%) [,-,-,-,,,,] reported that their interventions targeted specific characteristics of misinformation, while the remaining 14 (47%) either did not include these elements or did not report them. Of these 16 studies, 9 (56%) reported that the intervention targeted >1 characteristic [,,,,,,,,].
These targeted characteristics were grouped into four broad categories: (1) content and presentation tactics, (2) psychological and cognitive biases, (3) social and cultural influences, and (4) tactics facilitating dissemination ().

Content and presentation tactics refer to elements that influence the audience’s perception, interpretation, and emotional response to misinformation, with the goal of enhancing the persuasiveness or impact of the content. These included the use of emotional language (5/16, 31%) [,,,,]; sensationalization (2/16, 13%) [,]; reliance on fake experts (4/16, 25%) [,,,]; and the presence of grammatical and syntax manipulation (1/16, 6%) [], vagueness (1/16, 6%) [], and decontextualized verbatim factual information (1/16, 6%) [].
Psychological and cognitive biases refer to systematic patterns of deviation from norms or rationality in judgment that affect perception, memory, thought, and decision-making abilities. These biases often unconsciously influence beliefs, attitudes, and behaviors through heuristic shortcuts and are exacerbated by emotional and social influences. Reported examples included repetition (2/16, 13%) [,], polarization (2/16, 13%) [,], and the illusion of causality (1/16, 6%) [], alongside appeals to morality (1/16, 6%) [] and nature (1/16, 6%) [].
Social and cultural influences refer to how the manipulation of societal norms, cultural beliefs, and community behaviors affects individuals’ perceptions, attitudes, and actions, shaping how they interpret misinformation, form opinions, and make decisions in health and health care contexts. These influences encompassed medical conspiracies (9/16, 56%) [,,,,,,,,]; efforts to discredit credible sources (2/16, 13%) [,]; and the spread of false knowledge (1/16, 6%) [], often through stigmatization (1/16, 6%) [] and politicization (1/16, 6%) [].
Tactics facilitating dissemination are strategies that enhance the spread and reach of misinformation—making it easier for content to be shared widely and rapidly across various platforms—by exploiting the characteristics of digital media, social networks, and human psychology. These tactics included ease of dissemination (1/16, 6%) [], partisan alignment or distraction (1/16, 6%) [], celebrity endorsement (1/16, 6%) [], and grammatical and spelling errors (1/16, 6%) [].
The study by Tanemura and Chiba [] specifically postulated the use of a 5-step framework of confirmation theory, incorporating characteristics such as “source credibility, scientific evidence, consistency, bias identification, transparency,” which parallel elements found in other interventions.
The study by Ma et al [] addressed the 6 characteristics of misinformation proposed by Roozenbeek and van der Linden [] (ie, conspiracy theories, polarization, impersonation, emotion, discrediting opponents, and trolling), adding an additional characteristic: “the spreading of false knowledge.”
Population Characteristics Reported
The following subsections describe the demographic characteristics of the populations in the included studies as well as the specific attributes or behaviors being addressed by interventions targeting misinformation.
Demographics
Most of the studies (22/30, 73%) focused on the general public or did not report specific characteristics of their target populations [,,,,-,,,,,,-,-]. A subset of this group (2/22, 9%) included university students who were not directly targeted by the interventions but were part of a convenience sample [,]. Specific demographics targeted in other studies included educators or teachers (2/30, 7%) [,]; health care workers (2/30, 7%) [,]; police officers (1/30, 3%) []; carers (1/30, 3%) []; and parents (2/30, 7%); of these 2 studies, 1 (50%) focused on parents of children aged 0 to 5 years [,]. In addition, the study by Geana et al [] targeted women with criminal and legal system involvement, while the study by Abroms et al [] focused on individuals who were not vaccinated [].
Human Attributes
Of the 30 studies, 14 (47%) reported that their interventions targeted human attributes through design features [,,,-,,,,-,]. These attributes—diverse factors influencing susceptibility to and perpetuation of misinformation across different contexts and populations—can be grouped into three categories: (1) knowledge and information processing, (2) psychological and emotional influences, and (3) trust and social dynamics. Of the 14 studies, 8 (57%) targeted >1 human attribute through their intervention design [,,,,,,,] ().

Studies that targeted factors related to knowledge and information processing addressed health literacy levels (4/14, 29%) [,,,], preintervention knowledge levels (1/14, 7%) [], the influence of preexisting beliefs (1/14, 7%) [], and system 1 thinking (2/14, 14%) [].
Psychological and emotional influences encompassed emotive manipulation (3/14, 21%) [,,], familiarity bias (3/14, 21%) [,,], resistance to authoritative messages and sources (2/14, 14%) [,], the concept of illusory truth (3/14, 21%) [,,], repeated exposure (2/14, 14%) [,], and various cognitive biases (3/14, 21%) [,,].
Trust and social dynamics were related to the continued influence effect (3/14, 21%) [,,], eroding trust in social institutions and experts (1/14, 7%) [], trust in science (1/14, 7%) [], and behaviors such as rumor spreading (1/14, 7%) [].
Geana et al [] explicitly stated that the purpose and design of their intervention was to target the social determinants of health among women with criminal and legal system involvement, which made them more susceptible to misinformation.
The study by Tanemura and Chiba [] targeted health literacy levels specifically among individuals in Japan with no medical background, a susceptibility factor associated with increased risk of misinformation uptake and dissemination.
Theoretical Underpinnings of Interventions
Overview
Of the 30 studies, 23 (77%) reported the use of theoretical underpinnings to develop the content or learning and design component of the intervention [,-,-,,-,-,-,,]. Of these 23 studies, 12 (52%) used >1 theoretical underpinning [,,,,,,,,,,,]. The findings from these 23 studies, with considerable overlap, are grouped into the categories outlined in the following subsections ().

Content
The category “inoculation and correction” included inoculation theory (6/23, 26%) [,,,,,] and debunking, refutation, and correction (6/23, 26%) [,,,,,,]. The number of studies using some form of inoculation alongside correction techniques was substantial.
The category “education and cognition” encompassed news literacy (1/23, 4%) [], promoting critical thinking (3/23, 13%) [,,], highlighting differences between false and factual information (1/23, 4%) [], cognitive function (1/23, 4%) [], and dual-processing theory (1/23, 4%) [].
The category “motivation and emotion” covered motivational interviewing (1/23, 4%) [], the use of narratives (2/23, 9%) [,], appealing to fear and emotions (1/23, 4%) [], and empathetic engagement (1/23, 4%) [].
The category “behavior and persuasion” included nudge theory (2/23, 9%) [,] and the boosting approach (1/23, 4%) [].
The category “trust and belief” included fuzzy-trace theory (2/23, 9%) [,], familiarity boosting (1/23, 4%) [], gateway belief model (1/23, 4%) [], trust in science and scientists (1/23, 4%) [], confirmation scheme (1/23, 4%) [], plausible alternative framework (1/23, 4%) [], avoiding misinformation framework (1/23, 4%) [], and inattention-based account of misinformation sharing (1/23, 4%) [].
Learning Design
Of the 23 studies that referenced theoretical underpinnings, 4 (17%) also reported underpinnings that informed the learning and design aspects of the intervention [,,,]. These included reflective models of aesthetic experience (1/4, 25%) [], the cognitive theory of multimedia learning (2/4, 50%) [,], social identity theory (1/4, 25%) [], gamification theory (1/4, 25%) [], and processing fluency theory (1/4, 25%) [].
Discussion
Principal Findings
This review identified 30 studies describing online interventions implemented to address health misinformation between 2018 and 2024. The publication time frames of the included studies varied considerably but closely aligned with the COVID-19 pandemic and the associated infodemic. The period from 2021 to 2023 was marked by a significant surge in misinformation awareness, the dissemination of misinformation, and the urgency of the public health crisis. Despite the prominence of COVID-19–related studies, this review highlights a broader body of literature addressing misinformation in various health-related contexts. Notably, 40% (12/30) of the included papers focused on misinformation either before or after the COVID-19 pandemic. Although there was geographic variation across the included studies, the majority were conducted in the United States (20/30, 67%), indicating limited geographic representation overall.
Some of the interventions (14/30, 47%) considered a broad spectrum of human attributes, which we categorized into three groups based on cognitive and behavioral similarities: (1) knowledge and information processing, (2) psychological and emotional influences, and (3) trust and social dynamics. However, 16 (53%) of the 30 studies did not report or consider specific human attributes in their intervention designs. Furthermore, many of the interventions (22/30, 73%) lacked specificity in targeting population demographics, often defaulting to “general population” approaches. These gaps highlight limitations in designing population-specific interventions.
A varied selection of intervention types was identified in the review. Educational interventions addressing health misinformation included diverse formats, such as courses, videos, textual content, game-based learning modules, and infographics. A frequently used intervention strategy was counter speech (9/30, 30%), which involves directly addressing misinformation by highlighting inaccuracies and providing correct information. An inoculation game was used independently in only 1 (3%) of the 30 studies; however, the technique of inoculation (ie, preemptively exposing individuals to weakened forms of misinformation to build resistance) was often embedded within both educational interventions and counter-speech strategies. Several other studies highlighted the significance and potential of online intervention distribution blockers in mitigating health misinformation; however, these were not implemented in interventional or experimental study designs [].
The interventions reviewed in this study addressed various characteristics of misinformation, with slightly more than half of the studies (16/30, 53%) identifying specific traits. While these characteristics varied, studies informed by similar theoretical underpinnings tended to focus on overlapping aspects. The interventions targeted either the intrinsic qualities of misinformation that increase its believability or the presentation tactics that enhance its spread, aiming to improve public literacy on these aspects. Approximately half of the interventions (14/30, 47%) did not report the specific characteristics of misinformation they targeted. Many of the studies (23/30, 77%) highlighted the use of theoretical underpinnings to develop their interventions, ranging from models focused on misinformation content to theories of learning and education. The principally used theoretical underpinnings were inoculation theory (building resistance to misinformation through methods such as promoting alternative explanations, fostering critical thinking, and enhancing digital literacy), motivational techniques, persuasive techniques, and frameworks addressing trust in medical information and professionals. Of the 30 studies, 7 (23%) did not report using theoretical underpinnings, raising questions about whether they were underreported or omitted entirely.
The publication time frames and intervention content frequently paralleled the trajectory of the COVID-19 pandemic, underscoring how health crises, despite their detrimental impacts, have accelerated the development and deployment of initiatives aimed at combating misinformation. However, 40% (12/30) of the papers targeted misinformation in broader health-related contexts, highlighting the persistent nature of misinformation and its dissemination both before and after the pandemic [], driven by the ease of dissemination through online platforms and social media [] as well as inherent human susceptibilities to misleading information []. The geographic representation among the studies overall was limited, which is significant because health misinformation varies across regions due to sociocultural influences that affect both the characteristics of misinformation and the ways populations engage with and disseminate information []. The WHO and the Center for Infectious Disease Research and Policy have postulated the critical role of cultural insights in developing effective public health strategies, highlighting the need for socioculturally appropriate strategies to influence health behaviors and perceptions [,]. As echoed in other systematic reviews, this could be a fruitful area of future research to better understand regional variations and tailor interventions accordingly [].
The interventions used across the studies were diverse, including educational interventions, counter speech, inoculation games, and online distribution blockers. The educational interventions comprised courses, videos, text, or infographics. Although these were sometimes used independently, some of the studies (2/22, 9%) adopted a combined design that integrated both educational and counter-speech components, making it difficult to clearly distinguish the effects of each intervention type. Counter speech involves addressing misinformation by highlighting inaccuracies and providing correct information through techniques such as debunking, refuting, and correction frameworks []. Similarly, although inoculation games were sometimes used independently, they were often integrated into educational interventions and counter-speech strategies. This integration aligns with current literature, which often combines multiple intervention types to reinforce their impact and address the multifaceted nature of misinformation, providing benefits in timely correction of facts, while also enabling user literacy, education, and efficacy in recognizing misinformation in the long term.
Online intervention distribution blockers were defined as a distinct category during data extraction because of the notable rise in recent years in platform-led interventions and “technocognition” approaches such as algorithmic downranking, content moderation, advertisement blocking, and redirection. This review found that such strategies were not implemented in intervention designs. This gap is echoed in other systematic reviews that state the need for assessing the “effectiveness and safety of computer-driven corrective and interventional measures against health misinformation” [], suggesting a fertile area for future research [].
A gap highlighted in this review was the lack of interventions targeting specific populations, especially those considered more vulnerable and susceptible to misinformation, with most of the studies (22/30, 73%) defaulting to a “general population” target. A variety of human attributes were considered in the design and development of the interventions, categorized into three groups: (1) knowledge and information processing, (2) psychological and emotional influences, and (3) trust and social dynamics. These categories are neither exclusive nor entirely distinct due to the interconnected nature of human cognition, psychology, and behavior. Multiple studies (8/14, 57%) targeted several of these human attributes simultaneously. However, 16 (53%) of the 30 studies did not report or consider specific human attributes in the design of their interventions. Current literature [] underscores the crucial role of these factors in combating misinformation, especially in public health crises, and studies reiterate the role of cognition, knowledge, biases, and emotional responses in the uptake and spread of misinformation; hence, understanding and addressing these human attributes is essential for designing effective interventions.
This generalization may be beneficial in some cases, considering the broad target populations of public health interventions. However, it is well noted in the literature that considering the interplay of human attributes that make populations susceptible to misinformation, such as emotional, educational, or social influence, is vital for intervention design. Understanding these factors, prioritizing interventions tailored to specific population groups, and comparing the impact of targeting different human attributes would provide insight into the most critical factors to address in intervention design, particularly for developing effective methodologies for populations experiencing more vulnerability.
The interventions broadly addressed either the intrinsic characteristics of misinformation that increase its believability or the presentation tactics that enhance its online dissemination, holistically aiming to improve audience digital literacy regarding the topic of misinformation. Commonly targeted elements included emotional language, medical conspiracies, reliance on fake experts, sensationalization, repetition, polarization, and discrediting credible sources. This aligns with the “characteristics of misinformation” framework proposed by Roozenbeek and van der Linden [], although only 1 (3%) of the 30 included studies [] implemented an intervention covering all these components. Interventionalists could benefit from a comprehensive approach that simultaneously addresses multiple characteristics of misinformation, given the current lack of studies exploring the interconnected nature of these factors.
Approximately half of the interventions (14/30, 47%) reviewed did not report the specific characteristics of misinformation they targeted. This raises questions about whether these characteristics were considered in the development of the interventions or whether the interventions were designed without such input. When characteristics are not explicitly addressed, interventions tend to focus on countering specific instances of misinformation rather than addressing the underlying characteristics that make misinformation believable. This approach may provide a short-term solution but is less effective in addressing the long-term problem. Current literature emphasizes the importance of long-term strategies that foster critical thinking and resilience to misinformation by enhancing media literacy and cognitive processing skills in audiences []. By targeting intrinsic characteristics of misinformation and human attributes, interventions can be more sustainable and effective in the long term, which is a vital area with scope for further research, as supported by recent scoping reviews [].
Definitions of misinformation, its intrinsic characteristics, and the population attributes that increase susceptibility and dissemination often lack clarity and uniformity across the reviewed studies. There is inconsistency in how misinformation is defined. This is reflected in how it is addressed through interventions and has been noted in other systematic reviews, which highlight that inconsistent definitions adversely affect the outcomes of interventional studies about health misinformation []. Many studies do not specify which characteristics of misinformation are the most problematic or which population attributes need to be addressed, limiting the specificity of intervention development. Although some of these factors are indirectly targeted through counter speech or educational courses, the lack of clear definitions reduces meaningful insight for interventionalists looking to develop more effective and specific interventions.
A significant proportion of the reviewed studies (23/30, 77%) highlighted the use of theoretical underpinnings to develop their interventions, ranging from models focused on misinformation content to learning and education theories. However, 7 (23%) of the 30 studies did not report any theoretical underpinnings. This omission raises concerns about whether they were excluded from reporting or not used at all, which is problematic, given the importance of such underpinnings in shaping effective interventions tailored to audience behavior and knowledge gaps, as noted in existing literature on digital interventions.
Inoculation theory was frequently used (6/30, 20%), involving preemptive exposure to weakened forms of misinformation to build resistance. This technique, known as “prebunking,” often included alerting participants to the fallacies underlying misinformation and drawing attention to its defining characteristics or digital patterns. Interventions incorporating debunking, correcting, and refutational strategies were also prominent (6/30, 20%). These methods were used either as didactic techniques, where misinformation was directly refuted, or in combination with inoculation strategies. The literature indicates significant variation in these approaches: traditional debunking involves underlining the bottom line or providing corrective explanations, while refutational debunking focuses on explaining why misinformation is incorrect, arguing that traditional debunking may not be as effective in addressing persistent misinformation. This theoretical debate is mirrored in noninterventional literature [], which explores how psychological and cognitive mechanisms influence the correction of misinformation during infodemics, and continues to be an area for further research to determine which “debunking” technique is the most beneficial in the short and long terms.
Building future resistance to misinformation was a central theme in many of the included studies (15/30, 50%). Techniques such as promoting alternative explanations, fostering critical thinking, and enhancing digital literacy are supported by a broader body of research. Despite this, the interventions varied in their application, with some emphasizing psychological engagement and memory retention, while others focused on immediate behavioral changes. Many of the interventions (11/30, 37%) incorporated broader theoretical models, such as motivational techniques (eg, narratives and motivational interviewing to encourage behavior change), persuasive techniques (eg, nudge theory to promote mindful behavior regarding misinformation), and frameworks that addressed trust in medical information and professionals. Several of the studies lacked detailed explanations on how these theories were applied, implying a gap in the understanding of implementation science behind these frameworks.
In interventions that cited theoretical underpinnings for the learning and design components, the emphasis was on enhancing memory, cognitive processing, and engaging with digital content. While existing literature strongly supports the effectiveness of these strategies in educational contexts [], their inconsistent reporting across the included studies highlights a broader issue: the need for systematic integration of theoretical underpinnings (including frameworks and models) to optimize misinformation interventions.
Limitations
This scoping review has certain limitations. We searched only 5 databases, potentially missing relevant studies available elsewhere. Gray literature was excluded, which may have omitted unpublished but implemented interventions. The review did not assess the outcomes or effectiveness of the interventions; although our focus was on the design of the interventions, understanding their effectiveness could provide insights into successful strategies for combating misinformation in future research. Studies discussing online distribution blockers and relevant subcategories were excluded, as the review was limited to studies that specifically implemented these interventions. This may have resulted in missing information on the theoretical underpinnings and target characteristics or populations of these interventions. By limiting the review to online interventions, we may have overlooked effective offline strategies, which are especially important in regions with limited internet access. Finally, we included only English-language papers, unintentionally limiting the geographic scope and potentially introducing bias toward English-speaking contexts.
Conclusions
This review offers valuable insights for health care professionals, policy makers, and public health organizations to design theory-informed interventions that counter health misinformation. The reviewed studies used various types of interventions, mainly educational courses (22/30, 73%), often combined with counter speech and inoculation games, emphasizing the value of a multifaceted approach to combating health misinformation. Only slightly more than half of the studies (16/30, 53%) targeted specific misinformation traits, and gaps remain in identifying key contributors to the spread of misinformation in interventional studies, highlighting the need for future research to enhance intervention effectiveness. Many of the interventions (22/30, 73%) overlooked demographic targeting, limiting insight into susceptibility and missing opportunities to reach those who might benefit the most. Human attributes crucial in intervention designs were frequently underreported and more often reported in theoretical rather than practical contexts, indicating the need for collaboration between intervention developers, psychologists, and theorists. Theoretical underpinnings varied, with inconsistent reporting and interpretation, underscoring the need for consensus on definitions and the use of comprehensive frameworks. The review has highlighted inconsistencies in reporting the developmental principles and theoretical underpinnings of online interventions. More effective reporting is required, with explicit connections between the features of misinformation and outcomes in terms of intervention development and design. At present, the reporting gaps limit the ability to holistically analyze how development approaches affect the performance and efficacy of such interventions.
Acknowledgments
The authors thank the Mohammed Bin Rashid University library technicians for their support in providing resources and guidance, including assistance with Medical Subject Headings (MeSH) subheadings and key terms for the search strategy. The authors acknowledge the use of generative AI (ChatGPT with GPT-4 []) to assist in grouping the thematic concepts identified during the data-charting process into the categories presented in the Results section: “Human Attributes,” “Targeted Characteristics of Misinformation,” and “Theoretical Underpinnings of Misinformation.”
Data Availability
The datasets extracted and analyzed in this review are available in Multimedia Appendix 4.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Sample electronic search strategies.
DOCX File, 14 KB

Multimedia Appendix 2
Data extraction form.
DOCX File, 116 KB

Multimedia Appendix 3
Summary of study features, intervention types, and targeted misinformation characteristics.
DOCX File, 84 KB

Multimedia Appendix 4
Health misinformation ScR dataset.
XLSX File (Microsoft Excel File), 58 KB

Multimedia Appendix 5
Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist.
DOCX File, 108 KB

References
- "Misinformation". Dictionary.com. URL: https://www.dictionary.com/browse/misinformation [accessed 2024-09-17]
- Infodemics and misinformation negatively affect people’s health behaviours, new WHO review finds. World Health Organization. Sep 1, 2022. URL: https://www.who.int/azerbaijan/news/item/01-09-2022-infodemics-and-misinformation-negatively-affect-people-s-health-behaviours--new-who-review-finds [accessed 2024-08-06]
- Rodrigues F, Newell R, Rathnaiah Babu G, Chatterjee T, Sandhu NK, Gupta L. The social media infodemic of health-related misinformation and technical solutions. Health Policy Technol. Jun 2024;13(2):100846. [CrossRef]
- Fighting misinformation in the time of COVID-19, one click at a time. World Health Organization. Apr 27, 2021. URL: https://www.who.int/news-room/feature-stories/detail/fighting-misinformation-in-the-time-of-covid-19-one-click-at-a-time [accessed 2024-08-06]
- Rao TS, Andrade C. The MMR vaccine and autism: sensation, refutation, retraction, and fraud. Indian J Psychiatry. May 2011;53(2):95-96. [FREE Full text] [CrossRef] [Medline]
- Vinck P, Pham PN, Bindu KK, Bedford J, Nilles EJ. Institutional trust and misinformation in the response to the 2018-19 Ebola outbreak in North Kivu, DR Congo: a population-based survey. Lancet Infect Dis. May 2019;19(5):529-536. [CrossRef] [Medline]
- Ferreira Caceres MM, Sosa JP, Lawrence JA, Sestacovschi C, Tidd-Johnson A, Rasool MH, et al. The impact of misinformation on the COVID-19 pandemic. AIMS Public Health. 2022;9(2):262-277. [FREE Full text] [CrossRef] [Medline]
- Omar B, Apuke OD, Nor ZM. The intrinsic and extrinsic factors predicting fake news sharing among social media users: the moderating role of fake news awareness. Curr Psychol. Mar 21, 2023;43(2):1-13. [FREE Full text] [CrossRef] [Medline]
- Vosoughi S, Roy D, Aral S. The spread of true and false news online. Science. Mar 09, 2018;359(6380):1146-1151. [FREE Full text] [CrossRef] [Medline]
- Rutjens BT, van der Linden S, van der Lee R. Science skepticism in times of COVID-19. Group Process Intergr Relat. Mar 04, 2021;24(2):276-283. [CrossRef]
- Yadav V, Dhadwal Y, Kanozia R, Pandey SR, Kumar A. Unraveling the web of health misinformation: exploring the characteristics, emotions, and motivations of misinformation during the COVID-19 pandemic. Asian J Public Opinion Res. 2024;12(1):53-74. [FREE Full text] [CrossRef]
- Czerniak K, Pillai R, Parmar A, Ramnath K, Krocker J, Myneni S. A scoping review of digital health interventions for combating COVID-19 misinformation and disinformation. J Am Med Inform Assoc. Mar 16, 2023;30(4):752-760. [CrossRef] [Medline]
- Walter N, Brooks JJ, Saucier CJ, Suresh S. Evaluating the impact of attempts to correct health misinformation on social media: a meta-analysis. Health Commun. Dec 06, 2021;36(13):1776-1784. [CrossRef] [Medline]
- Duarte AC, Spiazzi BF, Merello EN, Amazarray CR, Sulzbach de Andrade L, Socal MP, et al. Misinformation in nutrition through the case of coconut oil: an online before-and-after study. Nutr Metab Cardiovasc Dis. Jul 2022;32(6):1375-1384. [CrossRef] [Medline]
- Pennycook G, McPhetres J, Zhang Y, Lu JG, Rand DG. Fighting COVID-19 misinformation on social media: experimental evidence for a scalable accuracy-nudge intervention. Psychol Sci. Jul 30, 2020;31(7):770-780. [FREE Full text] [CrossRef] [Medline]
- Kreuter MW, Farrell DW, Brennan LK, Olevitch LR. Tailoring Health Messages: Customizing Communication With Computer Technology. New York, NY. Routledge; 1999.
- Janmohamed K, Walter N, Nyhan K, Khoshnood K, Tucker JD, Sangngam N, et al. Interventions to mitigate COVID-19 misinformation: a systematic review and meta-analysis. J Health Commun. Dec 02, 2021;26(12):846-857. [CrossRef] [Medline]
- Ruggeri K, Vanderslott S, Yamada Y, Argyris YA, Većkalov B, Boggio PS, et al. Behavioural interventions to reduce vaccine hesitancy driven by misinformation on social media. BMJ. Jan 16, 2024;384:e076542. [FREE Full text] [CrossRef] [Medline]
- Gwiaździński P, Gundersen AB, Piksa M, Krysińska I, Kunst JR, Noworyta K, et al. Psychological interventions countering misinformation in social media: a scoping review. Front Psychiatry. 2022;13:974782. [FREE Full text] [CrossRef] [Medline]
- Munn Z, Peters MD, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. Dec 19, 2018;18(1):143. [FREE Full text] [CrossRef] [Medline]
- Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. Feb 2005;8(1):19-32. [CrossRef]
- Grover H, Nour R, Powell L. Online interventions addressing health misinformation: protocol for a scoping review. OSF Preprints. Preprint posted online on January 04, 2024. [FREE Full text] [CrossRef]
- Health misinformation: introduction. University of Minnesota. URL: https://libguides.umn.edu/c.php?g=1276679&p=9367535 [accessed 2025-03-23]
- Monteith S, Glenn T, Geddes JR, Whybrow PC, Achtyes E, Bauer M. Artificial intelligence and increasing misinformation. Br J Psychiatry. Mar 2024;224(2):33-35. [CrossRef] [Medline]
- Somasundaram UV, Egan TM. Training and development: an examination of definitions and dependent variables. U.S. Department of Education. 2004. URL: https://eric.ed.gov/?id=ED492440 [accessed 2025-07-31]
- Zamiri M, Esmaeili A. Methods and technologies for supporting knowledge sharing within learning communities: a systematic literature review. Admin Sci. Jan 20, 2024;14(1):17. [FREE Full text] [CrossRef]
- How long does gamified psychological inoculation protect people against misinformation? American Psychological Association. Aug 12, 2022. URL: https://www.apa.org/pubs/highlights/spotlight/issue-243 [accessed 2024-08-06]
- Courchesne L, Ilhardt J, Shapiro JN. Review of social science research on the impact of countermeasures against influence operations. Harvard Kennedy School (HKS) Misinformation Review. Sep 13, 2021. URL: https://misinforeview.hks.harvard.edu/article/review-of-social-science-research-on-the-impact-of-countermeasures-against-influence-operations/ [accessed 2024-08-06]
- Tools that fight disinformation online. RAND. URL: https://www.rand.org/research/projects/truth-decay/fighting-disinformation/search.html [accessed 2024-08-06]
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. Jan 2006;3(2):77-101. [CrossRef]
- ChatGPT homepage. ChatGPT. URL: https://chatgpt.com/ [accessed 2025-08-07]
- Geana MV, Liu P, Pei J, Anderson S, Ramaswamy M. "A friendly conversation." Developing an eHealth intervention to increase COVID-19 testing and vaccination literacy among women with criminal and legal system involvement. J Health Commun. Feb 18, 2024;29(2):131-142. [CrossRef] [Medline]
- Powell L, Nour R, Zidoun Y, Kaladhara S, Al Suwaidi H, Zary N. A web-based public health intervention for addressing vaccine misinformation: protocol for analyzing learner engagement and impacts on the hesitancy to vaccinate. JMIR Res Protoc. May 30, 2022;11(5):e38034. [FREE Full text] [CrossRef] [Medline]
- Scales D, Hurth L, Xi W, Gorman S, Radhakrishnan M, Windham S, et al. Addressing antivaccine sentiment on public social media forums through web-based conversations based on motivational interviewing techniques: observational study. JMIR Infodemiology. Nov 14, 2023;3:e50138. [FREE Full text] [CrossRef] [Medline]
- Steffens MS, Dunn AG, Marques MD, Danchin M, Witteman HO, Leask J. Addressing myths and vaccine hesitancy: a randomized trial. Pediatrics. Nov 2021;148(5):e2020049304. [CrossRef] [Medline]
- Del Pozo B, Sightes E, Kang S, Goulka J, Ray B, Beletsky LA. Can touch this: training to correct police officer beliefs about overdose from incidental contact with fentanyl. Health Justice. Nov 24, 2021;9(1):34. [FREE Full text] [CrossRef] [Medline]
- Domgaard S, Park M. Combating misinformation: the effects of infographics in verifying false vaccine news. Health Educ J. Aug 14, 2021;80(8):974-986. [CrossRef]
- Ecker UK, Sharkey CX, Swire-Thompson B. Correcting vaccine misinformation: a failure to replicate familiarity or fear-driven backfire effects. PLoS One. Apr 12, 2023;18(4):e0281140. [CrossRef] [Medline]
- Winters M, Oppenheim B, Sengeh P, Jalloh MB, Webber N, Pratt SA, et al. Debunking highly prevalent health misinformation using audio dramas delivered by WhatsApp: evidence from a randomised controlled trial in Sierra Leone. BMJ Glob Health. Nov 10, 2021;6(11):e006954. [FREE Full text] [CrossRef] [Medline]
- Lu M, Wang X, Ye H, Wang H, Qiu S, Zhang H, et al. Does public fear that bats spread COVID-19 jeopardize bat conservation? Biol Conserv. Feb 2021;254:108952. [FREE Full text] [CrossRef] [Medline]
- Alsaad E, AlDossary S. Educational video intervention to improve health misinformation identification on WhatsApp among Saudi Arabian population: pre-post intervention study. JMIR Form Res. Jan 17, 2024;8:e50211. [FREE Full text] [CrossRef] [Medline]
- Ugarte DA, Young S. Effects of an online community peer-support intervention on COVID-19 vaccine misinformation among essential workers: mixed-methods analysis. West J Emerg Med. Feb 27, 2023;24(2):264-268. [FREE Full text] [CrossRef] [Medline]
- Iles IA, Gaysynsky A, Sylvia Chou WY. Effects of narrative messages on key COVID-19 protective responses: findings from a randomized online experiment. Am J Health Promot. Jul 11, 2022;36(6):934-947. [FREE Full text] [CrossRef] [Medline]
- Abroms LC, Koban D, Krishnan N, Napolitano M, Simmens S, Caskey B, et al. Empathic engagement with the COVID-19 vaccine hesitant in private Facebook groups: a randomized trial. Health Educ Behav. Feb 30, 2024;51(1):10-20. [CrossRef] [Medline]
- Koban DD. Empathic engagement with the vaccine hesitant in online spaces. George Washington University. May 21, 2023. URL: https://scholarspace.library.gwu.edu/downloads/sq87bv547?disposition=inline&locale=en [accessed 2025-08-01]
- Beleites F, Adam M, Favaretti C, Hachaturyan V, Kühn T, Bärnighausen T, et al. Evaluating the impact of short animated videos on COVID-19 vaccine hesitancy: an online randomized controlled trial. Internet Interv. Mar 2024;35:100694. [FREE Full text] [CrossRef] [Medline]
- Paynter J, Luskin-Saxby S, Keen D, Fordyce K, Frost G, Imms C, et al. Evaluation of a template for countering misinformation-real-world autism treatment myth debunking. PLoS One. 2019;14(1):e0210746. [FREE Full text] [CrossRef] [Medline]
- Ma J, Chen Y, Zhu H, Gan Y. Fighting COVID-19 misinformation through an online game based on the inoculation theory: analyzing the mediating effects of perceived threat and persuasion knowledge. Int J Environ Res Public Health. Jan 05, 2023;20(2):980. [FREE Full text] [CrossRef] [Medline]
- Gavin L, McChesney J, Tong A, Sherlock J, Foster L, Tomsa S. Fighting the spread of COVID-19 misinformation in Kyrgyzstan, India, and the United States: how replicable are accuracy nudge interventions? Technol Mind Behav. Sep 21, 2022;3(3). [CrossRef]
- Sundstrom B, Cartmell KB, White AA, Russo N, Well H, Pierce JY, et al. HPV vaccination champions: evaluating a technology-mediated intervention for parents. Front Digit Health. 2021;3:636161. [FREE Full text] [CrossRef] [Medline]
- Jiang LC, Sun M, Chu TH, Chia SC. Inoculation works and health advocacy backfires: building resistance to COVID-19 vaccine misinformation in a low political trust context. Front Psychol. Oct 25, 2022;13:976091. [FREE Full text] [CrossRef] [Medline]
- Agley J, Xiao Y, Thompson EE, Chen X, Golzarri-Arroyo L. Intervening on trust in science to reduce belief in COVID-19 misinformation and increase COVID-19 preventive behavioral intentions: randomized controlled trial. J Med Internet Res. Oct 14, 2021;23(10):e32425. [FREE Full text] [CrossRef] [Medline]
- van Stekelenburg A, Schaap G, Veling H, Buijzen M. Investigating and improving the accuracy of US citizens' beliefs about the COVID-19 pandemic: longitudinal survey study. J Med Internet Res. Jan 12, 2021;23(1):e24069. [FREE Full text] [CrossRef] [Medline]
- MacFarlane D, Tay LQ, Hurlstone MJ, Ecker U. Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing. PsyArXiv Preprints. Preprint posted online on September 30, 2020. [FREE Full text] [CrossRef]
- van der Meer TG, Jin Y. Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source. Health Commun. May 14, 2020;35(5):560-575. [FREE Full text] [CrossRef] [Medline]
- Piltch-Loeb R, Su M, Hughes B, Testa M, Goldberg B, Braddock K, et al. Testing the efficacy of attitudinal inoculation videos to enhance COVID-19 vaccine acceptance: quasi-experimental intervention trial. JMIR Public Health Surveill. Jun 20, 2022;8(6):e34615. [FREE Full text] [CrossRef] [Medline]
- Vandormael A, Adam M, Greuel M, Gates J, Favaretti C, Hachaturyan V, et al. The effect of a wordless, animated, social media video intervention on COVID-19 prevention: online randomized controlled trial. JMIR Public Health Surveill. Jul 27, 2021;7(7):e29060. [FREE Full text] [CrossRef] [Medline]
- Vraga EK, Bode L, Tully M. The effects of a news literacy video and real-time corrections to video misinformation related to sunscreen and skin cancer. Health Commun. Nov 12, 2022;37(13):1622-1630. [CrossRef] [Medline]
- Tanemura N, Chiba T. The usefulness of a checklist approach-based confirmation scheme in identifying unreliable COVID-19-related health information: a case study in Japan. Humanit Soc Sci Commun. Aug 15, 2022;9(1):270. [FREE Full text] [CrossRef] [Medline]
- Roozenbeek J, van der Linden S. Fake news game confers psychological resistance against online misinformation. Palgrave Commun. Jun 25, 2019;5(1):570-580. [CrossRef]
- Pulido CM, Redondo-Sama G, Sordé-Martí T, Flecha R. Social impact in social media: a new method to evaluate the social impact of research. PLoS One. Aug 29, 2018;13(8):e0203117. [FREE Full text] [CrossRef] [Medline]
- Southwell BG, Thorson EA. The prevalence, consequence, and remedy of misinformation in mass media systems. J Commun. Jul 28, 2015;65(4):589-595. [FREE Full text] [CrossRef]
- Soucheray S. COVID, other misinformation varies by topic, country on social media. CIDRAP. May 17, 2024. URL: https://www.cidrap.umn.edu/covid-19/covid-other-misinformation-varies-topic-country-social-media [accessed 2024-08-06]
- Smith R, Chen K, Winner D, Friedhoff S, Wardle C. A systematic review of COVID-19 misinformation interventions: lessons learned. Health Aff (Millwood). Dec 2023;42(12):1738-1746. [CrossRef] [Medline]
- Borges do Nascimento IJ, Pizarro AB, Almeida JM, Azzopardi-Muscat N, Gonçalves MA, Björklund M, et al. Infodemics and health misinformation: a systematic review of reviews. Bull World Health Organ. Sep 01, 2022;100(9):544-561. [FREE Full text] [CrossRef] [Medline]
- Vraga EK, Ecker UK, Žeželj I, Lazić A, Azlan AA. To debunk or not to debunk? Correcting (mis)information. In: Purnat TD, Nguyen T, Briand S, editors. Managing Infodemics in the 21st Century: Addressing New Public Health Challenges in the Information Ecosystem. Cham, Switzerland. Springer; 2023.
- Mayer RE. How multimedia can improve learning and instruction. In: Dunlosky J, Rawson KA, editors. The Cambridge Handbook of Cognition and Education. Cambridge, UK. Cambridge University Press; 2019.
Abbreviations
MeSH: Medical Subject Headings
PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
WHO: World Health Organization
Edited by A Mavragani; submitted 04.12.24; peer-reviewed by C Wardle, J Sidani; comments to author 12.02.25; revised version received 20.05.25; accepted 29.05.25; published 04.09.25.
Copyright©Hiya Grover, Radwa Nour, Nabil Zary, Leigh Powell. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 04.09.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

