Published on 18.06.2025 in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/64225.
Navigating the Maze of Social Media Disinformation on Psychiatric Illness and Charting Paths to Reliable Information for Mental Health Professionals: Observational Study of TikTok Videos

1Department of Psychiatry and Addictology, Université de Montréal, Montréal, QC, Canada

2Department of Psychiatry, Institut national de psychiatrie légale Philippe-Pinel, Montréal, QC, Canada

3Department of Psychiatry, Institut universitaire en santé mentale de Montréal, 7401 Rue Hochelaga, Montréal, QC, Canada

4Centre de recherche de l'Institut universitaire en santé mentale de Montréal, Montréal, QC, Canada

Corresponding Author:

Alexandre Hudon, BEng, MD, PhD


Background: Disinformation on social media can seriously affect mental health by spreading false information, increasing anxiety, stress, and confusion in vulnerable individuals, as well as perpetuating stigma. This flood of misleading content can undermine trust in reliable sources and heighten feelings of isolation and helplessness among users.

Objective: This study aimed to explore the phenomenon of disinformation about mental health on social media and provide recommendations to mental health professionals who use social media platforms to create educational videos about mental health topics.

Methods: A comprehensive analysis was conducted on 1000 TikTok videos from more than 16 countries, available in English, French, and Spanish, covering 26 mental health topics. The data collection was conducted using a framework on disinformation and social media. A multilayered perceptron algorithm was used to identify factors predicting disinformation. Recommendations to health professionals about the creation of informative mental health videos were designed based on the data collected.

Results: Disinformation was predominantly found in videos about neurodevelopment, mental health, personality disorders, suicide, psychotic disorders, and treatment. A machine learning model identified weak predictors of disinformation, such as an initial perceived intent to disinform and content aimed at the general public rather than a specific audience. Other factors, including content presented by licensed professionals such as a counseling resident, an ear-nose-throat surgeon, or a therapist, and country-specific variables from Ireland, Colombia, and the Philippines, as well as topics such as adjustment disorder, addiction, eating disorders, and impulse control disorders, showed a weak negative association with disinformation. In terms of engagement, only the number of favorites was significantly associated with a reduction in disinformation. Five recommendations were made to enhance the quality of educational videos about mental health on social media platforms.

Conclusions: This study is the first to provide specific, data-driven recommendations to mental health providers globally, addressing the current state of disinformation on social media. Further research is needed to assess the implementation of these recommendations by health professionals, their impact on patient health, and the quality of mental health information on social networks.

J Med Internet Res 2025;27:e64225

doi:10.2196/64225

Keywords



Mental health is a prevalent topic worldwide on social media platforms [1]. However, fake news and erroneous information on social media platforms are serious issues that considerably impact public trust and health [2,3]. The purposeful spread of incorrect or misleading information with the objective to mislead or influence the public is known as disinformation [4]. This kind of information is frequently disseminated to sway public opinion, hide the truth, or accomplish commercial, military, or political objectives [5]. Disinformation differs from misinformation, which is defined as false information disseminated accidentally, without malicious intent [4]; disinformation, by contrast, implies a deceptive aim. These falsehoods can take various forms, such as promoting untested remedies or treatments, voicing inaccurate health claims, and propagating conspiracy theories about illnesses, climate change, and vaccines [6]. Due to the viral nature of social media, false information can spread rapidly and widely, potentially influencing people’s attitudes and behaviors related to health [7,8]. This is particularly dangerous during health emergencies, such as pandemics, when accurate information is important for everyone’s safety [8,9].

Health-related disinformation that is widely spread may lead to treatment reluctance, unhealthy lifestyle choices, and mistrust of medical professionals and scientific institutions [10]. Various strategies have been used to attempt to counter these repercussions, including fact-checking campaigns, promoting evidence-based content, and forming partnerships between health organizations and social media platforms to highlight credible sources and flag or remove misleading information [11,12]. Despite these efforts, the fight against false health information on social media continues, emphasizing the need for the public to develop critical media literacy skills to navigate and assess the reliability of health-related information on the web [13,14].

Disinformation related to mental health on social media presents specific risks since it feeds stigma, spreads myths, and erodes public awareness of mental health concerns [15]. Myths concerning the etiology of mental diseases and inaccurate information regarding the efficacy of treatments are examples of false information that frequently sensationalizes or minimizes these conditions [16,17]. In addition to maintaining stigma, this may deter people from getting professional assistance or pursuing evidence-based therapies [15]. Such disinformation may have a significant negative impact by exacerbating feelings of loneliness, anxiety, and depression in populations that are already vulnerable [18]. Social media platforms, mental health experts, and the general public must work together to address disinformation about mental health to generate truthful and accessible information, conduct candid discussions about mental health, and offer assistance to individuals who are impacted by these problems [19]. While ensuring reliable access to science-backed mental health resources and education is important, there are currently no clear recommendations, nor studies focusing specifically on this phenomenon, that could help promote a better informed and supportive web-based community in the field of mental health.

Many studies in the literature highlight the impact of disinformation on social media on the general public. For example, a systematic review highlighted that social media use is extremely common among youth, and exposure to disinformation can exacerbate mental health issues such as anxiety and depression [20]. Furthermore, the spread of inaccurate information about specific disorders, such as attention-deficit/hyperactivity disorder (ADHD), has been prevalent. A study analyzing the top 100 ADHD-related videos on TikTok found that fewer than half adhered to clinical guidelines, raising concerns about disinformation leading to self-diagnosis among teenagers [21]. The motivations behind disseminating mental health disinformation on social media are multifaceted. A bibliometric analysis identified various intervention strategies to address disinformation sharing, emphasizing the need for proactive measures to decrease the spread of inaccurate information [22]. Additionally, the misuse of psychological terminology on social media platforms has diluted the meaning of important concepts. Terms such as “gaslighting,” “boundaries,” and “toxic relationships” are often used inaccurately, leading to misunderstandings and minimizing the experiences of those truly affected by such issues [23,24]. Another concerning trend is “sadfishing,” where individuals exaggerate emotional problems on the web to garner sympathy. This behavior can undermine genuine mental health struggles and expose vulnerable users to cyberbullying and exploitation [25].

This exploratory study aims to investigate the phenomenon of disinformation related to mental health on social media. The specific objectives are to (1) identify key factors that can predict whether a social media video contains misleading or false information for viewers and listeners, (2) estimate the prevalence of different mental health topics addressed in these videos and determine which topics are most frequently associated with disinformation, and (3) develop recommendations for best practices in the creation and dissemination of psychoeducational videos on mental health aimed at the general public.

Given the limited empirical research on the spread of mental health disinformation on social media, this study adopts an exploratory approach. Nonetheless, we propose several working hypotheses to guide the analysis. First, we hypothesize that disinformation about mental illnesses is prevalent within mental health–related content on social media platforms. Second, we anticipate that certain mental health topics, such as ADHD and depression, are disproportionately represented in videos containing inaccurate or misleading information. Third, we hypothesize that specific content features, including emotional tone, the use of clinical language, and the presence of personal anecdotes, may serve as significant predictors of disinformation. Through this exploratory investigation, the study aims to generate foundational insights that can support the development of evidence-based guidelines and interventions to establish more accurate and supportive mental health communication in digital environments.


This observational study analyzed 1000 publicly available TikTok videos across 26 mental health topics in 3 languages using a disinformation coding framework and machine learning. A multilayered perceptron model was applied to identify content features and predictors associated with disinformation.

Participants and Recruitment

This study did not actively recruit participants or use their data. Instead, it involved the collection and analysis of TikTok videos, which are publicly accessible. TikTok was chosen as the primary source of information due to its widespread use as a social media platform and the public nature of its content [26]. The research team identified a total of 1000 videos based on the following inclusion criteria: (1) the video must address a mental health topic (directly reference psychological or psychiatric conditions, mental health, mental well-being, or treatment or diagnostic processes); (2) it must contain psychoeducational material or be intended to be psychoeducational; (3) the language must be French, English, or Spanish (as these are the languages the coders are fluent in); (4) the video must not be part of a multipart series (this was applied to ensure the independence and self-containment of each unit of analysis); and (5) the video must be accessible without requiring a TikTok account.

Videos were not collected via keyword searches alone; candidate videos were first identified by monitoring popular hashtags and search trends associated with mental health (eg, #ADHD, #mentalhealth, #therapy, #anxiety, #depression, and #mentalhealthawareness). The For You page was then explored to curate a broader sample. Importantly, no filters were applied based on the source of the video. That is, videos created by verified organizations (eg, World Health Organization and Centers for Disease Control and Prevention) and individual creators (eg, influencers, lay users, and self-identified professionals) were all considered, provided they met the inclusion criteria. Accounts clearly operating as bots or spam (eg, repetitive posting with identical content, commercial-only links, or suspicious usernames) were flagged and excluded during a manual screening process. Video length was not used as an inclusion or exclusion criterion; however, excessively short videos (<5 seconds) that did not contain any identifiable mental health–related content were removed.

Given that web users represent approximately 65.1% of the global population, we used this proportion as a reference point to estimate an appropriate sample size for our exploratory analysis. Assuming a confidence level of 95% and a 5% margin of error, parameters commonly used in social science research, a minimum sample size of 385 videos was determined to be sufficient to provide statistically meaningful insights into the broader population of mental health content encountered on social media. This calculation was conducted using G*Power 3.1, based on a proportion test (z test) for goodness of fit to ensure adequate representation and to inform the descriptive and inferential components of the study [27]. To gather the sample of 1000 TikTok videos, the platform’s “For You” tab was used, which curates content using TikTok’s in-house recommendation algorithm based on popular topics, user engagement, and relevance. This strategy was chosen to represent the kind of content that users are most likely to come across naturally. We sought to obtain a genuine and ecologically valid representation of commonly seen mental health–related information by using the algorithmically curated For You feed instead of just hashtags or keyword searches. To reduce personalization bias and obtain a wide variety of videos from different topics and producers, the research team used freshly made, neutral accounts with no past engagement history across several sampling sessions.
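For readers who wish to reproduce this calculation, the sketch below applies the standard formula for estimating a single proportion, which is the same logic the G*Power proportion test implements; the function name is ours, and we assume the reported minimum of 385 reflects the conservative p=0.5 case.

```python
# Minimal sketch of the sample-size calculation, assuming the standard
# single-proportion formula; the conservative p=0.5 case yields n=385.
import math

def sample_size_for_proportion(p: float, z: float = 1.959964,
                               margin_of_error: float = 0.05) -> int:
    """Minimum n to estimate a proportion p within the margin of error (95% CI)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion(0.5))    # 385: the conservative bound reported above
print(sample_size_for_proportion(0.651))  # 350: using the web-user proportion directly
```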

Ethical Considerations

The ethics committee of the Centre de recherche de l’Institut universitaire en santé mentale de Montréal was consulted to determine the need for ethics approval for this study’s objectives and methodology. According to the Canadian Tri-Council Policy Statement on Ethical Conduct for Research Involving Humans, no ethics board approval was required, as the TikTok videos analyzed belong to the public domain. The study registration, as well as the dataset used, can be found in Multimedia Appendix 1.

Data Collection

A dataset was collaboratively constructed by all the authors, identifying key elements from 3 different categories for each analyzed video: video characteristics, quality of the information extracted, and clinical elements related to mental health. The data collected in this study are shown in Table 1. Weighted numbers for likes, comments, shares, and favorites were obtained by dividing their total amount by the number of days elapsed since the upload date.
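As a minimal illustration of this weighting, the snippet below divides a raw engagement count by the days elapsed since upload; the function name and the guard against same-day uploads are our assumptions, not details given in the paper.

```python
from datetime import date

def weighted_metric(total_count: int, upload_date: date, collection_date: date) -> float:
    """Weighted engagement: raw count divided by days elapsed since upload."""
    days_elapsed = max((collection_date - upload_date).days, 1)  # same-day guard (our assumption)
    return total_count / days_elapsed

# A video with 12,000 likes uploaded 300 days before data collection:
print(weighted_metric(12_000, date(2023, 1, 1), date(2023, 10, 28)))  # 40.0 likes per day
```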

Table 1. Categories and variables collected for each identified TikTok video.

Video characteristics
  • Video upload date
  • Country
  • Professional title of the presenter (self-reported title)
  • Language
  • Length (in minutes)
  • Weighted number of likes
  • Weighted number of comments
  • Weighted number of shares
  • Weighted number of favorites
  • Target audience (general or specific audience)
  • Hashtags used

Quality of the information
  • Content (opinion-based, fact-based, and mixed)
  • Intent (disinformation, misinformation, clickbait, satire, and other)
  • Authenticity (rumor, propaganda, hoax, conspiracy, framing, reference-based, and other)

Clinical mental health data
  • Topic (main topic)
  • Reference used

The elements related to the quality of the information are inspired by the work of Aïmeur and colleagues [28], who introduced formal definitions of content, intent, and authenticity in the context of fake news. In this study, content can be classified as opinion-based, fact-based, or a mix of both (mixed). The intent refers to the perceived intent of the video as analyzed by the researcher. It can be categorized as disinformation (providing false information or information not supported by the scientific community), misinformation (information that is partially correct according to the scientific community), clickbait (with the main intent to sell a product or promote a service), satire (using humor or sketches to convey information), or other (when the intent appears to inform, educate, or provide a personal experience, and the information does not fall into disinformation or misinformation). Authenticity refers to the source of information conveyed in the video. The sources can be classified as a rumor, propaganda, a hoax, a conspiracy, framing (using part of the information out of context or using humor or sketches as the source of information), reference-based (mentioning or citing explicit sources of information), or other.
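To make the coding scheme concrete, a hypothetical encoding of these three quality dimensions is sketched below; the category labels mirror the definitions above, while the types and class are our own illustration of how coder entries could be validated.

```python
# Hypothetical data structure for the quality-of-information coding scheme;
# the labels follow the definitions above, the structure itself is illustrative.
from dataclasses import dataclass
from typing import Literal

Content = Literal["opinion-based", "fact-based", "mixed"]
Intent = Literal["disinformation", "misinformation", "clickbait", "satire", "other"]
Authenticity = Literal["rumor", "propaganda", "hoax", "conspiracy",
                       "framing", "reference-based", "other"]

@dataclass
class QualityCoding:
    content: Content
    intent: Intent
    authenticity: Authenticity

# Example: a fact-based, reference-citing educational video
example = QualityCoding(content="fact-based", intent="other", authenticity="reference-based")
```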

The members of the research team (n=8) received training to standardize the data collection. Each video was counter-validated by another member of the research team so that every video was assessed at least twice. Prior to full coding, the team conducted a pilot test on a random subsample of 50 videos to calibrate interpretation and resolve discrepancies. Interrater reliability was calculated using Cohen κ for categorical decisions (eg, inclusion and topic classification), and overall agreement reached 87%, indicating a strong level of consistency. Disagreements were resolved through group discussion until consensus was reached.
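A minimal sketch of this reliability check is shown below, assuming the two coders’ labels are stored as parallel lists; the toy labels are ours, and cohen_kappa_score is the standard scikit-learn call.

```python
# Interrater reliability between two coders on categorical decisions;
# the label lists here are toy examples, not study data.
from sklearn.metrics import cohen_kappa_score

coder_a = ["ADHD", "anxiety", "ADHD", "depression", "anxiety"]
coder_b = ["ADHD", "anxiety", "depression", "depression", "anxiety"]

kappa = cohen_kappa_score(coder_a, coder_b)
raw_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Cohen kappa: {kappa:.2f}, raw agreement: {raw_agreement:.0%}")
```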

Data Analysis

Observational descriptive statistics, such as the identified prevalence of each variable, were reported. Correlations between the collected elements and disinformation were established using Spearman ρ, as the data did not follow a normal distribution. To assess the variables most linked to disinformation, a machine learning approach using a multilayered perceptron classifier from the open-source scikit-learn library (Python 3.9) was used to account for the number of variables and multicollinearity [29].
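As a sketch of the correlation step, the snippet below computes Spearman ρ between one coded variable and the disinformation label; the file and column names are hypothetical.

```python
# Spearman rank correlation between a coded variable and disinformation;
# "tiktok_videos.csv" and the column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("tiktok_videos.csv")
rho, p_value = spearmanr(df["opinion_based"], df["disinformation"])
print(f"Spearman rho: {rho:.2f} (P = {p_value:.3f})")
```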

The neural network architecture included 2 hidden layers with 64 and 32 neurons, respectively, using the rectified linear unit activation function. Categorical variables were converted into binary indicator variables using a one-hot encoding approach. The independent variables were defined as all columns of the dataset except the disinformation column, which was designated as the dependent variable. To train the classifier, the dataset was split into training and testing sets using a standard 80‐20 split to ensure that the model could generalize to unseen data. The model was then cross-validated using the 10-fold algorithm from scikit-learn to ensure consistency in the reported performance [29]. Model accuracy was assessed using the F1-score, and odds ratios, P values, and CIs were reported for each independent variable. The F1-score, commonly used in text classification, provides a balanced measure of classification accuracy by combining both precision and recall into a single metric [30]. A P value of less than .05 was considered significant.
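A minimal sketch of this modeling pipeline is shown below, under the assumption that the coded dataset sits in a CSV file with a binary disinformation column (the file and column names are ours); note that the per-variable odds ratios, P values, and CIs reported in this paper are not derived here.

```python
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("tiktok_videos.csv")                    # hypothetical dataset file
X = pd.get_dummies(df.drop(columns=["disinformation"]))  # one-hot encode categoricals
y = df["disinformation"]

# Standard 80-20 train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Two hidden layers (64 and 32 neurons) with ReLU activation, as described above
model = MLPClassifier(hidden_layer_sizes=(64, 32), activation="relu",
                      max_iter=1000, random_state=42)

# 10-fold cross-validation on the training set, scored with F1
cv_f1 = cross_val_score(model, X_train, y_train, cv=10, scoring="f1")
print(f"Mean cross-validated F1: {cv_f1.mean():.2%}")

model.fit(X_train, y_train)
print(f"Held-out F1: {f1_score(y_test, model.predict(X_test)):.2%}")
```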

Finally, recommendations were established based on the elements that were found to be significant and strongly correlated with either the presence or the absence of disinformation. The recommendations were also driven by relevant literature. Considering the observational nature of this study, the STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) statement was used to report the findings.


A total of 1000 TikTok videos were analyzed. The vast majority were in English (n=830), of unspecified geographic origin (n=618), and presented by individuals whose titles were not provided (n=471). Disinformation was most frequent in English language videos in absolute terms; however, when adjusted for sample size, French language videos showed a proportionally higher presence of disinformation. Topic prevalence also varied by language: English videos more commonly addressed ADHD, anxiety, and depression, while French videos focused on psychotherapy and personality disorders, and Spanish videos often covered emotional regulation and general mental well-being. Although English videos received the highest engagement overall, French language videos flagged for disinformation showed relatively high engagement within their language group, suggesting that misinformation in less represented languages may have a concentrated impact. These findings underscore the importance of considering linguistic and cultural dimensions when addressing mental health disinformation on social media platforms. Notably, among the professionals, most of the videos were created by psychiatrists and psychologists. The videos were primarily aimed at the general public rather than a specific audience. The video-related elements are summarized in Table 2.

Table 2. Video-related variables of identified TikTok videos regarding sociodemographic characteristics.

Variable | Values, n (%)

Language
  English | 830 (83.00)
  French | 108 (10.8)
  Spanish | 62 (6.2)

Country
  Not specified | 618 (61.8)
  United States | 236 (23.6)
  United Kingdom | 41 (4.1)
  Canada | 41 (4.1)
  France | 27 (2.7)
  Australia | 18 (1.8)
  Mexico | 3 (0.3)
  Sweden | 3 (0.3)
  Australia | 2 (0.2)
  India | 2 (0.2)
  Philippines | 2 (0.2)
  Ecuador | 2 (0.2)
  Peru | 1 (0.1)
  Belgium | 1 (0.1)
  Colombia | 1 (0.1)
  Ireland | 1 (0.1)
  Spain | 1 (0.1)

Presented by (professional title)
  Not specified | 471 (47.1)
  Psychiatrist | 131 (13.1)
  Psychologist | 73 (7.3)
  Therapist | 56 (5.6)
  Coach | 25 (2.5)
  Nurse | 24 (2.4)
  Medical resident | 19 (1.9)
  Doctor | 15 (1.5)
  Influencer | 14 (1.4)
  Family doctor | 7 (0.7)
  Pediatrician | 7 (0.7)
  Other | 158 (15.8)

Targeted audience
  General public | 896 (89.6)
  Specific audience | 104 (10.4)

When assessing the topics of the videos and their engagement metrics, it was observed that most of the identified videos were about personality disorders, autism, depression, and ADHD. The longest videos focused on personality disorders, while the shortest videos addressed dissociative identity disorder. Videos on addiction and autism received the highest engagement, as indicated by the average weighted number of likes, comments, favorites, and shares. Note that the “mental health” topic included any mental health–related subject matter that was not specific to a diagnosis or treatment. The topics and engagement metrics are summarized in Table 3.

Table 3. Topics and engagement metrics for each mental health topic identified in the TikTok videos.

Mental health topic | Values, n (%) | Average video length (minutes) | Average weighted likes/comments/favorites/shares
Personality disorders | 207 (20.7) | 1.83 | 11233120686
Autism | 90 (9.00) | 1.73 | 54061069791081
Depression | 90 (9.00) | 0.92 | 369189526560
ADHDa | 90 (9.00) | 1.37 | 18971725174
Psychotic disorders | 82 (8.2) | 1.77 | 428115325
Anxiety | 81 (8.1) | 1.15 | 88011130112
Mental health | 76 (7.6) | 1.04 | 177425284179
Treatment | 69 (6.9) | 1.30 | 200174024
Bipolar disorders | 46 (4.6) | 1.30 | 21832810
Trauma | 34 (3.4) | 1.23 | 30484162394
OCDb | 20 (2.00) | 0.90 | 21832418
Psychotherapy | 15 (1.5) | 1.66 | 9031216389
Psychiatry | 14 (1.4) | 1.53 | 45055313
Eating disorders | 14 (1.4) | 1.11 | 99464815
Suicide | 13 (1.3) | 1.39 | 43275611
Tourette syndrome | 11 (1.1) | 1.03 | 35322
Impulse control disorders | 9 (0.9) | 1.66 | 17143
Somatization | 8 (0.8) | 1.08 | 63396
Neurocognitive disorders | 8 (0.8) | 1.31 | 5010
Neurodevelopmental | 5 (0.5) | 1.13 | 124242022
Addiction | 5 (0.5) | 1.78 | 17,19633718691148
Adjustment disorder | 4 (0.4) | 1.36 | 8583321
Sleep disorders | 4 (0.4) | 0.67 | 12132121698
Catatonia | 2 (0.2) | 1.43 | 28030
Dissociative identity disorder | 2 (0.2) | 0.33 | 15142012233
Paraphilia | 1 (0.1) | 1.33 | 4000

aADHD: attention-deficit/hyperactivity disorder.

bOCD: obsessive-compulsive disorder.

Across the various mental health topics, most videos were opinion-based, with 6.30% (63/1000) containing disinformation and 15.70% (157/1000) containing misinformation. Reference-based videos represented 20.70% (207/1000) of the analyzed content. Neurodevelopmental disorders, suicide, and mental health were the topics for which the videos were predominantly opinion-based, followed closely by dissociative identity disorder, autism, and depression. Videos on neurodevelopment, mental health, personality disorders, suicide, psychotic disorders, and treatment conveyed the highest amounts of disinformation. In videos discussing neurodevelopmental disorders, particularly ADHD and autism, disinformation often took the form of oversimplified self-diagnosis criteria (eg, “If you forget your keys, you definitely have ADHD”) or misleading statements about cures through dietary supplements. In the mental health category, disinformation included generalized claims such as “mental illness is just a mindset” or “you don’t need therapy, just positive thinking,” which risk minimizing serious conditions. Videos addressing personality disorders frequently portrayed traits of borderline or narcissistic personality disorder using inaccurate or stigmatizing descriptions, often lacking any clinical basis. In content related to suicide, misinformation included videos that romanticized suicidal ideation or presented recovery without professional intervention as universally effective. For psychotic disorders, we found cases where hallucinations were portrayed as spiritual awakenings or entirely controllable through willpower. Finally, videos on treatment sometimes promoted unverified therapies or discouraged the use of psychiatric medication with claims such as “antidepressants only make things worse” or “therapy is a scam.” The content, intent, and authenticity for each topic are shown in Multimedia Appendix 1.

Correlations of all variables regarding disinformation are shown in Multimedia Appendix 2. Overall, a weak positive correlation exists between opinion-based content and disinformation, as well as between disinformation and sources classified as propaganda, rumor, hoax, or conspiracy. Certain professional titles, such as brain health or brainspotting practitioner, were also weakly correlated with disinformation. Conversely, videos that were fact-based, intended to inform or educate, drew on various sources of information, and were presented by psychiatrists were negatively correlated with disinformation.

When modeling the dataset to predict disinformation as per the variables pertaining to the quality of the information, only the intent to misinform (odds ratio [OR] 1.07, 95% CI 1.02‐1.11) and content directed to the general public (OR 1.04, 95% CI 1.01‐1.07) were significantly associated with disinformation. Other variables, such as presentation by a licensed resident in counseling (OR 0.95, 95% CI 0.93‐0.98), an ear-nose-throat surgeon (OR 0.96, 95% CI 0.93‐0.98), or a therapist (OR 0.96, 95% CI 0.93‐0.98); country-specific variables such as Ireland (OR 0.98, 95% CI 0.95‐1.00), Colombia (OR 0.97, 95% CI 0.95‐0.99), and the Philippines (OR 0.97, 95% CI 0.94‐0.99); and specific topics such as adjustment disorder (OR 0.97, 95% CI 0.94‐0.99), addiction (OR 0.97, 95% CI 0.94‐1.00), eating disorders (OR 0.97, 95% CI 0.95‐0.99), and impulse control disorders (OR 0.97, 95% CI 0.95‐1.00), were weakly inversely associated with disinformation. As for engagement, only the number of favorites was significantly negatively associated with disinformation (OR 0.97, 95% CI 0.95‐1.00). The F1-score of the model was 89.19%, with an adjusted R2 score of 0.79. Coefficients for each variable used in the model are shown in Multimedia Appendix 3.


Principal Results

The aim of this study was to explore the phenomenon of disinformation about mental health on social media. A total of 1000 TikTok videos, coming from more than 16 countries, in 3 languages (English, French, and Spanish) and encompassing a total of 26 topics, were thoroughly analyzed. Disinformation was mostly found in videos discussing neurodevelopment, mental health, personality disorders, suicide, psychotic disorders, and treatment. A machine learning model allowed the identification of weak predictors of disinformation, such as an initial perceived intent to misinform or information provided to the general public rather than a specific audience. Other factors, such as content presented by a licensed counseling resident, an ear-nose-throat surgeon, or a therapist, as well as country-specific variables such as Ireland, Colombia, and the Philippines, and specific topics, such as adjustment disorder, addiction, eating disorders, and impulse control disorders, showed a weak negative association with disinformation. Regarding engagement, only the number of favorites was significantly associated with less disinformation.

Misinformation and disinformation are commonly observed on social media platforms [31,32]. This was also observed in this study, considering that 6.30% and 15.70% of the videos were perceived as having an intent to disinform or misinform the viewers, respectively. This phenomenon is important to consider because it can have several consequences for viewers, especially those with mental health–related vulnerabilities, even more so in periods of stress [33,34]. Namely, a recent review examining the impact of fake news in the health sector reported that fake news in the context of COVID-19 can lead to psychological disorders and induce panic, fear, depression, and fatigue [2]. Since engaging with social media content (such as likes, comments, and followers) may contribute to poor mental health, information conveyed by professionals on these platforms should be of high quality [35,36]. Professionals in the mental health field are beginning to use web-based technologies in their work, but opinions on the morality and practicality of doing so in clinical settings still vary, considering the heterogeneity of content available to the public on social media [37].

While professional guidelines on the use of social media exist for general health care professionals, no specific recommendations exist regarding the creation of social media content for mental health professionals [38]. Such guidelines are needed considering many of the observed correlations with disinformation. For instance, opinion-based videos, which were the most prevalent type of video identified in this study, should be handled carefully. Considering their association with disinformation, mental health professionals should refrain from conveying solely opinion-based content, as it may be misinterpreted by viewers. The lack of reference-based content observed in this study also aligns with observations made in the literature regarding general health social media content. The following recommendations are based on the content analyzed in the context of this study. Mental health professionals who wish to limit disinformation and its potential adverse effects may consider them when creating social media content.

Theoretical and Practical Contributions

This study offers both theoretical insight and practical guidance in an area that has received limited empirical attention. On a theoretical level, it contributes to the understanding of how disinformation circulates in the context of mental health, a topic that presents particular challenges due to stigma, emotional sensitivity, and the complexity of psychiatric diagnoses. By examining a large, multilingual sample of social media content and applying a machine learning approach to identify patterns associated with disinformation, the study provides empirical support for previously assumed associations such as the influence of audience targeting and professional credibility. This adds to the current literature by grounding these assumptions in observational data rather than anecdotal or speculative claims. From a practical standpoint, a set of evidence-informed recommendations for mental health professionals who use social media to engage with the public is found in the following subsection. These guidelines are based directly on observed patterns in the data and aim to help clinicians communicate more clearly, avoid common pitfalls, and reduce the spread of misleading or inaccurate information. By doing so, the study not only highlights gaps in current practices but also offers a starting point for improving the quality of mental health content on the web.

Recommendations to Mental Health Professionals Creating Social Media Content

Target a Specific Audience

It is recommended to state the intended audience at the beginning of the video to ensure that its content can be interpreted correctly by the viewers [39,40]. Both the video and the material it conveys may be customized to meet the unique requirements, interests, and preferences of the specific audience. This increases the audience’s relevance and engagement with your communications, which raises the possibility of a favorable response while limiting interpretation biases.

Use and Cite Relevant Sources

As observed in the correlation analysis, fact-based resources and an intent to inform were less associated with disinformation. Videos will be more credible if reliable sources are properly credited [41]. Doing so not only demonstrates that the presented material is supported by scientific studies and validated by professionals but also helps build trust with the audience and permits fact-checking by interested viewers. It is also a way to provide the audience with a means to understand potential biases in the information presented.

State Your Credentials

Given the heterogeneity in the demographics of social media content creators and that many professional titles had a correlation with disinformation in this study, it is important to provide users with your credentials [42]. This can include the country from which the information is presented, your professional title, and a summary of your experience. This is important to take into consideration, since several guidelines for various topics in mental health may differ depending on the country or the jurisdiction in which they were developed. This provides helpful information to the viewers to better understand the context in which the information is provided.

State Your Intent and Use Fact-Based Content

Explicitly stating the intent of the video and relying on fact-based content rather than opinion are less associated with disinformation [43,44]. To avoid viewer confusion or misinterpretation, opinion-based content should be identified as such so that viewers can critically appraise the information conveyed.

Avoid Oversimplifying Complex Topics

Topics such as autism, personality disorders, and psychotic disorders were correlated with disinformation, and it can be hypothesized that this is because they are complex topics. Generalization of these subjects should be avoided to limit misinterpretation of the information transmitted to the viewers [45,46]. Breaking down complex topics into smaller units could be a way to better convey the information.

As an example, to support the implementation of these recommendations, mental health professionals can leverage specific tools and strategies already available on social media platforms. Creators could use captioning tools or video intros to clearly state their target audience and credentials within the first few seconds of a video. Platforms such as TikTok also allow creators to include link-in-bio tools or descriptions where references and source material can be cited. Institutions or professional associations can contribute by providing media literacy training or tool kits tailored for clinicians, outlining how to communicate responsibly on social media. Collaborative efforts with science communicators or digital health experts may further help translate evidence-based practices into accessible video formats. Additionally, integrating these best practices into professional guidelines or continuing education programs may normalize and incentivize responsible content creation among mental health providers.

Limitations

This study has a few limitations. As with many machine learning models, challenges arise when interpreting the meaning of the correlations identified for each variable. For example, some variables that might be expected to correlate strongly with disinformation, such as fact-based content, did not show statistically significant associations in the model. This can be attributed to the small number of fact-based videos identified, potentially introducing biases during model training. Therefore, the model should be interpreted cautiously, based only on the significant variables and with the understanding that the data were extracted solely from TikTok. Model overfitting is commonly observed in multilayered classifiers, and it is hypothesized that the significance of certain variables could vary as the dataset grows. Important steps, such as data cleaning and standardization, were undertaken to mitigate overfitting as much as possible for this dataset. The intent and authenticity of the videos were assessed based on the evaluators’ judgment, as contacting individual content creators was beyond the scope of this study. However, 2 independent evaluations of each video were performed to limit potential erroneous interpretations of its content.

Conclusions

This study aimed to explore the characteristics and prevalence of disinformation in TikTok videos related to mental health and to identify factors associated with this phenomenon. By analyzing 1000 publicly available videos across multiple countries and languages, the study offers a comprehensive and data-driven view of how disinformation manifests in social media content related to psychiatric topics. Important patterns were identified, including topics and content types most vulnerable to disinformation, and video characteristics (such as lack of references or unspecified professional titles) that may contribute to the spread of misleading information.

Beyond describing these patterns, the study also contributes theoretically by applying an observational analytic framework and machine learning model to a largely unexamined area: mental health disinformation in user-generated content. This methodological approach allowed for the identification of a few weak yet statistically significant predictors of disinformation, advancing empirical understanding of how various factors interact in web-based health communication. These findings respond directly to gaps in the existing literature, where disinformation in mental health contexts has often been discussed anecdotally or in isolated case studies, without systematic or scalable analysis.

From a practical standpoint, this study is among the first to translate such findings into specific, actionable recommendations for mental health professionals. These guidelines are grounded in empirical observation and are intended to help clinicians communicate more clearly, reduce misinterpretation, and contribute to a more informed web-based discourse. Given the growing role of social media in contributing to the public understanding of mental health, these contributions are both timely and necessary.

Although this study focused on TikTok, the insights generated could be applicable to other social media platforms that rely on short-form content and algorithm-driven visibility. As such, the findings support the development of broader communication guidelines and professional training initiatives, with the aim of strengthening the quality and reliability of mental health information on the web. Offering both methodological innovation and applied recommendations, this study brings a new and valuable perspective to the ongoing challenge of health-related disinformation in the digital age.

Acknowledgments

This study was funded indirectly by La Fondation de l’Institut universitaire en santé mentale de Montréal.

Data Availability

All data generated or analyzed during this study are included in this published article as an external link in Multimedia Appendix 1.

Authors' Contributions

AH and GE were involved in conceptualization and formal analysis and contributed to methodology and validation. Data curation was done by all the authors. AH was involved in funding acquisition, investigation, project administration, and supervision, and contributed to resources. All the authors were involved in writing the original draft and in reviewing and editing the manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Content, intent, and authenticity count percentages per topic.

DOCX File, 20 KB

Multimedia Appendix 2

Summary of correlations of all variables in relation with disinformation.

DOCX File, 23 KB

Multimedia Appendix 3

Machine learning classifier’s variables and their coefficients for each variable to predict disinformation.

DOCX File, 54 KB

Checklist 1

STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) checklist.

DOCX File, 33 KB

  1. Naslund JA, Bondre A, Torous J, Aschbrenner KA. Social media and mental health: benefits, risks, and opportunities for research and practice. J Technol Behav Sci. Sep 2020;5(3):245-257. [CrossRef] [Medline]
  2. Rocha YM, de Moura GA, Desidério GA, de Oliveira CH, Lourenço FD, de Figueiredo Nicolete LD. The impact of fake news on social media and its influence on health during the COVID-19 pandemic: a systematic review. Z Gesundh Wiss. Oct 9, 2021:1-10. [CrossRef] [Medline]
  3. Muhammed TS, Mathew SK. The disaster of misinformation: a review of research in social media. Int J Data Sci Anal. 2022;13(4):271-285. [CrossRef] [Medline]
  4. Office of the Surgeon General (OSG). Confronting Health Misinformation: The US Surgeon General’s Advisory on Building a Healthy Information Environment. US Department of Health and Human Services; 2021. URL: https://www.ncbi.nlm.nih.gov/books/NBK572166 [Accessed 2025-06-11]
  5. Sample C, Jensen MJ, Scott K, et al. Interdisciplinary lessons learned while researching fake news. Front Psychol. 2020;11:537612. [CrossRef] [Medline]
  6. Lwin MO, Sheldenkar A, Tng PL. You must be myths-taken: examining belief in falsehoods during the COVID-19 health crisis. PLoS One. 2024;19(3):e0294471. [CrossRef] [Medline]
  7. Suarez-Lledo V, Alvarez-Galvez J. Prevalence of health misinformation on social media: systematic review. J Med Internet Res. Jan 20, 2021;23(1):e17187. [CrossRef] [Medline]
  8. Ferreira Caceres MM, Sosa JP, Lawrence JA, et al. The impact of misinformation on the COVID-19 pandemic. AIMS Public Health. 2022;9(2):262-277. [CrossRef] [Medline]
  9. Jafar Z, Quick JD, Larson HJ, et al. Social media for public health: reaping the benefits, mitigating the harms. Health Promot Perspect. 2023;13(2):105-112. [CrossRef] [Medline]
  10. Jaiswal J, LoSchiavo C, Perlman DC. Disinformation, misinformation and inequality-driven mistrust in the time of COVID-19: lessons unlearned from AIDS denialism. AIDS Behav. Oct 2020;24(10):2776-2780. [CrossRef] [Medline]
  11. Neylan JH, Patel SS, Erickson TB. Strategies to counter disinformation for healthcare practitioners and policymakers. World Med Health Policy. Jun 2022;14(2):423-431. [CrossRef] [Medline]
  12. Kington RS, Arnesen S, Chou WYS, Curry SJ, Lazer D, Villarruel AM. Identifying credible sources of health information in social media: principles and attributes. NAM Perspect. 2021;2021:10. [CrossRef] [Medline]
  13. Ventola CL. Social media and health care professionals: benefits, risks, and best practices. P T. Jul 2014;39(7):491-520. [Medline]
  14. Fitzpatrick PJ. Improving health literacy using the power of digital communications to achieve better health outcomes for patients and practitioners. Front Digit Health. 2023;5:1264780. [CrossRef] [Medline]
  15. Jabbour D, Masri JE, Nawfal R, Malaeb D, Salameh P. Social media medical misinformation: impact on mental health and vaccination decision among university students. Ir J Med Sci. Feb 2023;192(1):291-301. [CrossRef] [Medline]
  16. Starvaggi I, Dierckman C, Lorenzo-Luaces L. Mental health misinformation on social media: review and future directions. Curr Opin Psychol. Apr 2024;56:101738. [CrossRef] [Medline]
  17. Sharma MK, Anand N, Vishwakarma A, et al. Mental health issues mediate social media use in rumors: Implication for media based mental health literacy. Asian J Psychiatr. Oct 2020;53:102132. [CrossRef] [Medline]
  18. Hammad MA, Alqarni TM. Psychosocial effects of social media on the Saudi society during the coronavirus disease 2019 pandemic: a cross-sectional study. PLoS One. 2021;16(3):e0248811. [CrossRef] [Medline]
  19. Jeyaraman M, Ramasubramanian S, Kumar S, et al. Multifaceted role of social media in healthcare: opportunities, challenges, and the need for quality control. Cureus. May 2023;15(5):e39111. [CrossRef]
  20. Keles B, McCrae N, Grealish A. A systematic review: the influence of social media on depression, anxiety and psychological distress in adolescents. Int J Adolesc Youth. Dec 31, 2020;25(1):79-93. [CrossRef]
  21. Yeung A, Ng E, Abi-Jaoude E. TikTok and attention-deficit/hyperactivity disorder: a cross-sectional study of social media content quality. Can J Psychiatry. Dec 2022;67(12):899-906. [CrossRef] [Medline]
  22. Zainudin J, Mohamad Ali N, Smeaton AF, Taha Ijab M. Intervention strategies for misinformation sharing on social media: a bibliometric analysis. IEEE Access. 2024;12:140359-140379. [CrossRef]
  23. Sweet PL. The sociology of gaslighting. Am Sociol Rev. Oct 2019;84(5):851-875. [CrossRef]
  24. Klein W, Li S, Wood S. A qualitative analysis of gaslighting in romantic relationships. Pers Relatsh. Dec 2023;30(4):1316-1340. [CrossRef]
  25. Vidal C, Lhaksampa T, Miller L, Platt R. Social media use and depression in adolescents: a scoping review. Int Rev Psychiatry. May 2020;32(3):235-253. [CrossRef] [Medline]
  26. McCashin D, Murphy CM. Using TikTok for public and youth mental health—a systematic review and content analysis. Clin Child Psychol Psychiatry. Jan 2023;28(1):279-306. [CrossRef] [Medline]
  27. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. May 2007;39(2):175-191. [CrossRef] [Medline]
  28. Aïmeur E, Amri S, Brassard G. Fake news, disinformation and misinformation in social media: a review. Soc Netw Anal Min. 2023;13(1):30. [CrossRef] [Medline]
  29. Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825-2830.
  30. Hicks SA, Strümke I, Thambawita V, et al. On evaluation metrics for medical applications of artificial intelligence. Sci Rep. Apr 8, 2022;12(1):5979. [CrossRef] [Medline]
  31. Adebesin F, Smuts H, Mawela T, Maramba G, Hattingh M. The role of social media in health misinformation and disinformation during the COVID-19 pandemic: bibliometric analysis. JMIR Infodemiol. Sep 20, 2023;3:e48620. [CrossRef] [Medline]
  32. Gaysynsky A, Senft Everson N, Heley K, Chou WYS. Perceptions of health misinformation on social media: cross-sectional survey study. JMIR Infodemiol. Apr 30, 2024;4:e51127. [CrossRef] [Medline]
  33. Ulvi O, Karamehic-Muratovic A, Baghbanzadeh M, Bashir A, Smith J, Haque U. Social media use and mental health: a global analysis. Epidemiologia (Basel). Jan 11, 2022;3(1):11-25. [CrossRef] [Medline]
  34. Pantic I. Online social networking and mental health. Cyberpsychol Behav Soc Netw. Oct 2014;17(10):652-657. [CrossRef] [Medline]
  35. Karim F, Oyewande AA, Abdalla LF, Chaudhry Ehsanullah R, Khan S. Social media use and its connection to mental health: a systematic review. Cureus. Jun 15, 2020;12(6):e8627. [CrossRef] [Medline]
  36. Pretorius C, McCashin D, Coyle D. Mental health professionals as influencers on TikTok and Instagram: what role do they play in mental health literacy and help-seeking? Internet Interv. Dec 2022;30:100591. [CrossRef] [Medline]
  37. Deen SR, Withers AMY, Hellerstein DJ. Mental health practitioners’ use and attitudes regarding the internet and social media. J Psychiatr Pract. 2013;19(6):454-463. [CrossRef]
  38. Hennessy CM, Smith CF, Greener S, Ferns G. Social media guidelines: a review for health professionals and faculty members. Clin Teach. Oct 2019;16(5):442-447. [CrossRef] [Medline]
  39. Giroux CM, Kim S, Sikora L, Bussières A, Thomas A. Social media as a mechanism of dissemination and knowledge translation among health professions educators: a scoping review. Adv Health Sci Educ Theory Pract. Jul 2024;29(3):993-1023. [CrossRef] [Medline]
  40. Narayanaswami P, Gronseth G, Dubinsky R, et al. The impact of social media on dissemination and implementation of clinical practice guidelines: a longitudinal observational study. J Med Internet Res. Aug 13, 2015;17(8):e193. [CrossRef] [Medline]
  41. Gurler D, Buyukceran I. Assessment of the medical reliability of videos on social media: detailed analysis of the quality and usability of four social media platforms (Facebook, Instagram, Twitter, and YouTube). Healthcare (Basel). Sep 22, 2022;10(10):1836. [CrossRef] [Medline]
  42. von Muhlen M, Ohno-Machado L. Reviewing social media use by clinicians: Table 1. J Am Med Inform Assoc. Sep 2012;19(5):777-781. [CrossRef]
  43. Mittal T, Chowdhury S, Guhan P, Chelluri S, Manocha D. Towards determining perceived audience intent for multimodal social media posts using the theory of reasoned action. Sci Rep. May 8, 2024;14(1):10606. [CrossRef] [Medline]
  44. Singh T, Olivares S, Cohen T, et al. Pragmatics to reveal intent in social media peer interactions: mixed methods study. J Med Internet Res. Nov 17, 2021;23(11):e32167. [CrossRef] [Medline]
  45. Stuart H. Reducing the stigma of mental illness. Glob Ment Health (Camb). 2016;3:e17. [CrossRef] [Medline]
  46. Srivastava K, Chaudhury S, Bhat PS, Mujawar S. Media and mental health. Ind Psychiatry J. 2018;27(1):1-5. [CrossRef] [Medline]


ADHD: attention-deficit/hyperactivity disorder
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology


Edited by Amaryllis Mavragani; submitted 11.07.24; peer-reviewed by Ann-Christin Haag, Cristiane Melchior, Gemma Sharp, Sandeepa Kaur; final revised version received 21.04.25; accepted 29.04.25; published 18.06.25.

Copyright

© Alexandre Hudon, Keith Perry, Anne-Sophie Plate, Alexis Doucet, Laurence Ducharme, Orielle Djona, Constanza Testart Aguirre, Gabrielle Evoy. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 18.6.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.