@Article{info:doi/10.2196/68560,
  author="Yun, Hye Sun and Bickmore, Timothy",
  title="Online Health Information--Seeking in the Era of Large Language Models: Cross-Sectional Web-Based Survey Study",
  journal="J Med Internet Res",
  year="2025",
  month="Mar",
  day="31",
  volume="27",
  pages="e68560",
  keywords="online health information--seeking; large language models; eHealth; internet; consumer health information",
  abstract="Background: As large language model (LLM)--based chatbots such as ChatGPT (OpenAI) grow in popularity, it is essential to understand their role in delivering online health information compared to other resources. These chatbots often generate inaccurate content, posing potential safety risks. This motivates the need to examine how users perceive and act on health information provided by LLM-based chatbots.
  Objective: This study investigates the patterns, perceptions, and actions of users seeking health information online, including from LLM-based chatbots. The relationships between online health information--seeking behaviors and important sociodemographic characteristics are examined as well.
  Methods: A web-based survey of crowd workers was conducted via Prolific. The questionnaire covered sociodemographic information, trust in health care providers, eHealth literacy, artificial intelligence (AI) attitudes, chronic health condition status, online health information source types, perceptions, and actions, such as cross-checking or adherence. Quantitative and qualitative analyses were applied.
  Results: Most participants consulted search engines (291/297, 98{\%}) and health-related websites (203/297, 68.4{\%}) for their health information, while 21.2{\%} (63/297) used LLM-based chatbots, with ChatGPT and Microsoft Copilot being the most popular. Most participants (268/297, 90.2{\%}) sought information on health conditions, with fewer seeking advice on medication (179/297, 60.3{\%}), treatments (137/297, 46.1{\%}), and self-diagnosis (62/297, 23.2{\%}). Perceived information quality and trust varied little across source types. The preferred method for validating information from the internet was consulting health care professionals (40/132, 30.3{\%}), while only a very small percentage of participants (5/214, 2.3{\%}) consulted AI tools to cross-check information from search engines and health-related websites. For information obtained from LLM-based chatbots, 19.4{\%} (12/63) of participants cross-checked the information, while 48.4{\%} (30/63) followed the advice. Both of these rates were lower than those for information from search engines, health-related websites, forums, or social media. Furthermore, use of LLM-based chatbots for health information was negatively correlated with age ($\rho$=--0.16, P=.006). In contrast, attitudes surrounding AI for medicine had significant positive correlations with the number of source types consulted for health advice ($\rho$=0.14, P=.01), use of LLM-based chatbots for health information ($\rho$=0.31, P<.001), and number of health topics searched ($\rho$=0.19, P<.001).
  Conclusions: Although traditional online sources remain dominant, LLM-based chatbots are emerging as a resource for health information for some users, specifically those who are younger and have higher trust in AI. The perceived quality and trustworthiness of health information varied little across source types. However, adherence to health information from LLM-based chatbots appeared more cautious than adherence to information from search engines or health-related websites.
  As LLMs continue to evolve, enhancing their accuracy and transparency will be essential to mitigating potential risks by supporting responsible information-seeking while maximizing the potential of AI in health contexts.",
  issn="1438-8871",
  doi="10.2196/68560",
  url="https://www.jmir.org/2025/1/e68560"
}