%0 Journal Article
%@ 1438-8871
%I JMIR Publications
%V 27
%N
%P e66098
%T Use of Retrieval-Augmented Large Language Model for COVID-19 Fact-Checking: Development and Usability Study
%A Li,Hai
%A Huang,Jingyi
%A Ji,Mengmeng
%A Yang,Yuyi
%A An,Ruopeng
%+ School of Economics and Management, Shanghai University of Sport, 650 Hengren Road, Yangpu District, Shanghai, 200000, China, 86 13816490872, lihai1107@hotmail.com
%K large language model
%K misinformation
%K disinformation
%K fact-checking
%K COVID-19
%K artificial intelligence
%K ChatGPT
%K natural language processing
%K machine learning
%K SARS-CoV-2
%K coronavirus
%K respiratory
%K infectious
%K pulmonary
%K pandemic
%K infodemic
%K retrieval-augmented generation
%K accuracy
%D 2025
%7 30.4.2025
%9 Original Paper
%J J Med Internet Res
%G English
%X Background: The COVID-19 pandemic has been accompanied by an “infodemic,” where the rapid spread of misinformation has exacerbated public health challenges. Traditional fact-checking methods, though effective, are time-consuming and resource-intensive, limiting their ability to combat misinformation at scale. Large language models (LLMs) such as GPT-4 offer a more scalable solution, but their susceptibility to generating hallucinations—plausible yet incorrect information—compromises their reliability. Objective: This study aims to enhance the accuracy and reliability of COVID-19 fact-checking by integrating a retrieval-augmented generation (RAG) system with LLMs, specifically addressing the limitations of hallucination and context inaccuracy inherent in stand-alone LLMs. Methods: We constructed a context dataset comprising approximately 130,000 peer-reviewed papers related to COVID-19 from PubMed and Scopus. This dataset was integrated with GPT-4 to develop multiple RAG-enhanced models: the naïve RAG, Lord of the Retrievers (LOTR)–RAG, corrective RAG (CRAG), and self-RAG (SRAG). The RAG systems were designed to retrieve relevant external information, which was then embedded and indexed in a vector store for similarity searches. One real-world dataset and one synthesized dataset, each containing 500 claims, were used to evaluate the performance of these models. Each model’s accuracy, F1-score, precision, and sensitivity were compared to assess their effectiveness in reducing hallucination and improving fact-checking accuracy. Results: The baseline GPT-4 model achieved an accuracy of 0.856 on the real-world dataset. The naïve RAG model improved this to 0.946, while the LOTR-RAG model further increased accuracy to 0.951. The CRAG and SRAG models outperformed all others, achieving accuracies of 0.972 and 0.973, respectively. The baseline GPT-4 model reached an accuracy of 0.960 on the synthesized dataset. The naïve RAG model increased this to 0.972, and the LOTR-RAG, CRAG, and SRAG models achieved an accuracy of 0.978. These findings demonstrate that the RAG-enhanced models consistently maintained high accuracy levels, closely mirroring ground-truth labels and significantly reducing hallucinations. The CRAG and SRAG models also provided more detailed and contextually accurate explanations, further establishing the superiority of agentic RAG frameworks in delivering reliable and precise fact-checking outputs across diverse datasets. Conclusions: The integration of RAG systems with LLMs substantially improves the accuracy and contextual relevance of automated fact-checking.
By reducing hallucinations and enhancing transparency through the citation of retrieved sources, this approach holds significant promise for rapid, reliable information verification to combat misinformation during public health crises.
%R 10.2196/66098
%U https://www.jmir.org/2025/1/e66098
%U https://doi.org/10.2196/66098
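The Methods summarized in the abstract describe a pipeline in which external passages are embedded, indexed in a vector store, retrieved by similarity search, and supplied to the LLM before it issues a verdict on a claim. The following is a minimal Python sketch of such a naïve-RAG fact-checking loop, not the authors' implementation: the embed and call_llm functions and the toy passage list are hypothetical placeholders standing in for a real embedding model, GPT-4, and the 130,000-paper context dataset.

```python
# Minimal naive-RAG fact-checking sketch (illustrative only).
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Placeholder embedding: a bag-of-words term-frequency vector.
    A real system would call a sentence-embedding model instead."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Toy "vector store": each context passage is indexed by its embedding.
passages = [
    "Randomized trials show COVID-19 vaccines reduce severe disease and death.",
    "SARS-CoV-2 spreads primarily through respiratory droplets and aerosols.",
    "No peer-reviewed evidence supports ivermectin as an effective COVID-19 cure.",
]
index = [(embed(p), p) for p in passages]


def retrieve(claim: str, k: int = 2) -> list[str]:
    """Return the k indexed passages most similar to the claim."""
    query = embed(claim)
    scored = sorted(index, key=lambda item: cosine(query, item[0]), reverse=True)
    return [p for _, p in scored[:k]]


def build_prompt(claim: str, context: list[str]) -> str:
    """Prompt asking the model to verify the claim against retrieved evidence only."""
    evidence = "\n".join(f"- {p}" for p in context)
    return (
        "Using only the evidence below, label the claim as TRUE or FALSE "
        "and cite the supporting passage.\n"
        f"Evidence:\n{evidence}\n"
        f"Claim: {claim}\nAnswer:"
    )


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-4 call; replace with a real API client."""
    return "FALSE (no peer-reviewed evidence supports this claim)."


if __name__ == "__main__":
    claim = "Ivermectin is a proven cure for COVID-19."
    print(call_llm(build_prompt(claim, retrieve(claim))))
```

The corrective (CRAG) and self-reflective (SRAG) variants named in the abstract extend this loop with an additional step in which the model grades the retrieved evidence and re-retrieves or abstains when the evidence is judged insufficient; that agentic check is omitted from this sketch.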