TY  - JOUR
AU  - Walker, Harriet Louise
AU  - Ghani, Shahi
AU  - Kuemmerli, Christoph
AU  - Nebiker, Christian Andreas
AU  - Müller, Beat Peter
AU  - Raptis, Dimitri Aristotle
AU  - Staubli, Sebastian Manuel
PY  - 2023
DA  - 2023/6/30
TI  - Reliability of Medical Information Provided by ChatGPT: Assessment Against Clinical Guidelines and Patient Information Quality Instrument
JO  - J Med Internet Res
SP  - e47479
VL  - 25
KW  - artificial intelligence
KW  - internet information
KW  - patient information
KW  - ChatGPT
KW  - EQIP tool
KW  - chatbot
KW  - chatbots
KW  - conversational agent
KW  - conversational agents
KW  - internal medicine
KW  - pancreas
KW  - liver
KW  - hepatic
KW  - biliary
KW  - gall
KW  - bile
KW  - gallstone
KW  - pancreatitis
KW  - pancreatic
KW  - medical information
AB  - Background: ChatGPT-4 is the latest release of a novel artificial intelligence (AI) chatbot able to answer freely formulated and complex questions. In the near future, ChatGPT could become the new standard for health care professionals and patients to access medical information. However, little is known about the quality of medical information provided by the AI. Objective: We aimed to assess the reliability of medical information provided by ChatGPT. Methods: Medical information provided by ChatGPT-4 on the 5 hepato-pancreatico-biliary (HPB) conditions with the highest global disease burden was measured with the Ensuring Quality Information for Patients (EQIP) tool. The EQIP tool is used to measure the quality of internet-available information and consists of 36 items that are divided into 3 subsections. In addition, 5 guideline recommendations per analyzed condition were rephrased as questions and input to ChatGPT, and agreement between the guidelines and the AI answer was measured by 2 authors independently. All queries were repeated 3 times to measure the internal consistency of ChatGPT. Results: Five conditions were identified (gallstone disease, pancreatitis, liver cirrhosis, pancreatic cancer, and hepatocellular carcinoma). The median EQIP score across all conditions was 16 (IQR 14.5-18) for the total of 36 items. Divided by subsection, median scores for content, identification, and structure data were 10 (IQR 9.5-12.5), 1 (IQR 1-1), and 4 (IQR 4-5), respectively. Agreement between guideline recommendations and answers provided by ChatGPT was 60% (15/25). Interrater agreement as measured by the Fleiss κ was 0.78 (P<.001), indicating substantial agreement. Internal consistency of the answers provided by ChatGPT was 100%. Conclusions: ChatGPT provides medical information of comparable quality to available static internet information. Although currently of limited quality, large language models could become the future standard for patients and health care professionals to gather medical information.
SN  - 1438-8871
UR  - https://www.jmir.org/2023/1/e47479
UR  - https://doi.org/10.2196/47479
UR  - http://www.ncbi.nlm.nih.gov/pubmed/37389908
DO  - 10.2196/47479
ID  - info:doi/10.2196/47479
ER  - 