TY - JOUR
AU - Guo, Eddie
AU - Gupta, Mehul
AU - Deng, Jiawen
AU - Park, Ye-Jean
AU - Paget, Michael
AU - Naugler, Christopher
PY - 2024
DA - 2024/1/12
TI - Automated Paper Screening for Clinical Reviews Using Large Language Models: Data Analysis Study
JO - J Med Internet Res
SP - e48996
VL - 26
KW - abstract screening
KW - ChatGPT
KW - classification
KW - extract
KW - extraction
KW - free text
KW - GPT
KW - GPT-4
KW - language model
KW - large language models
KW - LLM
KW - natural language processing
KW - NLP
KW - nonopioid analgesia
KW - review methodology
KW - review methods
KW - screening
KW - systematic review
KW - systematic
KW - unstructured data
AB - Background: The systematic review of clinical research papers is a labor-intensive and time-consuming process that often involves screening thousands of titles and abstracts. The accuracy and efficiency of this process are critical for the quality of the review and subsequent health care decisions. Traditional methods rely heavily on human reviewers, often requiring a significant investment of time and resources. Objective: This study aims to assess the performance of the OpenAI generative pretrained transformer (GPT) and GPT-4 application programming interfaces (APIs) in accurately and efficiently identifying relevant titles and abstracts from real-world clinical review data sets and to compare their performance against ground truth labeling by 2 independent human reviewers. Methods: We introduce a novel workflow using the ChatGPT and GPT-4 APIs for screening titles and abstracts in clinical reviews. A Python script was created to make calls to the API with the screening criteria in natural language and a corpus of title and abstract data sets filtered by a minimum of 2 human reviewers. We compared the performance of our model against human-reviewed papers across 6 review papers, screening over 24,000 titles and abstracts. Results: Our results show an accuracy of 0.91, a macro F1-score of 0.60, a sensitivity of 0.91 for excluded papers, and a sensitivity of 0.76 for included papers. The interrater variability between 2 independent human screeners was κ=0.46, and the prevalence- and bias-adjusted κ between our proposed methods and the consensus-based human decisions was κ=0.96. On a randomly selected subset of papers, the GPT models demonstrated the ability to provide reasoning for their decisions and corrected their initial decisions upon being asked to explain their reasoning for incorrect classifications. Conclusions: Large language models have the potential to streamline the clinical review process, save valuable time and effort for researchers, and contribute to the overall quality of clinical reviews. By prioritizing the workflow and acting as an aid rather than a replacement for researchers and reviewers, models such as GPT-4 can enhance efficiency and lead to more accurate and reliable conclusions in medical research.
SN - 1438-8871
UR - https://www.jmir.org/2024/1/e48996
UR - https://doi.org/10.2196/48996
UR - http://www.ncbi.nlm.nih.gov/pubmed/38214966
DO - 10.2196/48996
ID - info:doi/10.2196/48996
ER - 