TY  - JOUR
AU  - Herrmann-Werner, Anne
AU  - Festl-Wietek, Teresa
AU  - Holderried, Friederike
AU  - Herschbach, Lea
AU  - Griewatz, Jan
AU  - Masters, Ken
AU  - Zipfel, Stephan
AU  - Mahling, Moritz
PY  - 2024
DA  - 2024/1/23
TI  - Assessing ChatGPT’s Mastery of Bloom’s Taxonomy Using Psychosomatic Medicine Exam Questions: Mixed-Methods Study
JO  - J Med Internet Res
SP  - e52113
VL  - 26
KW  - answer
KW  - artificial intelligence
KW  - assessment
KW  - Bloom’s taxonomy
KW  - ChatGPT
KW  - classification
KW  - error
KW  - exam
KW  - examination
KW  - generative
KW  - GPT-4
KW  - Generative Pre-trained Transformer 4
KW  - language model
KW  - learning outcome
KW  - LLM
KW  - MCQ
KW  - medical education
KW  - medical exam
KW  - multiple-choice question
KW  - natural language processing
KW  - NLP
KW  - psychosomatic
KW  - question
KW  - response
KW  - taxonomy
AB  - Background: Large language models such as GPT-4 (Generative Pre-trained Transformer 4) are being increasingly used in medicine and medical education. However, these models are prone to “hallucinations” (ie, outputs that seem convincing while being factually incorrect). It is currently unknown how these errors by large language models relate to the different cognitive levels defined in Bloom’s taxonomy. Objective: This study aims to explore how GPT-4 performs in terms of Bloom’s taxonomy using psychosomatic medicine exam questions. Methods: We used a large data set of psychosomatic medicine multiple-choice questions (N=307) with real-world results derived from medical school exams. GPT-4 answered the multiple-choice questions using 2 distinct prompt versions: detailed and short. The answers were analyzed using a quantitative approach and a qualitative approach. Focusing on incorrectly answered questions, we categorized reasoning errors according to the hierarchical framework of Bloom’s taxonomy. Results: GPT-4’s performance in answering exam questions yielded a high success rate: 93% (284/307) for the detailed prompt and 91% (278/307) for the short prompt. Questions answered correctly by GPT-4 had a statistically significantly higher difficulty than questions answered incorrectly (P=.002 for the detailed prompt and P<.001 for the short prompt). Independent of the prompt, GPT-4’s lowest exam performance was 78.9% (15/19), thereby always surpassing the “pass” threshold. Our qualitative analysis of incorrect answers, based on Bloom’s taxonomy, showed that errors were primarily in the “remember” (29/68) and “understand” (23/68) cognitive levels; specific issues arose in recalling details, understanding conceptual relationships, and adhering to standardized guidelines. Conclusions: GPT-4 demonstrated a remarkable success rate when confronted with psychosomatic medicine multiple-choice exam questions, aligning with previous findings. When evaluated through Bloom’s taxonomy, our data revealed that GPT-4 occasionally ignored specific facts (remember), provided illogical reasoning (understand), or failed to apply concepts to a new situation (apply). These errors, which were confidently presented, could be attributed to inherent model biases and the tendency to generate outputs that maximize likelihood.
SN  - 1438-8871
UR  - https://www.jmir.org/2024/1/e52113
UR  - https://doi.org/10.2196/52113
UR  - http://www.ncbi.nlm.nih.gov/pubmed/38261378
DO  - 10.2196/52113
ID  - info:doi/10.2196/52113
ER  - 