%0 Journal Article
%@ 1438-8871
%I JMIR Publications
%V 25
%N 1
%P e51603
%T How Can the Clinical Aptitude of AI Assistants Be Assayed?
%A Thirunavukarasu,Arun James
%+ Oxford University Clinical Academic Graduate School, University of Oxford, John Radcliffe Hospital, Level 3, Oxford, OX3 9DU, United Kingdom, 44 1865 289 467, ajt205@cantab.ac.uk
%K artificial intelligence
%K AI
%K validation
%K clinical decision aid
%K artificial general intelligence
%K foundation models
%K large language models
%K LLM
%K language model
%K ChatGPT
%K chatbot
%K chatbots
%K conversational agent
%K conversational agents
%K pitfall
%K pitfalls
%K pain point
%K pain points
%K implementation
%K barrier
%K barriers
%K challenge
%K challenges
%D 2023
%7 5.12.2023
%9 Viewpoint
%J J Med Internet Res
%G English
%X Large language models (LLMs) are exhibiting remarkable performance in clinical contexts, with exemplar results ranging from expert-level attainment on medical examination questions to superior accuracy and relevance when responding to patient queries, compared with real doctors replying on social media. The deployment of LLMs in conventional health care settings is yet to be reported, and it remains an open question what evidence should be required before such deployment is warranted. Early validation studies use unvalidated surrogate variables to represent clinical aptitude, and it may be necessary to conduct prospective randomized controlled trials to justify the use of an LLM for clinical advice or assistance, as potential pitfalls and pain points cannot be exhaustively predicted. This viewpoint argues that as LLMs continue to revolutionize the field, there is an opportunity to improve the rigor of artificial intelligence (AI) research so that innovation conferring real benefits to real patients is rewarded.
%M 38051572
%R 10.2196/51603
%U https://www.jmir.org/2023/1/e51603
%U https://doi.org/10.2196/51603
%U http://www.ncbi.nlm.nih.gov/pubmed/38051572