Search Articles


Search Results (1–10 of 114)



Enhancing Pulmonary Disease Prediction Using Large Language Models With Feature Summarization and Hybrid Retrieval-Augmented Generation: Multicenter Methodological Study Based on Radiology Report

These findings are further analyzed by the LLM to extract disease-specific features and rank their importance. The summarized features are then used by the LLM to generate diagnostic questions that construct a logical reasoning pathway. During the prediction phase, the workflow retrieves similar imaging reports via a hybrid RAG framework to refine the LLM’s understanding of disease patterns, ultimately generating comprehensive and precise disease predictions.

Ronghao Li, Shuai Mao, Congmin Zhu, Yingliang Yang, Chunting Tan, Li Li, Xiangdong Mu, Honglei Liu, Yuqing Yang

J Med Internet Res 2025;27:e72638

Chatbot for the Return of Positive Genetic Screening Results for Hereditary Cancer Syndromes: Prompt Engineering Project

Indeed, creating a hybrid chatbot with both rule-based and LLM components can offer a versatile and streamlined user experience: the rule-based components ensure that key information is covered, while the LLM component supports complex, open-ended queries not covered in the scripted content.

Emma Coen, Guilherme Del Fiol, Kimberly A Kaphingst, Emerson Borsato, Jackilen Shannon, Hadley Smith, Aaron Masino, Caitlin G Allen

JMIR Cancer 2025;11:e65848

Algorithmic Classification of Psychiatric Disorder–Related Spontaneous Communication Using Large Language Model Embeddings: Algorithm Development and Validation

Therefore, we hypothesize that, given the differences in patterns of speech by individuals across psychiatric disorders, spontaneous use of language will occupy diagnosis-specific subspaces in the LLM embedding space.

Ryan Allen Shewcraft, John Schwarz, Mariann Micsinai Balan

JMIR AI 2025;4:e67369

Using Large Language Models to Enhance Exercise Recommendations and Physical Activity in Clinical and Healthy Populations: Scoping Review

Transparent communication about data collection, robust validation mechanisms, and regulatory frameworks prioritizing inclusivity and fairness are crucial to maximize the potential of LLM-driven health interventions while ensuring scientifically grounded and emotionally supportive recommendations [29,31]. One critical yet underexplored aspect of LLM development for exercise recommendations (ERs) and physical activity (PA) is the lack of transparency regarding training data and fine-tuning methodologies.

Xiangxun Lai, Jiacheng Chen, Yue Lai, Shengqi Huang, Yongdong Cai, Zhifeng Sun, Xueding Wang, Kaijiang Pan, Qi Gao, Caihua Huang

JMIR Med Inform 2025;13:e59309

A Comparison of Responses from Human Therapists and Large Language Model–Based Chatbots to Assess Therapeutic Communication: Mixed Methods Study

Emerging evidence suggests that LLM-based chatbots as social companions can offer positive support and contribute to general psychological wellness [3,4]. There have been numerous studies, especially focused on the AI companion app Replika, that found positive mental health outcomes of using chatbots, such as increased confidence and improved relationships with friends [4,5].

Till Scholich, Maya Barr, Shannon Wiltsey Stirman, Shriti Raj

JMIR Ment Health 2025;12:e69709

Assessing ChatGPT’s Capability as a New Age Standardized Patient: Qualitative Study

Since its introduction in November 2022, sectors ranging from history to entertainment have rapidly adopted the LLM [10]. This advancement in AI has led to the development of virtual SP chatbots. A number of major educational material suppliers and specialized companies now offer chatbot SPs, based on LLMs capable of natural language interaction, for students to practice clinical skills.

Joseph Cross, Tarron Kayalackakom, Raymond E Robinson, Andrea Vaughans, Roopa Sebastian, Ricardo Hood, Courtney Lewis, Sumanth Devaraju, Prasanna Honnavar, Sheetal Naik, Jillwin Joseph, Nikhilesh Anand, Abdalla Mohammed, Asjah Johnson, Eliran Cohen, Teniola Adeniji, Aisling Nnenna Nnaji, Julia Elizabeth George

JMIR Med Educ 2025;11:e63353

From E-Patients to AI Patients: The Tidal Wave Empowering Patients, Redefining Clinical Relationships, and Transforming Care

Among LLM users, half reported personal learning as their goal, and 39% sought information about physical or mental health [3]. Patients burdened with life-changing or rare conditions commonly search for the resources they need to solve problems. As consumer costs of care keep rising and health care remains relentlessly hard to navigate, patients and caregivers are building skills and knowledge using LLMs across a breadth of topics.

Susan S Woods, Sarah M Greene, Laura Adams, Grace Cordovano, Matthew F Hudson

J Particip Med 2025;17:e75794

Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study

This raises questions about the underlying mechanisms that prompt an LLM to label certain statements as “more factual.” For example, one possible explanation could be that data-rich or frequently discussed topics in training sets may be perceived as more certain [18], even if this does not translate into clinical accuracy. Additionally, retrieval-augmented generation (RAG) has been proposed to ground LLM outputs in external data, which potentially mitigates hallucinations [19].

Mahmud Omar, Reem Agbareia, Benjamin S Glicksberg, Girish N Nadkarni, Eyal Klang

JMIR Med Inform 2025;13:e66917

Evaluating Generative AI in Mental Health: Systematic Review of Capabilities and Limitations

LLMs such as ChatGPT, Claude, and Bard hold great promise for mitigating this stark situation by reducing clinicians’ burden and increasing clinician efficiency through LLM-assisted clinical note writing, formulating differential diagnoses, drafting personalized treatment plans, drawing insights from patient chart data, providing on-demand coaching and companionship, and, ultimately, providing therapy [10,11].

Liying Wang, Tanmay Bhanushali, Zhuoran Huang, Jingyi Yang, Sukriti Badami, Lisa Hightow-Weidman

JMIR Ment Health 2025;12:e70014