Search Articles

Search Results (1 to 10 of 15 Results)

A Responsible Framework for Assessing, Selecting, and Explaining Machine Learning Models in Cardiovascular Disease Outcomes Among People With Type 2 Diabetes: Methodology and Validation Study

Building trustworthy machine learning models for clinical practice requires consideration of interpretability, explainability, and fairness. Interpretability—which refers to how easily a human can comprehend the mechanism by which a model makes predictions—is important in health care settings because of the need for clinicians and patients to understand and trust the artificial intelligence (AI)–involved decisions that directly impact patient care [1,2].

Yang Yang, Che-Yi Liao, Esmaeil Keyvanshokooh, Hui Shao, Mary Beth Weber, Francisco J Pasquel, Gian-Gabriel P Garcia

JMIR Med Inform 2025;13:e66200

Knowledge Graph–Enhanced Deep Learning Model (H-SYSTEM) for Hypertensive Intracerebral Hemorrhage: Model Development and Validation

We also enhanced the deep learning models by constructing a specialized knowledge graph, which increases the explainability of the decision-making process. All study procedures were approved by the ethical committees of all the medical centers mentioned in this study. The patients involved in the study all signed informed consent forms before admission. All data involved in this study have been anonymized or deidentified. The case data involved in this study are part of a retrospective study.

Yulong Xia, Jie Li, Bo Deng, Qilin Huang, Fenglin Cai, Yanfeng Xie, Xiaochuan Sun, Quanhong Shi, Wei Dan, Yan Zhan, Li Jiang

J Med Internet Res 2025;27:e66055

An Interpretable Model With Probabilistic Integrated Scoring for Mental Health Treatment Prediction: Design Study

Interpretability refers to how a model arrives at a decision, and explainability refers to why the decision is reached [16]. While explainability in AI has been a topic of interest since the 1980s, the recent acceleration in the widespread use of AI has resulted in guidelines on the responsible use of AI. Responsible AI requires a robust model that conveys the confidence and degree of uncertainty in prediction [16-18].

Anthony Kelly, Esben Kjems Jensen, Eoin Martino Grua, Kim Mathiasen, Pepijn Van de Ven

JMIR Med Inform 2025;13:e64617

Explainable AI for Intraoperative Motor-Evoked Potential Muscle Classification in Neurosurgery: Bicentric Retrospective Study

To address interpretability, we applied two complementary explainability techniques: SHAP and Grad-CAM, each offering unique insights into model behavior. SHAP provides a feature attribution approach, assigning precise numerical contributions to each input feature—such as latency or main frequency—to quantify its role in the prediction process.
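The feature-attribution idea behind SHAP can be sketched with an exact Shapley-value computation in plain Python. This is a minimal, self-contained illustration of the additive attribution principle; the toy linear model and the feature names ("latency", "main_frequency", "amplitude") are hypothetical placeholders echoing the abstract, not the study's actual classifier or inputs.

```python
# Exact Shapley-value attribution for a tiny toy model: each feature's
# contribution is its average marginal effect over all feature subsets.
from itertools import combinations
from math import factorial

features = {"latency": 2.0, "main_frequency": 1.5, "amplitude": 0.5}
baseline = {"latency": 0.0, "main_frequency": 0.0, "amplitude": 0.0}

def model(x):
    # Hypothetical linear "model": a weighted sum of the inputs.
    return 3.0 * x["latency"] + 2.0 * x["main_frequency"] - 1.0 * x["amplitude"]

def shapley_values(model, x, baseline):
    names = list(x)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Features in `subset` (and optionally f) take their observed
                # values; all remaining features stay at the baseline.
                with_f = {g: (x[g] if g in subset or g == f else baseline[g]) for g in names}
                without_f = {g: (x[g] if g in subset else baseline[g]) for g in names}
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

phi = shapley_values(model, features, baseline)
# By SHAP's additivity property, the attributions sum to
# model(x) - model(baseline).
print(phi)
```

For a linear model the Shapley value of each feature reduces to its weight times its deviation from the baseline, which is why SHAP outputs are easy to sanity-check on simple models before trusting them on complex ones.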

Qendresa Parduzi, Jonathan Wermelinger, Simon Domingo Koller, Murat Sariyar, Ulf Schneider, Andreas Raabe, Kathleen Seidel

J Med Internet Res 2025;27:e63937

Machine Learning–Based Explainable Automated Nonlinear Computation Scoring System for Health Score and an Application for Prediction of Perioperative Stroke: Retrospective Study

This system addresses nonlinearity assumptions and enhances explainability. In addition, we applied the EACH score to predict perioperative stroke to assess its performance in real-world clinical practice and examined its performance compared with traditional scores and other ML-based scoring systems. Perioperative stroke significantly impacts postoperative morbidity and mortality.

Mi-Young Oh, Hee-Soo Kim, Young Mi Jung, Hyung-Chul Lee, Seung-Bo Lee, Seung Mi Lee

J Med Internet Res 2025;27:e58021

Exploring the Credibility of Large Language Models for Mental Health Support: Protocol for a Scoping Review

LLMs have been shown to excel in tasks such as text generation and summarization, but their application in sensitive fields such as mental health requires careful consideration of factors such as ethical concerns, privacy, and model explainability. As generative AI becomes increasingly integrated into various tools and applications, its presence in our everyday lives grows.

Dipak Gautam, Philipp Kellmeyer

JMIR Res Protoc 2025;14:e62865

A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study

A crucial determinant for successfully implementing the human-AI collaboration approach is decision transparency [32,33], which is often referred to as explainability. Explainability is the concept that an ML model’s prediction can be explained in a way that human operators can comprehend and reconstruct the model’s reasoning [33].

Hongbo Chen, Eldan Cohen, Dulaney Wilson, Myrtede Alfred

JMIR Hum Factors 2024;11:e53378

The Role of Artificial Intelligence Model Documentation in Translational Science: Scoping Review

Guidance for ethics and explainability is expected to meet the needs of a broad range of stakeholders, but what constitutes AI ethics and explainability best practices, who determines them, how principles are applied and regulated in practice, and how they are documented still need to be defined [13,21,22,24,25,42].

Tracey A Brereton, Momin M Malik, Mark Lifson, Jason D Greenwood, Kevin J Peterson, Shauna M Overgaard

Interact J Med Res 2023;12:e45903

Personalized Risk Analysis to Improve the Psychological Resilience of Women Undergoing Treatment for Breast Cancer: Development of a Machine Learning–Driven Clinical Decision Support Tool

Nevertheless, explainability; transparency; and, most importantly, accountability and responsibility do not receive due consideration, in part because it remains difficult to transform these necessary concepts (eg, trust) into actual computational tools or metrics. Understanding why and how a particular model produced the observed predictions is of paramount importance, especially in health care applications.

Georgios C Manikis, Nicholas J Simos, Konstantina Kourou, Haridimos Kondylakis, Paula Poikonen-Saksela, Ketti Mazzocco, Ruth Pat-Horenczyk, Berta Sousa, Albino J Oliveira-Maia, Johanna Mattson, Ilan Roziner, Chiara Marzorati, Kostas Marias, Mikko Nuutinen, Evangelos Karademas, Dimitrios Fotiadis

J Med Internet Res 2023;25:e43838

Prediction of Chronic Stress and Protective Factors in Adults: Development of an Interpretable Prediction Model Based on XGBoost and SHAP Using National Cross-sectional DEGS1 Data

The SHAP tool (version 0.40.0) was used to assess the explainability of the model, that is, to identify factors protecting against chronic stress. In addition to the performance evaluation, this study maximizes the interpretability of the underlying models. It focuses particularly on the explainability of the model, which can serve as an indispensable tool in the era of precision medicine.
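The step of identifying protective factors from SHAP output can be sketched in plain Python: rank features by mean absolute SHAP value (as in a SHAP summary plot) and read the sign of the attributions for direction. The per-sample attributions and feature names below are purely illustrative, not the study's DEGS1 results.

```python
# Hypothetical per-sample SHAP attributions for three predictors of
# chronic stress (illustrative values only).
attributions = [
    {"social_support": -0.8, "income": -0.3, "age": 0.1},
    {"social_support": -0.6, "income": -0.1, "age": 0.2},
    {"social_support": -0.7, "income": -0.4, "age": 0.0},
]

def rank_by_mean_abs_shap(rows):
    # Global importance: mean |SHAP value| per feature, the quantity
    # shown in SHAP summary plots; consistently negative attributions
    # mark a feature as protective (pushing the risk prediction down).
    feats = rows[0].keys()
    scores = {f: sum(abs(r[f]) for r in rows) / len(rows) for f in feats}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_by_mean_abs_shap(attributions))
# In this toy data, social_support ranks first and its attributions are
# negative, so it would be read as the strongest protective factor.
```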

Arezoo Bozorgmehr, Birgitta Weltermann

JMIR AI 2023;2:e41868