Search Articles


Search Results (1 to 9 of 9 Results)



Noninvasive Oral Hyperspectral Imaging–Driven Digital Diagnosis of Heart Failure With Preserved Ejection Fraction: Model Development and Validation Study

We used SHAP to explain the best-performing machine learning algorithm [29]. SHAP is a method for interpreting the output of a machine learning model: it assigns each input feature a weight based on its Shapley value, and we used it to quantify the contribution of each feature to the predicted values [30]. SHAP values also allow visual inspection of how individual features affect the model's predictions.
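As a rough illustration of this workflow, the sketch below fits a stand-in model and verifies SHAP's additivity property: the base value plus the per-feature contributions reconstructs the model's output. The gradient-boosting classifier and the scikit-learn demo data are hypothetical placeholders, not the paper's actual setup.

```python
# Minimal SHAP sketch (illustrative model and data, not the paper's setup).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Additivity: base value + per-feature contributions = model output
# (log-odds for this classifier), which is what lets SHAP quantify how
# much each feature moved an individual prediction.
i = 0
print(explainer.expected_value + shap_values[i].sum())
print(model.decision_function(X.iloc[[i]])[0])  # should match closely
```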

Xiaomeng Yang, Zeyan Li, Lei Lei, Xiaoyu Shi, Dingming Zhang, Fei Zhou, Wenjing Li, Tianyou Xu, Xinyu Liu, Songyun Wang, Quan Yuan, Jian Yang, Xinyu Wang, Yanfei Zhong, Lilei Yu

J Med Internet Res 2025;27:e67256

Improving Risk Prediction of Methicillin-Resistant Staphylococcus aureus Using Machine Learning Methods With Network Features: Retrospective Development Study

We also discuss the Shapley Additive Explanations (SHAP) technique for understanding feature importance in each model. SHAP [20] is a visual feature-attribution process that has many applications in explainable artificial intelligence. It uses a game-theoretic methodology to measure the influence of each feature on the target variable of a machine learning model. Visual representations such as the one in Figure 2, referred to as a summary plot, are used to show the importance of features.
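For readers unfamiliar with this kind of figure, here is a minimal sketch of producing the summary plot the excerpt describes (its Figure 2); the model and demo dataset are placeholders, not this study's MRSA data.

```python
# Sketch of a SHAP summary plot (stand-in model and data).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)

# Each dot is one sample for one feature: its horizontal position is the
# feature's SHAP value (impact on the prediction), and its color encodes
# the feature's raw value. Features are sorted by overall importance.
shap.summary_plot(shap_values, X)
```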

Methun Kamruzzaman, Jack Heavey, Alexander Song, Matthew Bielskas, Parantapa Bhattacharya, Gregory Madden, Eili Klein, Xinwei Deng, Anil Vullikanti

JMIR AI 2024;3:e48067

Machine Learning for Predicting Risk and Prognosis of Acute Kidney Disease in Critically Ill Elderly Patients During Hospitalization: Internet-Based and Interpretable Model Study

To better explain the clinical significance of certain features, this study quantified feature importance as SHAP values. As shown in Figure 3A, variables were ranked by their contribution to AKD risk prediction, with creatinine on day 3, sepsis, delta BUN, DBP, heart rate, delta creatinine, creatinine on day 1, respiratory rate, pH, and diabetes as the top 10 predictors of developing AKD during hospitalization in elderly patients.
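The ranking behind a figure like this is typically the mean absolute SHAP value per feature; a hedged sketch follows, using placeholder data rather than the study's ICU cohort.

```python
# Rank features by mean |SHAP| (placeholder model and data).
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

shap_values = shap.TreeExplainer(model).shap_values(X)

# Global importance = average magnitude of a feature's per-sample impact;
# the top entries correspond to a "top 10 predictors" list like the paper's.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False).head(10))
```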

Mingxia Li, Shuzhe Han, Fang Liang, Chenghuan Hu, Buyao Zhang, Qinlan Hou, Shuangping Zhao

J Med Internet Res 2024;26:e51354

Development and Validation of a Robust and Interpretable Early Triaging Support System for Patients Hospitalized With COVID-19: Predictive Algorithm Modeling and Interpretation Study

…under receiver operating characteristic curve; DCA: decision curve analysis; DDRTree: discriminative dimensionality reduction by learning a tree; DNN: deep neural network; GBM: gradient boosting machine; MLR: multivariable logistic regression; RF: random forest; RF-MDIFI: random forest–based mean decrease in Gini impurity feature importance method; RF-PFI: random forest–based permutation feature importance method; RF-Shapley: random forest–based Shapley method; ROC: receiver operating characteristic curve; SHAP: Shapley additive explanations

Sangwon Baek, Yeon joo Jeong, Yun-Hyeon Kim, Jin Young Kim, Jin Hwan Kim, Eun Young Kim, Jae-Kwang Lim, Jungok Kim, Zero Kim, Kyunga Kim, Myung Jin Chung

J Med Internet Res 2024;26:e52134

Predicting Colorectal Cancer Survival Using Time-to-Event Machine Learning: Retrospective Cohort Study

Given the high incidence of CRC and the lack of a reliable study on modeling time-to-event survival data of CRC using ML-based approaches, this study seeks to contribute to the existing body of knowledge by evaluating the performance of time-to-event ML models in predicting CRC-specific survival and by combining ML models with the SHapley Additive exPlanations (SHAP) method [25] to provide transparent predictions for clinical application.

Xulin Yang, Hang Qiu, Liya Wang, Xiaodong Wang

J Med Internet Res 2023;25:e44417

Prediction of Chronic Stress and Protective Factors in Adults: Development of an Interpretable Prediction Model Based on XGBoost and SHAP Using National Cross-sectional DEGS1 Data

To overcome this problem, Lundberg [30,31] proposed the SHAP approach for interpreting predictions of complex models created by different techniques, for example, NGBoost, CatBoost, XGBoost, LightGBM, and scikit-learn tree models. SHAP builds on the Shapley value, introduced by Shapley in 1953, and is grounded in game theory [32]. It explains the prediction for a specific input (X) by calculating the impact of each feature on that prediction.
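A minimal sketch of the XGBoost-plus-SHAP pairing the excerpt (and the paper's title) describes; the data and hyperparameters are placeholders, not the DEGS1 setup.

```python
# XGBoost + SHAP TreeExplainer sketch (placeholder data and settings).
import shap
import xgboost as xgb
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgb.XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

# TreeExplainer implements Lundberg's fast, exact Shapley-value algorithm
# for tree ensembles (XGBoost, LightGBM, CatBoost, scikit-learn trees).
shap_values = shap.TreeExplainer(model).shap_values(X)

# Impact of each feature on the prediction for the first input:
print(dict(zip(X.columns, shap_values[0].round(3))))
```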

Arezoo Bozorgmehr, Birgitta Weltermann

JMIR AI 2023;2:e41868

Development and Assessment of Assisted Diagnosis Models Using Machine Learning for Identifying Elderly Patients With Malnutrition: Cohort Study

In this study, we used Shapley additive explanation (SHAP) values to interpret feature contributions and assess the clinical significance of predictive models [27,28]. The SHAP value measures the marginal contribution of each feature across different feature combinations (Equation 1), where ϕ0 is the average predicted value over all samples, known as the base value, ϕj is the SHAP value of feature j, and M is the total number of features.
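The additive form that Equation 1 presumably takes (the equation itself did not survive extraction) follows from the definitions given; a reconstruction under that assumption:

```latex
% Reconstructed SHAP additive decomposition (assumed form of Equation 1):
% the prediction for one sample equals the base value \phi_0 plus the sum
% of the per-feature SHAP values \phi_j over all M features.
\hat{f}(x) = \phi_0 + \sum_{j=1}^{M} \phi_j
```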

Xue Wang, Fengchun Yang, Mingwei Zhu, Hongyuan Cui, Junmin Wei, Jiao Li, Wei Chen

J Med Internet Res 2023;25:e42435

Predicting Mortality in Intensive Care Unit Patients With Heart Failure Using an Interpretable Machine Learning Model: Retrospective Cohort Study

The interpretation of the prediction model is performed with SHAP, a unified approach that precisely calculates the contribution and influence of each feature on the final predictions [26]. SHAP values show how much each predictor contributes, positively or negatively, to the target variable. In addition, each observation in the data set can be interpreted through its particular set of SHAP values.
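A sketch of that per-observation view: a waterfall plot for a single row showing which features pushed the prediction up or down. The model and data are stand-ins, not the study's heart failure cohort.

```python
# Per-observation SHAP explanation (stand-in model and data).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)
sv = explainer(X)  # shap.Explanation: one row of SHAP values per sample

# Waterfall plot for observation 0: starts at the base value; bars show
# each feature's positive or negative contribution to this prediction.
shap.plots.waterfall(sv[0])
```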

Jili Li, Siru Liu, Yundi Hu, Lingfeng Zhu, Yujia Mao, Jialin Liu

J Med Internet Res 2022;24(8):e38082

Prognostic Assessment of COVID-19 in the Intensive Care Unit by Machine Learning Methods: Model Development and Validation

Moreover, to improve the interpretability of the black-box model, we also used SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) to explain the prediction model; therefore, the model not only predicts prognostic outcomes but also gives a reasonable explanation for each prediction, which can greatly enhance users' trust in the model.
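A rough sketch of applying both explainers to the same prediction; the model, data, and parameters are placeholders, not this study's ICU setup.

```python
# SHAP + LIME on one prediction (placeholder model and data).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP: game-theoretic attributions for the first instance.
shap_values = shap.TreeExplainer(model).shap_values(X)
print(dict(zip(X.columns, shap_values[0].round(3))))

# LIME: fits an interpretable local surrogate around the same instance.
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=list(X.columns), mode="classification"
)
exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # top local feature effects with signed weights
```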

Pan Pan, Yichao Li, Yongjiu Xiao, Bingchao Han, Longxiang Su, Mingliang Su, Yansheng Li, Siqi Zhang, Dapeng Jiang, Xia Chen, Fuquan Zhou, Ling Ma, Pengtao Bao, Lixin Xie

J Med Internet Res 2020;22(11):e23128