Search Results (1 to 9 of 9 Results)
Results by journal: 7 in Journal of Medical Internet Research, 2 in JMIR AI.

We used SHAP to explain the best-performing machine learning algorithm [29]. SHAP is a method for interpreting the output of a machine learning model: it assigns weights to the selected indexes using the Shapley values derived from the analysis, and we used it to quantify the contribution of different features to the predicted values [30]. SHAP values allow visual identification of the impact of different features on the model's predictions.
J Med Internet Res 2025;27:e67256
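The following is a minimal sketch of the workflow described above, assuming the shap Python package and a scikit-learn tree ensemble; the dataset and model are placeholders rather than the study's actual data or best-performing algorithm.

# Hedged example: computing SHAP values for a fitted tree model.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Placeholder data and model standing in for the study's own setup.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Each row decomposes one prediction into per-feature contributions.
print(shap_values.shape)
print(dict(zip(X.columns, np.round(shap_values[0], 3))))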

We also discuss the Shapley Additive Explanations (SHAP) technique for understanding feature importance in each model.
SHAP [20] is a feature-attribution method with visual output that has many applications in explainable artificial intelligence. It uses a game-theoretic methodology to measure the influence of each feature on a machine learning model's output. Visual representations such as the one in Figure 2, referred to as a summary plot, are used to show the importance of features.
JMIR AI 2024;3:e48067
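As a hedged illustration of the summary plot described above, a short sketch using the shap package's built-in plotting; the model and data are stand-ins, not those behind Figure 2.

# Hedged example: SHAP summary plot for global feature importance.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Beeswarm-style summary plot: one dot per sample per feature,
# colored by feature value, ordered by overall importance.
shap.summary_plot(shap_values, X)

# Bar variant: mean absolute SHAP value per feature.
shap.summary_plot(shap_values, X, plot_type="bar")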

To better explain the clinical significance of certain features, this study quantified feature importance as SHAP values. As shown in Figure 3A, variables were ranked by their contribution to the risk prediction of AKD, with creatinine on day 3, sepsis, delta BUN, DBP, heart rate, delta creatinine, creatinine on day 1, respiratory rate, pH, and diabetes as the top 10 predictors of developing AKD during hospitalization in elderly patients.
J Med Internet Res 2024;26:e51354
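A ranking like the one in Figure 3A can be derived from mean absolute SHAP values; the following is a hedged sketch with a placeholder dataset rather than the study's clinical variables.

# Hedged example: rank predictors by mean absolute SHAP value.
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Placeholder cohort; the cited study used clinical variables such as
# creatinine, BUN, DBP, heart rate, respiratory rate, pH, and diabetes.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the shap version, binary classifiers may yield per-class
# outputs; keep the positive-class attribution if so.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[:, :, 1]

ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False).head(10))  # top 10 predictors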

under receiver operating characteristic curve; DCA: decision curve analysis; DDRTree: Discriminative dimensionality reduction by learning a tree; DNN: deep neural network; GBM: gradient boosting machine; MLR: multivariable logistic regression; RF: random forest; RF-MDIFI: random forest–based mean decrease in Gini Impurity feature importance method; RF-PFI: random forest–based permutation feature importance method; RF-Shapley: random forest–based Shapley method; ROC: receiver operating characteristic curve; SHAP
J Med Internet Res 2024;26:e52134

Given the high incidence of CRC and the lack of a reliable study on modeling time-to-event survival data of CRC using ML-based approaches, this study seeks to contribute to the existing body of knowledge by evaluating the performance of time-to-event ML models in predicting CRC-specific survival and by combining ML models with the SHapley Additive exPlanations (SHAP) method [25] to provide transparent predictions for clinical application.
J Med Internet Res 2023;25:e44417

To overcome this problem, Lundberg [30,31] proposes the SHAP approach for interpreting predictions of complex models created by different techniques, for example, NGBoost, CatBoost, XGBoost, LightGBM, and scikit-learn tree models. SHAP is based on the Shapley value, introduced by Shapley in 1953 in the context of game theory [32]. It explains the prediction for a specific input (X) by calculating the impact of each feature on that prediction.
JMIR AI 2023;2:e41868
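A hedged sketch of applying SHAP's TreeExplainer to one of the gradient-boosting libraries named above (XGBoost here); the data are synthetic stand-ins.

# Hedged example: SHAP with an XGBoost model (LightGBM, CatBoost, NGBoost,
# and scikit-learn tree models are handled the same way by TreeExplainer).
import shap
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # impact of each feature per input

# The contribution of each feature to the prediction for the first input.
print(shap_values[0])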

In this study, we used Shapley additive explanation (SHAP) values to interpret feature contributions and assess the clinical significance of predictive models [27,28].
The SHAP value measures the marginal contribution of each feature across different feature combinations, and each prediction decomposes additively (Equation 1): f(x) = ϕ0 + ϕ1 + ϕ2 + … + ϕM, where ϕ0 is the average predicted value of all the samples, known as the base value, ϕj is the SHAP value of feature j, and M is the total number of features.
J Med Internet Res 2023;25:e42435
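The additive decomposition in Equation 1 can be checked numerically: the base value plus a sample's SHAP values should reproduce the model's prediction. A hedged sketch with placeholder data and model:

# Hedged example: verify Equation 1, f(x) = phi_0 + sum of phi_j.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

phi_0 = explainer.expected_value          # base value (average prediction)
reconstructed = phi_0 + shap_values.sum(axis=1)
print(np.allclose(reconstructed, model.predict(X), atol=1e-4))  # True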

All selected variables contained
The interpretation of the prediction model is performed with SHAP, a unified approach that precisely calculates the contribution and influence of each feature on the final predictions [26]. SHAP values show how much each predictor contributes, positively or negatively, to the target variable. In addition, each observation in the data set can be interpreted through its own particular set of SHAP values.
J Med Internet Res 2022;24(8):e38082
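A hedged sketch of the per-observation interpretation described above: for a single record, each feature's SHAP value pushes the prediction up or down relative to the base value. The data and model are placeholders.

# Hedged example: positive and negative SHAP contributions for one record.
import pandas as pd
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

i = 0  # one observation of interest
contributions = pd.Series(shap_values[i], index=X.columns)
print(contributions.sort_values())  # negative values push the prediction
                                    # down, positive values push it up
print("base value:", explainer.expected_value)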

Moreover, to improve the interpretability of the black box model, we also used SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) to explain the prediction model; therefore, the prediction model not only predicts prognostic outcomes but also gives a reasonable explanation for each prediction, which can greatly enhance users' trust in the model.
J Med Internet Res 2020;22(11):e23128
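A hedged sketch of a LIME explanation of the kind mentioned above, assuming the lime Python package; the classifier and data are stand-ins for the study's prognostic model.

# Hedged example: a local LIME explanation for one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Fit a local surrogate model around one instance and report the
# features that drive its predicted probability.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())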