Search Articles


Search Results (1 to 3 of 3 Results)



Human-AI Teaming in Critical Care: A Comparative Analysis of Data Scientists’ and Clinicians’ Perspectives on AI Augmentation and Automation

Involving clinicians in the co-design of interpretable rather than fully transparent systems could thus help resolve the explainable AI conundrum [41]. Given the high safety risks for patients, these considerations are particularly important for “diagnostic decision-making,” “prescribing medication or treatment,” and “analyzing medical data.” Both stakeholder groups also agreed that at levels 2 and 3, both the control over and responsibility for system outcomes must reside with clinicians.

Nadine Bienefeld, Emanuela Keller, Gudela Grote

J Med Internet Res 2024;26:e50130

Explainable AI Method for Tinnitus Diagnosis via Neighbor-Augmented Knowledge Graph and Traditional Chinese Medicine: Development and Validation Study

The first experiment compared the proposed method with similar graph algorithms, and the second compared it with other common explainable ML methods. The algorithm was evaluated on accuracy, precision, sensitivity, specificity, F1-score, and area under the receiver operating characteristic curve (AUC), among other metrics. A brief illustrative sketch of these metrics follows this listing.

Ziming Yin, Zhongling Kuang, Haopeng Zhang, Yu Guo, Ting Li, Zhengkun Wu, Lihua Wang

JMIR Med Inform 2024;12:e57678
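
The excerpt above lists the standard binary-classification metrics used for evaluation. The following minimal sketch shows how such metrics can be computed with scikit-learn; the labels and scores are made-up placeholders, not data from the study.

```python
# Illustrative computation of accuracy, precision, sensitivity, specificity,
# F1-score, and AUC with scikit-learn. All values below are placeholders.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]                    # placeholder ground-truth labels
y_score = [0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1]    # placeholder predicted probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]     # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))   # recall of the positive class
print("specificity:", tn / (tn + fp))                 # recall of the negative class
print("F1-score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
```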

A Machine Learning Approach with Human-AI Collaboration for Automated Classification of Patient Safety Event Reports: Algorithm Development and Validation Study

Moreover, we enhanced the explainability of the ML classifiers using an explainable AI technique. We also investigated the classifiers’ performance under 2 conditions, differentiated by whether the explanation was valid for the predicted event type. Based on this analysis, we offer recommendations for optimizing human-AI collaboration in the context of PSE report classification. A small illustrative sketch of explaining a report classifier follows this listing.

Hongbo Chen, Eldan Cohen, Dulaney Wilson, Myrtede Alfred

JMIR Hum Factors 2024;11:e53378
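
The excerpt above describes explaining an ML classifier's predictions for patient safety event (PSE) reports and checking whether the explanation is valid for the predicted event type. The sketch below is a minimal stand-in, assuming a linear text classifier over TF-IDF features; the excerpt does not name the study's actual model or explainable AI technique, so coefficient-based attributions are used purely for illustration, and the reports and event-type labels are invented placeholders.

```python
# Minimal sketch: train a toy PSE-report classifier and surface the terms that
# push a new report toward its predicted event type, so a reviewer can judge
# whether the explanation looks valid for that type. All data are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reports = [
    "patient received wrong medication dose",
    "patient fell while transferring from bed",
    "incorrect drug administered at night shift",
    "fall in hallway, no injury reported",
]
labels = ["medication", "fall", "medication", "fall"]  # placeholder event types

vec = TfidfVectorizer()
X = vec.fit_transform(reports)
clf = LogisticRegression().fit(X, labels)

new_report = ["nurse gave wrong tablet to patient"]
x = vec.transform(new_report)
pred = clf.predict(x)[0]

# Binary LogisticRegression stores one coefficient row; positive weights favour
# clf.classes_[1], so flip the sign when the other class is predicted.
weights = clf.coef_[0]
sign = 1 if pred == clf.classes_[1] else -1
contrib = x.toarray()[0] * weights * sign
top = sorted(zip(vec.get_feature_names_out(), contrib),
             key=lambda t: t[1], reverse=True)[:5]

print("predicted event type:", pred)
for term, score in top:
    if score > 0:
        print(f"  {term}: {score:.3f}")
```

Whether the highlighted terms actually justify the predicted event type is the kind of validity judgment the excerpt's two evaluation conditions distinguish.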