Published in Vol 23, No 11 (2021): November

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/29386.
The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study

Journals

  1. Funnell E, Spadaro B, Benacek J, Martin-Key N, Metcalfe T, Olmert T, Barton-Owen G, Bahn S. Learnings from user feedback of a novel digital mental health assessment. Frontiers in Psychiatry 2022;13
  2. Papagni G, de Pagter J, Zafari S, Filzmoser M, Koeszegi S. Artificial agents’ explainability to support trust: considerations on timing and context. AI & Society 2023;38(2):947
  3. Kopka M, Schmieding M, Rieger T, Roesler E, Balzer F, Feufel M. Determinants of Laypersons’ Trust in Medical Decision Aids: Randomized Controlled Trial. JMIR Human Factors 2022;9(2):e35219
  4. Schmieding M, Kopka M, Schmidt K, Schulz-Niethammer S, Balzer F, Feufel M. Triage Accuracy of Symptom Checker Apps: 5-Year Follow-up Evaluation. Journal of Medical Internet Research 2022;24(5):e31810
  5. Stepin I, Alonso-Moral J, Catala A, Pereira-Fariña M. An empirical study on how humans appreciate automated counterfactual explanations which embrace imprecise information. Information Sciences 2022;618:379
  6. Shen Y, Xu W, Liang A, Wang X, Lu X, Lu Z, Gao C. Online health management continuance and the moderating effect of service type and age difference: A meta-analysis. Health Informatics Journal 2022;28(3)
  7. You Y, Tsai C, Li Y, Ma F, Heron C, Gui X. Beyond Self-diagnosis: How a Chatbot-based Symptom Checker Should Respond. ACM Transactions on Computer-Human Interaction 2023;30(4):1
  8. Love C. “Just the Facts Ma’am”: Moral and Ethical Considerations for Artificial Intelligence in Medicine and its Potential to Impact Patient Autonomy and Hope. The Linacre Quarterly 2023;90(4):375
  9. Wang X, Luo R, Liu Y, Chen P, Tao Y, He Y. Revealing the complexity of users’ intention to adopt healthcare chatbots: A mixed-method analysis of antecedent condition configurations. Information Processing & Management 2023;60(5):103444
  10. Subramanian H, Canfield C, Shank D. Designing explainable AI to improve human-AI team performance: A medical stakeholder-driven scoping review. Artificial Intelligence in Medicine 2024;149:102780
  11. Wetzel A, Koch R, Koch N, Klemmt M, Müller R, Preiser C, Rieger M, Rösel I, Ranisch R, Ehni H, Joos S. ‘Better see a doctor?’ Status quo of symptom checker apps in Germany: A cross-sectional survey with a mixed-methods design (CHECK.APP). DIGITAL HEALTH 2024;10
  12. Aissaoui Ferhi L, Ben Amar M, Choubani F, Bouallegue R. Enhancing diagnostic accuracy in symptom-based health checkers: a comprehensive machine learning approach with clinical vignettes and benchmarking. Frontiers in Artificial Intelligence 2024;7
  13. Wetzel A, Preiser C, Müller R, Joos S, Koch R, Henking T, Haumann H. “Do I really not need to go to the doctor?” – Unveiling usage patterns and explaining usage of Symptom Checker Apps: an explorative longitudinal mixed-method study (Preprint). Journal of Medical Internet Research 2023
  14. Kim T, Im I. Understanding Users’ AI Manipulation Intention: An Empirical Investigation of the Antecedents in the Context of AI Recommendation Algorithms. Information & Management 2024:104061

Books/Policy Documents

  1. Rezaeian O, Bayrak A, Asan O. HCI International 2024 Posters.