%0 Journal Article
%@ 1438-8871
%I JMIR Publications
%V 27
%N 1
%P e65567
%T Trust and Acceptance Challenges in the Adoption of AI Applications in Health Care: Quantitative Survey Analysis
%A Kauttonen,Janne
%A Rousi,Rebekah
%A Alamäki,Ari
%+ Digital Transition and AI, Haaga-Helia University of Applied Sciences, Ratapihantie 13, Helsinki, 00520, Finland, 358 400 230 404, janne.kauttonen@haaga-helia.fi
%K artificial intelligence
%K AI
%K health care technology
%K technology adoption
%K predictive modeling
%K user trust
%K user acceptance
%D 2025
%7 21.3.2025
%9 Original Paper
%J J Med Internet Res
%G English
%X Background: Artificial intelligence (AI) has the potential to transform health care, but its successful implementation depends on the trust and acceptance of consumers and patients. Understanding the factors that influence attitudes toward AI is crucial for effective adoption. Despite AI's growing integration into health care, consumer and patient acceptance remains a critical challenge. Research has largely focused on individual applications or attitudes, lacking a comprehensive analysis of how factors such as demographics, personality traits, technology attitudes, and AI knowledge affect acceptance and interact across different health care AI contexts. Objective: We aimed to investigate people's trust in and acceptance of AI across health care use cases and to determine how context and perceived risk affect individuals' propensity to trust and accept AI in specific health care scenarios. Methods: We collected and analyzed web-based survey data from 1100 Finnish participants, presenting them with 8 AI use cases in health care: 5 (62%) noninvasive applications (eg, activity monitoring and mental health support) and 3 (38%) physical interventions (eg, AI-controlled robotic surgery). Respondents evaluated their intention to use, trust in, and willingness to trade off personal data for each use case. Gradient boosted tree regression models were trained to predict responses based on 33 demographic-, personality-, and technology-related variables. To interpret the results of our predictive models, we used the Shapley additive explanations (SHAP) method, a game theory–based approach for explaining the output of machine learning models. It quantifies the contribution of each feature to individual predictions, allowing us to determine the relative importance of the demographic-, personality-, and technology-related factors, and their interactions, in shaping participants' trust in and acceptance of AI in health care. Results: Consumer attitudes toward technology, technology use, and personality traits were the primary drivers of trust and intention to use AI in health care. Use cases were ranked by acceptance, with noninvasive monitors being the most preferred; however, the specific use case had less impact overall than expected. Nonlinear dependencies were observed, including an inverted U-shaped pattern in positivity toward AI as a function of self-reported AI knowledge. Certain personality traits, such as being more disorganized and careless, were associated with more positive attitudes toward AI in health care. Women appeared more cautious about AI applications in health care than men. Conclusions: The findings highlight the complex interplay of factors influencing trust in and acceptance of AI in health care. Consumer trust and intention to use AI in health care are driven by technology attitudes and use rather than by specific use cases. AI service providers should consider demographic factors, personality traits, and technology attitudes when designing and implementing AI systems in health care. The study demonstrates the potential of using predictive AI models as decision-making tools when implementing health care AI applications and interacting with clients.
%R 10.2196/65567
%U https://www.jmir.org/2025/1/e65567
%U https://doi.org/10.2196/65567
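
Editor's note on the method described in the abstract: the authors report training gradient boosted tree regression models on 33 predictors and interpreting them with SHAP, but the abstract does not name a specific implementation. The following is a minimal, self-contained Python sketch of such a pipeline, assuming scikit-learn's GradientBoostingRegressor and the shap package; the data here are synthetic placeholders shaped like the study (1100 respondents, 33 variables), not the actual survey data.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Placeholder data mirroring the study's dimensions: 1100 respondents,
# 33 demographic/personality/technology predictors, one survey response
# (eg, a trust rating) as the regression target.
rng = np.random.default_rng(0)
n, p = 1100, 33
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# SHAP attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Mean absolute SHAP value per feature gives a global importance ranking,
# the kind of ranking the abstract reports for attitude/personality drivers.
importance = np.abs(shap_values).mean(axis=0)
for i in importance.argsort()[::-1][:5]:
    print(f"feature {i}: mean |SHAP| = {importance[i]:.3f}")

# A dependence plot of SHAP values against one feature can surface
# nonlinear patterns such as the inverted U-shape noted in the Results:
# shap.dependence_plot(1, shap_values, X_test)

Plotting SHAP values against a single feature (last, commented line) is the standard way such nonlinear dependencies are inspected; the quadratic term in the synthetic target above would show exactly that kind of curved pattern.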