Viewpoint
Abstract
Continuous monitoring of patients’ health facilitated by artificial intelligence (AI) has enhanced the quality of health care, that is, the ability to access effective care. However, AI monitoring often encounters resistance from decision makers. Healthcare organizations frequently assume that this resistance stems from patients’ rational evaluation of the technology’s costs and benefits. Recent research challenges this assumption and suggests that resistance to AI monitoring is influenced by the emotional experiences of patients and their surrogate decision makers. We develop a framework from an emotional perspective, discuss important implications for healthcare organizations, and offer recommendations to help reduce resistance to AI monitoring.
J Med Internet Res 2025;27:e51785. doi: 10.2196/51785
Introduction
Continuous monitoring through artificial intelligence (AI) technology is becoming increasingly important in health care. AI is becoming an integral part of health care, for example, in pain monitoring [ ], in creating medical imaging platforms [ ], and in delivering timely medical interventions to patients [ ], thereby increasing the quality of care, that is, the ability to access effective care [ ]. AI monitoring solutions use machine learning techniques to learn from data generated by adhesive patches, sensor devices, video cameras, and other devices, and to identify risks of illnesses and adverse events. Given the potential benefits that AI monitoring offers to health care [ ], regulators and healthcare organizations strongly advocate its use (we use both terms: “healthcare” refers to an industry or a system that provides people with health care, whereas “health care” refers to the process of care, that is, the things that health professionals do). For instance, the Food and Drug Administration has been actively approving remote monitoring devices for patient care [ ], while prominent hospital systems like Stanford Medicine are developing AI monitoring solutions for senior care [ ].
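To make concrete what such a monitoring pipeline does, the following is a minimal sketch, in Python, of learning what “typical” vital signs look like and flagging anomalous readings. The feature set, the simulated data, and the contamination parameter are our own illustrative assumptions, not the implementation of any specific product.

```python
# Minimal sketch: flag anomalous vital-sign readings from wearable sensors.
# Features, simulated values, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" training data: [heart_rate, respiration_rate, spo2]
normal_vitals = np.column_stack([
    rng.normal(72, 6, 1000),   # heart rate (beats/min)
    rng.normal(16, 2, 1000),   # respiration rate (breaths/min)
    rng.normal(97, 1, 1000),   # blood oxygen saturation (%)
])

# Learn what typical readings look like for this population.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_vitals)

# New readings streamed from an adhesive patch or sensor device.
new_readings = np.array([[70.0, 15.0, 97.0],    # unremarkable
                         [128.0, 28.0, 88.0]])  # tachycardia and low SpO2

for reading, label in zip(new_readings, model.predict(new_readings)):
    if label == -1:  # -1 marks an outlier
        print(f"ALERT: anomalous vitals {reading}; clinical review advised")
```

In a deployed system, such alerts would feed into clinical review and escalation workflows rather than constitute a diagnosis.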
However, novel solutions such as AI monitoring often encounter resistance from users [ , ]. Such resistance is usually attributed to users’ risk aversion toward innovation [ ], sometimes called the “liability of newness” [ ], and to their cognitive assessments of the costs and benefits of AI. We argue, however, that emotions play an important role in AI monitoring resistance, arguably an even larger one than rational evaluation, because the decision makers, specifically patients and family members, lack extensive experience with novel AI solutions [ ], which limits their ability to conduct a thorough cost-benefit analysis. Health care decisions often carry high stakes, and wrong choices can lead to serious consequences, including the loss of life. Emotion-driven decisions about adopting AI-based solutions therefore present a significant challenge for healthcare organizations.

In this viewpoint paper, we discuss why the current view of resistance to AI monitoring may not fully capture its underlying reasons. We develop a framework from an emotional perspective that explains why decision makers resist AI monitoring and propose solutions that alleviate their concerns.
What Is Missing in the Current View of AI Monitoring Resistance?
Healthcare organizations often attribute patients’ reluctance to accept innovative solutions to their cognitive assessment of the technology’s costs and benefits [ ]. For instance, prior studies have reported that some patients’ resistance to AI systems results from their evaluations of privacy intrusion and the insecurity of sensitive medical data, given that AI systems may be vulnerable to data breaches and data misuse [ ]. Some patients may have doubts about the accuracy and reliability of AI systems [ ] or may not fully comprehend how AI functions and are hence reluctant to adopt these technologies [ ]. Ethical dilemmas regarding AI decision-making in health care, stemming in part from the fact that algorithms can be inherently biased [ ] while medical decisions carry serious consequences, further contribute to resistance to AI systems [ ].

Taken together, the AI literature assumes that resistance decisions are rational and that patients themselves make the decisions about adopting the AI monitoring system. However, recent studies challenge these assumptions [ ]. AI systems are often opaque and lack transparency [ ], making it difficult for decision makers to rationally analyze their costs and benefits. Given the complexity of AI systems, decision makers often lack a comprehensive understanding of this advanced technology, exacerbating the challenges to rational decision-making [ ]. Thus, we argue that alongside cognitive assessments of AI’s costs and benefits, emotions play a crucial role in AI monitoring resistance [ ]. Moreover, decision makers may include others, such as family members: senior citizens often have limited knowledge of technology and frequently turn to surrogate decision makers, such as their adult children, for help when making technology decisions [ , ].
The AI Resistance Framework: An Emotion Perspective
Emotion is a complex psychological state that involves “loosely coupled changes in the domains of subjective experience, behavior, and peripheral physiology” [ ]. Emotions are often triggered by external stimuli (eg, events or surrounding situations) [ ], and they can be instrumental in directing attention to critical environmental details, refining decision-making, and aiding behavioral responses [ ]. However, emotions can also be detrimental when they are inappropriate in type, intensity, or duration for a given situation. Therefore, individuals strive to regulate their emotions [ ].

Emotion regulation refers to the “attempt to influence which emotions one has, when one has them, and how one experiences or expresses these emotions” [ ]. The goal of emotion regulation is to achieve some valued end (eg, decreasing negative emotion; Gross [ ]). There are two types of emotion regulation [ ]: intrinsic emotion regulation focuses on regulating one’s own emotions, while extrinsic emotion regulation involves regulating another person’s emotions. According to the process model of emotion regulation proposed by Gross [ ], there are five types of emotion regulation strategies: situation selection, situation modification, attentional deployment, cognitive change, and response modulation. These strategies can influence both the individuals practicing them and those around them [ ]. This view of emotion regulation has attracted considerable interest across various domains, such as psychology, business, sociology, and healthcare, because of its potential to improve mental health, job performance, and social harmony, among others [ ]. Focusing on the emotions experienced and regulated by decision makers, as well as on the policies and practices of healthcare organizations that impact patients’ emotions, we present a framework that explains how emotions shape resistance to AI monitoring.
Decision Makers in Health Care
Health care is a complex process involving various stakeholders who may take on decision-making responsibilities depending on the situation. Patients are typically responsible for their own health decisions, such as following medical advice, using technological aids, or seeking further professional support. However, patients can also rely on others to guide them and sometimes even to make important decisions on their behalf. For example, senior citizens often rely on family members for important medical and technology-related decisions; they may ask their adult children to install medical apps or help them navigate their electronic health records. Thus, key decision makers include patients and the family members who act as surrogate decision makers [ , ].
Emotions of Decision Makers and Their Triggers
Overview
Emotions, both positive (eg, happiness and joy) and negative (eg, anxiety and fear), are triggered by specific experiences or events [ ]. For example, decision makers may become anxious about adverse health outcomes. In the context of AI monitoring, emotions arise in response to the criticality of a given situation [ ] and the capabilities of the AI solution [ ]. Thus, we suggest that emotions play an important role in resistance to AI monitoring.

Our framework highlights two noteworthy sources that trigger negative emotions, such as anxiety or fear, and positive emotions, such as happiness and joy. The first source relates to the AI technology. Emotions triggered by AI monitoring systems encompass both negative and positive emotions. Anxiety about patients’ surveillance and apprehension about transferring the responsibility of patient monitoring from the caretaker to the AI solution are common [ ]. Conversely, positive emotions such as reassurance and comfort can arise from the continuous monitoring and alerts provided by AI systems, enhancing the sense of security for both patients and their families. The second source is specific to health care. Emotions triggered by the care process and its outcomes depend on the criticality of the health care situation; a critical situation, for example, would involve the possibility of the patient being hospitalized [ ]. Conversely, positive emotions such as hope and relief can emerge when effective care processes and interventions are in place. In the following, we discuss exemplary emotions triggered by each source: the AI technology and the health care situation.
Emotions Triggered by AI Technology
AI technology can trigger a range of emotional responses. For example, reassurance is fostered by the enhanced efficiency and reliability in patient care enabled by AI monitoring. With AI monitoring, specific care activities are managed by the system, which allows caretakers and health care providers to focus on more complex and personalized care tasks [ ]. For example, AI monitoring can provide real-time alerts and updates about a patient’s condition, ensuring timely interventions and reducing the risk of human error. Decision makers feel reassured by the increased accuracy and efficiency that AI brings, as it supports health care providers in delivering high-quality care.

However, AI can also trigger negative emotions. Many decision makers are anxious because they do not know what information will be recorded. Surveillance anxiety is caused by sensor-based AI systems that extensively track users and their behavior [ ]; it is a feeling of tension and worry arising from the use of monitoring solutions. Many decision makers experience surveillance anxiety and worry about the continuous monitoring of patients and the collection and analysis of patients’ data, leading to privacy and possibly security concerns. Surprisingly, a recent study [ ] found that the level of health risk has a limited impact on surveillance anxiety; many worry about patients being monitored even under conditions of high uncertainty, such as a high possibility of hospitalization.

Delegation anxiety is caused by the delegation of some health care tasks to AI, leading to a loss of personal interaction between health care providers and patients [ ]. With AI monitoring, certain care activities are delegated to the system, which reduces the workload of caretakers and care providers [ ]. For example, AI monitoring can automatically communicate critical patient information to health care providers. Decision makers worry about a loss of personal interaction because technology use reduces the interaction between health care providers and their patients. Similar to surveillance anxiety, research has found that decision makers experience delegation anxiety even when uncertainty is high, although they are then less likely to resist AI monitoring [ ].

Technology can also provide positive emotional experiences. For example, when designing monitoring technology, providing rewards for goal attainment can reinforce self-efficacy and elicit positive emotional responses that support a patient in managing their diabetes medication [ ]. Reinforcements and reiterations of success are common techniques used in gamification, increasing positive emotional responses and motivation [ ].
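As a small illustration of these reinforcement techniques, the sketch below awards points and streak badges for medication adherence; the point values, milestones, and badge wording are purely illustrative assumptions, not an evaluated intervention design.

```python
# Minimal sketch: gamified reinforcement for medication adherence.
# Point values, streak milestones, and badge names are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdherenceTracker:
    points: int = 0
    streak_days: int = 0

    def record_dose_taken(self) -> Optional[str]:
        """Reward goal attainment and reiterate success via streak badges."""
        self.points += 10
        self.streak_days += 1
        if self.streak_days in (7, 30):  # milestone reinforcement
            return f"Badge earned: {self.streak_days}-day streak!"
        return None

    def record_dose_missed(self) -> None:
        self.streak_days = 0  # the streak resets, but earned points are kept

tracker = AdherenceTracker()
for _ in range(7):
    badge = tracker.record_dose_taken()
print(badge, tracker.points)  # Badge earned: 7-day streak! 70
```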
Emotions Triggered by Health Care Situation
The health care situation of a patient also affects the emotions experienced by decision makers [ ]. For example, anxiety about health care is experienced when decision makers worry about whether patients receive appropriate health care and health monitoring in highly uncertain situations. The process of care involves monitoring health status [ ] and providing physical, psychological, social, and spiritual support [ ]. This was particularly acute during the pandemic, when decision makers were anxious about the prospect of their own or relatives’ health problems going unnoticed, especially when they belonged to a high-risk population [ ].

Anxiety about health outcomes is experienced when decision makers worry about potential negative health outcomes for the patient. Health outcomes include positive developments (eg, improvement of symptoms) and negative developments (eg, deterioration of symptoms and hospitalization) [ ]. Anxiety about health outcomes varies with the level of outcome uncertainty; that is, decision makers experience more anxiety about health outcomes when uncertainty is high and less when it is low.

Conversely, relief and hope can be experienced when decision makers are provided with sufficient care and treatment information. For example, knowing that patients receive continuous, attentive care and having access to clear, timely updates about their treatment progress can foster feelings of relief [ ]. This information helps patients feel more secure about their health care situation and strengthens their trust in the care received, enhancing their overall emotional comfort.
Intrinsic Emotion Regulation With AI Resistance
Some emotions can be beneficial while others can be detrimental, as they influence how decision makers perceive and interpret sensory information and shape their decisions based on such information, leading to either adaptive or maladaptive behaviors [ ]. For example, decision makers’ emotions can increase or decrease resistance [ ]: resistance decreases with technology-induced relief, increases with technology-induced anxiety, and decreases with health care situation-specific anxiety. Since decision makers are motivated to regulate their own emotions, particularly negative emotions such as anxiety, anger, and fear, it is crucial to guide them toward beneficial regulation activities [ ].

To regulate emotions, decision makers can use an emotion regulation strategy called situation selection, in which they take actions to increase or decrease the likelihood of being in a situation that is expected to elicit desirable or undesirable emotions [ ]. For example, decision makers may postpone, reject, or oppose AI monitoring. Postponing the adoption of AI monitoring solutions can be a form of passive innovation resistance, allowing them to avoid the emotion-inducing situation altogether [ ]. This strategy works when there is an alternative system in place and no mandate to move to the new system. Often, the decision to postpone is driven by the decision maker’s resistance to change, as they feel more comfortable with the status quo.

However, decision makers often cannot delay a decision. Consequently, they may decide to reject AI monitoring due to a lack of prior experience. Decision makers’ resistance to change and satisfaction with the status quo can catalyze this rejection, as can contextual factors such as functional or psychological barriers [ ]. Functional barriers can occur when decision makers perceive substantial challenges or drawbacks associated with adopting an innovation, such as difficulties in use, perceived lack of added value, or potential risks. Psychological barriers, on the other hand, emerge when the innovation clashes with decision makers’ existing beliefs or perceptions, which may be influenced by sources such as rumors or media [ ]. Decision makers may oppose AI monitoring when they believe it is inherently unsuitable for their needs, even before a thorough evaluation. This resistance can manifest as attacks on AI monitoring and the spreading of negative opinions about it [ ].
Extrinsic Emotion Regulation Through Managerial Actions
Overview
Healthcare organizations and governmental institutions, as well as external entities with significant influence, have a vested interest in regulating decision makers’ emotions. As collective actors, these organizations engage in cognitive reasoning and shape emotional understanding to guide thinking and actions [ , ]. They also practice collective emotion regulation to ensure informed decision-making [ ]. In addition to regulating the emotions of their collective members, these organizations can engage in extrinsic emotion regulation, that is, managing the emotions of external individuals, such as patients and their families, to mitigate the rejection of AI innovations given their benefits and performance gains. We suggest the following emotion regulation strategies for organizations to manage decision makers’ emotions and mitigate AI resistance.
Protect and Safeguard Personal Data for Responsible AI Solutions
Organizations can manage decision makers’ emotions by adopting the emotion regulation strategy called situation modification, which involves actively changing situations and reshaping events that induce negative emotions to reduce their effects [ ]. More precisely, organizations need to be cognizant of decision makers’ emotions that may arise from a specific health care situation and from the use of AI monitoring. A significant technology-related emotion is anxiety about the unauthorized disclosure and use of personal data. Healthcare organizations should carefully evaluate and choose an AI monitoring solution that is designed to protect and safeguard personal and other sensitive data collected from decision makers and others involved in a patient’s care, for example, by blurring images or deidentifying the data before they are stored [ ]. As AI monitoring solutions collect, store, and process data to provide predictions and recommendations, organizations need to balance ethics, personal privacy, and data security considerations against gains in quality of care and health outcomes [ ]. In addition, healthcare organizations should ensure that there are sufficient human interactions with patients when deploying AI monitoring solutions and should make decision makers aware that AI monitoring can even enhance engagement by ensuring personal interactions and providing personalized care.
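As one concrete illustration of deidentifying data before storage, the following sketch blurs detected faces in a camera frame and strips direct identifiers from a sensor record. The field names and the use of OpenCV’s bundled Haar cascade face detector are our own illustrative assumptions, not a prescribed architecture.

```python
# Minimal sketch: deidentify monitoring data before it is stored.
# Field names and the Haar cascade detector are illustrative choices.
import cv2

def blur_faces(frame):
    """Blur any detected faces in a video frame from a monitoring camera."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    return frame

def deidentify_record(record: dict) -> dict:
    """Drop direct identifiers from a sensor record before storage."""
    direct_identifiers = {"name", "address", "phone", "mrn"}
    return {k: v for k, v in record.items() if k not in direct_identifiers}

record = {"name": "J. Doe", "mrn": "12345", "heart_rate": 72}
print(deidentify_record(record))  # {'heart_rate': 72}
```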
Governmental institutions should proactively establish proper laws and regulations to manage AI monitoring. Examples include a blueprint from the White House that seeks to prevent negative implications of AI and its monitoring [ ] and a draft regulation from the European Union that seeks to ban AI for mass monitoring [ ]. These initiatives respond to criticism of some AI monitoring practices [ ]; seemingly, organizations are often too lax when managing personal data. When organizations take these concerns seriously, however, they can develop and deploy responsible AI solutions [ ]. For instance, increasing the transparency of essential algorithms, their data, and their data processing allows others to evaluate the appropriateness of the AI solution and engages them in an open debate about corporate and governmental responsibilities concerning the design and deployment of AI solutions.
Provide Decision Makers a Sense of Control
Organizations can also manage decision makers’ emotions by giving them a sense of control [ ], that is, the feeling of being empowered and capable of influencing or managing the operation of AI systems and their impact on one’s personal or professional life. Decision makers typically regulate their emotions by choosing situations that either enhance or reduce the chances of experiencing favorable or unfavorable emotions [ ].

Here, organizations can support decision makers by creating environments where they experience this sense of control. For instance, healthcare organizations should ensure that AI solutions are designed to give decision makers options to change the schedule, frequency, and type of data collected. Providing sufficient choice, freedom, and autonomy regarding what is monitored can alleviate decision makers’ concerns and reduce negative technology-induced emotions, such as surveillance anxiety [ ].

Additionally, regular training and educational resources on AI systems can help decision makers understand and confidently engage with these tools. These resources should cover how AI operates, the available options, and the benefits and limitations of the technology. By fostering a thorough understanding, organizations can help decision makers feel more in control and less anxious.
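To make such controls tangible in an AI solution’s design, the hedged sketch below models user-editable monitoring preferences; the option names and defaults are illustrative assumptions, not drawn from an existing product.

```python
# Minimal sketch: monitoring preferences a patient or family member controls.
# Option names and defaults are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class MonitoringPreferences:
    schedule: str = "daytime_only"   # eg, "continuous" or "daytime_only"
    sampling_minutes: int = 15       # how often readings are taken
    data_types: set = field(default_factory=lambda: {"heart_rate", "motion"})
    camera_enabled: bool = False     # video monitoring stays opt-in

    def disable(self, data_type: str) -> None:
        """Let the decision maker switch off a data stream entirely."""
        self.data_types.discard(data_type)

prefs = MonitoringPreferences()
prefs.disable("motion")        # the family opts out of motion tracking
prefs.sampling_minutes = 60    # and reduces the sampling frequency
print(prefs)
```

Exposing settings like these, together with the training described above, gives decision makers direct, visible levers over what the system observes.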
Communicate the Risk of AI Resistance
Healthcare organizations should communicate the potential consequences of not using AI monitoring to address resistance and foster adoption. First, healthcare organizations need to communicate the risks of the status quo. Health risks might already be present, and decision makers seek to manage such risks; however, many decision makers may lack a clear understanding of the risks faced by patients. For example, more than one in four people aged 65 years or older falls each year [ ], but less than half tell their doctor or caretaker [ ] about these incidents. Therefore, it is important to communicate to decision makers the risk of adverse health outcomes, such as a fall, that may go undetected in the absence of adequate monitoring.

Second, healthcare organizations need to communicate the risk of AI resistance, that is, articulate how AI capabilities can alleviate existing risks through AI’s innate ability to act. For example, in addition to educating decision makers about the ability of AI monitoring systems to continuously monitor potential symptoms and provide early detection of serious illnesses [ ], healthcare organizations can ask decision makers to consider what would happen if patients’ abnormal activities and adverse events, such as falls, were not noticed in time. When healthcare organizations also communicate the risk of AI resistance, decision makers can assess the existing risks more comprehensively and thus may choose to adopt AI monitoring.
Promote Proper Evaluation of Health Care Situation
Despite the threats and challenges presented by AI, the technology offers significant advantages, such as assistance in identifying health problems, monitoring for adverse events (eg, falls), and detecting abnormal behaviors (eg, wandering). When decision makers focus on these advantages and the opportunities AI provides in a given health care situation, they experience positive emotions such as reassurance and are more likely to evaluate the systems in a positive light. To assist decision makers in navigating health care processes and the associated AI tools, organizations should enhance communication by providing clear, consistent updates about AI-powered health care procedures, patient status, and quality measures [ , ]. This transparency helps reduce concerns and anxiety, ensuring that decision makers feel informed and confident in the AI systems. Organizations should also develop personalized care plans that integrate AI system capabilities to address each patient’s unique needs and involve family members in the planning process [ ]. Tailoring care with AI support fosters trust and reassures decision makers by demonstrating a commitment to their needs.

In addition, establishing robust crisis management protocols that incorporate AI tools for real-time monitoring and response provides immediate support and reassurance [ ]. This helps decision makers feel secure about the organization’s ability to manage emergencies effectively. By implementing these strategies, healthcare organizations can mitigate AI resistance by enhancing trust, reducing anxiety, and fostering a positive emotional response toward AI-assisted care.
Acknowledgments
We thank the participants of the Digital Health Research Community at Deakin University (Australia) and the Digital Health Analytics Seminar at the IT University of Copenhagen (Denmark), with whom we discussed our research and from whom we drew much inspiration and many ideas when writing this article. One author’s research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement W911NF-23-2-0224. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the US Government. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Conflicts of Interest
None declared.
References
- Racine N, Chow C, Hamwi L, Bucsea O, Cheng C, Du H, et al. Health care professionals' and parents' perspectives on the use of AI for pain monitoring in the neonatal intensive care unit: multisite qualitative study. JMIR AI. Feb 09, 2024;3:e51535. [FREE Full text] [CrossRef] [Medline]
- Karpathakis K, Pencheon E, Cushnan D. Learning from international comparators of national medical imaging initiatives for AI development: multiphase qualitative study. JMIR AI. Jan 04, 2024;3:e51168. [FREE Full text] [CrossRef] [Medline]
- Nathan J. Four ways artificial intelligence can benefit robotic surgery. Forbes. Feb 15, 2023. URL: https://www.forbes.com/sites/forbestechcouncil/2023/02/15/four-ways-artificial-intelligence-can-benefit-robotic-surgery/ [accessed 2023-05-15]
- Campbell SM, Roland MO, Buetow SA. Defining quality of care. Soc Sci Med. Dec 2000;51(11):1611-1625. [CrossRef] [Medline]
- Han JH, Lee JY. Digital healthcare industry and technology trends. 2021. Presented at: IEEE International Conference on Big Data and Smart Computing (BigComp); January 17-20, 2021:375-377; Jeju Island, Korea. [CrossRef]
- Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digital Med. 2020;3:118. [FREE Full text] [CrossRef] [Medline]
- Li FF, Milstein A. Stanford partnership in AI-assisted care. Stanford University. URL: https://aicare.stanford.edu [accessed 2020-08-24]
- Park EH, Werder K, Cao L, Ramesh B. Why do family members reject AI in health care? Competing effects of emotions. J Manage Inf Syst. 2022;39(3):765-792. [FREE Full text] [CrossRef]
- Coiera E. The last mile: where artificial intelligence meets reality. J Med Internet Res. 2019;21(11):e16323. [FREE Full text] [CrossRef] [Medline]
- Heidenreich S, Kraemer T. Innovations—doomed to fail? Investigating strategies to overcome passive innovation resistance. J Prod Innovation Manage. 2015;33(3):277-297. [CrossRef]
- Stinchcombe AL. Social structure and organizations. In: March J, editor. Handbook of Organizations. Chicago, IL. Rand McNally; 1965:142.
- Peek STM, Wouters EJM, van Hoof J, Luijkx KG, Boeije HR, Vrijhoef HJM. Factors influencing acceptance of technology for aging in place: a systematic review. Int J Med Inform. 2014;83(4):235-248. [FREE Full text] [CrossRef] [Medline]
- Williamson S, Prybutok VR. Balancing privacy and progress: a review of privacy challenges, systemic oversight, and patient perceptions in AI-driven healthcare. Appl Sci. 2024;14(2):675. [CrossRef]
- Balagurunathan Y, Mitchell R, El Naqa I. Requirements and reliability of AI in the medical context. Phys Med. 2021;83:72-78. [FREE Full text] [CrossRef] [Medline]
- Demírhan A, Örgev C, Aslankiliç M. Perception of mistrust towards artificial intelligence applications in the health sector: causes, effects and solutions. Int J Active Healthy Aging. 2023;1(1):1-6. [CrossRef]
- Werder K, Cao L, Ramesh B, Park EH. Empower diversity in AI development: diversity practices that mitigate social biases from creeping into your AI. Commun ACM. 2024;67(12):31-34. [FREE Full text] [CrossRef]
- Berente N, Gu B, Recker J, Santhanam R. Special issue editor's comments: managing artificial intelligence. MIS Q. 2021;45(3):1433-1450.
- Luijkx K, Peek S, Wouters E. "Grandma, You Should Do It—It's Cool" older adults and the role of family members in their acceptance of technology. Int J Environ Res Public Health. 2015;12(12):15470-15485. [FREE Full text] [CrossRef] [Medline]
- Berridge C, Wetle TF. Why older adults and their children disagree about in-home surveillance technology, sensors, and tracking. Gerontologist. 2020;60(5):926-934. [CrossRef] [Medline]
- Gross JJ. Emotion regulation: current status and future prospects. Psychol Inq. 2015;26(1):1-26. [CrossRef]
- Aldao A. The future of emotion regulation research: capturing context. Perspect Psychol Sci. 2013;8(2):155-172. [CrossRef] [Medline]
- Gross JJ. Handbook of Emotion Regulation. New York. The Guilford Press; 2014.
- Gross JJ. The emerging field of emotion regulation: an integrative review. Rev Gen Psychol. 1998;2(3):271-299. [CrossRef]
- Richardson JP, Smith C, Curtis S, Watson S, Zhu X, Barry B, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digital Med. 2021;4(1):140. [FREE Full text] [CrossRef] [Medline]
- Frijda NH. The Laws of Emotion. London, England, United Kingdom. Psychology Press; 2017.
- Loewenstein GF, Weber EU, Hsee CK, Welch N. Risk as feelings. Psychol Bull. 2001;127(2):267-286. [CrossRef] [Medline]
- Kummer TF, Recker J, Bick M. Technology-induced anxiety: manifestations, cultural influences, and its effect on the adoption of sensor-based technology in German and Australian hospitals. Inf Manage. 2017;54(1):73-89. [CrossRef]
- Rubeis G. The disruptive power of artificial intelligence. Ethical aspects of gerontechnology in elderly care. Arch Gerontol Geriatr. 2020;91:104186. [CrossRef] [Medline]
- Baird A, Maruping LM. The next generation of research on IS use: a theoretical framework of delegation to and from agentic IS artifacts. MIS Q. 2021;45(1):315-341. [FREE Full text]
- Toscos T, Connelly K, Rogers Y. Designing for positive health affect: decoupling negative emotion and health monitoring technologies. 2013. Presented at: 7th International Conference on Pervasive Computing Technologies for Healthcare and Workshops; May 05-08, 2013:153-160; Venice, Italy. [CrossRef]
- Morschheuser B, Hassan L, Werder K, Hamari J. How to design gamification? A method for engineering gamified software. Inf Software Technol. 2018;95:219-237. [CrossRef]
- Schumacher KL, Stewart BJ, Archbold PG, Dodd MJ, Dibble SL. Family caregiving skill: development of the concept. Res Nurs Health. 2000;23(3):191-203. [CrossRef] [Medline]
- Bevan JL, Pecchioni LL. Understanding the impact of family caregiver cancer literacy on patient health outcomes. Patient Educ Couns. 2008;71(3):356-364. [CrossRef] [Medline]
- Wand APF, Zhong B, Chiu HFK, Draper B, De Leo D. COVID-19: the implications for suicide in older adults. Int Psychogeriatr. 2020;32(10):1225-1230. [FREE Full text] [CrossRef] [Medline]
- Rademakers J, Delnoij D, de Boer D. Structure, process or outcome: which contributes most to patients' overall assessment of healthcare quality? BMJ Qual Saf. 2011;20(4):326-331. [CrossRef] [Medline]
- Fröjd C, Swenne CL, Rubertsson C, Gunningberg L, Wadensten B. Patient information and participation still in need of improvement: evaluation of patients' perceptions of quality of care. J Nurs Manage. 2011;19(2):226-236. [CrossRef] [Medline]
- Talke K, Heidenreich S. How to overcome pro-change bias: incorporating passive and active innovation resistance in innovation decision models. J Prod Innovation Manage. 2013;31(5):894-907. [CrossRef]
- Laukkanen T. Consumer adoption versus rejection decisions in seemingly similar service innovations: the case of the internet and mobile banking. J Bus Res. 2016;69(7):2432-2439. [CrossRef]
- Kleijnen M, Lee N, Wetzels M. An exploration of consumer resistance to innovation and its antecedents. J Econ Psychol. 2009;30(3):344-357. [CrossRef]
- Ashkanasy N. Emotions in organizations: a multi-level perspective. In: Multi-Level Issues in Organizational Behavior and Strategy. Leeds, England. Emerald Publishing Limited; 2003:9-54.
- Barsade SG, Gibson DE. Why does affect matter in organizations? Acad Manage Perspect. 2007;21(1):36-59. [CrossRef]
- Fink G, Yolles M. Collective emotion regulation in an organisation—a plural agency with cognition and affect. J Organ Change Manage. 2015;28(5):832-871. [FREE Full text]
- Ethics and governance of artificial intelligence for health: large multi-modal models. WHO guidance. World Health Organization. URL: https://www.who.int/publications/i/item/9789240029200 [accessed 2024-08-14]
- Blueprint for an AI bill of rights—making automated systems work for the American people. The White House. URL: https://eric.ed.gov/?id=ED625670 [accessed 2023-05-25]
- Proposal for a regulation of the European Parliament and of the council on harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts. European Commission. Apr 21, 2021. URL: https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=celex:52021PC0206 [accessed 2025-01-21]
- Lazar WS, Yorke C. Watched while working: use of monitoring and AI in the workplace increases. Reuters. 2023. URL: https://www.reuters.com/legal/legalindustry/watched-while-working-use-monitoring-ai-workplace-increases-2023-04-25/ [accessed 2023-05-24]
- Werder K, Ramesh B, Zhang S. Establishing data provenance for responsible artificial intelligence systems. ACM Trans Manage Inf Syst. 2022;13(2):1-23. [CrossRef]
- Bergen G, Stevens MR, Burns ER. Falls and fall injuries among adults aged ≥65 years—United States, 2014. MMWR Morb Mortal Wkly Rep. 2016;65(37):993-998. [FREE Full text] [CrossRef] [Medline]
- Sterling DA, O'Connor JA, Bonadies J. Geriatric falls: injury severity is high and disproportionate to mechanism. J Trauma. 2001;50(1):116-119. [CrossRef] [Medline]
- Pinsonneault A, Addas S, Qian C, Dakshinamoorthy V, Tamblyn R. Integrated health information technology and the quality of patient care: a natural experiment. J Manage Inf Syst. 2017;34(2):457-486. [CrossRef]
- Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak. 2020;20(1):170. [FREE Full text] [CrossRef] [Medline]
- AI and healthcare: a giant opportunity. Intel AI Forbes Insights. URL: https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/#13593a964c68 [accessed 2019-10-11]
- Bekbolatova M, Mayer J, Ong CW, Toma M. Transformative potential of AI in healthcare: definitions, applications, and navigating the ethical landscape and public perspectives. Healthcare (Basel). 2024;12(2):125. [FREE Full text] [CrossRef] [Medline]
- Sun W, Bocchini P, Davison BD. Applications of artificial intelligence for disaster management. Nat Hazards. 2020;103(3):2631-2689. [CrossRef]
Abbreviations
AI: artificial intelligence |
Edited by T de Azevedo Cardoso; submitted 11.08.23; peer-reviewed by P-H Liao, A Barucci, N Singh; comments to author 28.06.24; revised version received 20.08.24; accepted 07.10.24; published 31.01.25.
Copyright©Karl Werder, Lan Cao, Eun Hee Park, Balasubramaniam Ramesh. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 31.01.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.