Original Paper
Abstract
Background: It is believed that artificial intelligence (AI) will be an integral part of health care services in the near future and will be incorporated into several aspects of clinical care such as prognosis, diagnostics, and care planning. Thus, many technology companies have invested in producing AI clinical applications. Patients are among the most important beneficiaries who will potentially interact with these technologies and applications; thus, patients' perceptions may affect the widespread use of clinical AI. Patients should be assured that AI clinical applications will not harm them and that they will instead benefit from using AI technology for health care purposes. Although human-AI interaction can enhance health care outcomes, possible dimensions of concern and risk should be addressed before its integration with routine clinical care.
Objective: The main objective of this study was to examine how potential users (patients) perceive the benefits, risks, and use of AI clinical applications for their health care purposes and how their perceptions may be different if faced with three health care service encounter scenarios.
Methods: We designed a 2×3 experiment that crossed a type of health condition (ie, acute or chronic) with three different types of clinical encounters between patients and physicians (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI as a traditional in-person visit). We used an online survey to collect data from 634 individuals in the United States.
Results: The interactions between the types of health care service encounters and health conditions significantly influenced individuals’ perceptions of privacy concerns, trust issues, communication barriers, concerns about transparency in regulatory standards, liability risks, benefits, and intention to use across the six scenarios. We found no significant differences among scenarios regarding perceptions of performance risk and social biases.
Conclusions: The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Thus, there are still various risks associated with implementing AI applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses. The concerns are also evident if the AI applications are used as a recommendation system under physician experience, wisdom, and control. Prior to the widespread rollout of AI, more studies are needed to identify the challenges that may raise concerns for implementing and using AI applications. This study could provide researchers and managers with critical insights into the determinants of individuals’ intention to use AI clinical applications. Regulatory agencies should establish normative standards and evaluation guidelines for implementing AI in health care in cooperation with health care institutions. Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors of AI clinical applications.
doi:10.2196/25856
Introduction
Artificial Intelligence
Artificial intelligence (AI) generally refers to a computerized system (hardware or software) that can perform physical tasks and cognitive functions, solve various problems, or make decisions without explicit human instructions [
]. A range of techniques and applications are under the broad umbrella of AI, such as genetic algorithms, neural networks, machine learning, and pattern recognition [ ]. AI is considered a frontline service technology (FST) in the literature [ ]. FST infusion in various industries has emerged as a topic of interest in the past decade (eg, [ - ]). Gursoy et al [ ] classified FST infusion under three main categories: (1) no FST, which refers to a technology-free encounter between consumers and frontline service providers; (2) augmenting FST, which refers to the technology as a human augmentation tool; and (3) substituting FST, which refers to the technology as a human substitution force. In augmenting FST, the technology can help to enhance human thinking, analysis, and behavior, and boost the ability to interact with other human actors, whereas in substituting FST, the technology substitutes a human actor and takes away the active role of humans in the service encounter. AI as an FST can augment or replace human tasks and activities within a wide range of industrial, intellectual, and social applications with potential impacts on productivity and performance. As nonhuman intelligence is programmed to complete specific tasks, AI can overcome some of humans' computationally intensive and intellectual limitations [ ]. For example, AI could be a computer application that uses sophisticated algorithms to solve a business problem for managers. AI applications generate personalized recommendations to customers based on analysis of a huge data set. Thus, it is believed that AI could perform tasks better than the best humans and experts in any field [ ].

AI technology, including algorithmic machine learning and autonomous decision-making, creates new opportunities for continued innovation in different industries, including finance, health care, manufacturing, retail, supply chain, logistics, and utilities [
]. Promoting AI applications has become one of the focal points of many companies' strategies [ ]. The notable changes made by AI have inspired recent studies to examine the impacts and consequences of the technology and investigate AI's performance implications. However, this objective requires an in-depth understanding of the factors affecting the acceptance of AI applications by potential users in different manufacturing and service fields.

AI in Health Care
Previous studies highlight the importance of AI in health care, especially in medical informatics [
]. AI can improve patient care, diagnosis, and interpretation of medical data [ ]. Houssami et al [ ] showed that AI applications used for breast cancer screening reduced human detection errors; however, some of the interrelated ethical and societal trust factors, as well as reliance on AI, are yet to be developed. Prior research has shown that AI clinical applications exhibit the same or sometimes even better performance than their human counterparts or specialists in detecting Alzheimer disease using natural language processing techniques [ ], and for detecting skin cancer [ ] and heart arrhythmia [ ] using deep neural networks. AI applications for health care recommendations may differ from those in other sectors, mainly because of the highly sensitive nature of health information and high levels of consumer vulnerability to possible medical errors.

In the context of AI in health care, there can be three possible patient encounters for care delivery. First, the patient can follow the traditional health care delivery model and visit a physician in person. This option is the most prevalent health care delivery process that focuses on the physician-human interaction and can be referred to as a “human-human interaction” or “traditional in-person visit.” Second, a patient can choose a collaborative intelligence [
] scenario where the physicians collaborate with an AI application to arrive at conclusions and make medical decisions. In other words, the physician's thinking, analysis, and performance are augmented by using AI applications to interact with patients, but the ultimate responsibility for patient care, recommending treatment options, and care planning still rests with the physician. Third, AI clinical applications substitute the physician, and a patient only encounters the AI clinical application unaided by human intelligence.

Acceptance of AI in Clinical Applications
In April 2018, the Food and Drug Administration authorized the first AI application to diagnose diabetic retinopathy without a physician’s help in the United States [
]. An increasing number of health care service companies have invested in AI applications in mobile health devices or health apps to improve patient safety, increase practice quality, enhance patient care management, and decrease health care costs. However, previous studies suggest that not all individuals are willing to accept AI clinical applications [ ]. Successful implementation of AI applications requires a careful examination of users' attitudes and perceptions about AI [ ]. Thus, investing in AI applications without recognizing potential users' beliefs and willingness to use them may waste resources and even result in customer loss. This is especially true in the health care sector, where patient engagement is considered one of the most critical determinants of health care quality. If individuals do not view interacting with AI clinical applications as useful, they may demand interactions with physicians, and in turn, the AI applications may remain unused. Therefore, understanding the decision drivers and barriers that lead to acceptance or refusal of AI clinical applications in health care delivery is fundamental for health care providers and hospitals that plan to introduce or increase AI presence during health care delivery.

Several studies have investigated the attitude of specific sample populations toward AI in health care. For example, Dos Santos and colleagues [
] evaluated medical students' attitudes toward AI and found that the majority agreed that AI could improve medicine as a whole. In a similar study involving medical students in Canada [ ], the majority of the sample (67%) agreed that AI would reduce the demand for medicine and 47% were anxious about physicians' future in association with AI. In a survey conducted among members of the European Society of Radiology [ ], more than half of the respondents strongly expressed that they were not ready to accept AI-only generated reports. Similarly, in a study investigating perceptions toward health care robots [ ], residents in a retirement village exhibited a more positive attitude toward the robot compared to the responses of their staff and relatives. Recently, Patel and colleagues [ ] compared the diagnosis results of pneumonia on chest radiographs among human experts alone, collaborative intelligence, and two state-of-the-art AI deep learning models. They demonstrated that both the collaborative and AI models have superior performance when compared with human experts alone. They also found that the combination of collaborative and AI models outperforms each of these methods alone.

Research Gaps
Based on previous studies, health care professionals still express fundamental concerns about implementing AI clinical applications in care services [
- ]. These concerns and risks also directly affect patients' perceptions (as potential users and beneficiaries) and make them withdraw from using AI clinical applications [ ]. The majority of studies investigating attitudes toward AI involve national samples [ ], medical students [ ], radiology experts [ ], and physicians [ ], and we did not find a study that examined the perceptions of patients with different health conditions (chronic and acute diseases) toward AI clinical applications. Exploring the perspectives of individuals with different diseases can allow researchers to effectively recognize the source of risks and concerns associated with AI clinical applications. Since the risks may vary by type of illness, this perspective can help health care providers better understand how to address the concerns of various patients.

Thus, researchers need to understand the current challenges related to AI technologies more efficiently and analyze the urgent needs of health systems to design AI applications to address them. Even with physicians and other health care stakeholders accepting and assimilating AI to varying degrees, it is crucial to understand patients' perspectives toward these different scenarios. Nevertheless, little is known about the risk beliefs associated with using AI clinical applications for diagnosis and treatments from the general public's perspective. This stream of literature encouraged us to examine people's perceptions and attitudes toward different types of health care delivery processes in this study.
Currently, the issues related to AI clinical applications in health care are still within the realm of research. However, it is widely believed that these systems will fundamentally change medical practice in the near future [
]. Historically, the medical sector does not integrate technology as quickly as other industries [ ]. Moreover, integrating AI into the current medical workflow could be very challenging without the involvement, cooperation, and endorsement of stakeholders (such as health care professionals and patients) and a robust legislative and regulatory framework. The main objective of this study was to examine how potential users perceive the benefits, risks, and use of AI clinical applications for their health care purposes, and how their perceptions may be different if faced with the three health care service encounter scenarios. The benefit perceptions and risk beliefs of prospective users may affect their future adoption of AI applications. Patients may not decide what tools health care professionals should use in their practice, but they can definitely highlight possible concerns, challenges, and barriers that may prevent them from supporting and using the tools implemented and promoted by clinicians.

Literature Review
Overview
In this section, consistent with the research objectives, three topics are explained. First, the type of illness and the reactions of people with different illnesses (acute or chronic) are described. Second, possible risks and concerns associated with the use of AI clinical applications are highlighted. Third, potential benefits that users may perceive from using AI in health care are illuminated. The interrelationships among these three topics can provide further research background on how people with different health conditions may react to AI applications used for health care purposes to place our research objectives and experimental design in context.
Type of Illness (Acute or Chronic)
A patient may experience two general types of health conditions: acute diseases and chronic diseases. Following the medical literature, acute conditions are defined as diseases that are severe and sudden in onset (the onset being the initial phase of a disease or condition in which symptoms first become apparent), last a short time (often only a few days or weeks), and can be cured [
]. In contrast, a chronic disease is described as a human health condition or disease that is persistent or otherwise long-lasting in its effects or a disease that develops over time [ ]. Thus, chronic diseases refer to long-term health conditions that last more than 1 year [ ], whereas acute diseases refer to health conditions that are sudden, short-term, and require medical attention. Examples of acute diseases include the common cold, flu, and infections, whereas examples of chronic diseases include Alzheimer disease, arthritis, diabetes, and depression [ ]. Given the contrasting nature of chronic and acute conditions, it is logical to argue that patients with different diseases will vary in their perceptions of AI in health care delivery. Previous research indicates that individuals tend to trust an algorithm or an AI system in low-risk conditions [ ]. Based on that finding, we can expect those with acute short-term conditions but in severe pain to opt for an AI clinical encounter. For example, Wu and colleagues [ ] explored the perceptions of older adults with mild cognitive impairment toward assistive AI and found a generally positive belief that these AI applications can be useful for the aging population.

Perceived Concerns and Risks
Perceived Communication Barriers
Conventionally, the health care delivery process usually occurs in a hospital or a physician’s clinic and involves direct physician-patient interaction, which can be described as paternalistic in nature. In other words, with medical expertise, the physician leads the patient toward shared medical decision-making, resulting in outcomes such as prescription and treatment plans centered on both evidence-based medicine and moral competency in terms of showing empathy and compassion. Empathy and compassion are increasingly being viewed as the foundations of active patient engagement and patient-centered care [
, ]. Research highlights that patients mainly seek and trust health care providers who are competent and compassionate with good interpersonal skills [ - ]. Researchers also reveal that empathetic and compassionate physician-patient interaction can improve patient satisfaction and clinical adherence [ ]. Conversely, AI applications in service delivery (such as health care) may cause noteworthy communication barriers between customers and AI applications [ ]. Reliance on AI clinical applications may reduce physicians' and patients' interactions and conversations [ ]. Consumers may refuse to use AI applications because they need human social interaction during service encounters [ ]. AI applications are powered by high-level technical capabilities and evidence-based medicine but may not be expected to exhibit human-like empathy, which, in turn, may discourage patients from choosing the AI applications for the health care delivery process. Although the idea of building empathetic machines is well pursued in AI, patients' perceptions toward the clinical encounters involving AI as substituting or augmenting technology versus traditional in-person visits warrant further investigation.

Perceived Transparency of Regulatory Standards
Physicians obtain their licensure after many years of rigorous training in medicine and their specialty. This licensure is considered a regulating mechanism put forward by the government to ensure physicians' quality and, ultimately, the quality of health care services. The licensure further allows a patient to choose a reliable doctor who can be held responsible for intentional errors or unintentional wrongdoing. However, in the AI context, regulatory authorities are yet to formalize standards to evaluate and maintain AI's safety and impact in many countries [
]. Thus, people may become concerned if an appropriate regulatory and accreditation system regarding AI clinical applications is not yet in place. In the ever-changing AI and machine-learning landscape, more effective, efficient, and powerful algorithms are being developed on an everyday basis to power these AI health care applications. Often, the technical aspects of modern AI algorithms such as artificial neural networks (ANNs) remain a black box to society at large [ ]. This is because after an ANN is trained with a data set, tracing the algorithm's decision-making process becomes essentially impossible. This perception of the “unknown” could potentially affect a patient's preference for the clinical encounter. Furthermore, the lack of regulatory standards for AI applications that can be understood, critiqued, and reviewed [ ] by a larger community may discourage patients from choosing an AI encounter. The new IEEE (Institute of Electrical and Electronics Engineers) standard P7001 currently under development, “Transparency in autonomous systems,” has a set of measurable and testable levels of transparency that could be used to evaluate AI or autonomous systems for their level of compliance [ ].

Perceived Liability Issues
A physician is usually held responsible for the consequences of their actions in a health care setting. With AI increasingly finding its way into health care research and practice, it becomes imperative to examine an AI system’s liability issues. Previous studies in public health demonstrate legal concerns about who will account for AI-based decisions when errors occur using AI applications [
]. Usually, the stakeholders in a medical encounter involving AI can be the developers, data feeders, health care organizations that adopted AI, or the health care provider that used the AI [ ]. Noting that an AI application in itself cannot be held liable for any misdiagnosis or medical recommendations that turn out to be disastrous for a patient, the lack of standard consensus or regulations on who can be held liable may discourage patients from choosing an AI application. As AI clinical applications make autonomous decisions, the accountability question becomes very hard to answer. For instance, a risky situation arises for both clinicians and patients when it is unclear who is responsible if AI clinical applications offer wrong health care recommendations [ ]. There is also no precise regulation regarding who is held liable when a physician follows the medical recommendations provided by AI and when a physician decides to override the recommendations [ ].

Perceived Trust in AI Mechanisms, Collaborative Intelligence, and Physicians
Maintaining substantial trust among the public, health professionals, and health systems is essential for effective health care. Trust can be defined as trust in clinicians and the clinical tools they use (such as AI clinical applications) [
]. In the information systems (IS) literature, Vance and colleagues [ ] call for additional research on trust in information technology artifacts such as AI systems. Gaining the general public's trust in the use of AI in health care is considered an important challenge to the successful implementation of AI in medical practices [ ]. Sun and Medaglia [ ] reported that, in general, individuals are likely to exhibit a lack of trust in the features of AI systems. For instance, people may not trust AI's predictive power and diagnostic ability for treatment purposes. Another study indicated that the autonomy of AI systems affects the users' perception of trustworthiness [ ]. Moreover, in a survey conducted by Longoni and Bonezzi [ ] to understand customer perceptions about AI in medicine, only 26% of the sample signed up for an AI diagnosis compared to 40% who signed up for a health care provider diagnosis. In the same study, the authors found that the majority preferred a human provider over AI, even when it meant a higher risk of misdiagnosis. They also indicated that patients are more willing to choose AI if the ultimate treatment decision rests with the physician and not only the AI. These results highlight that individuals have a higher level of distrust toward AI in medicine than toward a human provider. Trust in AI clinical applications is a significant factor affecting adoption decisions [ ]. Longoni et al [ ] suggested that a physician's confirmation of the AI results (an example of AI as augmenting technology) could encourage patients to be more receptive to AI in their care.

Perceived Performance Risks (Possible Errors)
AI-related studies consider the safety and quality of autonomous operations as essential factors affecting the use of AI applications [
]. According to Mitchell [ ], AI applications are still vulnerable in many areas, such as to hacker attacks. Hackers can alter text files or images in ways that are imperceptible to humans but could cause potentially catastrophic errors. Since AI programs may not truly understand their inputs and outputs, they are susceptible to unexpected errors and untraceable attacks. Performance risks may be serious in contexts that directly deal with people's lives (such as health care). Medical errors generated by AI could endanger patient safety and result in death or injuries, which are mostly not reversible. Thus, users may be concerned that the mechanisms used by AI clinical applications could lead to incorrect diagnoses or wrong treatments. Reddy et al [ ] indicated that incomplete and nonrepresentative data sets in AI models can produce inaccurate predictions and medical errors. Thus, it could be expected that individuals may consider that possible functional errors resulting from using AI applications could lead to more risks.

Perceived Social Biases
Studies in other contexts have shown that AI models overestimate crime risk among members of a specific racial group [
]. In the health care context, biased AI models may overestimate or underestimate health risks in specific patient populations. For instance, AI applications may engage in stereotyping and exhibit gender or racial bias. Bias in AI models may also occur when data sets are not representative of the target population or when AI systems use incomplete and inaccurate data for decision-making [ ]. Societal discrimination (such as poor access to health care) and small samples (such as minority groups) can lead to unrepresentative data and AI bias [ ]. Edwards [ ] argued that AI systems' current architecture needs a more sophisticated structure to understand human moral values. If the AI algorithm is not transparent, it may exhibit some level of discrimination, even though humans are not involved in decision-making [ ]. The main purpose of AI is to create an algorithm that functions autonomously to find the best possible solutions to questions [ ]. However, researchers argue that predictive programs can be inevitably biased due to an overrepresentation of social minorities in the pattern recognition process [ ]. Some studies support this argument by showing that AI algorithms may be coded in a biased manner, which can produce racist decisions [ ]. Therefore, if people are concerned that AI applications could lead to morally flawed health care practices by overestimating or underestimating health risks in a certain patient population, they will be more likely to perceive greater risks associated with AI.

Perceived Privacy Concerns
Health-related data are often viewed as constituting the most sensitive information about a person [
]. In health care services, respecting a person's privacy is an essential ethical principle because patient privacy is associated with well-being and personal identity [ ]. Thus, patients' confidentiality should be respected by health care providers by protecting their health records, preventing secondary use of data, and developing a robust system to obtain informed consent from them for health care purposes [ ]. If patients' privacy needs are not met, patients may suffer psychological and reputational harm [ ]. Data breaches would increase risk beliefs associated with AI models designed to share personal health information. There is a concern that anonymized data can be reidentified through AI processes, and this anxiety may exacerbate privacy invasion and data breach risks [ ]. AI applications in public health require large data sets. Thus, collecting, storing, and sharing medical data raise ethical questions about safety, governance, and privacy [ ]. Privacy is one of the most critical concerns associated with using AI applications because users' data (eg, habits, preferences, and health records) are likely to be stored and shared across the AI network [ ]. The method of data collection for AI may increase risks as AI systems need huge data sets, and patients are concerned that their personal information will be collected without their knowledge [ ].

Perceived Benefits
AI can be used in health care for risk prediction and recommendation generation. Big data and AI can significantly improve patient diagnosis and predictive capability [
]. Recent studies highlight new AI application opportunities within medical diagnosis and pathology, where medical tasks can be performed in an automated manner with higher speed and accuracy [ ]. AI can improve health care delivery such as diagnostics, prognosis, and patient management [ ]. For instance, AI has been shown to be capable of diagnosing skin cancer more efficiently than dermatologists [ ]. Sohn and Kwon [ ] demonstrated that hedonic aspects such as enjoyment and curiosity about AI technology are stronger in predicting the behavioral intention to use AI products than utilitarian aspects (eg, usefulness). This point may not hold in health care, since AI applications are mainly used there for utilitarian purposes such as patient-specific diagnosis, treatment decision-making, and population risk prediction analysis [ ]. Thus, with regard to benefit perceptions, in this study, we only focus on utilitarian aspects, not other motivational factors. Sun and Medaglia [ ] identified the lack of sufficient knowledge of AI technologies' values and advantages as a potential barrier to adopting AI applications. Individuals will endorse and use AI clinical applications if they believe that AI will bring essential benefits to their health care delivery. Thus, we can expect that the higher the perceived benefits from AI clinical applications, the higher the individuals' intention to use them in the future.

Research Objectives
Most AI-related studies use various acceptance models (eg, technology acceptance model [TAM] and unified theory of acceptance and use of technology) to examine AI acceptance by empirically testing the effects of the ease of use, usefulness, and social norms on the intention to use AI applications [
, ]. For example, Xu and Wang [ ] used the TAM to examine the adoption of AI robot lawyer technology for the legal industry. Another example is the use of a TAM-based tool to measure AI-based assessment acceptance among students [ ]. However, to the best of our knowledge, no experimental research has explored the differences in the perceptions of patients with different types of illnesses in relation to utilizing AI clinical applications without physician interactions, AI clinical applications with physician interactions, and traditional in-person visits. We hypothesized that the interactions between the type of health care service encounters and illness type may significantly change patients' perceived risks, benefits, and overall attitudes toward health care delivery. The main objectives of this study were to: (1) examine the difference between the perceptions of patients with chronic diseases about utilizing AI clinical applications with and without physician interactions, and visiting physicians in person for diagnosis and treatment recommendation purposes; (2) investigate the difference between the perceptions of patients with acute diseases about utilizing AI clinical applications with and without physician interactions, and visiting physicians in person for diagnosis and treatment recommendation purposes; and (3) explore which service encounter is preferred by people with chronic diseases and which is preferred by people with acute diseases.

In this study, we focused on AI clinical applications for health care purposes. AI embedded in mobile health devices or health apps could help patients monitor their health status, check their health care information, and manage their chronic illnesses. These AI clinical applications use algorithms to learn from the past by analyzing the medical histories of patients with the same health conditions; recognizing patterns in clinical data; predicting possible health issues; and suggesting some treatment choices, diagnostic options, prescription advice, and care planning. These applications could reduce frequent patient-physician encounters and avoid unnecessary hospitalizations.
Study Significance
This research offers significant and timely insight into human-computer interaction by examining AI applications in health care. This study’s findings will provide researchers and managers with critical insights into the determinants of individuals’ intention to use AI applications in health care delivery. The results imply that incompatibility with instrumental, technical, ethical, or regulatory values can be a reason for rejecting AI applications in health care. Multidimensional concerns associated with AI clinical applications may also be viewed as a cause of technostress, which occurs when an individual is unable to adapt to using technology [
]. In the future, it will be the patient's or customer's right to choose AI-driven recommendations over human care or vice versa. Nevertheless, we propose that AI application developers and programmers devise practical strategies to anticipate possible concerns and minimize risk beliefs to encourage individuals to use AI technology for health care purposes. Our results highlight that patients may have various reactions to AI (as substituting or augmenting) technology. Thus, different strategies and policies may be needed to successfully implement AI as a substituting or as an augmenting technology and to address potential concerns and risks.

Methods
Study Design
To understand patients' perceptions of AI applications in health care, we designed a 2×3 experiment that crossed a type of health condition (ie, acute or chronic) with three different types of clinical encounters (ie, AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI [traditional in-person visit]). In each scenario, we included two essential pieces of information: (1) the type of health condition and (2) the type of clinical encounter. We propose that the interactions between these two elements could lead to a comprehensive evaluation of how patients with different health conditions would perceive AI clinical applications in health care. It should be mentioned that participants of this study were actually suffering from either a chronic or acute disease. First, individuals entered their signs and symptoms to indicate what disease they were suffering from. A filtering question was included at the beginning of the survey to categorize patients into the chronic or acute group. Each group was then randomly assigned to a hypothetical clinical encounter. For instance, an individual with an actual chronic illness was given a hypothetical situation in which they could use AI clinical applications under the physician's control.
The table below illustrates the six scenarios resulting from two types of diseases and three types of clinical encounters.

In this study, we considered health conditions as either acute or chronic conditions. Acute diseases come on rapidly and are accompanied by distinct symptoms requiring urgent or short-term care, followed by improvement after treatment. For example, a broken bone that might result from a fall must be treated by a doctor and will heal in time. In some cases, an acute illness such as the common cold will simply go away on its own. Most people with acute illnesses will recover quickly. By contrast, chronic conditions develop slowly, may worsen over an extended period (from months to years), and may have many warning signs or no signs at all. Some examples of common chronic conditions are arthritis, diabetes, chronic heart disease, depression, high blood pressure, high cholesterol, and chronic kidney disease. Unlike acute conditions, chronic health conditions cannot be cured, only controlled and managed.
Regarding the second factor (clinical encounters), we focused on three categories: AI clinical applications as substituting technology, AI clinical applications as augmenting technology, and no AI (traditional in-person visit). In the AI as substituting technology scenarios (Scenarios 1-1 and 1-2), we defined a setting where patients directly use AI clinical applications for health care purposes. In these scenarios, we described a situation in which individuals can use an AI clinical application when they are suffering from a disease. The steps of using AI applications were clearly explained to respondents. For instance, when feeling sick, they can directly enter their signs, symptoms, and critical health complaints into the AI clinical application. Their health information will be recorded in a large database. The AI system then analyzes their health data, compares them to the learned patterns (eg, the list of diseases and medicines), and draws some clinical conclusions. Finally, based on the pattern found, the AI creates a report including some diagnostic options, some treatment choices, prescription advice (eg, dose, frequency, and names of medications they need to take), and care planning (eg, resting at home, taking suggested medicines for a specific period, or visiting a professional immediately). In summary, we highlighted that AI clinical applications could analyze clinical data and make medical decisions for patients without direct physician interactions. Therefore, we can consider this scenario as using AI clinical applications without physician interactions to treat acute diseases or control chronic diseases.
In the AI as augmenting technology (with physician interaction) scenarios (Scenarios 2-1 and 2-2), we defined a setting in which patients have the option of using an AI clinical application that is monitored and controlled by their physician. In these scenarios, the physician will check the results and recommendations generated by the AI clinical application; discard some of the recommendations based on their experience and expertise; and make the final decision about the treatment choices, prescription options, and care planning. In summary, we emphasized that AI clinical applications can analyze clinical data and help physicians make medical decisions for patients. Thus, in this case, AI clinical applications are used under physicians' direct supervision, and we can consider this scenario as AI-physician interactions (interactions and collaborations between physicians and AI).
In the no AI (only physician) scenarios (Scenarios 3-1 and 3-2), a patient has the option of visiting their physician. First, the physician asks for the patient’s signs, symptoms, and critical health complaints through a conventional in-person visit. The physician analyzes collected health information. Then, based on the physician’s knowledge, expertise, and experience, they draw some clinical conclusions. In summary, we highlighted that through a face-to-face patient-physician encounter, the physician can analyze clinical data and make final medical decisions. Thus, we can consider this scenario as an in-person visit with physicians.
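As an illustration only (not part of the study materials), the screening and random-assignment logic described above can be sketched as follows; the scenario labels anticipate the naming used in the design table below.

```python
# Illustrative sketch of the 2x3 assignment logic (hypothetical code, not the
# study's survey platform): respondents are first screened into the acute or
# chronic group based on their actual condition, and each group is then
# randomly assigned to one of the three clinical-encounter scenarios.
import random

ENCOUNTERS = {
    "AI-only": "AI as substituting technology (no physician interaction)",
    "AI-physician": "AI as augmenting technology (with physician interaction)",
    "physician-only": "traditional in-person visit",
}

def assign_scenario(condition_type: str) -> str:
    """condition_type is 'Acute' or 'Chronic', taken from the screening question."""
    encounter = random.choice(list(ENCOUNTERS))
    return f"{condition_type}-{encounter}"

print(assign_scenario("Chronic"))  # eg, "Chronic-AI-physician"
```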
The following table displays the 2×3 experimental design of the six scenarios used in this study and the treatment group/scenario names.

Type of illness | AIa as substituting technology without physician interaction | AI as augmenting technology with physician interaction | Traditional in-person visit
Acute: temporary, short-term diseases | Scenario 1-1: Acute-AI-only | Scenario 2-1: Acute-AI-physician | Scenario 3-1: Acute-physician-only |
Chronic: long-lasting diseases | Scenario 1-2: Chronic-AI-only | Scenario 2-2: Chronic-AI-physician | Scenario 3-2: Chronic-physician-only |
aAI: artificial intelligence.
Each experiment included three sections. First, the scenario was described, which detailed each experiment’s purpose (eg, in Scenario 1-2, the objective was defined as using AI clinical applications to control and monitor a chronic disease). Second, a set of questions about the nine outcome variables was provided to evaluate the respondents’ perceptions based on the given scenario. For example, subjects were asked to reflect on possible trust issues with AI applications (in Scenarios 1-1 and 1-2), with collaborative intelligence (in Scenarios 2-1 and 2-2), and with physician interactions (in Scenarios 3-1 and 3-2). Finally, we asked our subjects to provide some demographic information.
Question Development
The main aim of this study was to evaluate individuals’ perceptions of the health care service options described by six scenarios. We used the following variables to measure patients’ perceptions: perceived performance risks, perceived communication barriers, perceived social biases, perceived privacy concerns, perceived trust, perceived transparency of regulatory standards, perceived liability issues, perceived benefits, and intention to use. We primarily included these variables to highlight the main barriers and facilitators of using AI clinical applications indicated by previous research [
]. Some variables such as perceived communication barriers and liability issues may have shared effects on the physician-patient interaction. However, in this study, we only focused on the impact perceived by patients. This study drew on the existing literature to measure the nine outcome variables used in the experiments, adapting items from existing scales developed by studies mainly conducted in the AI and medical fields, with minor changes made to the questions to fit the given context (scenario). The descriptions of scenarios and final measure items used in this study are listed in . The following table shows the definitions of all outcome variables used in this study.

Outcome variables | Variable definition | Reference
Perceived performance risks | The degree to which an individual believes that the clinical encounter (which is explained in the scenario) will exhibit pervasive uncertainties | Marakanon and Panjakajornsak [ ]
Perceived communication barriers | The degree to which an individual feels that the clinical encounter (which is explained in the scenario) may reduce human aspects of relations in the treatment process | Lu et al [ ]
Perceived social biases | The degree to which a person believes that a clinical encounter (which is explained in the scenario) may lead to societal discrimination toward a certain patient group (eg, minority groups) | Reddy et al [ ]
Perceived privacy concerns | The extent to which individuals are concerned about how the clinical encounter (which is explained in the scenario) will collect, access, use, and protect their personal information | Zhang et al [ ]
Perceived trust | The degree to which an individual believes that the clinical encounter (which is explained in the scenario) is trustworthy | Luxton [ ]
Perceived transparency of regulatory standards | The extent to which an individual believes that regulatory standards and guidelines to assess the safety of the clinical encounter (which is explained in the scenario) are yet to be formalized | Cath [ ]
Perceived liability issues | The extent to which an individual is concerned about the liability and responsibility of using the clinical encounter (which is explained in the scenario) | Laï et al [ ]
Perceived benefits | The extent to which an individual believes that the clinical encounter (which is explained in the scenario) can improve diagnostics and care planning for patients | Lo et al [ ]
Intention to use | The extent to which an individual is willing to use the proposed clinical encounter (which is explained in the scenario) for diagnostics and treatments | Turja et al [ ]
Since this study's subjects were members of the general public, we took two steps to ensure that the definitions, scenarios, and questions were understandable to a lay audience. First, once the initial scenarios and surveys were developed, we consulted three professionals in the AI domain and two physicians (who were familiar with AI clinical applications) to improve our study's content validity and finalize the definitions, scenarios, and questions used in each survey. Consistent with the experts' suggestions, we modified the terms used to describe AI clinical applications, AI-physician interaction, as well as in-person examination, and improved the scenarios and questions to ensure that they were sufficiently transparent and easy to understand for the public. Second, we performed a face validity evaluation with 14 students (2 doctoral students in computer science, 1 doctoral student in IS, 4 master's students in computer science, 5 master's students in IS, and 2 medical students) to ensure that the readability of the scenarios and wording of the questions were acceptable and consistent with the objectives of our study. Thus, we reworded some ambiguous terms and removed technical language and jargon to describe the scenarios and develop the surveys in an understandable manner. It should be mentioned that graduate students may have a higher reading level than an average person. However, they were asked to detect and flag technical expressions and ambiguous terms that might not be clear to an average person. Therefore, the graduate students applied extra scrutiny to every detail to ensure the questions were sufficiently transparent for our potential sample.
Data Collection
This study was reviewed and approved by the Institutional Review Board of Florida International University, and the data collection was performed confidentially. Written informed consent was obtained from all participants. All methods used in this study were carried out in accordance with relevant guidelines and regulations.
We used a power analysis to identify the appropriate sample size per scenario. The results of the power analysis showed that for a range of medium (0.5) to high (0.8) effect size [
], with α=.05 and power of more than 0.8, the total minimum sample required is about 50 respondents per scenario. In this study, there were nine main outcome variables with 49 measures. Therefore, to reduce possible sampling errors, we initially collected a sample of 121 respondents per scenario to ensure an adequate sample size after data cleaning and matching respondents in different scenarios. Data were collected in May 2020 from Amazon's Mechanical Turk (MTurk) to obtain a representative group of subjects in the United States. MTurk is a survey tool used in previous research that is considered an acceptable means to collect individual-level data from the general population of interest [ ]. The surveys of six scenarios were posted to MTurk, and the respondents' location was limited to the United States. We enabled a microcode in the survey design to prevent respondents from taking each survey more than once. Following previous studies that used MTurk for data collection, a monetary reward (US $0.70) was given as the incentive for participation. The range of average completion time for the six experimental groups was between 5 minutes, 17 seconds and 8 minutes, 49 seconds, which indicated acceptable responses in terms of the time spent on each survey by the participants.
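As a rough, hedged illustration of the a priori power analysis described at the start of this subsection (not the authors' actual calculation), the following Python sketch uses statsmodels with the stated effect-size range, α=.05, and power of 0.8; the choice of test families and the Cohen f values for the ANOVA variant are our own assumptions.

```python
# Minimal sketch of an a priori power analysis (illustrative assumptions only).
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

alpha, power = 0.05, 0.80

# Pairwise view: Cohen's d from medium (0.5) to large (0.8), as cited in the text
ttest_power = TTestIndPower()
for d in (0.5, 0.8):
    n_per_group = ttest_power.solve_power(effect_size=d, alpha=alpha, power=power)
    print(f"Two-group comparison, d={d}: ~{n_per_group:.0f} respondents per group")

# Omnibus view: one-way ANOVA across the 6 scenario groups (Cohen's f assumed)
anova_power = FTestAnovaPower()
for f in (0.25, 0.40):  # conventional medium and large f values (assumption)
    n_total = anova_power.solve_power(effect_size=f, alpha=alpha, power=power, k_groups=6)
    print(f"One-way ANOVA, f={f}: ~{n_total:.0f} respondents in total")
```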
Data Analysis
IBM SPSS Statistics V21.0 was used to analyze the data. Propensity score matching was used with a tolerance of 0.05 to match participants and avoid any demographic bias between scenarios. To find each outcome variable's total score, we calculated unweighted sum scores of the items for each variable. Analysis of variance (ANOVA) was then performed to examine the differences between the six proposed scenarios for each of the outcome variables: perceived performance risk, perceived social biases, perceived privacy concerns, perceived trust, perceived communication barriers, perceived concerns about the transparency of regulatory standards, perceived liability issues, perceived benefits, and intention to use. Prior to ANOVA, the Levene test was run to check the homogeneity of variance, as this is one of the fundamental assumptions of ANOVA. The assumption of homogeneity of variance held for all outcome variables. The Scheffe posthoc test was used to identify which scenarios significantly differed from each other for each outcome variable.
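The analyses above were run in SPSS; as a hedged illustration of the same pipeline, the sketch below shows how one outcome variable (here, hypothetically, a five-item trust score) could be analyzed in Python, with the unweighted sum score, Levene test, one-way ANOVA, and Scheffe posthoc comparisons implemented from their textbook definitions. The file name, column names, and item count are assumptions for illustration only.

```python
# Illustrative analysis pipeline for one outcome variable (assumed data layout).
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats

# Assumed layout: one row per respondent, item columns trust_1..trust_5,
# and a "scenario" column holding the six treatment group labels.
df = pd.read_csv("survey_responses.csv")  # hypothetical file name
df["trust_score"] = df[[f"trust_{i}" for i in range(1, 6)]].sum(axis=1)  # unweighted sum score

grouped = df.groupby("scenario")["trust_score"]
labels = list(grouped.groups)
groups = [grouped.get_group(lab).to_numpy() for lab in labels]

# Homogeneity of variance (ANOVA assumption)
print("Levene test:", stats.levene(*groups))

# One-way ANOVA across the six scenarios
f_stat, p_val = stats.f_oneway(*groups)
print(f"ANOVA: F={f_stat:.2f}, P={p_val:.3f}")

# Scheffe posthoc comparisons, computed from the standard formula
k = len(groups)
n_total = sum(len(g) for g in groups)
mse = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n_total - k)  # within-group mean square
for i, j in combinations(range(k), 2):
    diff = groups[i].mean() - groups[j].mean()
    se = np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
    f_scheffe = (diff / se) ** 2 / (k - 1)
    p = stats.f.sf(f_scheffe, k - 1, n_total - k)
    if p < 0.05:
        print(f"{labels[i]} vs {labels[j]}: diff={diff:.2f} (SE={se:.2f}), P={p:.3f}")
```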
Results
After cleaning the data for biases and incomplete responses, there were a total of 634 completed surveys. After matching across scenarios, there were 105 participants in Acute-AI-only, 104 participants in Chronic-AI-only, 113 participants in Acute-AI-physician, 103 participants in Chronic-AI-physician, 105 participants in Acute-physician-only, and 104 participants in Chronic-physician-only. The detailed demographic information of the six scenarios is reported in
. In summary, 44% of participants were women; approximately 30% of participants were between 20 and 29 years old, 33% were between 30 and 39 years old, 18% were between 40 and 49 years old, and 17% were above 50 years of age. The majority of the participants were White (65%), followed by 17% Asian, 11% African American, and 5% Hispanic. Regarding the level of education, 6% were high school graduates, 11% completed some college, 8% held a 2-year degree, 48% had a bachelor's degree, and 23% had a master's degree. Regarding employment status, most of the participants were full-time employees (72%), followed by 14% part-time employees, 8% unemployed, 2% retired, and 4% students. Approximately 15% of participants in our study reported an annual household income of less than US $25,000, 26% reported an income between US $25,000 and US $49,999, 22% reported an income between US $50,000 and US $74,999, 18% reported an income between US $75,000 and US $99,999, and approximately 19% reported an income of more than US $100,000.

Across the six scenarios, there were no significant differences in terms of gender (χ²₅=1.76, P=.88), age (χ²₂₅=30.31, P=.21), race (χ²₂₅=22.37, P=.62), level of education (χ²₃₀=37.89, P=.15), employment (χ²₂₀=16.20, P=.70), and annual household income (χ²₂₅=19.85, P=.76). Respondents were also asked to report their personal innovativeness on a Likert scale to ensure this factor would not introduce any bias into different scenarios. The ANOVA results across the six scenarios revealed no significant differences among respondents regarding the level of personal innovativeness (P=.19).
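As a minimal sketch (with hypothetical column names, not the study's actual code), the demographic balance checks reported above can be reproduced with chi-square tests of independence on scenario-by-demographic contingency tables:

```python
# Illustrative balance check across the six scenarios (assumed column names).
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("survey_responses.csv")  # hypothetical file name

for demographic in ["gender", "age_group", "race", "education", "employment", "income"]:
    crosstab = pd.crosstab(df[demographic], df["scenario"])
    chi2, p, dof, _ = chi2_contingency(crosstab)
    print(f"{demographic}: chi2(df={dof})={chi2:.2f}, P={p:.2f}")
```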
The items were adapted from previous studies with slight changes to fit them into this research context. All items were measured on a 5-point Likert-type scale, with 1 indicating “strongly disagree” and 5 indicating “strongly agree.”
The following table shows the number of items and Cronbach α values per outcome variable, which were all above .70, the recommended threshold value [ ], implying adequate reliability per outcome variable.

Outcome variables | Number of items | Cronbach α |
Perceived performance risks | 5 | .92 |
Perceived social biases | 4 | .85 |
Perceived privacy concerns | 6 | .93 |
Perceived trust | 5 | .92 |
Perceived communication barriers | 5 | .92 |
Perceived transparency of regulatory standards | 5 | .92 |
Perceived liability issues | 6 | .93 |
Perceived benefits | 7 | .92 |
Intention to use | 5 | .92 |
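For reference, the Cronbach α values in the table can be computed from item-level responses using the standard formula; the sketch below (with hypothetical column names) shows the calculation for a five-item scale.

```python
# Illustrative Cronbach alpha calculation (standard formula; assumed columns).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    sum_item_variances = items.var(axis=0, ddof=1).sum()
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_variances / total_score_variance)

df = pd.read_csv("survey_responses.csv")  # hypothetical file name
alpha = cronbach_alpha(df[[f"trust_{i}" for i in range(1, 6)]])
print(f"Cronbach alpha (perceived trust, 5 items): {alpha:.2f}")
```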
The summary statistics (mean score, SD) per outcome variable are presented in the following table, together with the ANOVAb F statistic (df=5, 628) and P value for each outcome variable. Some of the trends are evident from these results. For example, we can observe lower privacy concerns and liability issues for traditional in-person examinations than for AI-based interactions.

Illness type | AIa as substituting technology (without physician interaction), mean (SD) | AI as augmenting technology (with physician interaction), mean (SD) | Traditional in-person visit, mean (SD)
Perceived performance risks: F=1.36, P=.24
Acute, short-term illness | 16.3 (5.1) | 16.6 (5.5) | 15.4 (5.0)
Chronic, long-lasting illness | 16.2 (5.1) | 17.1 (4.8) | 15.9 (5.3)
Marginal meansc | 16.3 (5.1) | 16.8 (5.1) | 15.6 (5.1)
Perceived biases: F=0.86, P=.51
Acute, short-term illness | 12.8 (3.7) | 13.0 (4.4) | 12.2 (3.8)
Chronic, long-lasting illness | 13.0 (3.9) | 13.2 (3.8) | 12.4 (4.3)
Marginal means | 12.9 (3.8) | 13.1 (4.1) | 12.3 (4.0)
Perceived privacy concerns: F=3.35, P=.005
Acute, short-term illness | 19.0 (6.7) | 20.8 (6.1) | 17.7 (6.1)
Chronic, long-lasting illness | 20.5 (5.8) | 19.8 (6.1) | 19.5 (7.0)
Marginal means | 19.8 (6.3) | 20.3 (6.1) | 18.6 (6.6)
Perceived trust: F=6.27, P<.001
Acute, short-term illness | 16.5 (5.0) | 17.3 (4.8) | 18.6 (4.4)
Chronic, long-lasting illness | 15.4 (4.9) | 16.8 (4.7) | 18.2 (4.8)
Marginal means | 16.0 (5.0) | 17.1 (4.8) | 18.4 (4.6)
Perceived communication barriers: F=9.24, P<.001
Acute, short-term illness | 17.4 (5.7) | 18.1 (5.2) | 14.7 (4.8)
Chronic, long-lasting illness | 17.1 (4.9) | 17.5 (4.8) | 14.6 (5.7)
Marginal means | 17.3 (5.3) | 17.8 (5.0) | 14.6 (5.3)
Perceived transparency of regulatory standards: F=9.42, P<.001
Acute, short-term illness | 17.9 (5.0) | 18.1 (5.1) | 14.9 (4.9)
Chronic, long-lasting illness | 17.6 (4.7) | 17.6 (5.0) | 15.0 (5.5)
Marginal means | 17.8 (4.9) | 17.8 (5.0) | 14.9 (5.2)
Perceived liability issues: F=6.27, P<.001
Acute, short-term illness | 21.3 (6.4) | 22.1 (5.7) | 18.5 (6.0)
Chronic, long-lasting illness | 20.8 (5.8) | 20.6 (6.2) | 18.3 (6.7)
Marginal means | 21.0 (6.1) | 21.4 (6.0) | 18.4 (6.4)
Perceived benefits: F=3.28, P=.006
Acute, short-term illness | 24.5 (6.2) | 25.9 (5.5) | 26.1 (5.9)
Chronic, long-lasting illness | 23.6 (6.5) | 24.2 (6.6) | 26.0 (6.2)
Marginal means | 24.1 (6.4) | 25.1 (6.1) | 26.1 (6.0)
Intention to use: F=9.71, P<.001
Acute, short-term illness | 16.6 (4.8) | 17.4 (4.9) | 19.3 (4.6)
Chronic, long-lasting illness | 15.8 (5.3) | 16.5 (5.2) | 19.3 (4.6)
Marginal means | 16.2 (5.1) | 17.0 (5.0) | 19.3 (4.6)
aAI: artificial intelligence.
bANOVA: analysis of variance.
cMarginal means across the acute short-term illness and chronic long-lasting illness groups.
Significant differences (P<.05) between different groups were found for the following variables: perceived privacy concern, perceived trust, perceived communication barriers, perceived concerns about transparency in regulatory standards, perceived liability issues, perceived benefits, and intention to use. No significant difference was found between scenarios regarding perceived performance risk and perceived social biases.
A summary of significant differences between scenarios from the Scheffe posthoc test is shown in the table below. Patients suffering from an acute, temporary, short-term disease were significantly more concerned about the privacy of their health information when AI clinical applications with physician interaction were used compared with having a traditional in-person interaction with their physicians (P=.03). No significant differences were found in terms of perceived privacy concerns among patients with chronic conditions across scenarios.
Concerning trust, our results showed that patients with chronic illnesses found AI clinical applications to be less trustworthy compared to traditional diagnostic and treatment processes when they interact directly with the physicians (P=.004). The trust in physicians was also significantly higher for patients with acute conditions (P<.001).
Regarding perceived communication barriers, patients were significantly more concerned that AI clinical applications may reduce or eliminate the human aspect of relations between patients and professional care providers in comparison with face-to-face physician interactions for both acute (P=.01) and chronic (P<.001) health conditions. Similarly, when AI clinical applications are used in addition to physician interaction, there were still significantly greater concerns about lack of human relations than in face-to-face physician visits for both acute (P=.03) and chronic (P=.005) illnesses.
Further, the results showed that patients were significantly more concerned about the transparency of regulatory standards to assess AI algorithms and tools in comparison with the transparency of guidelines to monitor the performance of physicians’ practices for both acute (P=.002) and chronic (P=.02) conditions. Similarly, when AI clinical applications are used in addition to physician interactions, patients were significantly more concerned about the transparency of guidelines for AI-physician interactions than traditional in-person physician visits for acute (P=.001) and chronic (P=.02) illnesses.
Regarding patients' concerns about liability issues, patients with acute illnesses were significantly more concerned when AI clinical applications were used under physicians' control than about physician liability in traditional visits (P=.003). This may be because of the lack of clarity about who is responsible if appropriate AI-recommended treatment options are mistakenly dismissed or if the AI offers wrong recommendations. Interestingly, patients suffering from a chronic condition were significantly more concerned about liability issues when using only AI clinical applications (P=.04) or AI tools with physician control (P=.001) compared with traditional in-person visits.
Lastly, patients with acute illnesses indicated significantly higher intentions to use in-person visits than only AI clinical applications (P=.01). By contrast, patients with chronic illnesses were significantly more willing to use in-person visits compared to only AI tools (P<.001) as well as AI clinical applications under physician control (P=.006). The detailed results, including nonsignificant differences, are included in
Scenarios compared | Mean difference (SE) | P value | 95% CI
Perceived privacy concern
Acute-AI-physician vs Acute-physician-only | 3.07 (0.86) | .03 | 0.21 to 5.92
Perceived trust
Chronic-AI-only vs Acute-physician-only | –3.22 (0.66) | <.001 | –5.42 to –1.01
Perceived communication barriers
Acute-AI-only vs Acute-physician-only | 2.75 (0.72) | .01 | 0.36 to 5.14
Acute-AI-only vs Chronic-physician-only | 2.86 (0.72) | .01 | 0.46 to 5.26
Chronic-AI-only vs Acute-physician-only | 2.41 (0.72) | .05 | 0.01 to 4.81
Chronic-AI-only vs Chronic-physician-only | 2.52 (0.72) | .03 | 0.12 to 4.92
Acute-AI-physician vs Acute-physician-only | 3.38 (0.70) | <.001 | 1.03 to 5.73
Acute-AI-physician vs Chronic-physician-only | 3.49 (0.71) | <.001 | 1.13 to 5.84
Chronic-AI-physician vs Acute-physician-only | 2.87 (0.72) | .01 | 0.46 to 5.27
Chronic-AI-physician vs Chronic-physician-only | 2.98 (0.72) | <.001 | 0.57 to 5.38
Perceived transparency of regulatory standards
Acute-AI-only vs Acute-physician-only | 3.04 (0.70) | <.001 | 0.71 to 5.36
Acute-AI-only vs Chronic-physician-only | 2.91 (0.70) | <.001 | 0.57 to 5.24
Chronic-AI-only vs Acute-physician-only | 2.77 (0.70) | .01 | 0.44 to 5.10
Chronic-AI-only vs Chronic-physician-only | 2.64 (0.70) | .02 | 0.30 to 4.97
Acute-AI-physician vs Acute-physician-only | 3.22 (0.68) | <.001 | 0.94 to 5.51
Acute-AI-physician vs Chronic-physician-only | 3.09 (0.69) | <.001 | 0.80 to 5.38
Chronic-AI-physician vs Acute-physician-only | 2.71 (0.70) | .01 | 0.37 to 5.04
Chronic-AI-physician vs Chronic-physician-only | 2.57 (0.70) | .02 | 0.23 to 4.91
Perceived liability issues
Acute-AI-only vs Chronic-physician-only | 2.92 (0.85) | .04 | 0.09 to 5.75
Acute-AI-physician vs Acute-physician-only | 3.53 (0.83) | <.001 | 0.75 to 6.30
Acute-AI-physician vs Chronic-physician-only | 3.73 (0.83) | <.001 | 0.94 to 6.51
Intention to use
Acute-AI-only vs Acute-physician-only | –2.75 (0.68) | .01 | –5.02 to –0.49
Acute-AI-only vs Chronic-physician-only | –2.73 (0.68) | .01 | –5.00 to –0.46
Chronic-AI-only vs Acute-physician-only | –3.52 (0.68) | <.001 | –5.79 to –1.25
Chronic-AI-only vs Chronic-physician-only | –3.49 (0.68) | <.001 | –5.76 to –1.22
Chronic-AI-physician vs Acute-physician-only | –2.79 (0.68) | .01 | –5.06 to –0.52
Chronic-AI-physician vs Chronic-physician-only | –2.76 (0.68) | .01 | –5.04 to –0.48
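For readers who wish to reproduce this type of comparison on their own data, the following is a minimal sketch of how pairwise Scheffé post hoc comparisons across six scenario groups could be computed. It uses simulated scores and placeholder group labels rather than the study’s data or analysis code, and it assumes a simple one-way decomposition of the six experimental cells.

```python
# Minimal sketch (not the authors' analysis code) of pairwise Scheffé post hoc
# comparisons for a one-way design with six scenario groups. All labels,
# sample sizes, and scores below are simulated placeholders.
from itertools import combinations

import numpy as np
from scipy import stats


def scheffe_posthoc(groups, alpha=0.05):
    """Return pairwise Scheffé comparisons for a dict {label: 1-D array of scores}."""
    labels = list(groups)
    k = len(labels)                                   # number of groups (here, 6 scenarios)
    n_total = sum(len(groups[g]) for g in labels)
    df_within = n_total - k

    # Within-group (error) mean square from the one-way ANOVA decomposition
    ss_within = sum(((groups[g] - groups[g].mean()) ** 2).sum() for g in labels)
    mse = ss_within / df_within

    results = []
    for a, b in combinations(labels, 2):
        diff = groups[a].mean() - groups[b].mean()
        se = np.sqrt(mse * (1 / len(groups[a]) + 1 / len(groups[b])))
        f_contrast = (diff / se) ** 2                 # F statistic for the pairwise contrast
        # Scheffé criterion: refer F_contrast / (k - 1) to the F(k-1, df_within) distribution
        p_value = stats.f.sf(f_contrast / (k - 1), k - 1, df_within)
        results.append((f"{a} vs {b}", round(diff, 2), round(se, 2),
                        round(p_value, 3), p_value < alpha))
    return results


# Hypothetical usage with simulated scores (~105 respondents per scenario)
rng = np.random.default_rng(42)
scenarios = ["Acute-AI-only", "Acute-AI-physician", "Acute-physician-only",
             "Chronic-AI-only", "Chronic-AI-physician", "Chronic-physician-only"]
data = {s: rng.normal(loc=55 + 2 * i, scale=12, size=105) for i, s in enumerate(scenarios)}

for comparison in scheffe_posthoc(data):
    print(comparison)
```

Because each pairwise contrast is judged against the Scheffé criterion, the family of all possible contrasts is controlled at the chosen significance level, which makes the procedure conservative for purely pairwise comparisons.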
Discussion
Principal Findings
Given the promising opportunities created by AI technology (such as better diagnostic and decision support), the main question is when AI applications will become part of routine clinical practice [
]. AI embedded in smart devices democratizes health care by bringing AI clinical applications into patients’ homes [ ]. Nevertheless, some concerns related to the use of AI need to be addressed. As previous studies introduced several concerns and challenges with AI [ ], this study’s main focus was to analyze the perceptions of people with different health conditions about the use of AI clinical applications as an alternative for diagnostics and treatment purposes. This study asked participants who were actually suffering from an acute or chronic disease to consider hypothetical situations (ie, using AI clinical applications). If the study had recruited participants who are current users of AI clinical applications for health care purposes, the results could have been different. Current users of AI applications in health care settings may have more accurate perceptions about AI, and the findings could be more practical. In the following subsections, we propose our theoretical contributions and practical implications related to each outcome variable.
Perceived Communication Barriers
The results showed that people with both acute and chronic health conditions may believe that both AI applications and collaborative intelligence (ie, AI combined with physician interaction) can lead to communication barriers. This point is in line with previous studies highlighting that the use of AI applications in service delivery (such as health care) may cause noteworthy communication barriers between customers and service providers [
]. Reliance on AI clinical applications may reduce physicians’ and patients’ interactions and conversations [ ]. Consumers may refuse to use AI applications because they need human social interaction during service encounters [ ]. AI technology fundamentally changes traditional physician-patient communications. Thus, individuals may worry that they will lose face-to-face cues and personal interactions with physicians. AI creates challenges to patient-clinician interactions, as clinicians need to learn how to interact with the AI system for health care delivery and patients are required to reduce their fear of technology [ ]. As AI continues to proliferate, users still encounter challenges concerning effective use, such as how the partnership between AI systems and humans could become synergistic [ ]. A previous study proposed that more sophisticated technologies should be integrated into current AI clinical applications to improve human-computer interactions and streamline the information flow between the two parties [ ]. Therefore, the nature of AI clinical applications (even coupled with physician controls) may reduce conversation between physicians and patients, resulting in the emergence of more risk beliefs.
If individuals are concerned that AI applications may reduce the human aspects of relations in medical contexts, they may lose face-to-face cues and personal interactions with physicians and find themselves in a more passive position for making health-related decisions. This finding is consistent with a study in the chatbot context (within the area of AI systems), which indicated that users have stronger feelings of copresence and closeness when the chatbot uses social cues [
]. In the context of robot care, a study showed that when robots are used in rehabilitation, patients view them as reducing human contact [ ]. Developers need to add more interactive and entertaining social cues to AI clinical applications to address the possible communication barriers between users and AI. For instance, AI-driven recommendations and assistance can be appealing if the application allows users more time to interact with it and establish empathy.
Perceived Privacy Concerns
When suffering from an acute illness, people may perceive more serious privacy concerns if they use AI clinical applications that are under physician control. Thus, they may prefer face-to-face interactions with physicians to reduce their privacy concerns. Deeper privacy concerns may have roots in two common perceptions. The first is the belief that anonymized data can be reidentified through AI models, which in turn could increase the likelihood of privacy invasion and data breaches [
]. The second is that AI systems need massive data sets; thus, patients are concerned that their health information may be collected or shared without permission for purposes other than treatment [ ].
Perceived Trust
The findings imply that individuals with chronic conditions may not trust AI clinical applications if no physician interaction is included in health care delivery. According to previous studies, the nature of AI models (such as deep learning) may reduce the transparency of AI systems and threaten patient trust, resulting in higher risk beliefs [
]. When patients cannot understand the inner workings of AI applications (such as decision-making models), they may exhibit lower trust in their functions and in how they generate treatment solutions and recommendations. Thus, people with chronic diseases may be more willing to trust direct patient-physician interactions to control and manage their symptoms.
Perceived Accountability Issues
Accountability and liability are the other major concerns related to the use of AI. In this study, patients with acute conditions were more likely to be concerned about liability issues in the scenarios of both purely AI clinical applications and AI-physician interactions. Acute diseases are often accompanied by distinct symptoms that require urgent or short-term care. Thus, patients with severe and sudden signs and symptoms may seek quick care planning, accurate diagnosis, and reliable treatment options to cure their health problems promptly. In this situation, patients will become more nervous if they do not know who is held responsible for possible medication errors (such as wrong drug selection, wrong dose, or wrong quantity). This finding is consistent with previous studies in public health demonstrating the legal concerns surrounding who can be held accountable for AI-based decisions when errors occur using AI systems [
]. In general, society is yet to fully grasp many of the accountability and responsibility considerations associated with AI and big data [ ]. Accountability involves several stakeholders such as AI developers, government agencies, health care institutions, health care professionals, and patient communities. Nevertheless, it is still not clear how regulatory concerns about responsibility and accountability for solutions generated by AI systems can be dealt with formally [ ]. Liability becomes more complex because it is not transparent to what extent AI systems can guide and control clinical practices [ ]. Responsibility concerns are not limited to incidents in which AI may generate errors; another aspect of liability risk arises when right and appropriate treatment options recommended by AI are mistakenly dismissed [ ]. Thus, the higher the perceived liability issues, the greater the risk beliefs associated with AI. Regulatory agencies and health care organizations require clear policies to identify each stakeholder’s responsibility (eg, patients, physicians, hospitals, and AI developers) when AI clinical applications are widely offered.
Perceived Transparency of Regulatory Standards
Regarding the regulatory risks associated with the transparency of standards, patients with either acute or chronic conditions were concerned about using purely AI-based services as well as AI applications under physicians’ direct supervision. This is in line with previous studies, which highlight that regulatory concerns are critical challenges to the use of AI in health care as the policies and guidelines for AI applications are not yet transparent [
]. The existing literature indicates that regulatory agencies require agreement on a set of standards that medical AI rollout must be rated against, such as determining the reliability of auditing the decisions made by autonomous AI clinical applications [ ]. Due to the intelligent nature of AI systems, regulatory agencies should establish new requirements, official policy, and safety guidelines regarding AI rollout in health care [ ]. For example, there is a legal need to evaluate the decisions made by AI systems in case of litigation. AI applications operate based on self-learning models, which improve their performance over time [ ]. This inner mechanism differentiates AI applications from other health care tools and gives rise to new regulatory concerns that may not arise in other domains. Generally, algorithms that change continuously, with features that are not limited to the originally accepted clinical trials, may need a new range of policies and guidelines [ ]. Regulatory authorities in many countries are yet to formalize standards to evaluate and maintain AI’s safety and impact [ ]. Thus, people may become concerned if an appropriate regulatory and accreditation system regarding AI clinical applications is not yet in place.
The lack of clear guidelines to monitor the performance of AI applications in the medical context can lead to higher risk beliefs associated with AI. Hence, if health care organizations cannot reduce regulatory concerns, many individuals may refuse to use AI clinical applications and request traditional interactions with physicians. Even if hospitals decide to use AI applications as supportive services under health care professionals’ supervision, the regulatory concerns should be mitigated prior to implementing AI systems. Regulatory agencies should establish normative standards and evaluation guidelines for implementing and using AI in health care in cooperation with health care institutions. The policies should clarify how AI clinical applications will be designed and developed in health care to comply with accepted ethical principles (such as fairness and health equity). Regular audits and ongoing monitoring and reporting systems can be used to continuously evaluate the safety, quality, transparency, and ethical factors associated with services delivered through AI clinical applications.
Perceived Performance Risks
No significant differences were found across the scenarios regarding performance risks. This result may reflect the belief of people with either acute or chronic diseases that the possibility of making medical errors with AI clinical applications would be the same as that for traditional in-person visits with physicians, even when doctors monitor the AI applications. Thus, the findings provide no solid evidence that individuals believe AI models and their features exhibit functional errors or technological uncertainties that endanger patient safety and lead to death or injuries. Respondents reported that any of the clinical encounters (ie, traditional, collaborative intelligence, or AI applications) could lead to incorrect diagnoses or wrong treatments.
Perceived Benefits
The results demonstrated significant differences in perceived benefits across the three alternative clinical encounters. Although the post hoc test did not show a significant difference among the six experimental groups, we observed that people with either acute or chronic conditions associated more benefits with direct interactions with their physicians. Among the AI options, only Acute-AI-physician (acute conditions and collaborative intelligence) showed responses similar to those given for traditional face-to-face interactions. Moreover, the scores of Acute-AI-physician on perceived risks such as privacy concerns, communication barriers, and regulatory and liability issues were not significantly different from those of the other AI-based scenarios. Therefore, since the Acute-AI-physician scenario was associated with relatively higher benefits and nonsignificant differences in risk perceptions, we can argue that it might be a better option to start with. Accordingly, we recommend that implementing an AI-based service that physicians directly control and monitor would be an acceptable choice for patients with acute diseases.
Moreover, these results suggest that health care organizations, physicians, and AI application developers need to highlight potential AI benefits in their marketing campaigns to promote usability and the value of their AI applications, and ultimately increase the rate of usage. This argument is consistent with other studies suggesting that patients become more likely to use AI clinical applications if they believe they can improve diagnostics, prognosis, and patient management systems [
]. Specific marketing strategies in medical AI application companies and hospitals can be developed to enhance users’ awareness about both human and computer intelligence. These strategies should guide physicians on maintaining interactions with patients while using AI clinical applications that could suggest accurate care planning, reduce health care costs, and boost health care outcomes. Thus, highlighting the performance benefits of AI, such as accuracy of diagnosis, reliability of data analysis, efficiency of care planning, and consistency of treatments, in communications with users and in marketing materials may increase individuals’ intention to at least try services provided by AI applications in health care.
Intention to Use
The results indicated that people with chronic diseases are less willing to solely use AI clinical applications or AI applications controlled by physicians. Since chronic conditions encourage patients to visit their physicians frequently to consult them about their illness signs, symptoms, and progress, they are generally more likely to prefer human-human consultations over human-computer interactions. This point highlights that patients suffering from a long-lasting disease may not be ready to use pure or partial AI clinical applications to control their chronic conditions. Therefore, health care organizations need to exercise caution when implementing these applications for chronic diseases.
Furthermore, it should be mentioned that even though one of the primary outcome variables in this study was the intention to use, we do not propose that an unconditional acceptance of AI clinical applications is the ideal situation in health care. On the contrary, we demonstrate how important value-based considerations are when implementing AI applications in health care contexts. If the rejection of medical AI is explained by large and unaddressed technological, ethical, or regulatory concerns, there is little sense in partially coping with these concerns by mandating the use of medical AI across the entire patient spectrum. We propose that a successful rollout of AI clinical applications be managed with knowledge and consideration of potential users’ benefit and risk perceptions. There is growing interest in research about AI-centric technologies; however, individuals have not yet integrated AI applications into many aspects of their lives [
]. We can argue that the public’s general technical knowledge about AI performance and how it works is still at an early stage. If AI clinical applications gained more ground in everyday care work, people would have a better perspective on the benefits and risks associated with them and actually start using them.
The Role of Training
AI clinical applications should be designed in a way that respects patients’ autonomy and decision-making freedom. AI agents should not follow a coercive approach that forces patients to make health-related decisions under pressure. Regulations should clarify patients’ roles in relation to AI applications so that they are aware that they can refuse AI-based treatments where possible [
]. An important aspect that needs to be built into AI systems in health care is the transparency of AI algorithms so that the AI system does not remain a black box to users. Technical education, health knowledge, and explicit informed consent should be emphasized in the AI implementation model to prepare patients for AI use. Training should target the patient community to ensure that patients obtain sufficient information to make informed health decisions. Thus, if users understand the basics of AI applications and the potential benefits and limitations they can bring to health care, they will become more willing to accept AI use to obtain improved health care delivery. Under this circumstance, users will be active partners of AI applications rather than passive recipients of AI recommendations.
Limitations and Future Work
Although this study provides theoretical and practical implications, it has some limitations. First, we collected data from a sample of respondents from the United States. Care work culture and technology use differ across countries. Therefore, we recommend that future studies consider subjects from other geographical locations, such as other developed countries and developing countries that may not yet have implemented technologically advanced infrastructures in health care services (such as smart devices or AI clinical applications). Second, our study used an online survey to recruit participants digitally, and several measures were taken to provide clear definitions and scenarios. Since a self-rated sample of participants on MTurk was used, there is still a small chance that some respondents were not completely aware of AI technology and may have formed their own perceptions of the information technology artifact. Therefore, we suggest that further studies use a different method to ensure that subjects are knowledgeable about medical AI. For instance, future research can recruit informed patients who are directly referred by providers using patient self-management tools such as wearable devices with embedded AI.
Third, due to the online data collection procedure through MTurk, we only considered respondents who could access the internet and were healthy enough to participate in an online survey. Although the MTurk pool has been recognized as an acceptable data collection means for academic research, caution should be exercised when generalizing this study’s results. Future researchers may extend this study by using other data collection methods to reach out to patients. Since the experiments did not occur within a health care setting (such as a hospital), the generalizability of our findings could have been limited. Thus, it would be interesting for future studies to repeat the same experiments through simulations with treating physicians and patients suffering from acute and chronic diseases visiting a health care center (eg, a hospital).
Fourth, the lack of educational background diversity (eg, 71% of participants had higher education) and age variation (eg, 63% were younger than 40 years) in the sample may be considered a limitation for the generalizability of our results. Thus, it is recommended that future studies draw samples with more representative subjects across wider age groups and various levels of education. Fifth, we used the general concept of AI, and no specific type of AI clinical application was examined. Users’ perceptions may differ depending on the specific type of AI application considered. However, this study can also serve as a starting point for further empirical studies in the context of individual adoption of AI clinical applications. For instance, it would be interesting to investigate how alternative AI application brands influence risk beliefs, perceived benefits, and intention to use. Finally, we defined AI applications as tools that consumers can voluntarily choose to use for health care management. Another promising research avenue would be to examine public perspectives in other health care contexts, such as when AI applications are implemented and used in hospitals and health care professionals recommend that patients start to use these applications. We also recommend a follow-up study examining users’ value perceptions in the context of mandatory AI applications used for diagnosing and completing patient treatments.
Conclusions
Disruptive advances in technology inevitably change societies, communications, and working life. Technology and health care have become inseparable in recent times. One of the fundamental technological changes that could have significant health care effects is the widespread implementation and use of AI clinical applications. AI technology is an integral element of many organizations’ business models and a critical strategic component in the plans of many sectors, including health care institutions. Implementing advanced information systems (such as AI) in health care requires an in-depth understanding of the factors associated with technology acceptance among groups of stakeholders. Patients are among the most important stakeholders of AI clinical applications. Due to the distinct characteristics of the health care sector, the implementation of AI applications should be conducted with several necessary considerations. From the public perspective, using AI applications is a form of endorsing them. Our results highlight that there are still noticeable concerns about implementing AI clinical applications in diagnostics and treatment recommendations for patients with both acute and chronic illnesses, even if these tools are used as a recommendation system under physician experience and wisdom. Our study shows that individuals may still not be ready to accept and use AI clinical applications owing to some risk beliefs. Before implementing AI, more studies are needed to identify the challenges that raise concerns for the implementation and use of AI tools. We recommend that health care organizations prioritize addressing the concerns that contribute to risk beliefs about using AI clinical applications. If privacy concerns, trust issues, communication barriers, concerns related to the transparency of regulatory standards, and liability risks are not analyzed, rationalized, and resolved accordingly, people may not use these applications and may further view them as a threat to their health care. AI application developers and health care providers need to highlight the potential benefits of AI technology and address the different dimensions of concerns to justify the use of an AI clinical application to the public. Health care regulatory agencies need to clearly define the rights and responsibilities of health care professionals, developers, programmers, and end users to demonstrate acceptable approaches to using AI applications in health care.
Conflicts of Interest
None declared.
Experiments and scenarios.
DOCX File, 33 KB
Demographic information of participants in the six scenarios.
DOCX File, 28 KB
Detailed Scheffé post hoc test results.
DOCX File, 70 KB
References
- Kaplan A, Haenlein M. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Bus Horiz 2019 Jan;62(1):15-25. [CrossRef]
- Jarrahi MH. Artificial intelligence and the future of work: human-AI symbiosis in organizational decision making. Bus Horiz 2018 Jul;61(4):577-586. [CrossRef]
- Bitner MJ, Brown SW, Meuter ML. Technology infusion in service encounters. J Acad Mark Sci 2000 Jan 01;28(1):138-149. [CrossRef]
- Marinova D, de Ruyter K, Huang M, Meuter ML, Challagalla G. Getting smart. J Service Res 2016 Dec 02;20(1):29-42. [CrossRef]
- Larivière B, Bowen D, Andreassen TW, Kunz W, Sirianni NJ, Voss C, et al. “Service Encounter 2.0”: An investigation into the roles of technology, employees and customers. J Bus Res 2017 Oct;79:238-246. [CrossRef]
- Robinson S, Orsingher C, Alkire L, De Keyser A, Giebelhausen M, Papamichail KN, et al. Frontline encounters of the AI kind: An evolved service encounter framework. J Bus Res 2020 Aug;116:366-376. [CrossRef]
- Chi OH, Denton G, Gursoy D. Artificially intelligent device use in service delivery: a systematic review, synthesis, and research agenda. J Hosp Mark Manag 2020 Feb 11;29(7):757-786. [CrossRef]
- Huang M, Rust RT. A strategic framework for artificial intelligence in marketing. J Acad Mark Sci 2020 Nov 04;49(1):30-50. [CrossRef]
- De Keyser A, Köcher S, Alkire (née Nasr) L, Verbeeck C, Kandampully J. Frontline service technology infusion: conceptual archetypes and future research directions. J Serv Manag 2019 Jan 14;30(1):156-183. [CrossRef]
- Gursoy D, Chi OH, Lu L, Nunkoo R. Consumers acceptance of artificially intelligent (AI) device use in service delivery. Int J Inf Manag 2019 Dec;49:157-169. [CrossRef]
- López-Robles J, Otegi-Olaso J, Porto Gómez I, Cobo M. 30 years of intelligence models in management and business: a bibliometric review. Int J Inf Manag 2019 Oct;48:22-38. [CrossRef]
- Coombs C, Hislop D, Taneva SK, Barnard S. The strategic impacts of intelligent automation for knowledge and service work: an interdisciplinary review. J Strat Inf Syst 2020 Dec;29(4):101600. [CrossRef]
- Khanna S. Artificial intelligence in health – the three big challenges. Australas Med J 2013 Jun 01;6(5):315-317. [CrossRef]
- Dreyer K, Allen B. Artificial intelligence in health care: brave new world or golden opportunity? J Am Coll Radiol 2018 Apr;15(4):655-657. [CrossRef] [Medline]
- Houssami N, Turner RM, Morrow M. Meta-analysis of pre-operative magnetic resonance imaging (MRI) and surgical treatment for breast cancer. Breast Cancer Res Treat 2017 Sep 6;165(2):273-283 [FREE Full text] [CrossRef] [Medline]
- Fraser KC, Meltzer JA, Rudzicz F. Linguistic features identify Alzheimer’s disease in narrative speech. J Alzheimers Dis 2015 Oct 15;49(2):407-422. [CrossRef]
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017 Feb 02;542(7639):115-118 [FREE Full text] [CrossRef] [Medline]
- Hannun AY, Rajpurkar P, Haghpanahi M, Tison GH, Bourn C, Turakhia MP, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med 2019 Jan 7;25(1):65-69 [FREE Full text] [CrossRef] [Medline]
- Wilson J, Daugherty PR. Collaborative intelligence: humans and AI are joining forces. Harvard Business Review. 2018 Jun 01. URL: https://hbr.org/2018/07/collaborative-intelligence-humans-and-ai-are-joining-forces [accessed 2021-11-20]
- Laï MC, Brian M, Mamzer M. Perceptions of artificial intelligence in healthcare: findings from a qualitative survey study among actors in France. J Transl Med 2020 Jan 09;18(1):14 [FREE Full text] [CrossRef] [Medline]
- Romero-Brufau S, Wyatt KD, Boyum P, Mickelson M, Moore M, Cognetta-Rieke C. A lesson in implementation: a pre-post study of providers' experience with artificial intelligence-based clinical decision support. Int J Med Inform 2020 May;137:104072. [CrossRef] [Medline]
- Pinto Dos Santos D, Giese D, Brodehl S, Chon SH, Staab W, Kleinert R, et al. Medical students' attitude towards artificial intelligence: a multicentre survey. Eur Radiol 2019 Apr 6;29(4):1640-1646. [CrossRef] [Medline]
- Gong B, Nugent JP, Guest W, Parker W, Chang PJ, Khosa F, et al. Influence of artificial intelligence on Canadian medical students' preference for radiology specialty: a national survey study. Acad Radiol 2019 Apr;26(4):566-577. [CrossRef] [Medline]
- European Society of Radiology (ESR). Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insight Imag 2019 Oct 31;10(1):105-111 [FREE Full text] [CrossRef] [Medline]
- Broadbent E, Tamagawa R, Patience A, Knock B, Kerse N, Day K, et al. Attitudes towards health-care robots in a retirement village. Australas J Ageing 2012 Jun;31(2):115-120. [CrossRef] [Medline]
- Patel BN, Rosenberg L, Willcox G, Baltaxe D, Lyons M, Irvin J, et al. Human–machine partnership with artificial intelligence for chest radiograph diagnosis. NPJ Digit Med 2019 Nov 18;2(1):111. [CrossRef]
- Turja T, Aaltonen I, Taipale S, Oksanen A. Robot acceptance model for care (RAM-care): A principled approach to the intention to use care robots. Inf Manag 2020 Jul;57(5):103220. [CrossRef]
- Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Med Inform Decis Mak 2020 Jul 22;20:170 [FREE Full text] [CrossRef] [Medline]
- Lee H, Piao M, Lee J, Byun A, Kim J. The purpose of bedside robots: exploring the needs of inpatients and healthcare professionals. Comput Inform Nurs 2020 Jan;38(1):8-17. [CrossRef] [Medline]
- Alami H, Rivard L, Lehoux P, Hoffman SJ, Cadeddu SBM, Savoldelli M, et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Global Health 2020 Jun 24;16:52 [FREE Full text] [CrossRef] [Medline]
- Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends. SSRN J 2019 Jan 09:1-111. [CrossRef]
- Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A guide to deep learning in healthcare. Nat Med 2019 Jan 7;25(1):24-29. [CrossRef] [Medline]
- Hermes S, Riasanow T, Clemons EK, Böhm M, Krcmar H. The digital transformation of the healthcare industry: exploring the rise of emerging platform ecosystems and their influence on the role of patients. Bus Res 2020 Sep 11;13(3):1033-1069. [CrossRef]
- Holman H, Lorig K. Patients as partners in managing chronic disease. Partnership is a prerequisite for effective and efficient health care. BMJ 2000 Feb 26;320(7234):526-527 [FREE Full text] [CrossRef] [Medline]
- Grichnik KP, Ferrante FM. The difference between acute and chronic pain. Mt Sinai J Med 1991 May;58(3):217-220. [Medline]
- About chronic diseases. Centers for Disease Control and Prevention. 2020. URL: https://www.cdc.gov/chronicdisease/about/index.htm [accessed 2020-12-05]
- Xu J. Overtrust of robots in high-risk scenarios. 2018 Presented at: 2018 AAAI/ACM Conference on AI, Ethics, and Society; February 2-3, 2018; New Orleans, LA p. 390-391. [CrossRef]
- Wu Y, Cristancho-Lacroix V, Fassert C, Faucounau V, de Rotrou J, Rigaud A. The attitudes and perceptions of older adults with mild cognitive impairment toward an assistive robot. J Appl Gerontol 2016 Jan 09;35(1):3-17. [CrossRef] [Medline]
- Bauchat JR, Seropian M, Jeffries PR. Communication and empathy in the patient-centered care model—why simulation-based training is not optional. Clin Simul Nurs 2016 Aug;12(8):356-359. [CrossRef]
- Spiro H. Commentary: the practice of empathy. Acad Med 2009;84(9):1177-1179. [CrossRef]
- Charon R. The patient-physician relationship. Narrative medicine: a model for empathy, reflection, profession, and trust. JAMA 2001 Oct 17;286(15):1897-1902. [CrossRef] [Medline]
- Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ 2020 Jan 27;98(4):245-250. [CrossRef]
- Heinze K, Suwanabol PA, Vitous CA, Abrahamse P, Gibson K, Lansing B, et al. A survey of patient perspectives on approach to health care: focus on physician competency and compassion. J Patient Exp 2020 Dec 28;7(6):1044-1053 [FREE Full text] [CrossRef] [Medline]
- Singh P, King-Shier K, Sinclair S. South Asian patients' perceptions and experiences of compassion in healthcare. Ethn Health 2020 May 11;25(4):606-624. [CrossRef] [Medline]
- Kelley JM, Kraft-Todd G, Schapira L, Kossowsky J, Riess H. The influence of the patient-clinician relationship on healthcare outcomes: a systematic review and meta-analysis of randomized controlled trials. PLoS One 2014 Apr 9;9(4):e94207 [FREE Full text] [CrossRef] [Medline]
- Lu L, Cai R, Gursoy D. Developing and validating a service robot integration willingness scale. Int J Hospit Manag 2019 Jul;80:36-51. [CrossRef]
- Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: Addressing ethical challenges. PLoS Med 2018 Nov 6;15(11):e1002689 [FREE Full text] [CrossRef] [Medline]
- Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc 2020 Mar 01;27(3):491-497 [FREE Full text] [CrossRef] [Medline]
- Miller DD, Brown EW. Artificial intelligence in medical practice: the question to the answer? Am J Med 2018 Feb;131(2):129-133. [CrossRef] [Medline]
- LaRosa E, Danks D. Impacts on trust of healthcare AI. 2018 Presented at: 2018 AAAI/ACM Conference on AI, Ethics, and Society; February 2-3, 2018; New Orleans, LA p. 210-215. [CrossRef]
- Bryson J, Winfield A. Standardizing ethical design for artificial intelligence and autonomous systems. Computer 2017 May;50(5):116-119. [CrossRef]
- Gupta RK, Kumari R. Artificial intelligence in public health: opportunities and challenges. JK Sci 2017;19(4):191-192.
- Tang A, Tam R, Cadrin-Chênevert A, Guest W, Chong J, Barfett J, Canadian Association of Radiologists (CAR) Artificial Intelligence Working Group. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can Assoc Radiol J 2018 May 01;69(2):120-135 [FREE Full text] [CrossRef] [Medline]
- Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med 2019 Jan 03;112(1):22-28. [CrossRef] [Medline]
- Luxton D. Should Watson be consulted for a second opinion? AMA J Ethics 2019;21(2):E131-E137.
- Vance A, Elie-Dit-Cosaque C, Straub DW. Examining trust in information technology artifacts: the effects of system quality and culture. J Manag Inf Syst 2014 Dec 08;24(4):73-100. [CrossRef]
- Whittlestone J, Nyrup R, Alexandrova A, Dihal K, Cave S. Ethical and societal implications of algorithms, data, and artificial intelligence: a roadmap for research. Nuffield Foundation. 2019. URL: https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf [accessed 2021-11-20]
- Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Gov Inf Quart 2019 Apr;36(2):368-383. [CrossRef]
- Lee J, Kim KJ, Lee S, Shin D. Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems. Int J Hum-Comput Interact 2015 Jul 09;31(10):682-691. [CrossRef]
- Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res 2019;46(4):629-650. [CrossRef]
- Hengstler M, Enkel E, Duelli S. Applied artificial intelligence and trust—The case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Change 2016 Apr;105:105-120. [CrossRef]
- He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019 Jan 7;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
- Mitchell M. Artificial intelligence hits the barrier of meaning. Information 2019 Feb 05;10(2):51. [CrossRef]
- Angwin J, Larson J, Mattu S, Kirchner L. Machine bias. ProPublica. 2016 May. URL: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [accessed 2021-11-20]
- Edwards SD. The HeartMath coherence model: implications and challenges for artificial intelligence and robotics. AI Soc 2018 Mar 8;34(4):899-905. [CrossRef]
- Dwivedi YK, Hughes L, Ismagilova E, Aarts G, Coombs C, Crick T, et al. Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. Int J Inf Manag 2021 Apr;57:101994. [CrossRef]
- Russell S, Norvig P. Artificial intelligence: a modern approach. 3rd edition. Berkeley, CA: Pearson; 2016.
- Kirkpatrick K. It's not the algorithm, it's the data. Commun ACM 2017 Jan 23;60(2):21-23. [CrossRef]
- Noble SU. Algorithms of oppression: how search engines reinforce racism. NYU Press, 2018. 256 pp. Science 2021 Oct 29;374(6567):542-542. [CrossRef] [Medline]
- Cath C. Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Philos Trans A Math Phys Eng Sci 2018 Oct 15;376(2133):20180080 [FREE Full text] [CrossRef] [Medline]
- Esmaeilzadeh P. The effects of public concern for information privacy on the adoption of Health Information Exchanges (HIEs) by healthcare entities. Health Commun 2019 Sep 08;34(10):1202-1211. [CrossRef] [Medline]
- Dawson D, Schleiger E, Horton J, McLaughlin J, Robinson C, Quezada G, et al. Artificial intelligence: Australia's ethics framework. Analysis and Policy Observatory. 2019. URL: https://apo.org.au/node/229596 [accessed 2020-09-23]
- Zandi D, Reis A, Vayena E, Goodman K. New ethical challenges of digital technologies, machine learning and artificial intelligence in public health: a call for papers. Bull World Health Organ 2019 Jan 01;97(1):2. [CrossRef]
- Beregi J, Zins M, Masson J, Cart P, Bartoli J, Silberman B, Conseil National Professionnel de la Radiologie et Imagerie Médicale. Radiology and artificial intelligence: An opportunity for our specialty. Diagn Interv Imaging 2018 Nov;99(11):677-678 [FREE Full text] [CrossRef] [Medline]
- Tizhoosh H, Pantanowitz L. Artificial intelligence and digital pathology: challenges and opportunities. J Pathol Inform 2018;9(1):38 [FREE Full text] [CrossRef] [Medline]
- Oakden-Rayner L. Reply to 'Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists' by Haenssle et al. Ann Oncol 2019 May 01;30(5):854 [FREE Full text] [CrossRef] [Medline]
- Sohn K, Kwon O. Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telemat Inform 2020 Apr;47:101324. [CrossRef]
- Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol 2017 Dec 21;2(4):230-243 [FREE Full text] [CrossRef] [Medline]
- Xu N, Wang K. Adopting robot lawyer? The extending artificial intelligence robot lawyer technology acceptance model for legal industry by an exploratory study. J Manag Organiz 2019 Feb 13:1-19. [CrossRef]
- Sánchez-Prieto JC, Cruz-Benito J, Therón R, García-Peñalvo F. Assessed by machines: development of a TAM-based tool to measure AI-based assessment acceptance among students. Int J Interact Multimed Artif Intell 2020;6(4):80. [CrossRef]
- Zhao X, Xia Q, Huang W. Impact of technostress on productivity from the theoretical perspective of appraisal and coping processes. Inf Manag 2020 Dec;57(8):103265. [CrossRef]
- Marakanon L, Panjakajornsak V. Perceived quality, perceived risk and customer trust affecting customer loyalty of environmentally friendly electronics products. Kasetsart J Soc Sci 2017 Jan;38(1):24-30. [CrossRef]
- Zhang X, Liu S, Chen X, Wang L, Gao B, Zhu Q. Health information privacy concerns, antecedents, and information disclosure intention in online health communities. Inf Manag 2018 Jun;55(4):482-493. [CrossRef]
- Lo WLA, Lei D, Li L, Huang DF, Tong K. The perceived benefits of an artificial intelligence-embedded mobile app implementing evidence-based guidelines for the self-management of chronic neck and back pain: observational study. JMIR Mhealth Uhealth 2018 Nov 26;6(11):e198 [FREE Full text] [CrossRef] [Medline]
- Cohen J. A power primer. Psychol Bull 1992;112(1):155-159. [CrossRef]
- Paolacci G, Chandler J. Inside the Turk. Curr Dir Psychol Sci 2014 Jun 03;23(3):184-188. [CrossRef]
- Hair JF, Ringle CM, Sarstedt M. PLS-SEM: indeed a silver bullet. J Mark Theory Pract 2014 Dec 08;19(2):139-152. [CrossRef]
- Froomkin AM, Kerr IR, Pineau J. When AIs outperform doctors: the dangers of a tort-induced over-reliance on machine learning and what (not) to do about it. Ariz Law Rev 2019;61:33-99. [CrossRef]
- Xu J, Yang P, Xue S, Sharma B, Sanchez-Martin M, Wang F, et al. Translating cancer genomics into precision medicine with artificial intelligence: applications, challenges and future perspectives. Hum Genet 2019 Feb 22;138(2):109-124 [FREE Full text] [CrossRef] [Medline]
- Lee S, Lee N, Sah YJ. Perceiving a mind in a chatbot: effect of mind perception and social cues on co-presence, closeness, and intention to use. Int J Hum-Comput Interact 2019 Dec 06;36(10):930-940. [CrossRef]
- Sharkey A, Sharkey N. Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol 2010 Jul 3;14(1):27-40. [CrossRef]
- Duan Y, Edwards JS, Dwivedi YK. Artificial intelligence for decision making in the era of Big Data – evolution, challenges and research agenda. Int J Inf Manag 2019 Oct;48:63-71. [CrossRef]
- Pesapane F, Volonté C, Codari M, Sardanelli F. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insight Imag 2018 Oct 15;9(5):745-753 [FREE Full text] [CrossRef] [Medline]
- Waring J, Lindvall C, Umeton R. Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artif Intell Med 2020 Apr;104:101822 [FREE Full text] [CrossRef] [Medline]
- Tran V, Riveros C, Ravaud P. Patients' views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med 2019 Jun 14;2(1):53-58. [CrossRef] [Medline]
- Fiske A, Henningsen P, Buyx A. Your robot therapist will see you now: ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. J Med Internet Res 2019 May 09;21(5):e13216 [FREE Full text] [CrossRef] [Medline]
- Schiff D, Borenstein J. How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA J Ethics 2019 Feb 01;21(2):E138-E145 [FREE Full text] [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
ANN: artificial neural network
ANOVA: analysis of variance
FST: frontline service technology
IEEE: Institute of Electrical and Electronics Engineers
IS: information systems
MTurk: Mechanical Turk (Amazon)
TAM: technology acceptance model
Edited by JMIRPE Office; submitted 18.11.20; peer-reviewed by B Fakieh, L Bouchacourt, C Gross, M Nißen; comments to author 15.03.21; revised version received 04.05.21; accepted 26.10.21; published 25.11.21
Copyright©Pouyan Esmaeilzadeh, Tala Mirzaei, Spurthy Dharanikota. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.11.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.