Published on in Vol 25 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/41430, first published .
Persuading Patients Using Rhetoric to Improve Artificial Intelligence Adoption: Experimental Study

Original Paper

1Georgia Institute of Technology, Atlanta, GA, United States

2Georgia State University, Atlanta, GA, United States

3Capella University, Minneapolis, MN, United States

*all authors contributed equally

Corresponding Author:

Amrita George, PhD

Georgia State University

Room 1713

55 Park Place

Atlanta, GA, 30303

United States

Phone: 1 4048344213

Email: ageorge12@gsu.edu


Background: Artificial intelligence (AI) can transform health care processes with its increasing ability to translate complex structured and unstructured data into actionable clinical decisions. Although AI can be more efficient than clinicians at certain tasks, its adoption in health care has been slow. Prior studies have pointed out that the lack of trust in AI, privacy concerns, degrees of customer innovativeness, and perceived novelty value influence AI adoption. However, the role of rhetoric in influencing these factors when promoting AI products to patients has received scant attention.

Objective: The main objective of this study was to examine whether communication strategies (ethos, pathos, and logos) can overcome factors that hinder AI product adoption among patients.

Methods: We conducted experiments in which we manipulated the communication strategy (ethos, pathos, and logos) in promotional ads for an AI product. We collected responses from 150 participants using Amazon Mechanical Turk. Participants were randomly exposed to a specific rhetoric-based advertisement during the experiments.

Results: Our results indicate that using communication strategies to promote an AI product affects users’ trust, customer innovativeness, and perceived novelty value, leading to improved product adoption. Pathos-laden promotions improve AI product adoption by nudging users’ trust (n=52; β=.532; P<.001) and perceived novelty value of the product (n=52; β=.517; P=.001). Similarly, ethos-laden promotions improve AI product adoption by nudging customer innovativeness (n=50; β=.465; P<.001). In addition, logos-laden promotions improve AI product adoption by alleviating trust issues (n=48; β=.657; P<.001).

Conclusions: Promoting AI products to patients using rhetoric-based advertisements can help overcome factors that hinder AI adoption by assuaging user concerns about using a new AI agent in their care process.

J Med Internet Res 2023;25:e41430

doi:10.2196/41430


Background

Artificial intelligence (AI) technologies refer to any device that perceives its environment and takes action to maximize its chance of success [1]. Some examples of these technologies include machine learning, rule-based systems, natural language processing, and speech recognition. Technological advancements and improved computing capabilities have increased the proliferation of AI products. The adoption of AI products in health care has the potential to transform the health care industry, with medical AI forecasted to exceed a market size of US $30 billion by 2025 [2].

Prior research on technology innovation has used the Technology Acceptance Model (TAM), Theory of Planned Behavior, and Unified Theory of Acceptance and Use of Technology to examine AI adoption and use [3]. The Unified Theory of Acceptance and Use of Technology has been widely used to explain intentions for technology use in various fields, including intelligent health care systems [4,5], whereas the TAM discusses the impact of perceived usefulness and ease of use on the behavioral intention to use and the actual use of technology. In addition, the TAM has been extended to derive the Value-based Adoption Model [6] to study the influence of benefits, such as usefulness and enjoyment, and sacrifices (such as technicality and perceived fee) on the perceived value of technology adoption. Moreover, AI adoption research has examined the antecedents of AI adoption [7]. For example, studies have established the influence of antecedents such as perceived usefulness, consumer innovativeness, and reference groups on the adoption intention of wearable health care technology [8]. Prior research points to the presence of antecedent hindering factors to AI adoption, such as privacy concerns, lack of trust (especially related to the accuracy, efficiency, and precision of AI), different degrees of consumer innovativeness, and lack of perceived novelty value [7,9]. Although prior research has established that communication strategies can persuade users to overcome concerns about using technology [10], to the best of our knowledge, this has not been empirically validated or studied in the context of AI product technology adoption. Addressing this research gap would be a substantial contribution, because communications have been found to be persuasive in health care outreach programs [11]. Therefore, different communication strategies can be used to persuade users to adopt AI products, particularly in health care.

The Art of Rhetoric by Aristotle [12], written in the 4th century BC, is highly regarded as a seminal work in argumentation and persuasion that forms the basis for communication strategies. In his work, Aristotle describes 3 main methods of persuasion: logos (logical), ethos (ethics), and pathos (emotion). Logos uses logical reasoning and evidence for persuasion. Ethos uses character, credibility, ethics, and previous persuasion achievements. Pathos uses emotions and passion for persuasion. Furthermore, in The Art of Rhetoric, Aristotle defines the 3 styles of oration: deliberative (political), forensic (legal), and epideictic (ceremonial) [13]. With these foundational principles established, Aristotle describes how logos, ethos, and pathos may be successfully applied in different forms of oration, that is, differing forms of messaging and communication. Knowing how and when to apply logos, ethos, and pathos in a persuasive argument allows a speaker to pattern their rhetorical style to best suit an intended audience. Organizations aiming to increase the adoption of their AI products can also choose appropriate communication strategies to market the product to the intended audience. In this study, we aimed to understand the communication strategies that can aid in overcoming user concerns, thereby improving technology adoption, specifically AI adoption. Therefore, the research question we sought to answer was as follows: Do communication strategies assuage barriers to AI adoption?

Research on technology adoption and communication strategies suggests that managers of technology-based products could use communication strategies that promote the use of their technologies [10]. This study attempts to extend this finding to AI adoption in health care by studying the influence of communication strategies to assuage hindrances in AI adoption from a patient’s perspective. We conducted experiments with potential AI product users to understand their intention to adopt an AI product when various communication strategies were used. The results of our study have major implications for both theory and practice. In terms of theoretical contributions, we have identified how using the right communication strategies could alleviate hindering factors identified in the technology adoption literature, such as privacy concerns, trust, and perceived novelty, which in turn can influence the behavioral intention to use and actual system use (specifically AI products in health care). Our study also makes a substantial practical contribution by making both health care AI product manufacturers and clinicians aware of structuring their communication with patients to persuade them to adopt the AI product that can be beneficial to the practitioner and patient.

Relevant Literature

For the literature review, we conducted a Boolean search using keywords such as “Technology Acceptance Model” AND “communication strategies (ethos, pathos, logos)” as well as “antecedents” OR “hindrances” OR “inhibitor” OR “enabler” AND “AI adoption” OR “technology adoption” specifically in health care to find relevant studies in top information systems, health informatics, and computer science journals (eg, Computers in Human Behavior, Management Science, Journal of Medical Internet Research, Information Systems Journal, and MIS Quarterly). We found that prior studies summarize the enablers and inhibitors in AI adoption from consumer and practitioner perspectives (Table S1 in Multimedia Appendix 1 [7-11,14-22]), how these factors influence behavioral intention to use, and the actual system use and ultimately impact technology adoption.

Factors Influencing AI Adoption

Overview

AI is defined as the intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. However, in the present context of medical imaging, a more specific definition may be more appropriate: “a system’s ability to correctly interpret external data, to learn from such data, and to use what was learned to achieve specific goals and tasks through flexible adaptation” [23]. AI research has examined the key challenges and hindrances to AI adoption and how these challenges can be overcome [24].

Some of the hindering factors identified included privacy concerns, trust (especially related to the accuracy, efficiency, and precision of health IT systems), consumer innovativeness, and perceived novelty value (Tables S1 and S5 in Multimedia Appendix 1). The implications of these factors on AI adoption are discussed in subsequent sections.

Privacy Concerns

Data privacy concerns can hinder technology adoption, particularly when many data privacy directives, such as the Health Insurance Portability and Accountability Act of 1996, create national standards to protect sensitive patient health information. Similarly, widespread news reports of data breaches can trigger a user’s privacy concerns. In such situations, users could have concerns with sharing sensitive personal information (eg, medical information) with AI bots or applications. In addition, data may need to be shared across multiple institutions and geographies (eg, telehealth), which might also raise one’s privacy concerns (Table S5 in Multimedia Appendix 1). Research also notes that lack of data integrity and continuity and lack of standards for data collection, format, and quality are some of the concerns impacting stakeholders in the adoption of AI in public health care [24]. In addition, most health care professionals, who are obligated to promote the tenets of confidentiality, do not understand their respective responsibilities toward medical confidentiality [25]. Security and privacy concerns influence both technology trust and user well-being, as well as the behavioral intention to use AI products [14], including consumers’ intention to adopt wearable health care technology [8].

Trust

Trust is a psychological state broadly defined based on 3 main dimensions, namely, benevolence, integrity, and ability. Prior research on AI adoption has established that technology trust influenced the behavioral intention of the use of AI products [14]. Trust is the cornerstone of effective AI user interactions, such that it affects how much users rely on AI [26]. Many users are skeptical about using technologies, such as AI assistants, owing to certain perceived risks. For example, many patients trust a surgeon more than a robotic surgical system, despite these systems being as efficient as a surgeon [15]. Furthermore, many aspects of machine learning, such as deep learning, remain a black box, with the lack of explainability and transparency impacting the trust-building process [27]. Trust in AI can also be influenced by several personal factors, such as education, past experiences, user biases, and emotions, as well as properties of the AI system, including controllability, model complexity, embedded biases, and reliability (ie, whether AI technology can perform a task predictably and consistently) [28].

Consumer Innovativeness

Customer innovativeness is the potential of consumers in a target segment to adopt a new product or technology [29]. Consumer research shows that people are likely to adhere to their existing routines, characterized by risk aversion and a general preference to buy familiar products [30]. Users who are ready to buy or try a product as soon as it hits the market are considered consumer innovators and, in most cases, are opinion leaders or influencers. Prior research points out that user innovativeness can influence the behavioral intention to use AI products [14], including the adoption intention of wearable health care technology [8]. Consumer innovativeness plays a critical role in AI adoption and can be influenced by marketing campaigns from the organization [31], especially in the case of the adoption of medical technologies [32,33].

Novelty Value

Novelty value is the value characteristic that users obtain from using or adopting a new product, service, or technology that is surprising and fresh [34]. The conceptualization of innovativeness by Hirschman [34] focuses on consumer desires to obtain information about innovations that aid in the adoption or use of novel products and technology. Innovativeness is equated with inherent novelty seeking and is defined as “the desire to seek out the new and different.” [34] The perceived usefulness of a novel product also drives its increased adoption [16,35]. Furthermore, research notes that novelty (epistemic) values and emotional and social values significantly influence the adoption of new technologies [17]. Although these factors can hinder AI adoption, research on technology adoption suggests that managers of technology-based products could use communication strategies that promote the use of their technologies. Prior research indicates that communication strategies can persuade users to overcome concerns about using technology [10].

Communication Strategies in Technology Adoption

Prior studies have examined the impact of communication strategies on influencing consumer behavior in technology adoption (Table S1 in Multimedia Appendix 1). Knowing how and when to apply logos, ethos, and pathos in a persuasive argument allows a speaker to pattern their rhetorical style in a manner that best suits an intended audience. Research on technology adoption and communication strategies notes that communication strategies used by managers of technology-based products to promote the use of their technologies have an impact on technology adoption [10]. However, the impact of communication strategies on overcoming the hindering factors in AI adoption has received scant attention. Innovative technologies such as AI that promise enormous improvements in processes, goal attainment, outcomes, safety, etc, have many uncertainties (eg, lack of trust or confidence) that can impact adoption. AI product creators can aim at overcoming these barriers through active marketing campaigns with messages tailored to reach the right audience. Similar to other areas, communication strategies can be used to overcome risks. For example, Wieder [11] proposed a theoretical approach to using communication strategies to deliver a persuasive message on communicating radiation risk. Pathos communication justifications impact emotions and are likely to elicit powerful yet unsustainable social actions [18]. Logos approaches appeal to the logical part of the mind; they tend to elicit methodical calculation of means and ends to achieve efficiency or effectiveness [18]. Ethos justifications impact moral or ethical sensibilities [18]. A sequence of justifications starting with pathos and logos produces pragmatic legitimacy, whereas ethos would generate moral legitimacy [18]. For example, rhetorical modes of logos (rational) and pathos (emotional) were used to change the UK’s societal attitudes toward sharing health data [19]. 
In addition, the presence of both pragmatic and moral justification is required to create cognitive legitimacy [18].

Research Model and Hypotheses

Researchers have investigated various factors influencing technology adoption, such as customer innovativeness, privacy concerns, trust, and novelty value. Specifically, in the context of AI adoption, research has studied the impact of privacy concerns, technology trust, and consumer innovativeness, positively impacting behavioral intention to use the technology [8,14]. However, the effect of communication strategies on assuaging barriers to AI technology adoption has received limited attention. To address this research gap, we propose the research model in Figure 1 to hypothesize that communication strategies impact various concerns and traits of end users (patients), thereby indirectly influencing the adoption of AI technology. The literature has already considered the influence of these barriers on perceived ease of use or usefulness and, in turn, intention to use. We have not included perceived ease of use and perceived usefulness in the model for parsimony.

Figure 1. Research model. AI: artificial intelligence; H: hypothesis.

Consumer innovativeness, “the degree to which an individual is earlier in adopting new ideas than the average member of his or her social system” [36], includes creativity and adaptability to change. Consumer innovativeness is highly acknowledged by marketers for the successful diffusion of innovation to make businesses more profitable and competitive [36].

Ethos, a communication strategy that uses credibility and ethics for persuasion, can mitigate any risk perceived with using the AI product. For example, American Medical Association–endorsed products will be viewed as meeting certain standards and regulations, which alleviates concerns related to safety, ethics, or errors when using the AI product. Similarly, Food and Drug Administration–approved AI and machine learning medical devices will be viewed favorably by users, including innovative customers. It has been established that ethos positively affects consumer innovativeness [37]. An endorsement by a trusted figure improves consumer confidence in the product, thereby increasing adoption [38].

Pathos, a communication strategy that uses emotions and passion for persuasion, can alleviate emotional concerns when using the AI product. For example, a happy customer endorsing an AI product can influence the user’s perception of the product, leading them to view the product as something that will improve their quality of life [39]. When someone similar to us endorses a product with a happy emotion, it improves innate consumer innovativeness [40]. A consumer’s innovativeness can also be nudged by their passion to be unique [36]. The need to be unique is triggered when the advertisement demonstrates a unique opportunity, thereby positively influencing consumer innovativeness [41].

Logos, a communication strategy that uses logical reasoning and evidence for persuasion, can provide fact-based evidence to alleviate perceived risks. It provides customers reassurance based on historical successes, which in turn improves confidence in the product. Providing logos-based information on product credibility furthers consumer confidence in the product and improves the adoption rate. For example, when a physician advises a patient on a course of treatment, the physician will present relevant medical evidence and explain why the benefits derived from the recommended course of action will likely provide the best outcome for the patient while outweighing the potential risks. Logic-based arguments such as these are designed not only to inform but also to influence and persuade patient behavior.

Studies have established that consumer innovativeness influences the behavioral intention to use AI products [14] and consumers’ adoption intention for wearable health care technology [8]. Studies found that when customer innovativeness is high, users are more likely to accept the innovative technology [42]. With AI being a new technology in health care, nudging customer innovativeness can improve acceptance of the AI product. For example, e-learning and web-based classes are still in the developmental phase in many parts of the world. Sharing the positive effects of e-learning, including performance expectancy and social influence, has helped spark customer innovativeness and interest in offerings that lead to increased adoption [43,44]. Therefore, the following were our hypotheses (Hs):

  • H1: using communication strategies for promoting the AI product can positively impact customer innovativeness.
  • H2: nudging customer innovativeness can improve acceptance of the AI product being promoted.

Prior privacy concerns, computer anxiety, perceived control, and app permission concerns can affect a user’s privacy concerns when using an AI product [45]. Studies have established that privacy concerns influence the behavioral intention to use AI products [14], including consumers’ intention to adopt wearable health care technology [8].

Advertisements using an ethos communication strategy will help alleviate ethical concerns related to compliance, standards of data collection, format, preservation of data integrity, and data integration and continuity. For example, an ad in which a celebrity endorses an AI product and clearly mentions that it is safe to use and compliant with most data directives would alleviate patients’ privacy concerns. By contrast, advertisements using a pathos communication strategy will aid in reconciling the cognitive dissonance that a user may have about using the product. For example, an ad that showcases an older adult couple being taken care of by a humanoid robot can emotionally persuade the end user and reduce any concerns the user could have regarding AI. The cognitive dissonance may create an unpleasant emotional state that can be alleviated through an appropriate pathos communication strategy. Finally, when advertisements use a logos communication strategy, they can provide evidence of adherence to privacy policies (ie, notice, enforcement, access, security, or choice), which can alleviate privacy concerns [46]. For example, providing assurance that the product is compliant with various data privacy directives, such as the Health Insurance Portability and Accountability Act of 1996 and General Data Protection Regulation, along with clarifying what this compliance entails, helps address the concerns identified in research [45].

Previous studies have found that when the perceived security of a specific technology is high, users are more likely to accept innovative technology [42]. This justifies our hypotheses that privacy and security are barriers preventing the adoption of AI. For example, recently, social media technology giants such as Google have started advertisements to disseminate information on privacy considerations while designing their products, which is clearly aimed at users addressing their privacy concerns, thereby nudging them to further use their products. Therefore, we hypothesized the following:

  • H3: using communication strategies for promoting the AI product can alleviate privacy concerns with using the product.
  • H4: alleviating privacy concerns can improve acceptance of the AI product being promoted.

Trust, a psychological construct that encompasses an emotional and a logical aspect [47], can influence a user’s perception of using an AI product. When an AI product is advertised using an ethos communication strategy, the logical aspect of trust can be nudged. For example, when a credible organization such as the American Medical Association promotes the implementation of AI in health care and talks about the benefits of AI adoption, it positively influences many people by alleviating their concerns about AI, thereby enhancing users’ trust. Similarly, when the AI product is marketed using a pathos communication strategy, the emotional facet of trust can be nudged. For example, AI used for medical procedures enables better precision and higher success of these procedures, thereby allowing the best care for patients. It can be emotional for users to see their family getting the best possible care, and this emotional impetus allows them to trust AI better. Finally, marketing an AI product by using a logos communication strategy can improve the logical component of trust. For example, listing the benefits of automation in health care, such as accuracy and efficiency, including the hours and effort saved, helps build trust in AI products.

Prior research observed trust as an important antecedent of technology acceptance [48] and behavioral intention to use AI products [14]. The authors point out that trust provides a measurement of the subjective guarantee that the agent can make good on its side of the deal, behave as promised, and genuinely care [48]. With AI being a new technology in health care, where users are uncertain of the risks posed by using the technology, nudging the emotional and logical facets of trust can improve acceptance of the AI product. Therefore, we hypothesized the following:

  • H5: using communication strategies for promoting the AI product can improve trust in using the product.
  • H6: enhancing trust can lead to better acceptance of the AI product being promoted.

The novelty of the content or novelty value of a new technology will positively influence (1) its perceived ease of use and (2) its perceived usefulness [49,50]. Communication strategies such as pathos, ethos, and logos can improve the perceived ease of use and perceived usefulness of a new technology-laden product, such as AI products, because it communicates the novelty value of the product. When using pathos messaging in the marketing of AI products, marketers are convincing users of the product’s novelty through the manipulation of emotions, which in turn improves the product’s perceived usefulness and ease of use. For example, marketing Pria (an AI product) by stating that the product is easy to use by an older adult can convince consumers about the automated medicine dispenser and its ease of use and usefulness for someone near and dear to them. Similarly, when using ethos messaging in the marketing of AI products, marketers are convincing users of a product’s novelty through advocacy from a credible source, which in turn improves the product’s perceived usefulness and ease of use. For example, marketing Pria (an AI product) using celebrities or agencies such as the American Medical Association can convince consumers about the automated medicine dispenser and its ease of use and usefulness for themselves. Similarly, when using logos messaging in the marketing of AI products, marketers are convincing users of the product’s novelty through facts and evidence, which in turn improves the product’s perceived usefulness and ease of use. For example, marketing Pria (an AI product) by showing statistics about improvements in medicine adherence can convince consumers about the automated medicine dispenser and its ease of use and usefulness for themselves. Improved perceived ease of use and usefulness can positively impact technology adoption [16,35], which also applies in the context of novel technology [50], including AI products. 
Therefore, we hypothesized the following:

  • H7: using communication strategies for promoting the AI product can positively influence the perceived novelty value of the product.
  • H8: perceived novelty value can lead to better acceptance of the AI product being promoted.

Experiment Design

To test the hypotheses, we conducted 4 experiments in which we manipulated the communication strategy (ethos, pathos, and logos) using screenshots of advertisements for a product (Figures S1-3 in Multimedia Appendix 1). The participants were randomly assigned to each group. A control group was also included in the experiment to ensure the primes worked. Our primes were designed to ensure that communication strategies (ethos, pathos, and logos) were induced (Figures S1-3 in Multimedia Appendix 1). Participants were asked advertisement effectiveness assessment questions as a manipulation check to identify whether the primes induced different responses (Table S2 in Multimedia Appendix 1). We followed up on each experiment by asking questions to evaluate trust, novelty value, customer innovativeness, privacy concerns, and AI adoption of the product for which the advertisement was shown. We adapted scales from the literature to measure trust [51,52], novelty value [51,53], customer innovativeness [54], privacy concerns [55], and AI adoption [56] (Table S3 in Multimedia Appendix 1). The items measuring trust, novelty value, customer innovativeness, privacy concerns, and AI adoption ranged from 1 (strongly disagree) to 5 (strongly agree).

We collected data via Amazon Mechanical Turk (AMT), where we recruited AMT users with a Human Intelligence Task approval rate greater than 95%. One of the main advantages of using the AMT population is that it improves the generalizability of inferences as compared with traditional data collection methods [57]. The AMT workers received a small monetary reward for their participation.
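The random assignment described above can be sketched as follows; the condition labels, participant count, and seed are illustrative and not taken from the study's materials.

```python
import random

# Experimental conditions: 3 rhetoric treatments plus a control group
CONDITIONS = ["ethos", "pathos", "logos", "control"]

def assign_conditions(participant_ids, seed=42):
    """Randomly assign each participant to one condition (illustrative seed)."""
    rng = random.Random(seed)
    return {pid: rng.choice(CONDITIONS) for pid in participant_ids}

groups = assign_conditions(range(150))
# Count how many participants landed in each condition
print({c: sum(1 for v in groups.values() if v == c) for c in CONDITIONS})
```

Because each participant draws a condition independently, group sizes vary slightly around n/4, consistent with the unequal cell sizes (52/48/50) reported later.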

Manipulation Check

We assessed whether the manipulation was successful. Participants were asked to rate statements related to the effectiveness of each advertisement. For example, participants rated “this advertisement was relevant/meaningful/important to me” (1=strongly disagree to 5=strongly agree; see Table S4 in Multimedia Appendix 1 for the entire list). An ANOVA was conducted, and a significant mean difference between the communication strategy conditions indicated that the manipulation was successful (Table 1). Participants in the pathos condition reported scores (mean 3.37, SD 0.90) different from those in the ethos condition (mean 3.52, SD 0.97) and logos condition (mean 3.43, SD 0.89). Thus, the results confirmed the effectiveness of the communication strategy manipulation.

Table 1. Mean score and ANOVA results for induced communication strategy (CS) conditions.

  CS type    Sample size, n    Values, mean (SD)
  Pathos     52                3.37 (0.90)
  Logos      48                3.43 (0.89)
  Ethos      50                3.52 (0.97)

  ANOVA (CS): F1,144=43.79; P<.001
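The manipulation check reduces to a one-way ANOVA on per-condition effectiveness ratings. A minimal sketch on synthetic 5-point ratings, generated to mirror the group sizes, means, and SDs reported above (not the study's raw data):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Synthetic 1-5 effectiveness ratings, mirroring the reported group moments
pathos = np.clip(rng.normal(3.37, 0.90, 52), 1, 5)
logos = np.clip(rng.normal(3.43, 0.89, 48), 1, 5)
ethos = np.clip(rng.normal(3.52, 0.97, 50), 1, 5)

# One-way ANOVA across the 3 rhetoric conditions
f_stat, p_value = f_oneway(pathos, logos, ethos)
print(f"F = {f_stat:.2f}, P = {p_value:.3f}")
```

With 3 groups, `f_oneway` yields an F statistic on (2, 147) degrees of freedom for these cell sizes; the synthetic data will not reproduce the published F value.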

Sample Characteristics and Psychometrics

We restricted the sample to AMT workers. Table S6 in Multimedia Appendix 1 presents the descriptive statistics. Partial least squares (PLS) analysis using SmartPLS was used to validate the psychometric properties of our measures and test the paths hypothesized in Figure 1. We chose PLS because it permits the modeling of latent variables and the simultaneous assessment of the measurement and structural models, while placing minimal demands on sample size and distributional assumptions [58,59]. We first examined the psychometric properties of our measures using the measurement model and then tested our hypotheses using a structural model.

We assessed the reliability and validity of our measurement items by examining the factor loadings, Cronbach α, and average variance extracted. The results of our analyses indicate that the scales had good reliability and validity (Tables S7-12 in Multimedia Appendix 1). We then conducted Harman's single-factor test [60,61] to rule out common method bias. The results suggest that common method bias is unlikely to be a significant problem in our data, given that more than one factor emerged from the factor analysis and the first factor did not account for most of the variance in our data (Tables S7-12 in Multimedia Appendix 1).
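Cronbach α, one of the reliability checks mentioned above, can be computed directly from an item-response matrix. A minimal sketch on synthetic responses; the 4-item scale and respondent count here are illustrative, not the study's instrument:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 5-point responses to a hypothetical 4-item scale: each item is a
# shared latent score plus item-specific noise, rounded and clipped to 1-5
rng = np.random.default_rng(1)
latent = rng.normal(3, 1, size=(100, 1))
responses = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(100, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Because the items share a strong latent component, the resulting α is high, illustrating the "good reliability" pattern reported for the study's scales.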

Ethics Approval

Data collection proceeded after obtaining approval from the institutional review board. The review board provided permission to proceed in its determination letter issued on November 18, 2021 (request # HR-4022). All participants were required to provide informed consent to participate in the study at the beginning of the web-based questionnaire. Data were handled in accordance with US regulations.


Overview

To test our hypotheses, we estimated 3 PLS models, one for each communication strategy. Model 1 examined the effect of a pathos communication strategy for a patient AI product on the dependent variable (ie, AI adoption). Model 2 tested the influence of an ethos communication strategy for a patient AI product on AI adoption. Model 3 tested the influence of a logos communication strategy for a patient AI product on AI adoption. Table 2 presents the results of the 3 models. To test H1 to H8, we assessed the structural model by examining the path coefficients and their significance levels for each model. The path coefficients were computed for each group, and the significance levels for the effects were computed in SmartPLS using 1000 bootstrap samples [61].
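The bootstrap procedure behind those significance levels resamples the data with replacement and re-estimates each path coefficient on every resample. A simplified sketch of the idea (using a simple standardized regression slope as a stand-in for a PLS path, and simulated data rather than the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical standardized scores for one treatment group:
# x = communication-strategy perception, y = trust (illustrative only).
n = 50
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.8, size=n)

def path_coef(x, y):
    # Standardized simple-regression slope, a stand-in for a PLS path.
    return np.corrcoef(x, y)[0, 1]

# Bootstrap the coefficient over 1000 resamples, analogous to how
# SmartPLS derives standard errors and significance for path estimates.
boots = np.empty(1000)
for b in range(1000):
    idx = rng.integers(0, n, n)       # resample cases with replacement
    boots[b] = path_coef(x[idx], y[idx])

est = path_coef(x, y)
se = boots.std(ddof=1)                # bootstrap standard error
t = est / se                          # pseudo t statistic
ci = np.percentile(boots, [2.5, 97.5])  # 95% percentile interval
```

A path is deemed significant when its bootstrap confidence interval excludes zero (equivalently, when the pseudo t statistic is large enough).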

Table 2. Results of 3 models of communication strategy (CS)a.

PLSb path | Pathos, β (P value) | Ethos, β (P value) | Logos, β (P value)
CS→AIc adoption | −0.145 (.38) | 0.199 (.14) | 0.227 (.09)
CS→trust | 0.532 (<.001) | 0.662 (<.001) | 0.657 (<.001)
Trust→AI adoption | 0.361 (.01) | 0.140 (.35) | 0.474 (.008)
CS→privacy concerns | −0.332 (.01) | 0.106 (.52) | −0.267 (.05)
Privacy concerns→AI adoption | −0.111 (.34) | 0.105 (.28) | −0.074 (.49)
CS→customer innovativeness | 0.514 (<.001) | 0.465 (<.001) | 0.602 (<.001)
Customer innovativeness→AI adoption | −0.121 (.45) | 0.417 (<.001) | −0.017 (.91)
CS→novelty value | 0.517 (.001) | 0.666 (<.001) | 0.582 (<.001)
Novelty value→AI adoption | 0.591 (<.001) | 0.199 (.25) | 0.104 (.61)

aPathos: R2=0.584, adjusted R2=0.539; ethos: R2=0.614, adjusted R2=0.570; logos: R2=0.555, adjusted R2=0.502.

bPLS: partial least squares.

cAI: artificial intelligence.

Hypothesis Testing

For the treatment groups, we ran a regression model with communication strategy as the independent variable; customer innovativeness, trust, perceived novelty value, and privacy concerns as mediators; and AI adoption as the dependent variable.
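The structure of this model is a set of mediated paths: communication strategy → mediator → AI adoption. A minimal sketch of estimating one such indirect effect via two ordinary least squares regressions (all variable names and data are hypothetical, chosen only to mirror the model's shape):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical standardized data for one condition:
# x = exposure to the rhetoric-based ad, m = trust (mediator),
# y = AI adoption intent (illustrative only, not the study's data).
n = 50
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(scale=0.5, size=n)
y = 0.6 * m + 0.1 * x + rng.normal(scale=0.5, size=n)

def slope(pred, out):
    # OLS slope of out on pred (with an intercept), via least squares.
    A = np.column_stack([np.ones_like(pred), pred])
    coef, *_ = np.linalg.lstsq(A, out, rcond=None)
    return coef[1]

a = slope(x, m)        # communication strategy -> mediator (e.g., trust)
b = slope(m, y)        # mediator -> AI adoption
indirect = a * b       # indirect (mediated) effect, a*b
```

In the study itself these paths are estimated jointly in PLS and tested with bootstrapping; the product-of-coefficients logic for the indirect effect is the same.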

For the pathos condition, the communication strategy predicted trust and perceived novelty value. The coefficient of communication strategy on trust was positive and significant (β=.532; P<.001), as was its coefficient on perceived novelty value (β=.517; P=.001). The coefficient of trust on AI adoption was also positive and significant (β=.361; P=.01), as was that of perceived novelty value on AI adoption (β=.591; P<.001). Although communication strategy affected privacy concerns (β=−.332; P=.01) and customer innovativeness (β=.514; P<.001), the effects of privacy concerns (β=−.11; P=.34) and customer innovativeness (β=−.12; P=.45) on AI adoption were insignificant; the direction of these coefficients was nonetheless consistent with previous theoretical findings that these factors inhibit AI adoption. Thus, H2 and H4 were not supported, whereas H1, H3, H5, H6, H7, and H8 were supported in the pathos condition.

For the ethos condition, communication strategy predicted customer innovativeness: the coefficient of communication strategy on customer innovativeness was positive and significant (β=.465; P<.001), as was the coefficient of customer innovativeness on AI adoption (β=.417; P<.001). The coefficients of communication strategy on perceived novelty value (β=.666; P<.001) and trust (β=.662; P<.001) were also positive and significant; however, the coefficients of perceived novelty value (β=.199; P=.25) and trust (β=.140; P=.35) on AI adoption were insignificant. The effect of communication strategy on privacy concerns was insignificant (β=.106; P=.52), as was the effect of privacy concerns on AI adoption (β=.105; P=.28). Thus, H1, H2, H5, and H7 were supported, whereas H3, H4, H6, and H8 were not supported in the ethos condition.

For the logos condition, communication strategy predicted trust: the coefficient of communication strategy on trust was positive and significant (β=.657; P<.001), and the coefficient of trust on AI adoption was also positive and significant (β=.474; P=.008). The coefficients of communication strategy on perceived novelty value (β=.582; P<.001), customer innovativeness (β=.602; P<.001), and privacy concerns (β=−.267; P=.05) were significant. However, the coefficients of perceived novelty value (β=.104; P=.61), customer innovativeness (β=−.017; P=.91), and privacy concerns (β=−.074; P=.49) on AI adoption were insignificant; the direction of the privacy concerns coefficient was nonetheless consistent with previous theoretical findings that privacy concerns inhibit AI adoption. Thus, H1, H3, H5, H6, and H7 were supported, whereas H2, H4, and H8 were not supported in the logos condition.


Overview

We empirically validated the influence of different communication strategies on overcoming factors that inhibit AI adoption. Having presented the results of our analysis (Table 3), we now consider the implications for users and research. We also discuss the limitations of this study and how they might inform future research initiatives.

Ethos-based communication strategies that rely on credibility and personal branding directly affect customer innovativeness, thereby increasing AI technology adoption. This may be because end users or patients can verify the veracity of the endorsers and discern the credibility of ethos-based ads. By contrast, communication strategies based on logos and pathos each help alleviate the trust issues that patients have, thereby increasing the adoption rate of AI. Notably, trust has both emotional and logical components; hence, it can be influenced emotionally (through pathos messaging) and logically (through logos messaging). Furthermore, a pathos-based communication strategy purposefully evokes emotions, connecting end users to the product at a more personal level and leading them to recognize the ease of use and usefulness of the AI product, which improves novelty value and AI adoption. Although pathos- and logos-based communication, which use emotions and evidence to persuade, helped alleviate the privacy concerns of AI product users, alleviating these concerns had a negligible impact on adoption, and questions remain about unethical data sharing and potential misuse of data by commercial organizations. This could be owing to information asymmetry between end users and organizations regarding how medical data may be used. Privacy concerns, including unethical data sharing and potential misuse of data by commercial organizations, continue to have a major impact on the adoption of AI technologies [45].

Table 3. Summary of the analysis.

Hypothesis | Pathos | Ethos | Logos
H1: using communication strategies for promoting the AIa product can positively impact customer innovativeness. | Supported | Supported | Supported
H2: nudging customer innovativeness can improve acceptance of the AI product being promoted. | Not supported | Supported | Not supported
H3: using communication strategies for promoting the AI product can alleviate privacy concerns with using the product. | Supported | Not supported | Supported
H4: alleviating privacy concerns can improve acceptance of the AI product being promoted. | Not supported | Not supported | Not supported
H5: using communication strategies for promoting the AI product can improve trust in using the product. | Supported | Supported | Supported
H6: enhancing trust can lead to better acceptance of the AI product being promoted. | Supported | Not supported | Supported
H7: using communication strategies for promoting the AI product can positively influence the perceived novelty value of the product. | Supported | Supported | Supported
H8: perceived novelty value can lead to better acceptance of the AI product being promoted. | Supported | Not supported | Not supported

aAI: artificial intelligence.

Prior studies note that AI systems will not replace human clinicians on a large scale, but rather will augment their efforts to care for patients [21]. With AI being a new agent introduced into the care process, patients often need to share sensitive information with the system without being fully aware of the consequences of such actions. Through such actions, patients stand to gain in terms of temporal displacement of care (ie, using AI to displace later high-cost interventions in favor of earlier preventive procedures) [28]. Despite these benefits, patients risk the loss of privacy, face systemic inequality or discrimination because of embedded biases in the AI tool, and face the possibility of being subjected to errors or injuries because of miscalculations of the system. Using appropriate communication strategies can alleviate some of the concerns users may have about using a new agent in their care process.

Our findings make a key theoretical contribution to the technology adoption literature, specifically AI adoption in health care. Effective health communication with the public does not just happen; it needs to be taught and practiced in health care [11]. Although it has been established that AI in health care is much more efficient than a clinician, the growth in the adoption of AI has been slower than expected because of various factors, such as the novelty of the technology and other user concerns. Consumer research shows that people usually adhere to their existing routines, characterized by risk aversion [9], and that novelty value, along with emotional and social values, significantly influences the adoption of new technologies [17]. The AI adoption literature has examined the factors that can inhibit the adoption of a product, but although many medical AI products are marketed to their users, the role of communication strategies in overcoming these inhibiting factors has received scant attention. This study addresses this research gap by identifying the underlying mechanisms through which each type of communication strategy overcomes some of the inhibiting factors to improve AI adoption. It identifies users' concerns regarding the adoption of AI technologies and shows which communication strategies work best to address them, thereby supporting quicker technology adoption.

Limitations and Future Work

In the current rush toward using AI for aiding businesses in a variety of tasks [62] and with AI increasingly becoming integrated into many aspects of human life [63], we believe that communication strategies can help users transcend any perceived risks inherent to using AI products. Inducing pathos, ethos, and logos communication strategies improved AI adoption. In this study, we did not consider the effects of multiple communication strategies acting simultaneously on AI adoption; therefore, it would be beneficial for future studies to examine whether the effects of multiple communication strategies on AI adoption are additive. Another limitation of this study is that it did not consider the impact of communication strategies on AI adoption by various stakeholders (eg, health care practitioners, researchers, and patients). Health care practitioners are trained and possess more knowledge of the medical domain; therefore, they may not be easily swayed by emotion-based communication strategies. Similarly, older users could be more apprehensive about privacy risks, leading to less adoption among them [64,65]. Further research is required to investigate these effects across stakeholder groups. In addition, we observed that the inhibiting factors were influenced differently by pathos, ethos, and logos communication strategies; for example, trust influenced adoption under pathos and logos but not under ethos. Thus, researchers can further examine the differential effects observed in our study for various communication strategies.

Conclusions

In our research, we established that communication strategies influence the adoption of AI by effectively mitigating concerns that end users might have regarding the adoption and use of medical AI products. The increased adoption of AI in the US health sector would be a major advantage from both efficiency and cost perspectives, resulting in improved patient well-being. Thus, although health AI would not fully replace human clinicians, increased adoption of AI aided by appropriate communication strategies would reduce costs and improve the affordability of health services for end users. Our research can be used by hospitals and clinicians for targeted ads and communication while allaying user concerns related to AI in health care.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Literature review and experiment supplementary material.

DOCX File , 4974 KB

  1. Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. 2nd edition. Upper Saddle River, NJ, USA: Prentice Hall; 2002.
  2. Kishor A, Chakraborty C. Artificial intelligence and internet of things based healthcare 4.0 monitoring system. Wirel Pers Commun 2022;127(2):1615-1631 [FREE Full text] [CrossRef]
  3. Sohn K, Kwon O. Technology acceptance theories and factors influencing artificial intelligence-based intelligent products. Telemat Inform 2020 Apr;47:101324 [FREE Full text] [CrossRef]
  4. Hsieh PJ. Healthcare professionals’ use of health clouds: integrating technology acceptance and status quo bias perspectives. Int J Med Inform 2015 Jul;84(7):512-523 [FREE Full text] [CrossRef] [Medline]
  5. Chen Y, Le D, Yumak Z, Pu P. EHR: a sensing technology readiness model for lifestyle changes. Mob Netw Appl 2017 May 11;22(3):478-492 [FREE Full text] [CrossRef]
  6. Kim HW, Chan HC, Gupta S. Value-based adoption of mobile internet: an empirical investigation. Decis Support Syst 2007 Feb;43(1):111-126 [FREE Full text] [CrossRef]
  7. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020 Jun 19;22(6):e15154. [CrossRef] [Medline]
  8. Cheung ML, Chau KY, Lam MH, Tse G, Ho KY, Flint SW, et al. Examining consumers' adoption of wearable healthcare technology: the role of health attributes. Int J Environ Res Public Health 2019 Jun 26;16(13):2257. [CrossRef] [Medline]
  9. Baumeister RF. Yielding to temptation: self-control failure, impulsive purchasing, and consumer behavior. J Consum Res 2002 Mar;28(4):670-676 [FREE Full text] [CrossRef]
  10. Son M, Han K. Beyond the technology adoption: technology readiness effects on post-adoption behavior. J Bus Res 2011 Nov;64(11):1178-1182 [FREE Full text] [CrossRef]
  11. Wieder JS. Communicating radiation risk: the power of planned, persuasive messaging. Health Phys 2019 Feb;116(2):207-211 [FREE Full text] [CrossRef] [Medline]
  12. Aristotle. Art of Rhetoric. Volume 2. Cambridge, MA, USA: Harvard University Press; 1926.
  13. Smith RW. The Art of Rhetoric in Alexandria: Its Theory and Practice in the Ancient World. Dordrecht, Netherlands: Springer; 1974.
  14. Meyer-Waarden L, Cloarec J. “Baby, you can drive my car”: psychological antecedents that drive consumers’ adoption of AI-powered autonomous vehicles. Technovation 2022 Jan;109:102348 [FREE Full text] [CrossRef]
  15. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res 2019 Dec;46(4):629-650 [FREE Full text] [CrossRef]
  16. Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q 1989 Sep;13(3):319-340 [FREE Full text] [CrossRef]
  17. Hedman J, Gimpel G. The adoption of hyped technologies: a qualitative study. Inf Technol Manag 2010 Oct 6;11(4):161-175. [CrossRef]
  18. Green Jr SE. A rhetorical theory of diffusion. Acad Manage Rev 2004 Oct 01;29(4):653-669 [FREE Full text] [CrossRef]
  19. Sleigh J, Vayena E. Public engagement with health data governance: the role of visuality. Humanit Soc Sci Commun 2021 Jun 18;8(1):149 [FREE Full text] [CrossRef]
  20. Miles A, Mezzich JE. The care of the patient and the soul of the clinic: person-centered medicine as an emergent model of modern clinical practice. Int J Pers Cent Med 2011 Jun 30;1(2):207-222 [FREE Full text] [CrossRef]
  21. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019 Jan 07;25(1):44-56 [FREE Full text] [CrossRef] [Medline]
  22. Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q 2003 Sep;27(3):425-478 [FREE Full text] [CrossRef]
  23. Visvikis D, Cheze Le Rest C, Jaouen V, Hatt M. Artificial intelligence, machine (deep) learning and radio(geno)mics: definitions and nuclear medicine imaging applications. Eur J Nucl Med Mol Imaging 2019 Dec;46(13):2630-2637. [CrossRef] [Medline]
  24. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Gov Inf Q 2019 Apr;36(2):368-383 [FREE Full text] [CrossRef]
  25. Adeleke IT, Adekanye AO, Adefemi SA, Onawola KA, Okuku AG, Sheshi EU, et al. Knowledge, attitudes and practice of confidentiality of patients' health records among health care professionals at Federal Medical Centre, Bida. Niger J Med 2011 Apr;20(2):228-235. [Medline]
  26. Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factors 2004;46(1):50-80. [CrossRef] [Medline]
  27. Wang W, Siau K. Trusting artificial intelligence in healthcare. In: Proceedings of the 24th Americas Conference on Information Systems. 2018 Presented at: AMCIS '18; August 16-18, 2018; New Orleans, LA, USA.
  28. Thompson S, Whitaker J, Kohli R, Jones C. Chronic disease management: how IT and analytics create healthcare value through the temporal displacement of care. MIS Q 2020;44(1b):227-253 [FREE Full text] [CrossRef]
  29. Goldsmith RE, Hofacker CF. Measuring consumer innovativeness. J Acad Mark Sci 1991 Jun;19(3):209-221 [FREE Full text] [CrossRef]
  30. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019 Jan;25(1):30-36 [FREE Full text] [CrossRef] [Medline]
  31. Lassar WM, Manolis C, Lassar SS. The relationship between consumer innovativeness, personal characteristics, and online banking adoption. Int J Bank Mark 2005 Mar 01;23(2):176-199 [FREE Full text] [CrossRef]
  32. Yasaka TM, Lehrich BM, Sahyouni R. Peer-to-peer contact tracing: development of a privacy-preserving smartphone app. JMIR Mhealth Uhealth 2020 Apr 07;8(4):e18936 [FREE Full text] [CrossRef] [Medline]
  33. Emani S, Yamin CK, Peters E, Karson AS, Lipsitz SR, Wald JS, et al. Patient perceptions of a personal health record: a test of the diffusion of innovation model. J Med Internet Res 2012 Nov 05;14(6):e150 [FREE Full text] [CrossRef] [Medline]
  34. Hirschman EC. Innovativeness, novelty seeking, and consumer creativity. J Consum Res 1980 Dec 01;7(3):283-295 [FREE Full text] [CrossRef]
  35. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag Sci 2000 Feb;46(2):186-204 [FREE Full text] [CrossRef]
  36. Jasrai L. Measuring mobile telecom service innovativeness among youth: an application of domain-specific innovativeness scale. Paradigm 2014 Jun;18(1):103-116 [FREE Full text] [CrossRef]
  37. Abbas A, Afshan G, Aslam I, Ewaz L. The effect of celebrity endorsement on customer purchase intention: a comparative study. Curr Econ Manag Res 2018;4(1):1-10 [FREE Full text]
  38. Hutchesson MJ, Collins CE, Morgan PJ, Callister R. An 8-week web-based weight loss challenge with celebrity endorsement and enhanced social support: observational study. J Med Internet Res 2013 Jul 04;15(7):e129 [FREE Full text] [CrossRef] [Medline]
  39. Higgins C, Walker R. Ethos, logos, pathos: strategies of persuasion in social/environmental reports. Account Forum 2012 Feb 16;36(3):194-208 [FREE Full text] [CrossRef]
  40. Shin HS, Callow M, Farkas ZA, Lee YJ, Dadvar S. Measuring user acceptance of and willingness-to-pay for CVI technology : final research report. U.S. Department of Transportation. 2016 Sep 30.   URL: https://rosap.ntl.bts.gov/view/dot/31468 [accessed 2021-10-01]
  41. Lee K, Khan S, Mirchandani D. Hierarchical effects of product attributes on actualized innovativeness in the context of high-tech products. J Bus Res 2013 Dec;66(12):2634-2641 [FREE Full text] [CrossRef]
  42. Juaneda-Ayensa E, Mosquera A, Murillo YS. Omnichannel customer behavior: key drivers of technology acceptance and use and their effects on purchase intention. Front Psychol 2016 Jul 28;7:1117 [FREE Full text] [CrossRef] [Medline]
  43. Nguyen TD, Nguyen TM, Pham QT, Misra S. Acceptance and use of e-learning based on cloud computing: the role of consumer innovativeness. In: Proceedings of the 14th International Conference on Computational Science and Its Applications. 2014 Presented at: ICCSA '14; June 30-July 3, 2014; Guimarães, Portugal p. 159-174. [CrossRef]
  44. Cheng YM. Antecedents and consequences of e-learning acceptance. Inf Syst J 2011 May;21(3):269-299 [FREE Full text] [CrossRef]
  45. Degirmenci K. Mobile users’ information privacy concerns and the role of app permission requests. Int J Inf Manage 2020 Feb;50:261-272 [FREE Full text] [CrossRef]
  46. Wu WK, Huang SY, Yen DC, Popova I. The effect of online privacy policy on consumer privacy concern and trust. Comput Human Behav 2012 May;28(3):889-897 [FREE Full text] [CrossRef]
  47. Cho JH, Chan KS, Adalı S. A survey on trust modeling. ACM Comput Surv 2015 Nov;48(2):1-40 [FREE Full text] [CrossRef]
  48. Wu K, Zhao Y, Zhu Q, Tan X, Zheng H. A meta-analysis of the impact of trust on technology acceptance model: investigation of moderating influence of subject and context type. Int J Inf Manage 2011 Dec;31(6):572-581 [FREE Full text] [CrossRef]
  49. McLean G, Wilson A. Shopping in the digital world: examining customer engagement through augmented reality mobile applications. Comput Human Behav 2019 Dec;101:210-224 [FREE Full text] [CrossRef]
  50. Lai PC. The literature review of technology adoption models and theories for the novelty technology. J Inf Syst Technol Mana 2017 Jan;14(1):21-38 [FREE Full text] [CrossRef]
  51. Hasan R, Shams R, Rahman M. Consumer trust and perceived risk for voice-controlled artificial intelligence: the case of Siri. J Bus Res 2021 Jul;131:591-597 [FREE Full text] [CrossRef]
  52. Arttu K. Technology acceptance of voice assistants: anthropomorphism as factor. University of Jyväskylä. 2017.   URL: https://jyx.jyu.fi/handle/123456789/54612 [accessed 2021-10-01]
  53. Prebensen NK, Xie J. Efficacy of co-creation and mastering on perceived value and satisfaction in tourists' consumption. Tour Manag 2017 Jun;60:166-176 [FREE Full text] [CrossRef]
  54. Zhang TC, Lu C, Kizildag M. Engaging generation Y to co-create through mobile technology. Int J Electron Commer 2017 Oct 02;21(4):489-516 [FREE Full text] [CrossRef]
  55. Dinev T, Albano V, Xu H, D’Atri A, Hart P. Individuals’ attitudes towards electronic health records: a privacy calculus perspective. In: Gupta A, Patel VL, Greenes RA, editors. Advances in Healthcare Informatics and Analytics. Cham, Switzerland: Springer; 2016:19-50.
  56. Pelau C, Dabija DC, Ene I. What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Comput Human Behav 2021 Sep;122:106855 [FREE Full text] [CrossRef]
  57. Buhrmester M, Kwang T, Gosling SD. Amazon's mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 2011 Jan;6(1):3-5. [CrossRef] [Medline]
  58. Chin WW. The partial least squares approach to structural equation modeling. In: Marcoulides GA, editor. Modern Methods for Business Research. Mahwah, NJ, USA: Lawrence Erlbaum Associates; Apr 12, 1998:295-336.
  59. Sarstedt M, Ringle CM, Hair JF. Partial least squares structural equation modeling. In: Homburg C, Klarmann M, Vomberg A, editors. Handbook of Market Research. Cham, Switzerland: Springer; Aug 22, 2017:1-40.
  60. Podsakoff PM, MacKenzie SB, Lee JY, Podsakoff NP. Common method biases in behavioral research: a critical review of the literature and recommended remedies. J Appl Psychol 2003 Oct;88(5):879-903. [CrossRef] [Medline]
  61. Preacher KJ, Hayes AF. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behav Res Methods 2008 Aug;40(3):879-891. [CrossRef] [Medline]
  62. Garvey MD, Samuel J, Pelaez A. Would you please like my tweet?! An artificially intelligent, generative probabilistic, and econometric based system design for popularity-driven tweet content generation. Decis Support Syst 2021 May;144:113497 [FREE Full text] [CrossRef]
  63. Grimes GM, Schuetzler RM, Giboney JS. Mental models and expectation violations in conversational AI interactions. Decis Support Syst 2021 May;144:113515 [FREE Full text] [CrossRef]
  64. Fox G, Connolly R. Mobile health technology adoption across generations: narrowing the digital divide. Inf Syst J 2018 Nov;28(6):995-1019 [FREE Full text] [CrossRef]
  65. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients’ perceptions toward human–artificial intelligence interaction in health care: experimental study. J Med Internet Res 2021 Nov 25;23(11):e25856 [FREE Full text] [CrossRef] [Medline]


AI: artificial intelligence
AMT: Amazon Mechanical Turk
H: hypothesis
PLS: partial least squares
TAM: Technology Acceptance Model


Edited by G Eysenbach; submitted 26.07.22; peer-reviewed by SR Sebastian, M Kapsetaki, I Adeleke; comments to author 17.11.22; revised version received 12.01.23; accepted 23.01.23; published 13.03.23

Copyright

©Glorin Sebastian, Amrita George, George Jackson Jr. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 13.03.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.