Review
Abstract
Background: Conversational agents (CAs), also known as chatbots, are digital dialog systems that enable people to have a text-based, speech-based, or nonverbal conversation with a computer or another machine based on natural language via an interface. The use of CAs offers new opportunities and various benefits for health care. However, they are not yet ubiquitous in daily practice. Nevertheless, research regarding the implementation of CAs in health care has grown tremendously in recent years.
Objective: This review aims to present a synthesis of the factors that facilitate or hinder the implementation of CAs from the perspectives of patients and health care professionals. Specifically, it focuses on the early implementation outcomes of acceptability, acceptance, and adoption as cornerstones of later implementation success.
Methods: We performed an integrative review. To identify relevant literature, a broad literature search was conducted in June 2021 with no date limits and using all fields in PubMed, Cochrane Library, Web of Science, LIVIVO, and PsycINFO. To keep the review current, another search was conducted in March 2022. To identify as many eligible primary sources as possible, we used a snowballing approach by searching reference lists and conducted a hand search. Factors influencing the acceptability, acceptance, and adoption of CAs in health care were coded through parallel deductive and inductive approaches, which were informed by current technology acceptance and adoption models. Finally, the factors were synthesized in a thematic map.
Results: Overall, 76 studies were included in this review. We identified influencing factors related to 4 core Unified Theory of Acceptance and Use of Technology (UTAUT) and Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) factors (performance expectancy, effort expectancy, facilitating conditions, and hedonic motivation), with most studies underlining the relevance of performance and effort expectancy. To meet the particularities of the health care context, we redefined the UTAUT2 factors social influence, habit, and price value. We identified 6 other influencing factors: perceived risk, trust, anthropomorphism, health issue, working alliance, and user characteristics. Overall, we identified 10 factors influencing acceptability, acceptance, and adoption among health care professionals (performance expectancy, effort expectancy, facilitating conditions, social influence, price value, perceived risk, trust, anthropomorphism, working alliance, and user characteristics) and 13 factors influencing acceptability, acceptance, and adoption among patients (additionally hedonic motivation, habit, and health issue).
Conclusions: This review shows manifold factors influencing the acceptability, acceptance, and adoption of CAs in health care. Knowledge of these factors is fundamental for implementation planning. Therefore, the findings of this review can serve as a basis for future studies to develop appropriate implementation strategies. Furthermore, this review provides an empirical test of current technology acceptance and adoption models and identifies areas where additional research is necessary.
Trial Registration: PROSPERO CRD42022343690; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=343690
doi:10.2196/46548
Keywords
Introduction
Background
Health care services worldwide face significant challenges from increasing demand on the one hand and an increasing lack of availability and accessibility on the other hand, accompanied by rising health care costs [
]. The current COVID-19 pandemic has also affected health care delivery and has highlighted the need for alternative approaches that can overcome geographic, temporal, and organizational barriers to providing comprehensive high-quality care [ ]. A promising way to overcome these barriers is technological progress, which is driven in particular by increasing digitization and advances in the field of artificial intelligence (AI). One such technology is conversational agents (CAs), also known as chatbots [
, ]. On the basis of previous definitions, we define CAs as digital dialog systems that enable people to have text-based, speech-based, or nonverbal conversations with a computer or another machine based on natural language. The related concepts and variants of CAs are provided in [ - ]. The use of CAs offers new opportunities and various benefits for health care. Current research points to their ability to improve the accessibility of health care services and medical knowledge and to foster patient-centered care while reducing health care costs. Furthermore, their ability to communicate in multiple languages has been discussed [
, ]. This technology can support health care professionals in their daily work and thus reduce their burden [ ]. Numerous studies have demonstrated the effectiveness and efficiency of using CAs in health care, such as in supporting diagnostic decision-making [ ] and cognitive behavioral therapy for psychiatric and somatic disorders [ - ]. In this regard, CAs support effective, acceptable, and practical health care comparable with that provided by human physicians [ , ]. Owing to the nonjudgmental nature and impartiality of CAs, studies postulate that the systems may even be better suited than health care professionals to meet the needs of patients in some areas [ ]. Achieving acceptability, acceptance, and adoption is challenging for new technologies, as the user’s journey to technology acceptability, acceptance, and adoption is complex and nonlinear [
, ]. The success of an innovation depends on its use by end users, that is, its acceptability, acceptance, and adoption. Acceptability is understood as a person’s perception of a technology before its use. Acceptance, by contrast, is a person’s perception of a technology after its initial use [ ]. Adoption refers to a multistage process that explains a person’s choice to use an innovation. It involves a decision-making process that begins with the perception of the technology and ends with the confirmation of the adoption decision or achievement of permanent use [ - ]. The users of the technology must, therefore, be at the center of the digitization process because without including their values and interests in the acceptability, acceptance, and adoption processes, an innovation cannot be successful [ , - ]. Therefore, it is necessary to determine the factors that affect these processes. Such a broad knowledge base of influencing factors will enable the development of effective strategies for the implementation of new technologies and will serve as a starting point for tailoring new technologies in a user-centric manner, which is crucial for sustainable use [ ]. Research regarding the acceptability, acceptance, and adoption of CAs in health care has gained tremendous interest in recent years and has become a significant field. It is not only private users and patients who are crucial stakeholders within the innovation process of CAs but also staff in health care organizations. In particular, the attitudes and beliefs of staff are crucial for the introduction of CAs to medical institutions because the establishment of new technologies often fails not because of the nature of the systems but because of the employees [
]. One of the most common reasons for the failure of innovations is insufficient knowledge about the acceptability, acceptance, and adoption processes at the time of introduction [ ]. Several studies have indicated that despite the benefits of CAs, there is insufficient acceptability, acceptance, and adoption among those who can most benefit from this technology, namely people with health issues [
]. In addition, this technology is usually associated with poor adoption by physicians [ ]. To date, several studies have investigated the factors influencing the user acceptability, acceptance, and adoption of CAs in health care. Some factors, such as performance expectancy, effort expectancy, trust, and facilitating conditions, have already been determined [ - ]. However, a complete overview of the factors influencing the acceptability, acceptance, and adoption of CAs in health care does not yet exist.
Objectives
This study presents an overview of the facilitating and hindering factors that influence the acceptability, acceptance, and adoption of CAs from the perspectives of patients and health care professionals. Both groups are considered separately to assess whether the influencing factors differ and to derive recommendations for how CAs in the health care system must be designed so that they are used by both patients and health care professionals. Furthermore, CAs can be sustainably integrated into care only if health care professionals are convinced of their benefits and prescribe or recommend them to patients. From the perspective of health care professionals, it is crucial to differentiate between providers’ perception of the use of CAs for patients and their perception of the use of CAs for supporting their own work. On the basis of the identified influencing factors, which were derived from previous technology acceptance and adoption research, a comprehensive thematic map was developed, providing a visualization of the factors that determine the acceptability, acceptance, and adoption of CAs in health care. By showing the factors that influence the acceptability, acceptance, and adoption of health care CAs, this up-to-date literature review will enable the design of effective strategies for implementing and establishing the technology and for the user-centered design of the systems, which will lead to sustainable use. Furthermore, the review can serve as a guide for developers, as it shows how the technology should be designed so that it is accepted and adopted by the target group.
These objectives lead to the following research question: what are the factors influencing the acceptability, acceptance, and adoption of CAs in health care from the perspectives of patients and health care professionals?
Technology Acceptance and Adoption Models
Numerous studies have demonstrated that technology acceptance and adoption models are suitable for investigating the factors influencing technology acceptance and adoption in the health care sector [
, ]. These models attempt to explain the adoption process and use of new technologies and share a basic conceptual framework. This framework explains how individual attitudes affect the intention to use and, ultimately, the actual use of new technologies. Researchers from a wide range of disciplines have developed various user acceptance and adoption models for understanding acceptability, acceptance, and adoption from the perspective of individuals or organizations. The table below shows the important models and theories of individual acceptance and adoption and their respective determinants. Whereas previous models could explain between 17% and 53% of an individual’s intention to use a technology, the Unified Theory of Acceptance and Use of Technology (UTAUT) can explain approximately 70% of the variance in an employee’s behavioral intention and up to 50% of the variance in technology use in the organizational context. Moreover, the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) can explain approximately 74% of the variance in stated behavioral intentions and approximately 53% of the variance in technology use in the consumer context [
, ]. Furthermore, both models have been successfully used in the health care sector on several occasions [ , , , ]. Thus, these 2 models serve as the initial models for this review and are described in more detail below. The UTAUT emerged from a review and synthesis of the 8 most prominent user acceptance models identified in a literature review by Venkatesh et al [
]. The reformulated model includes 4 core determinants, namely performance expectancy, effort expectancy, social influence, and facilitating conditions, which directly determine behavioral intention and use behavior. Unlike previous models, the UTAUT also includes 4 moderators (gender, age, experience, and voluntariness of use) that have a moderating influence on the 4 core determinants [ ]. In 2012, Venkatesh et al [ ] modified the UTAUT for the consumer technology acceptance and use context to form the UTAUT2. Whereas the UTAUT focuses on predicting the intention to use and actual use of a technology primarily in the organizational context, additional constructs and relationships were identified for the UTAUT2 to predict the intention to use and actual use of a technology in the consumer context. In particular, the determinants hedonic motivation, price value, and habit were added, and the moderating factor voluntariness of use was removed. Therefore, in the UTAUT2, 7 determinants are moderated by 3 factors [ ]; a schematic summary is sketched after the table below.
Model | Determinants | Reference, year
TRAa | Attitude toward behavior, subjective norm | Fishbein and Ajzen [ ], 1975
TAMb | Perceived usefulness, perceived ease of use | Davis [ ], 1989
TPBc | Attitude toward behavior, subjective norm, perceived behavioral control | Ajzen [ ], 1991
MPCUd | Job fit, complexity, long-term consequences, affect toward use, social factors, facilitating conditions | Thompson et al [ ], 1991
C-TAM-TPBe | Attitude toward behavior, subjective norm, perceived behavioral control, perceived usefulness | Taylor and Todd [ ], 1995
IDTf | Relative advantage, ease of use, image, visibility, compatibility, results demonstrability, voluntariness of use | Rogers [ ], 1995
MMg | Extrinsic motivation, intrinsic motivation | Vallerand [ ], 1997
SCTh | Outcome expectations, self-efficacy, affect, anxiety | Compeau et al [ ], 1999
TAM 2i | Perceived usefulness, perceived ease of use, subjective norm | Venkatesh and Davis [ ], 2000
UTAUTj | Performance expectancy, effort expectancy, social influence, facilitating conditions | Venkatesh et al [ ], 2003
UTAUT2k | Performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, habit | Venkatesh et al [ ], 2012
aTRA: Theory of Reasoned Action.
bTAM: Technology Acceptance Model.
cTPB: Theory of Planned Behavior.
dMPCU: Model of PC Utilization.
eC-TAM-TPB: Combined Technology Acceptance Model and Theory of Planned Behavior.
fIDT: Innovation Diffusion Theory.
gMM: Motivation Model.
hSCT: Social Cognitive Theory.
iTAM 2: Technology Acceptance Model 2.
jUTAUT: Unified Theory of Acceptance and Use of Technology.
kUTAUT2: Unified Theory of Acceptance and Use of Technology 2.
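Read together with the table, the UTAUT2 relationships described above can be summarized in a compact schematic form. The following is a simplified sketch based solely on the determinants, moderators, and variance figures named in this section, not a formal reproduction of the model by Venkatesh et al:

```latex
% Simplified UTAUT2 sketch: PE = performance expectancy, EE = effort expectancy,
% SI = social influence, FC = facilitating conditions, HM = hedonic motivation,
% PV = price value, HT = habit, BI = behavioral intention, UB = use behavior.
% The determinant-to-intention paths are moderated by age, gender, and experience.
\begin{align*}
  \mathrm{BI} &= f(\mathrm{PE},\ \mathrm{EE},\ \mathrm{SI},\ \mathrm{FC},\ \mathrm{HM},\ \mathrm{PV},\ \mathrm{HT} \mid \text{age, gender, experience})\\
  \mathrm{UB} &= g(\mathrm{BI}), \qquad R^2(\mathrm{BI}) \approx 0.74, \quad R^2(\mathrm{UB}) \approx 0.53 \ \text{(consumer context)}
\end{align*}
```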
Methods
Overview
An integrative review (IR) was chosen to answer the research question. This approach allows the inclusion of studies with diverse methodologies (ie, experimental and nonexperimental research) [
, ] and can precisely represent the state of the current research literature [ ]. IRs are the most comprehensive methodological approach to reviews [ ] and have many benefits, including identifying gaps in the current research and the need for future studies, evaluating the strength of the scientific evidence, identifying a conceptual or theoretical framework [ ], and analyzing methodological issues of a particular topic [ ]. Furthermore, the varied sampling frame of IRs, in conjunction with the multiplicity of their purposes, has the potential to generate a comprehensive understanding of problems related to health care [ ]. To ensure methodological rigor, we used Cooper’s [
] 5-stage IR method modified by Whittemore and Knafl [ ]. This 5-step approach includes (1) problem identification, (2) data collection, (3) data evaluation (quality appraisal), (4) data analysis and interpretation (data extraction), and (5) presentation of results. At the same time, we used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist as a guide for preparing the IR, which is available in [ ]. This review has been registered on PROSPERO (CRD42022343690).
Information Sources and Search Strategy
To identify the relevant literature for this review, a broad literature search was conducted in June 2021 with no date limits and using all fields in 5 databases (PubMed, Cochrane Library, Web of Science, LIVIVO, and PsycINFO). To keep the IR current, a second search was conducted in March 2022. The search terms were derived from the guiding research question. The following 3 keyword groups were set: “conversational agent,” “acceptability, acceptance, and adoption,” and “influencing factor.” Various synonyms within these keyword groups were generated from the Medical Subject Headings terms of the 5 databases, and further synonyms were derived through a web-based search and from previously published literature discussing CAs. Finally, we used an extensive list of 43 search terms ( ); the Boolean combination of the keyword groups is sketched below.
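To illustrate, the following sketch shows the 3-group Boolean logic: synonyms are joined with OR within a group, and the groups are joined with AND. The synonym lists here are hypothetical stand-ins for illustration only; the actual protocol comprised 43 search terms adapted to the syntax of each of the 5 databases:

```python
# Sketch of the 3-group Boolean search logic described above.
# The terms below are illustrative examples, not the published 43-term list.
concept_groups = {
    "conversational agent": ["conversational agent", "chatbot", "dialog system"],
    "acceptability, acceptance, and adoption": ["acceptability", "acceptance", "adoption"],
    "influencing factor": ["influencing factor", "barrier", "facilitator"],
}

def build_query(groups: dict) -> str:
    """OR the synonyms within each group, then AND the groups together."""
    blocks = ("(" + " OR ".join(f'"{term}"' for term in terms) + ")"
              for terms in groups.values())
    return " AND ".join(blocks)

print(build_query(concept_groups))
# ("conversational agent" OR "chatbot" OR "dialog system") AND ("acceptability" OR ...) AND ...
```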
The search strategy was cross-checked with the Guideline Statement for Electronic Search Strategies [ ]. In addition, a preliminary search was conducted in each database to ensure the appropriateness and relevance of the adopted keywords because different digital databases use search engines with different requirements.
Eligibility Criteria
The PRISMA selection process was used to review publications for inclusion [
]. All studies were assessed against a set of predetermined inclusion and exclusion criteria, which were defined and guided by the research question and purpose of the IR. Studies were included in the IR if they met the following criteria: (1) the language was English or German; (2) the papers were primary studies; (3) they were published on or before December 31, 2021; (4) they described the acceptability, acceptance, and adoption of CAs and their influencing factors; (5) the setting was health care; and (6) the studies adopted a quantitative, qualitative, or mixed methods design (see the table below).
Characteristics | Included | Excluded
Language | English and German | All other languages |
Article type | Published primary study | All article types other than primary studies (eg, reviews) and unpublished primary studies |
Year of publication | Inception of the database to December 31, 2021 | N/Aa |
Context | Studies that describe the acceptability, acceptance, and adoption of CAsb and studies that describe the factors influencing the acceptability, acceptance, and adoption of CAs | N/A |
Area | Health care | All other areas |
Study design | Quantitative, qualitative, and mixed methods studies | N/A |
aN/A: not applicable.
bCA: conversational agent.
Selection and Data Collection Process
Screening of studies for inclusion was independently performed by 2 authors (MW and MH) in 2 stages: title and abstract review and full paper review. There were 5 disagreements between the 2 reviewers. The discrepancies were discussed between the authors and resolved through consensus.
Data Extraction and Outcomes
Data analysis was conducted via the 4-phase process described by Whittemore and Knafl [
]. During the initial phase (data reduction), we extracted the following information from the studies: the perspectives (of patients and health care professionals) on acceptability, acceptance, and adoption; the wording used in terms of acceptability, acceptance, and adoption of the technology; methodology; theory; study area; number of participants; year; country; and the influencing factors. In addition, it was noted whether acceptability, acceptance, or adoption was part of the research question. In the second phase (data display), we converted the extracted data from the individual sources into a table matrix; one such record is sketched below. During the third phase (data comparison), the factors were analyzed in more detail, paraphrased, and assigned to superordinate categories based on the relationships between them and their underlying meaning.
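As an illustration of the data-reduction step, one row of such a table matrix could be represented as follows. The field names paraphrase the items listed above and are illustrative, not the authors’ actual coding sheet:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of one extraction record (one included study),
# mirroring the items extracted during the data-reduction phase.
@dataclass
class ExtractionRecord:
    study_id: str                        # identifier of the primary study
    year: int
    country: str
    perspective: str                     # "patients", "professionals", or "both"
    outcome_wording: list                # e.g., ["acceptability", "adoption"]
    outcome_in_research_question: bool   # was the outcome part of the research question?
    methodology: str                     # qualitative, quantitative, or mixed methods
    theory: Optional[str]                # e.g., "TAM" or "UTAUT2", if any
    study_area: str                      # health domain targeted by the CA
    n_participants: int
    influencing_factors: list = field(default_factory=list)
```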
Synthesis of Results
The influencing factors were coded through parallel deductive and inductive approaches. In the deductive approach, we searched for statements reflecting the factors proposed by the UTAUT or UTAUT2. During the inductive coding, we developed categories that were not included in the UTAUT or UTAUT2. To verify the identification and classification of the influencing factors by the first reviewer (MW), a second independent reviewer (MH) checked the identification and classification of the influencing factors in 10% (8/76) of the included studies, which were randomly selected. There was only 1 disagreement between the 2 reviewers. The disagreement was resolved after a short conversation, and the first reviewer’s identification and assignment was retained. The final phase (conclusion drawing and verification) comprised interpreting the information derived from the previous stages [
].
Risk of Bias Assessment
All the retrieved papers were subjected to a quality assessment. The Mixed Methods Appraisal Tool version 2018 was used because it is a critical appraisal tool for reviews that include qualitative, quantitative, and mixed methods studies [
]. One researcher (MW) rated all the identified studies, and a second researcher (JK-N) independently rated 10% (8/76) of the identified studies, which were randomly selected. For 1 study, the researchers provided slightly different ratings of quality. However, this difference was quickly resolved and was judged to be sufficiently minor to not question the viability of the other ratings.
Results
Study Selection
The PRISMA flow diagram illustrates the database searches and study screenings.
The first database search yielded 602 studies, and the second database search yielded 303 studies. After duplicates were removed, the titles and abstracts of 532 studies were screened, of which 195 were included in the full-text screening.
Finally, 72 studies were identified through the systematic database search. According to Whittemore and Knafl [
], complementary searches are essential for an IR to identify the maximum number of eligible primary sources. Therefore, we used a snowballing approach by searching the reference lists of the eligible studies. In addition, we conducted a hand search. Through the snowballing approach, we identified 1 more study, and through the hand search, another 3 studies. A total of 76 studies were included in the review; the counts are cross-checked below.
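The study-flow numbers reported above can be verified with a few lines of arithmetic:

```python
# Cross-check of the study-flow counts reported in this section.
records = 602 + 303                 # first + second database search = 905 records
screened = 532                      # titles/abstracts screened after deduplication
duplicates = records - screened     # implies 373 duplicates were removed
full_text = 195                     # studies assessed in full-text screening
from_databases = 72                 # studies retained from the systematic search
total = from_databases + 1 + 3      # + 1 via snowballing, + 3 via hand search
assert duplicates == 373 and total == 76
```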
Risk of Bias
The results of the appraisal are available in
[ , , - , - , , - , - ]. Overall, the quality of the included studies was high. Because the aim of this IR is to provide a comprehensive account of the factors influencing the acceptability, acceptance, and adoption of CAs in health care, the authors decided to include all studies. The critical appraisal of the papers revealed a minor risk of bias in 11 (14%) of the 76 publications.
Study Characteristics
The characteristics of the included studies are summarized in
[ , , - , - , , - , - ]. Of the 76 included studies, 69 (91%) exclusively focused on acceptability, acceptance, or adoption among patients, and 3 (4%) focused solely on acceptability, acceptance, or adoption among health care professionals. The remaining 4 (5%) of the 76 studies (the studies by Dupuy et al [
], Kowatsch et al [ ], LeRouge et al [ ], and Potts et al [ ]) explored and described the influencing factors from both the patient and health care professional perspectives. The 76 included papers were published between 2005 and 2021, with most (n=62, 82%) published from 2019 to 2021. The studies originated from 19 countries. Most studies were conducted in the United States (29/76, 38%) and the United Kingdom (13/76, 17%). The sample sizes of the studies ranged from 4 to 16,519. Concerning study design, of the 76 studies, 21 (28%) had a qualitative design, 17 (22%) had a quantitative nonrandomized design, 15 (20%) had a quantitative randomized controlled design, 14 (18%) had a mixed methods design, and 9 (12%) had a quantitative descriptive design.
Furthermore, the included studies were mostly pilot studies with short intervention periods (mostly between 2 and 4 weeks). Among the 76 studies, there was only 1 (1%) long-term study conducted for >12 months [
] and 2 (3%) studies with a timeframe of >6 months [ , ]. Moreover, the studies were mostly laboratory studies conducted in a controlled environment; only 14 (18%) of the 76 papers used and tested CAs in real-world conditions [ , , , , , - ]. Only the studies by Sillice et al [ ], Baptista et al [ ], and Fan et al [ ] were long-term studies under real-world conditions. The reviewed studies displayed a wide variation in the wording used in relation to the acceptability, acceptance, and adoption of technology. Of the 76 studies, 36 (47%) used the term “acceptability” exclusively, 12 (16%) used the term “acceptance,” and 11 (14%) used the term “adoption.” In addition, of the 76 studies, 8 (11%) used both “acceptance” and “adoption” congruently, 7 (9%) used both “acceptance” and “acceptability,” and 1 (1%) used both “acceptability” and “adoption.” Similarly, 1 (1%) study used all 3 terms (“acceptability,” “acceptance,” and “adoption”) congruently in its descriptions. By contrast, 3 (4%) studies used none of the terms explicitly but only described the users’ perceptions [
, , ]. These studies were assigned to the term “adoption” [ ]. It was also noted that none of the included studies defined the terms used. Overall, the included studies described a high level of acceptability, acceptance, and adoption of CAs in health care. Furthermore, in only 10 (13%) of the 76 studies was “acceptability,” “acceptance,” “adoption,” or a synonym part of the research question or primary research objective [
, , , , , , , , , ]. In 44 (58%) of the 76 studies, “acceptability,” “acceptance,” “adoption,” or a synonym was part of the secondary objectives. Of the 8 studies with an established model to measure the acceptability, acceptance, and adoption of health CAs, 6 (75%) referred to the Technology Acceptance Model [ , , - ], and 2 (25%) referred to the UTAUT2 [ , ]. The CAs of the included studies targeted various health domains, as shown in the textbox below.
The textbox shows which health domain was supported by a CA and where it was used within a common medical care pathway. Most CAs dealt with mental health issues and covered the complete care path.
- Mental health [ , ]
- Family health history [ , , ]
- Pregnancy care [ , ]
- Health adviser and promoter [ , ]
- Vaccination [ ]
- Sexual health advice [ , ]
- Health care for children [ ]
- Healthy lifestyle behavior [ , ]
- Exercise and sun protection [ ]
- Tuberculosis [ ]
- Physical activity [ ]
- Cancer [ ]
- Diabetes [ ]
Diagnostic
- Self-diagnosis [ , ]
- Mental health [ ]
- COVID-19 [ ]
Treatment
- Mental health [ , , , , , , , , - ]
- Pregnancy care [ ]
- Genetic counseling [ ]
- Chronic pain [ ]
- Diabetes [ , ]
- Sleeping concerns [ ]
- Heart disease [ , , ]
- HIV [ ]
- Adiposity [ ]
- Smoking cessation [ ]
- Substance misuse [ , ]
- Sickle cell disease [ ]
- Drug information and risk minimization measures by physicians [ ]
Rehabilitation
- Physical therapy [ ]
- Mental health [ , , ]
- Cancer [ , ]
- Pregnancy care [ ]
Care
- Dementia [ ]
- Home care [ ]
- Care of older people [ , , , - , , ]
General
- General use of CAs in health care [ , , , , ]
- General use of CAs in relation to COVID-19 [ ]
Influencing Factors
Overview
Of the 76 papers, the 73 (96%) dealing with acceptability, acceptance, and adoption among patients included 354 mentions of 13 distinct influencing factors. The 7 (9%) of the 76 studies dealing with the acceptability, acceptance, and adoption among health care professionals referred to different health care professional groups. In addition to physicians [
, , ], the studies investigated the acceptability, acceptance, and adoption of CAs among physiotherapists [ ], mental health professionals [ ], and (home) care providers [ , ]. In the analysis of the data and description of the results, it was important to distinguish between the perception of health care professionals of the use of CAs for patients and their perception of the use of CAs to support their daily work. The thematic map summarizes the factors, along with their subthemes, that explain the acceptability, acceptance, and adoption of CAs among patients and health care professionals. For a clearer presentation of the influencing factors, different shades of gray are used in the thematic map (influencing factors and subthemes mentioned by both patients and health care professionals are shaded in light gray, those mentioned by patients only are shaded in white, and those mentioned by providers only are shaded in dark gray).
For better comprehensibility within the presentation of results, no distinction was made among the outcomes acceptability, acceptance, and adoption. The original terms from the included primary studies regarding acceptability, acceptance, and adoption can be seen in
[ , , - , - , , , , - , - , - , - ] and [ , , , , , , , , , , , ] for patients and health care professionals, respectively. In addition, numerical listings of the influencing factors are included.
UTAUT and UTAUT2 Factors
Performance Expectancy
Performance expectancy refers to the degree to which individuals believe that using a technology will provide them with benefits in performing certain activities [
, ]. According to Venkatesh et al [ ], performance expectancy captures the relative advantage [ ] and perceived usefulness [ ] of the target technology. In 68 (93%) of the 73 studies among patients, performance expectancy was the most frequently identified and researched factor influencing the acceptability, acceptance, and adoption of CAs among patients. A total of 63 (86%) of the 73 studies found that CAs were used by patients when they were perceived as useful and helped them improve their health and quality of life. However, the results from the included studies also suggested that many users rated the performance expectancy of CAs as low because they felt that the technology was not yet sophisticated enough to address complex health issues or detect symptoms of less common health conditions or diseases [ - , , , , ]. Some studies (2/73, 3%) highlighted that AI at this stage is far too limited and simplistic to be truly effective in many complex health cases [ , ]. Therefore, CAs were preferred for general questions, whereas interactions with physicians were preferred for specific questions [ , , , , ]. The immaturity of the technology was also reflected in the fact that a large number of studies (21/73, 29%) pointed to technical problems with CAs that significantly affected the performance expectancy, acceptability, acceptance, and adoption of the systems. Owing to technical problems, patients did not use the systems or ended the process prematurely [ , , , , , , , , , , , , , , , , , , , , ]. Moreover, some patients found CAs unhelpful and found talking to a machine disturbing [ , , , , , , ]. In addition, some studies (21/73, 29%) reported that patients perceived certain unique advantages of AI-performed therapy over human-performed therapy [
, , , , , , , - , , , - , , , , ]. Above all, anonymity in the interactions with CAs and the nonjudgmental nature of CAs were strong motivators for their acceptability, acceptance, and adoption and motivated people to use a health CA. Therefore, patients were more willing to share personal, embarrassing, and uncomfortable information with a CA than with a human. Some patients reported that they experienced judgment and blame for their conditions from real people [ , , , , , - , , - , , , , , ]. Furthermore, patients liked the convenience of a CA-based therapy, which is not possible in a traditional human therapy. They appreciated the ubiquitous availability of CAs and the fact that they have no time pressure in their requests, there is no waiting time, they do not disturb the physicians or waste their time unnecessarily, they can repeat questions or ask uninformed questions, and they can repeat or replay the conversation as often as they want. In addition, patients valued receiving personalized medical treatment or advice through CAs because it is exclusively about them and there are no interruptions from other patients. Moreover, it was perceived as pleasant that a CA never makes a patient feel alone and always motivates them to improve their health. Some users felt that well-designed CAs can be more accurate and logical than physicians [ , , , , , - , , , , , , , , - , , , ]. Furthermore, patients perceived CAs to be faster, more anonymous, and more informative than information pipelines and search engines. Nevertheless, there were concerns that health CAs could affect the overall quality of health care by replacing experienced professionals [ ]. With mentions in all of the 7 included studies on health care professionals, performance expectancy was also the most frequently identified and researched factor influencing the acceptability, acceptance, and adoption of CAs among health care professionals. Health CAs were described by health care professionals as important, useful, and promising [
, , , , , , ].Some studies (5/7, 71%) showed that health care professionals expected patients’ use of CAs to significantly improve health care. By using the systems, patients can better manage their health, access to care can be improved, travel times to medical facilities can be reduced, and unnecessary treatment visits can be avoided. Furthermore, health care professionals anticipated that patients would give more information to the CA owing to the anonymity in the interactions. Significant facilitation and benefits from establishing CAs were observed primarily in the areas of scheduling appointments, finding medical facilities, medication reminders, treatment adherence, providing treatment instructions, and requesting health care. In addition, benefits were also expected in physical therapy. In particular, the freedom of time and space for patients when using a CA and the real-time feedback provided during home exercises were seen as major advantages [
, , , , ].In addition to the benefits that the introduction of CAs will bring to patients, health care professionals expected that the systems could be of great help in their daily work and would make it much easier. Health care professionals assumed that the use of health CAs would free up time that the providers could then use to provide higher-quality and more individualized care to the remaining patients. The main requirement that health care professionals placed on CAs was that they quickly provide accurate medical information [
, , , ].Among health care professionals, technical problems with CAs had a significant impact on perceived usefulness, acceptability, acceptance, and adoption. However, health care professionals were aware that CAs are still at an experimental stage and, therefore, not yet mature enough to take on more complex tasks. However, some health care workers did not consider CAs to be useful for health care and were skeptical about whether the systems could improve the quality of care and facilitate their daily work [
, , ].Effort Expectancy
Effort expectancy is defined as the degree of ease associated with the use of a technology [
, ]. It can relate to the ease of use for consumers or clients [ ] as well as the ease of use for employees or providers [ ]. In 82% (n=60) of the 73 studies that addressed the patient perspective, the effort expectancy of CAs was described as a significant factor influencing the acceptability, acceptance, and adoption of CAs. In this regard, it was crucial for patients that the CA responds in a pleasant, light-hearted, user-friendly, and interactive manner; that the interface is simple and easy to use; that the CA is easily accessible; and that the explanations are easy to understand. It was perceived as particularly advantageous that the CA provides all types of health care through 1 device [ , , , , - , , , , , , , - , , - , - , - , - , - ]. However, other patients found CA applications too complicated, the technology too fast or too slow, or the input into a digital system too time consuming [ , , , , , , , , , - , ]. Overall, it became apparent that users’ requirements for the usability of CAs vary widely. Some patients wished that each user could personalize (eg, with regard to speed and skills) the CA themselves. CAs that were customizable were highly appreciated by patients [ , , , , , , , , , , , , , , ]. In addition, the usability of CAs was found to affect the usefulness of the systems [ ].Another aspect of usability concerned the restriction on user input during conversations. In most CA applications, the user can only respond with a list of response options instead of a free-text input. However, patients would like to formulate their answers freely so that they can describe the problems as accurately as possible [
, , , , , , ]. In addition, some studies (5/73, 7%) found that patients preferred to talk to the CA rather than chat with the system through text [ , , , , ]. These patients wanted the CA to talk, as interactions with physicians also occur via oral conversations. Moreover, it was pointed out that the patient needs to be able to multitask in a text-based conversation, as questions need to be read and answered simultaneously, and attention must be focused on the CA [ ]. In the study by Easton et al [ ], the participants were able to participate in the development of the CA and its features and preferred to be able to choose between voice and text communication.Effort expectancy was mentioned in 4 (57%) of the 7 studies among health care professionals as an important factor for the acceptability, acceptance, and adoption of CAs among health care professionals. Health care professionals wanted an accessible and easy-to-use CA that offers easy-to-read information [
, , , ]. CAs were considered easier to use by physicians than the currently available databases for health care professionals [ ].Facilitating Conditions
Facilitating conditions are defined as an individual’s perception of the resources and support available to execute and use a system [
, ]. Facilitating conditions also represent an important factor for the acceptability, acceptance, and adoption of CAs among patients and were mentioned in 18 (25%) of 73 studies among patients. It was frequently pointed out that it is crucial to have the necessary resources to use a CA, such as a cell phone or computer, and to be able to obtain help from others when needed. Thus, higher acceptability, acceptance, and adoption of CAs in health care have been demonstrated when patients possess such resources and support [ , , , , , , , , ]. Furthermore, the studies pointed out that a reliable internet connection is usually required to use CAs. Some patients had concerns about the internet connection being interrupted or not having any internet access [ , , , ]. Therefore, patients considered it important to continue traditional treatment methods in addition to the CA application, as this approach gives those who do not have reliable internet or smartphone access and those who are reluctant to use CAs the opportunity to receive medical care [ ].The compatibility of the CA with its environment was also important for patients. Compatibility in this context can be defined as the perception that the CA is well integrated into the user’s (health) environment. Especially in the health care context, the compatibility of the CA with the existing health care environment could influence the perception of the usefulness of the technology. Patients wanted a health care CA to be multimodal and accessible through various consumer devices that they already have, such as computers, tablets, cell phones, and televisions. In addition, the system should have the ability to interact with other digital services and home devices, such as calendars, smart home technology, and existing medical devices or applications [
, , , , , , ].Another important factor for patients was the perceived access to the health care system [
, ]. This can be defined as the availability of health care services. Some patients reported that they had quick access to physicians and that this discouraged them from using CAs. By contrast, long distances or the unavailability of health care services or physicians can lead patients to use CAs. Perceived access to the health care system may be limited by local, financial, or institutional factors and largely determines perceptions of the usefulness of CAs [ , ].For high acceptability, acceptance, and adoption of CAs among health care professionals, the systems should be easy to integrate into daily practice, whether before, during, or after a patient’s treatment. Moreover, the technology should be easily connected and combined with other devices [
, ]. However, a lack of internet access in medical facilities was a clear barrier to the acceptability, acceptance, adoption, and installation of CAs [ ].For the use of CAs by patients, health care professionals considered it crucial for the technology to be multimodal and accessible via multiple devices. A lack of internet access was also seen by health care professionals as a challenge and barrier to the use of CAs. Some suggested embedding the CA in a stand-alone program that does not require constant internet access. Thus, the patient would only need to go on the web at certain intervals to update the system [
, ].Hedonic Motivation
Hedonic motivation refers to “the fun or pleasure derived from using a technology” [
]. In 29% (n=21) of the 73 studies among patients, patients wanted CAs to have a self-fulfilling value (instrumental value) for them in addition to health benefits, that is, to be hedonic in nature. Thus, the enjoyment of using health CAs is also a crucial factor influencing their acceptability, acceptance, and adoption. Some studies (3/73, 4%) suggested that a lack of fun makes CAs boring to use and, therefore, decreases their acceptability, acceptance, and adoption [ , , ].However, Laumer et al [
] stated that hedonic motivation is not an important factor influencing the acceptability, acceptance, and adoption of CAs in health care. They argued that hedonic motivation is important when a CA serves entertainment purposes but is irrelevant when a CA serves a more serious purpose, such as in the health care domain.
Hedonic motivation was not found to be an influencing factor for acceptability, acceptance, and adoption among health care professionals in the included studies.
Social Influence
Social influence describes the extent to which an individual perceives that important others (eg, family and friends) believe that the individual should use a particular technology [
, ]. In 11% (n=8) of the 73 studies among patients, social influence was suggested to be an important factor for the acceptability, acceptance, and adoption of CAs among patients. Furthermore, 1 (1%) study found that social influence could affect the performance expectancy of a CA [ ]. The results of our analysis showed that the definition of social influence according to Venkatesh et al [ , ] was not sufficient for the application of CAs in the health care sector. Not only was the request or expectation of a certain behavior important but also the recommendation and experience of a person whom the individual trusts [ ]. The collected studies showed that patients value the recommendations and experiences of trusted people and would accept and use a CA simply based on testimonials from their social environment [ , , , , , ]. Social influence was also described by 29% (n=2) of the 7 studies among health care professionals as an important factor for the acceptability, acceptance, and adoption of CAs by health care professionals. Some health care professionals were convinced of the benefits of CAs in health care and would, therefore, recommend the technology to their colleagues. Furthermore, some studies (2/7, 29%) were able to establish a positive correlation between the perceptions of the CA by health care professionals and patients. If a patient perceived a CA as acceptable and useful, this was accompanied by a positive assessment of the CA by health care professionals [
, ]. According to the previous descriptions, the definition of social influence by Venkatesh et al [
, ] must be extended to include recommendations and experiences of trusted persons to fully describe the social influence on the acceptability, acceptance, and adoption of CAs in health care. In terms of the application of CAs in the health sector, social influence should, therefore, be defined as follows: social influence refers to the extent to which a person perceives that significant others (eg, family and friends) believe that the person should use a particular technology or to which the person’s perception is influenced by others’ attitudes toward the use of, intention to use, and actual use of the new technology.Price Value
Price value is defined as “consumers’ cognitive trade-off between the perceived benefits of the applications and the monetary cost for using them” [
]. Value for money was described in 7 (10%) of the 73 studies among patients as an important factor influencing the acceptability, acceptance, and adoption of CAs among patients. It was found that the price value represents not only the cost-benefit trade-off but also a comparison of the cost of using a CA with the cost of other health services, such as visiting a physician [ , ]. Health care professionals weighed the perceived benefits of CAs against the financial costs for them and patients. Overall, the systems were seen as a cost-effective extension of health care that can improve its quality [
]. Thus, price value also influences health care professionals’ acceptability, acceptance, and adoption of CAs. According to Laumer et al [
], the comparison between cost and alternative options could be a decisive factor in the acceptability, acceptance, and adoption of CAs, especially in countries with poor insurance coverage or high costs for the use of health care services. In countries with statutory health insurance, such as Germany, patients do not have to pay much for health services. In other countries, such as the United States, patients can incur significant costs depending on their insurance status. Therefore, it is important to compare not only the direct costs of a CA application with the benefits achieved but also the cost-benefit ratio of a CA with that of other health care services [ ]. Thus, in terms of the application of CAs in the health sector, the definition of Venkatesh et al [ ] should be expanded as follows: price value is consumers’ cognitive trade-off between the perceived benefits of the applications and the financial costs of using them as well as the trade-off between the cost of using a CA and the cost of using other health services.
Habit
Habit refers to “the extent to which people tend to perform behaviors automatically because of learning” [
]. None of the identified studies mentioned habit as defined by Venkatesh et al [ ]. However, 1 (1%) of the 73 studies among patients reported that patients would use CAs in health care if they had the habit of using CAs in other areas of their lives [ ]. The definition of habit for current health care CA use must, therefore, be expanded to include the extent to which people tend to perform a behavior of interest automatically because they are used to performing a certain action that is close to the behavior of interest [ ]. Habit should thus be defined as follows: habit describes the extent to which people tend to perform a behavior of interest automatically because of learning and because they are used to performing a certain action that is close to the behavior of interest.
Habit was not found to be an influencing factor for acceptability, acceptance, and adoption among health care professionals in the included studies.
Additional Factors
Perceived Risk
Perceived risk refers to users’ perceived uncertainty of the possible negative consequences of using health CAs [
]. The perceived risk in relation to the acceptability, acceptance, and adoption of CAs among patients was reported in 23 (32%) of the 73 studies as an important influencing factor and could be divided into the subtopics of perceived data privacy risk and perceived security risk.
A significant barrier to the acceptability, acceptance, and adoption of health CAs was patients’ concern about potential data privacy risks [
, - , , , , - , , , , , , - ]. Users often lacked confidence in CAs’ privacy policies and data-sharing practices and in the ability (or inability) of these systems to maintain confidentiality so that their sensitive health-related information was protected from potential hacking or data leakage [ , , ]. In particular, the access of other data on the end device (eg, photographs, call logs, or location data) by the CA was viewed critically by patients. In addition, the rate of acceptability, acceptance, and adoption of CAs decreased if they were accessible via third-party services, such as Facebook (Meta Platforms, Inc), as there was a fear that the data would be passed on to third parties [ ]. Especially with regard to health issues, data protection was particularly important for users, as the data can be extremely sensitive [ , - ]. Because automated agent-assisted therapy, unlike conventional therapy, offers the prospect of patient anonymity, users expected their data and identity to be protected [ ]. Furthermore, privacy concerns were found to lower the performance expectancy for CAs in addition to acceptability, acceptance, and adoption [ ]. To alleviate privacy concerns, patients wanted the security of the CA to be made clearer and the CA to be offered via a trusted tool. In addition, access to data should always be password protected [ , , ].Moreover, there were concerns about the risk to user safety and well-being when using CAs in health care. Many patients were unsure about the quality and accuracy of the health information provided by CAs and feared that their use could lead to misdiagnosis. Furthermore, some studies (10/73, 14%) also highlighted criticism about how CAs could put users at risk when used for health issues. Misunderstandings could occur between a CA and its user, who may not be able to accurately describe their health problem or symptoms. In addition, the use of CAs may exacerbate the health problem instead of curing it. Finally, the use of CAs could also lead to increased loneliness and isolation, as it encourages users to seek help from a device rather than from a fellow human [
, , , , , , , , ]. There were also concerns that patients would be disadvantaged or penalized if they did not use the CA offered [ ]. However, in 1 (1%) of the 73 studies among patients, patients had no concerns about data security or an individual security risk when using a CA. There were no privacy concerns, as patients felt that sharing and entering data was commonplace in today’s society. In addition, they felt safe using a CA and, therefore, would not be concerned about being harmed by it [
]. Regarding health care professionals, 5 (71%) of the 7 studies stated that the risk associated with the use of CAs was an important factor in their acceptability, acceptance, and adoption. Their worries could be divided into the subtopics of perceived safety risk for patients, perceived risk for health care professionals, and perceived privacy risk.
Health care professionals feared safety risks from the use of CAs, both for patients and for themselves. They were concerned that CAs could compromise the quality of health care and worried that patients would abuse CAs, incorrectly self-diagnose, and not properly understand the diagnoses displayed. Moreover, they were concerned that the systems could indirectly affect the safety and well-being of users by not knowing all the personal factors or not being able to properly clarify issues owing to inaccurate medical information [
]. Furthermore, many health care professionals believed that CAs would play an important role in health care in the future and feared that the systems could replace human workers [ , ]. In addition, there were significant concerns about whether sensitive health-related information was protected from potential hacking or data loss when using CAs [ , , ].
Trust
In 49% (n=36) of the 73 studies among patients, trust was identified as another important factor in the acceptability, acceptance, and adoption of CAs among patients. Trust can be defined as “a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another” [
]. In terms of the acceptability, acceptance, and adoption of health CAs among patients, trust is not a monolithic concept and should be differentiated into “trust in the provider” and “trust in the technology” [ ]. For health CAs, patients’ trust in the provider was a critical factor and played a crucial role in whether the patients would ultimately accept, adopt, and use the CAs [
, - , , , ]. To have confidence in the technology, patients had to be able to rely on the CA’s capabilities. The reliability, professional competence, and functionality of the system were particularly crucial in this regard. In health care, this means that the CA correctly diagnoses the disease and that the information comes from a credible and evidence-based source. For patients, the comparison between the existing relationship of trust with physicians and the relationship with a health care CA was particularly important. Therefore, it was crucial that patients establish a relationship of trust with a CA similar to that with a physician [
, , - , , , , - , - , , , , , , , , , - , ].In addition, trust was shown to influence other factors such as perceived risk, effort expectancy, performance expectancy, and hedonic motivation [
, , , , ]. The relationship between the 2 aspects of trust clearly showed that higher trust in the provider also increases initial trust in the CA [ ]. Philip et al [ ] assumed that credibility is the strongest dimension in terms of patient engagement. In 5 (71%) of the 7 studies among health care professionals, trust was found to be an important factor for the acceptability, acceptance, and adoption of CAs among health care professionals. Health care professionals trusted neither the technology, assuming that CAs could not correctly assess health problems and situations, nor the patients, with whom they associated the use of CAs with frequent self-diagnosis and lack of understanding of the results delivered [
, , , ]. For many health care professionals, mutual trust could only be built through face-to-face encounters. With a CA’s constant monitoring of a patient, they feared a negative impact on the trust relationship between them and the patient [ ]. Again, this indicates that trust is not a monolithic concept. With regard to health care professionals, trust should be divided into the categories of “trust in the technology,” “trust in the provider,” and “trust in the patients.”
Anthropomorphism
Anthropomorphism can be described as the assignment of human-like attributes or traits to nonhuman agents or objects such as robots, computers, or animals. CAs are often attributed human-like characteristics owing to their unique ability to converse in natural language [
]. Anthropomorphism was shown in 67% (n=49) of the 73 studies among patients to be a critical influence on the acceptability, acceptance, and adoption of these systems among patients. According to Nadarzynski et al [ ], the lack of human presence is one of the main limitations to using CAs. For health CAs, anthropomorphism can be divided into the subthemes of empathy, intelligence level, personality, and visual features. For patients, it was important that health care CAs have empathic qualities. In this regard, they wanted CAs to be humorous, caring, friendly, empathetic, warm, honest, supportive, and compassionate. In addition, one of the points they liked most about the technology was that the CA is always there when needed and always listens to them. Many patients would, with increased use, even call the CA a friend [
- , , - , , , , , , , , , , , , , , , - , , , , , , ]. However, some patients were concerned about a lack of empathy and the CAs’ possible inability to understand emotional issues [ , , ]. Therefore, they perceived the system as nonemotional, rude, or unsympathetic and imagined the conversation as cold and inhuman [ , , , ]. However, it has already been demonstrated that the empathic abilities of health care CAs can be comparable with those of a real person [ , ]. In many cases, CAs were attributed human-like personality traits by users [
, , , , , , , , , , , , , , , ]. A light-hearted, fun, and friendly personality was valued [ , , , ]. In addition, an authoritarian personality was not desired in a health care context [ , ]. Similar to how an authoritarian health care professional would be less accepted by patients, if the CA was perceived as an authority figure, the system was less accepted by patients [ ]. However, some patients had a negative perception of the CA’s personality. The fact that interacting with a CA feels like interacting with a real person caused them anxiety. Therefore, the systems were perceived by these individuals as creepy, scary, and strange [ , ].Visual features (appearance), for example, an avatar, were decisive in terms of the acceptability, acceptance, and adoption of a CA application [
, , , , , , , , - , , , , , , , , , , , , ]. However, there should be a match between the appearance of the CA and the expectations of the users. Whereas some patients preferred a serious human appearance to discuss important health issues, others preferred a funny character. Some preferred an avatar of a specific gender or age. Overall, it appeared that patients’ requirements for an avatar varied widely. Therefore, it was suggested that patients should be able to configure the appearance of the CA themselves [ , , , , , , , , , , , , , , ]. It was also shown that the appearance of an avatar has a crucial impact on whether the CA appears credible and intelligent [ , ]. Moreover, a visual representation of the system increased the perceived usefulness and enjoyment of the technology [ ].The intelligence level of a CA was reflected in its conversational responsiveness and ability to understand user input. A significant problem with CAs was their lack of intelligibility owing to limited vocabulary, accuracy of speech recognition, or error management of word input or output. In many cases, the inputs were not understood. Systems often needed to be asked more than one question to process the input. In addition, CAs’ responses were reported to be unnatural, impersonal, cold, limited, and repetitive, or arbitrary, scripted responses were given. Because system intelligence was considered important by patients, it has a critical impact on the acceptability, acceptance, and adoption of CAs in health care [
, , , , - , , , , , , , , , - , , , , , - , , , - , , , ]. Furthermore, 1 (1%) of the 73 studies found that a lack of system intelligence has a negative impact on intention to use, usefulness, and trust [ ]. In addition, natural communication, achieved through congruence between verbal and nonverbal communication, was crucial to the acceptability, acceptance, and adoption of the CA. Nonverbal cues, such as facial expressions, gestures, posture, and body movements, had a major impact on guided communication, as many individuals inferred the outcome and social meaning of the conversation from nonverbal behavior. Therefore, a CA should also be able to provide and understand nonverbal cues and respond appropriately [ , , , ].
Attribution of human characteristics to a CA was mentioned in 5 (71%) of the 7 studies among health care professionals and is, therefore, also a critical factor for acceptability, acceptance, and adoption in this group. Health care professionals ranked the intelligence of CAs as very important [
, , , ]. The prevailing lack of comprehension by CAs was also a severe impediment to the acceptability, acceptance, and adoption of the systems among health care professionals [ ]. In addition, health care professionals believed that CAs lack the intelligence and knowledge to accurately assess patients’ health concerns and fully address their needs [ ].
Health care professionals expressed great enthusiasm for the use of avatars in health care treatment, as avatars could serve as a motivator for patients. It was crucial for them that the user could customize and personalize the avatar [
]. For their own use of the systems, health care professionals wanted the CAs to have a neutral and professional appearance that they could customize [ ]. Furthermore, health care professionals considered it important for the CA to have empathic properties. However, they believed that mutual empathy could only occur in face-to-face encounters and that CAs are currently unable to understand and represent emotions [ , ].
Health Issue
Of the 73 studies among patients, 14 (19%) demonstrated that the acceptability, acceptance, and adoption of CAs in health care were also influenced by the severity and type of the health issue. The severity and type of a disease can be defined as the extent to which physical, mental, and social well-being is impaired by a physical dysfunction and the underlying reasons for that dysfunction. Mild health problems were found to increase the acceptability, acceptance, and adoption of CAs; however, for more serious problems, patients were less willing to use a CA and preferred to be treated or advised by a human [
, , , , , , ]. Nevertheless, CAs were perceived as more helpful and credible by patients with more severe diseases than by those with less severe diseases [ , , ]. In addition to the severity of the disease, the type of disease could have a decisive effect on the acceptability, acceptance, and adoption of CAs among patients. In particular, patients who were afraid or embarrassed about their illness or symptoms tended to direct their inquiries to CAs owing to the anonymous and nonjudgmental nature of the interactions [ , , , ]. In addition, some studies (3/73, 4%) found that CAs could be a viable treatment method for stigmatized health problems [ , , ]. However, other studies (2/73, 3%) found no effect of the severity and type of the health issue on the acceptability, acceptance, and adoption of CAs [ , ].
The severity and type of the health issue were not found to be influencing factors for the acceptability, acceptance, and adoption of CAs among health care professionals in the included studies.
Working Alliance
The therapeutic relationship that exists between a patient and a physician was also found to be crucial for the acceptability, acceptance, and adoption of CAs in 27% (n=20) of the 73 studies among patients. Therefore, this relationship should also exist between users and the technology [
, , , , , , , , , , , , , , , - ]. The working alliance in this context can be defined as a therapeutic relationship between a user and a health CA to jointly achieve the desired (treatment) goal. Establishing a good relationship between a CA and the user was essential to encourage continued use of the technology and an important prerequisite for building a therapeutic alliance that benefits the patient [ ]. Moreover, this bond was a motivating factor for patients to continue interacting with the CA [ ]. A therapeutic alliance was found to be the result of the empathy, care, and trust that health care professionals demonstrated toward patients [ , ]. To build a patient-CA relationship, recall of past interactions with users and some variability in the systems’ verbal and nonverbal behaviors were critical elements [ ]. Patients could only build a relationship with a CA if it was human-like. If no relationship could be established with the technology, patients did not value its opinion and would not follow its advice [ ].
Of the 7 studies among health care professionals, 2 (29%) also described the therapeutic relationship that exists between a patient and a physician as crucial to the acceptability, acceptance, and adoption of CAs by health care professionals. Health care professionals were concerned that the increasing use of CAs would leave patients feeling insufficiently connected to health care professionals [
]. There was skepticism about whether CAs could help build a strong working alliance between patients and health care professionals. Health care professionals also questioned whether a relationship could be established between a CA and a patient, believing that a working alliance could only be established through face-to-face encounters [ ].
User Characteristics
Of the 73 studies among patients, 40% (n=29) identified multiple user-related factors influencing the acceptability, acceptance, and adoption of CAs among patients. These included the UTAUT2 factor user experience with the technology [
] and demographic factors such as age, gender, origin, and level of education.
Experience is defined as “the passage of time from the initial use of a technology by an individual” [
]. In the beginning, the patient was in an exploratory phase with the CA as a new technology, trying out its functions without really knowing how to handle the device. After some time, the patient mastered the CA and knew exactly how to handle the device and use it specifically to improve their health. This experience made health care through a CA very efficient [ , , , , , , , ]. Moreover, increasing use made the interaction with the CA more familiar, which affected both the trust relationship and the therapeutic relationship between a patient and a CA [ , , , , ]. Thus, the duration of use had a decisive influence on the acceptability, acceptance, and adoption of health care CAs.
Furthermore, our review revealed that the given definition of experience is not sufficient for the use of CAs in health care because, in addition to the time of use, patients’ experience with health care IT support and CAs in general [
, , , , ], as well as their individual technology knowledge, influenced the acceptability, acceptance, and adoption of CAs. Thus, the systems were less accepted and less widely adopted by individuals with low or moderate IT knowledge [ , , , , , , , , , , , ]. In addition, 1 (1%) of the 73 studies among patients found that patients who searched the internet more frequently for health information had more fun interacting with CAs and attributed more human-like characteristics to the systems [ ]. Experience could also be identified as a user-related factor influencing health care professionals’ acceptability, acceptance, and adoption of CAs in health care. Among health care professionals, acceptability, acceptance, and adoption were influenced by their experience with health care IT support and CAs in general, as well as their individual technology knowledge [ ]. Therefore, the definition of experience was extended with regard to the acceptability, acceptance, and adoption of CAs in health care as follows: experience is defined as the time elapsed since a person first used a technology, their experience using similar technologies, and the individual knowledge resulting from both.
In addition to experience, the demographic factors age [
, , - , , , , ], gender [ , ], origin [ , , , , , ], and level of education [ , , ] influenced the acceptability, acceptance, and adoption of health CAs among patients. It was shown that older age was associated with greater use of CAs among patients and that older patients were more engaged and satisfied with the system than younger patients [ , , , , ]. By contrast, 1 (1%) of the 73 studies among patients showed that patients aged <30 years enjoyed interacting with a CA more than those aged >30 years [ ]. Furthermore, male patients perceived CAs to be more useful in the health context than female patients [ ]. In addition, patients who were less educated rated CAs as more useful than patients who were well educated [ , ]. It was also shown that Black patients used CAs less than patients of other races with otherwise similar characteristics [ ]. Furthermore, people of Asian descent perceived CAs as more useful [ , ].
However, it should be noted that some studies (9/73, 12%) failed to identify any influence of user-related factors on acceptability, acceptance, and adoption among patients [
, , , , , , , , ].
Demographic factors such as age, gender, origin, and education level were not identified as factors influencing acceptability, acceptance, and adoption among health care professionals in the included studies.
Discussion
Principal Findings
The objective of this IR was to identify the factors that influence the acceptability, acceptance, and adoption of CAs among patients and health care professionals. We identified 13 such factors among patients and 10 among health care professionals.
We found that performance expectancy and effort expectancy are the most studied factors influencing the acceptability, acceptance, and adoption of CAs in health care. The findings are consistent with the literature on human-computer interaction (HCI). Perceived ease of use and perceived usefulness are described as key factors in predicting the use of technologies in general and CAs in particular [
Overall, both health care professionals and patients clearly recognize the benefits of CAs in health care, which has already been shown in a number of studies [ , ]. In addition, studies demonstrated that the health care provided by a CA is comparable with that provided by human physicians [ , ]. Thus, the systems represent a cost-effective alternative to classic therapy options with the same benefits [ , ].
One of the most interesting findings of the analysis was that, in addition to performance expectancy and effort expectancy, anthropomorphism, trust, perceived risk, and working alliance were identified as having a decisive influence on the acceptability, acceptance, and adoption of CAs in health care, even though these factors have not previously been considered in UTAUT or UTAUT2.
In accordance with the literature on the theory of anthropomorphism [
], this IR found that patients attribute human-like characteristics to health CAs and try to interact with them as if the systems were human. HCI research has also found that individuals interact with internet-based agents as if they were humans, even when they know that they are computer programs [ ]. In addition, previous work has shown that anthropomorphism has a positive effect in terms of continued use and increased satisfaction with the technology [ ]. Moreover, previous work on CAs has already indicated that perceived anthropomorphism can influence CA acceptability, acceptance, and adoption [ ].
Nevertheless, it was found that perceived anthropomorphism could also trigger fear and discomfort in some patients. They perceived the CA as creepy, scary, and strange. In the HCI literature, this is known as the “uncanny valley effect.” The uncanny valley theory states that a technology that appears almost human can evoke negative affective reactions in users [
]. The findings obtained are also consistent with the results of other studies on this topic. Although some studies have reported positive effects of anthropomorphic CAs [ ], others have shown that anthropomorphism can lead to frustration, confusion, and even a sense of eeriness [ ]. Another criticism of anthropomorphism is that users can be deceived into thinking that they are interacting with a real person instead of a system [ ]. Therefore, a CA should always be labeled as a machine.
Furthermore, our results support previous literature on trust by showing that trust is not a monolithic concept but must be differentiated into “trust in the provider” and “trust in the technology” from the patient’s perspective. Whereas trust in the provider refers to patients’ beliefs about the provider’s benevolence, integrity, and competence, trust in the technology refers to patients’ beliefs about the system’s benevolence, functionality, helpfulness, and reliability [
, ]. With regard to health care professionals, it was found that they also have little trust in their patients’ ability to use the CA correctly and to interpret the information provided accurately. Thus, our results show that trust is represented by the categories “trust in the technology,” “trust in the provider,” and “trust in the patient” from the perspective of health care professionals. To increase the trustworthiness of CAs, research on AI-driven intelligent systems suggests that responses be presented in a meaningful, understandable, and trustworthy format. In addition, users should be provided with a variety of system-related information, including data on the reliability and performance of the system and the source used for the response output. This should enable users to better understand the information displayed and its origin and then decide whether to trust the technology’s recommendation [ ]. It is further suggested that credibility can be demonstrated to users through expert vocabulary and appropriate presentation [ ]. Nonmedical studies have also demonstrated that credibility in the form of systems’ functionality, capability, reliability, and benevolence can predict the acceptability, acceptance, and adoption of wearable technologies such as CAs [ ]. Furthermore, it was shown that, for trust building, the user should have a positive impression of the technology. These impressions are influenced by static and dynamic features: static features include the appearance of the system, and dynamic features include the verbal and nonverbal behaviors of the system [ ]. Moreover, in line with the literature on trust, the results show that building trust in automated systems is a major challenge for developers [ ]. Furthermore, it is assumed that patients will use CAs only if they trust them. These explanations show that trust is one of the key factors influencing CA acceptability, acceptance, and adoption [ ]. Therefore, it is suggested that trust in health CAs should always be systematically assessed before deployment. However, standardized and validated scales to measure trust are lacking, especially in medicine [ ].
Another key barrier to the acceptability, acceptance, and adoption of health CAs is the perceived risk of the technology, stemming from the uncertainty around the protection of personal data and the risk to users’ lives and well-being. Concern about data privacy and the fear of misuse of sensitive information are key barriers to the acceptability, acceptance, and adoption as well as to the widespread use of digital health applications [
]. Studies have shown that privacy concerns can be addressed by the automatic transfer of data from an electronic health record and the regular addition of information by health care professionals. In addition, concerns may be addressed by explaining the measures succinctly and presenting them in layperson’s terms [ ].
The literature on digital health applications also frequently discussed whether patient safety is compromised [
] and who is responsible if the CA misdiagnoses someone [ ]. The results show that health care professionals fear a safety risk not only for patients but also for themselves. They fear that CAs will play such an important role in the future that they could replace human workers and compromise the quality of health care. We believe that this fear is one of the key barriers to the acceptability, acceptance, and adoption of CAs by health care professionals. In line with the literature, our results clearly show that the development of CAs is still in the early stages, is rudimentary, and thus does not jeopardize jobs [ ]. Patients are more willing to share confidential information with a CA than with a health care professional because of the anonymous and nonjudgmental nature of the interactions. However, patients prefer to use the systems for minor illnesses; for more serious conditions, they prefer to seek advice and treatment from a physician [ , ]. Thus, the use of CAs is purely supportive and does not jeopardize employment. This should be clearly communicated to increase the acceptability, acceptance, and adoption of the technology among health care professionals.
A therapeutic relationship is crucial for the success of a treatment [
]. Such a relationship is the result of empathy, care, and trust and can significantly improve the benefits of a health interaction [ ]. Empathy is the most important factor in building a working relationship [ ]. We were able to identify the factors of empathy, care, and trust as crucial for the acceptability, acceptance, and adoption of CAs in health care. It has already been demonstrated in some studies that a working alliance can be formed between a CA and a user [ , , , ]. For establishing and maintaining a relationship between a patient and a CA, memory of past interactions and variability in verbal and nonverbal responses are crucial elements. This finding is consistent with previous research and shows that, for the correct application of relational behaviors, it is necessary to talk about the past and the future [ ] and the time spent apart [ ]. Bickmore et al [ ] suggested designing health CAs such that interactions are initially relatively distant and professional but gradually become more personal, social, and familiar over time. In addition, systems should have a sense of humor as well as empathy and talk to the user about the present relationship to maintain it [ ].
Another crucial barrier to the acceptability, acceptance, and adoption of CAs is their limited comprehension and communication capabilities. The literature shows that CAs’ language skills are a major problem and urgently need to be improved [
]. Owing to language limitations, CAs currently use predetermined response options because, unlike free-text entry, they can ensure data validity and accuracy and minimize speech recognition errors. This approach is particularly important in a health-related context, as the multiple-choice input modality avoids potentially dangerous effects of misunderstandings due to ambiguous utterances about medical topics in unrestricted text and speech input. At the same time, it clearly communicates to users how they should respond to the system’s output and ensures that the system can understand and process input with high accuracy. It also enables the CA to be more easily accepted and used by people with different computer and language skills [ , ]. However, our analysis shows that many patients did not want a user input restriction while communicating with systems. Instead of choosing between predefined answers, they would like to be able to answer with a free-text entry to describe their health complaints as precisely as possible.
Furthermore, most patients preferred voice-based communication with a CA over text-based communication. The preferred method of communication for CAs is also a matter of controversy in the literature; even within the definition of CAs, there is no consensus on the preferred mode of communication. The advantages of text-based communication are, for example, that text can be indexed, searched, and translated and that it can easily be corrected or improved after completion. Proponents of voice-based communication argue that speech is more natural and faster than text. In addition, the use of speech can enhance the perceived personality of a CA. Furthermore, systems that allow voice-based communication can also be used by patients with low or no literacy skills [
, ]. Moreover, we found that nonverbal communication also has a decisive influence on the acceptability, acceptance, and adoption of the systems. Nonverbal cues such as facial expressions, gestures, posture, and body movements have a significant impact on guided communication, as they convey empathy, thereby strengthening the therapeutic alliance and trust relationship between patients and CAs [ , , , ]. The 55-38-7 rule proposed by Mehrabian and Ferris [ ], which attributes 55% of a message’s effect to facial expression, 38% to vocal tone, and 7% to the words themselves, shows the importance of nonverbal communication and behavior. Communication can be improved only through a combination of verbal and nonverbal behaviors [ ]. We believe that all types of communication will be important in health care in the future. Whether written or oral communication is advantageous will depend on the situation in which the CA is used. For example, whereas an oral dialog with a CA may be easier for a person who is severely injured, paraplegic, or illiterate, a written conversation may be beneficial for a prescription transfer or for a patient with a speech impairment.
One of the main criticisms of CAs in the literature is that they cannot develop empathy, recognize users’ emotional states, or tailor their responses accordingly. A lack of empathy can affect the use of CAs in the health care sector [
]. To increase the acceptability, acceptance, and adoption, as well as effectiveness, of CAs among patients, it is, therefore, important that the systems have the same interpersonal and social characteristics as health care professionals. In addition, empathic responses help create a trusting relationship between the technology and the user, which promotes continuous and long-term use of the system and increases the benefits for patients [ ]. Consistent with the broader literature, our results show that CAs can be empathic [ , , ]. Some studies even showed that the empathic abilities of CAs can be comparable with those of a real person [ ]. In health care, empathy as part of anthropomorphism is critical for the success of CAs [ ]. It was found that the visualization of a CA in the form of an avatar makes it more credible, comfortable, sympathetic, and useful than a CA without an avatar [ ].
The COVID-19 pandemic has had a significant impact on normal health care delivery and has demonstrated the urgent need for alternative approaches that can overcome geographic, temporal, and organizational barriers. The pandemic resulted in limited access to outpatient clinics, and the high rate of infection posed significant challenges to medical facilities, which affected the delivery of health services [
, ]. This situation has clearly demonstrated that the short-term unavailability of health services can occur even where rapid access to services is normally guaranteed. In this regard, technological systems such as CAs are a good alternative for the continued provision of quality care. It has been shown that CAs can improve and facilitate access to health care [ ]. In addition, there are concerns about what happens when the internet connection is lost or when individuals do not have the necessary resources, such as a smartphone or internet access [ ]. Services that can be accessed only through technology may lead to a digital divide and inequity in health care. This would limit access to health services, potentially for the very people who need the services most. For example, digital searching for health information is uncommon among older adults and other underserved groups. However, it should be noted that digital technologies expand the availability of health information and resources to many individuals and improve the quality of care [ , ]. Therefore, we propose that health care providers always offer traditional access to health care services alongside technology to provide quality care for everyone and prevent a 2-tier society. Solutions should also be sought to improve access to digital resources, such as the internet, that are necessary to use emerging health technologies.
Consistent with Ling et al [
], we found that user-related factors influence the acceptability, acceptance, and adoption of CAs. These include the demographic factors age, gender, origin, and education level as well as the UTAUT2 factor user experience with the technology. Regarding the factors age, gender, and education level, we found conflicting results within the analyzed studies as to whether they influence the acceptability, acceptance, and adoption of health CAs among patients. Other studies on this topic also provided mixed findings: although some studies demonstrated an influence of these factors, others were unable to do so [ , ]. Furthermore, origin was found to influence the acceptability, acceptance, and adoption of CAs. However, overall, this is an understudied area, and little is known about ethnic differences in technology acceptability, acceptance, and adoption. Nevertheless, in line with previous research, our results show that it is crucial to tailor the technology to the target population and its cultural characteristics [ ].
Moreover, it was found that the identified influencing factors influence one another and cannot always be clearly separated. At the same time, our results indicate that the importance of an influencing factor also depends on the purpose of the CA used and the health domain concerned.
A supplementary file provides a summary of influencing factors by health domains and health categories (ie, aggregated domains). Whereas in the categories “mental health” and “specific diseases,” anthropomorphism is the most important factor in addition to performance and effort expectancy, credibility and the severity and type of health issue are crucial for CAs as general health advisers and promoters. In the category “pregnancy care and healthcare for children,” by contrast, hedonic motivation is the key influencing variable along with performance and effort expectancy. It is, therefore, plausible that the importance of individual determinants depends on the purpose of a CA application. However, owing to the small number of studies per health domain and category, these findings can be generalized only to a limited extent. The mutual influence and not-always-clear separation of the influencing factors, as well as their varying importance across health care domains, make research on the acceptability, acceptance, and adoption of CAs in health care particularly extensive.
Strengths and Limitations
As with all studies, this IR has some limitations. One potential limitation is related to the search strategy. Despite our comprehensive search strategy, it is possible that not all studies on the topic were found, as studies may have discussed the acceptability, acceptance, or adoption of CAs and the influencing factors but used terms other than those we searched for. In addition, this review included only studies published in English and German, which may have excluded relevant evidence published in other languages. Furthermore, the IR included only primary studies that had already been published, thereby excluding potentially relevant sources such as gray literature.
For quality appraisal, we followed the guidelines for rapid reviews [
, ]. A rapid review is a form of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce evidence-based information in a timely manner [ ]. As a result, the quality assessment of the papers was performed in full by only 1 researcher, and a second researcher assessed only 10% (8/76) of the studies. This expedited process may have introduced biases into the quality assessment.
In addition, as the original studies did not consistently define and describe whether they analyzed acceptability, acceptance, or adoption, it was impossible for us to differentiate between these 3 outcomes in our synthesis. Hence, we cannot answer the question of whether some factors have been researched more frequently or are more influential for one of the outcomes than for the others.
Finally, the findings regarding acceptability, acceptance, and adoption among health care professionals are almost impossible to generalize, as we could only find and evaluate 7 studies on this topic. Owing to the rapid increase in the research literature on this topic, it is possible that new findings already emerged during the preparation and publication of our results and that the review, therefore, no longer reflects the current state of research.
Despite these potential limitations, this IR has several strengths. To our knowledge, this is the first review to provide a comprehensive picture of the acceptability, acceptance, and adoption of CAs and their influencing factors in health care. We described the factors influencing the acceptability, acceptance, and adoption of CAs in health care from the perspectives of patients and health care professionals and created a thematic map that clearly summarizes the findings. Furthermore, the IR was conducted with the same scientific rigor as primary research in that we used Cooper’s [
] 5-step IR method modified by Whittemore and Knafl [ ] for its construction. The review was developed, conducted, and reported in accordance with the PRISMA selection process, which allowed us to produce a high-quality review [ ]. A total of 5 well-known and frequently used databases in the field of health were searched to retrieve as many studies as possible. The keywords for the search terms used for this purpose were derived from the main research question. Synonyms for the identified keywords were generated using the Medical Subject Headings terms of the 5 databases, a web-based search, and previously published literature on CAs. Freehand searching and forward-backward reference list checks allowed us to identify additional literature missed by the database search and minimize the risk of publication bias. As no restrictions were made with regard to study design, study setting, or country of publication, this review can be considered comprehensive.
Implications and Future Directions
This IR provides the first comprehensive overview of the acceptability, acceptance, and adoption of CAs in health care and their influencing factors from the perspectives of patients and health care professionals. From the results, it is clear that the acceptability, acceptance, and adoption of CAs from the perspective of health care professionals are significantly underresearched. Other reviews have also found that few studies on CAs have focused on health care professionals [
]. Therefore, future research should urgently explore the acceptability, acceptance, and adoption of CAs among this user group. For this purpose, a survey could be designed based on our theoretical model to confirm the identified influencing factors and determine new ones. Furthermore, health care professionals should also test currently available CAs and provide feedback. In particular, it is crucial to explore the acceptability, acceptance, and adoption of CAs and their influencing factors from physicians’ point of view because physicians will only recommend or prescribe CAs if they themselves accept and adopt the technology and are convinced of its benefits. In addition, we demonstrated that health care professionals’ opinions about CAs significantly influence patients. With knowledge about acceptability, acceptance, and adoption among health care professionals, CAs could be sustainably established in health care.
We succeeded in creating a comprehensive thematic map of the factors influencing the acceptability, acceptance, and adoption of CAs. However, the influence of the identified factors on acceptability, acceptance, and adoption, as well as the interrelationships among them, was not quantitatively validated. Future studies could build on the theoretical model and examine the relative influence of the factors on acceptability, acceptance, and adoption and the dynamics among the factors. Furthermore, the results show that the influence of facilitators and barriers depends on the intended use of a CA and the health domain in which it is used. However, the analyzed studies reported nothing about the strength and relative importance of the identified factors. Therefore, future research should also investigate the importance of the individual factors, and their interactions with each other, for individual areas of care.
We found that CAs have been tested almost exclusively in controlled environments that do not reflect realistic interactions in clinical practice. It has already been demonstrated that the environment in which the interaction occurs influences technological acceptability, acceptance, and adoption [
, ]. The broader literature also criticized the fact that, to date, most studies have examined the use of CAs in controlled environments rather than in real-world contexts [ , ]. Moreover, most studies of the acceptability, acceptance, and adoption of CAs in health care are short-term studies. However, it has been shown that factors such as habit develop only when the technology is used over a longer period. Therefore, we consider it necessary that CAs be increasingly tested in real environments and over the long term in the future.
Consistent with the recent review of Nadal et al [
], we noted that researchers attach different meanings to the terms “acceptability,” “acceptance,” and “adoption” in relation to the technology and often use them synonymously without referring to established models and definitions from the literature. None of the included studies defined the terms used appropriately or distinguished them from each other. The definitions are often misunderstood, or researchers establish their own. The inconsistent use of the terms “acceptability,” “acceptance,” and “adoption” makes it immensely difficult to compare the results of these studies. For future research, we therefore consider it necessary to follow the definitions and established models from the literature, which clearly show the differences among the terms. This consistency will allow comparisons across studies and the development of targeted implementation strategies.
Conclusions
In this review, we identified 13 factors that influence the acceptability, acceptance, and adoption of CAs among patients and 10 such factors among health care professionals. On the basis of the influencing factors identified individually for acceptability, acceptance, and adoption, we created a comprehensive thematic map that explains the acceptability, acceptance, and adoption of CAs in health care. Overall, a high level of acceptability, acceptance, and adoption of CAs in health care was observed. This review shows the variety and complexity of the influencing factors and thus presents a comprehensive set of factors that can be implemented, improved, or steered to increase the acceptability, acceptance, and adoption of CAs in health care.
To the best of our knowledge, this IR extends the literature by providing the first overview of the research on the acceptability, acceptance, and adoption of CAs in health care. The findings of this review can, therefore, serve as the groundwork for future implementation studies of CAs in health care. Future research should focus on exploring acceptability, acceptance, and adoption from the perspective of health care professionals. Furthermore, it is crucial to test already developed CAs under real conditions and through long-term studies.
Acknowledgments
The authors acknowledge support from the Open Access Publication Fund of the University of Wuppertal.
Data Availability
All data generated or analyzed during this study are included in this published manuscript and its supplementary files.
Conflicts of Interest
None declared.
Definition, synonyms, and variants of conversational agents.
PDF File (Adobe PDF File), 91 KB
PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.
PDF File (Adobe PDF File), 61 KB
Search terms.
PDF File (Adobe PDF File), 50 KB
Quality appraisal.
XLSX File (Microsoft Excel File), 20 KB
Characteristics of the included studies.
XLSX File (Microsoft Excel File), 21 KB
Numerical listing of the influencing factors for patients.
PDF File (Adobe PDF File), 109 KB
Numerical listing of the influencing factors for health care professionals.
PDF File (Adobe PDF File), 97 KB
Influencing factors by health domains and health categories.
XLSX File (Microsoft Excel File), 18 KB
References
- Brandtzaeg PB, Følstad A. Why people use chatbots. In: Kompatsiaris I, Cave J, Satsiou A, Carle G, Passani A, Kontopoulos E, et al, editors. Internet Science. Volume 10673. Cham, Switzerland. Springer; 2017;377-392.
- Almalki M. Exploring the influential factors of consumers' willingness toward using COVID-19 related chatbots: an empirical study. Med Arch. Feb 2021;75(1):50-55. [FREE Full text] [CrossRef] [Medline]
- Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. Sep 01, 2018;25(9):1248-1258. [FREE Full text] [CrossRef] [Medline]
- Vaidyam AN, Wisniewski H, Halamka JD, Kashavan MS, Torous JB. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. Jul 2019;64(7):456-464. [FREE Full text] [CrossRef] [Medline]
- Abdul-Kader SA, Woods J. Survey on chatbot design techniques in speech conversation systems. Int J Adv Comput Sci Appl. 2015;6(7) [FREE Full text] [CrossRef]
- Dale R. The return of the chatbots. Nat Lang Eng. Sep 13, 2016;22(5):811-817. [FREE Full text] [CrossRef]
- McTear M, Callejas Z, Griol D. The Conversational Interface: Talking to Smart Devices. Cham, Switzerland. Springer; 2016.
- Brandão TK, Wolfram G. Digital Connection: Die bessere Customer Journey mit smarten Technologien – Strategie und Praxisbeispiele. Wiesbaden, Germany. Springer Gabler; 2018.
- Chaves AP, Gerosa MA. How should my chatbot interact? A survey on social characteristics in human–chatbot interaction design. Int J Hum Comput Interact. Nov 08, 2020;37(8):729-758. [FREE Full text] [CrossRef]
- Ponathil A, Ozkan F, Welch B, Bertrand J, Chalil Madathil K. Family health history collected by virtual conversational agents: an empirical study to investigate the efficacy of this approach. J Genet Couns. Dec 2020;29(6):1081-1092. [CrossRef] [Medline]
- Adamopoulou E, Moussiades L. An overview of chatbot technology. In: Proceedings of the 16th IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations. Presented at: AIAI '20; June 5-7, 2020;373-383; Neos Marmaras, Greece. URL: https://link.springer.com/chapter/10.1007/978-3-030-49186-4_31
- Rhee H, Allen J, Mammen J, Swift M. Mobile phone-based asthma self-management aid for adolescents (mASMAA): a feasibility study. Patient Prefer Adherence. Jan 2014;8:63-72. [FREE Full text] [CrossRef] [Medline]
- Sin J, Munteanu C. An empirically grounded sociotechnical perspective on designing virtual agents for older adults. Hum Comput Interact. 2020;35(5-6):481-510. [FREE Full text] [CrossRef]
- Dworkin M, Chakraborty A, Lee S, Monahan C, Hightow-Weidman L, Garofalo R, et al. A realistic talking human embodied agent mobile phone intervention to promote HIV medication adherence and retention in care in young HIV-positive African American men who have sex with men: qualitative study. JMIR Mhealth Uhealth. Jul 31, 2018;6(7):e10211. [FREE Full text] [CrossRef] [Medline]
- Bickmore TW, Caruso L, Clough-Gorr K, Heeren T. ‘It's just like you talk to a friend’ relational agents for older adults. Interact Comput. Dec 2005;17(6):711-735. [FREE Full text] [CrossRef]
- Bickmore TW, Puskar K, Schlenk EA, Pfeifer LM, Sereika SM. Maintaining reality: relational agents for antipsychotic medication adherence. Interact Comput. Jul 2010;22(4):276-288. [FREE Full text] [CrossRef]
- Thompson D, Callender C, Gonynor C, Cullen KW, Redondo MJ, Butler A, et al. Using relational agents to promote family communication around type 1 diabetes self-management in the diabetes family teamwork online intervention: longitudinal pilot study. J Med Internet Res. Sep 13, 2019;21(9):e15318. [FREE Full text] [CrossRef] [Medline]
- Guhl E, Althouse AD, Pusateri AM, Kimani E, Paasche-Orlow MK, Bickmore TW, et al. The atrial fibrillation health literacy information technology trial: pilot trial of a mobile health app for atrial fibrillation. JMIR Cardio. Sep 04, 2020;4(1):e17162. [FREE Full text] [CrossRef] [Medline]
- Sillice MA, Morokoff PJ, Ferszt G, Bickmore T, Bock BC, Lantini R, et al. Using relational agents to promote exercise and sun protection: assessment of participants' experiences with two interventions. J Med Internet Res. Feb 07, 2018;20(2):e48. [FREE Full text] [CrossRef] [Medline]
- Wang C, Bickmore T, Bowen DJ, Norkunas T, Campion M, Cabral H, et al. Acceptability and feasibility of a virtual counselor (VICKY) to collect family health histories. Genet Med. Oct 2015;17(10):822-830. [FREE Full text] [CrossRef] [Medline]
- Bendig E, Erb B, Meißner D, Bauereiß N, Baumeister H. Feasibility of a software agent providing a brief intervention for self-help to uplift psychological wellbeing ("SISU"). A single-group pretest-posttest trial investigating the potential of SISU to act as therapeutic agent. Internet Interv. Feb 24, 2021;24:100377. [FREE Full text] [CrossRef] [Medline]
- Dupuy L, Micoulaud-Franchi JA, Philip P. Acceptance of virtual agents in a homecare context: evaluation of excessive daytime sleepiness in apneic patients during interventions by continuous positive airway pressure (CPAP) providers. J Sleep Res. Apr 2021;30(2):e13094. [CrossRef] [Medline]
- Bresó A, Martínez-Miranda J, Botella C, Baños RM, García-Gómez JM. Usability and acceptability assessment of an empathic virtual agent to prevent major depression. Expert Syst. May 25, 2016;33(4):297-312. [FREE Full text] [CrossRef]
- Easton K, Potter S, Bec R, Bennion M, Christensen H, Grindell C, et al. A virtual agent to support individuals living with physical and mental comorbidities: co-design and acceptability testing. J Med Internet Res. May 30, 2019;21(5):e12996. [FREE Full text] [CrossRef] [Medline]
- Philip P, Dupuy L, Morin CM, de Sevin E, Bioulac S, Taillard J, et al. Smartphone-based virtual agents to help individuals with sleep concerns during COVID-19 confinement: feasibility study. J Med Internet Res. Dec 18, 2020;22(12):e24268. [FREE Full text] [CrossRef] [Medline]
- LeRouge C, Dickhut K, Lisetti C, Sangameswaran S, Malasanos T. Engaging adolescents in a computer-based weight management program: avatars and virtual coaches could help. J Am Med Inform Assoc. Jan 2016;23(1):19-28. [FREE Full text] [CrossRef] [Medline]
- Nallam P, Bhandari S, Sanders J, Martin-Hammond A. A question of access: exploring the perceived benefits and barriers of intelligent voice assistants for improving access to consumer health resources among low-income older adults. Gerontol Geriatr Med. Dec 29, 2020;6:2333721420985975. [FREE Full text] [CrossRef] [Medline]
- Carlin A, Logue C, Flynn J, Murphy MH, Gallagher AM. Development and feasibility of a family-based health behavior intervention using intelligent personal assistants: randomized controlled trial. JMIR Form Res. Jan 28, 2021;5(1):e17501. [FREE Full text] [CrossRef] [Medline]
- Palanica A, Flaschner P, Thommandram A, Li M, Fossat Y. Physicians' perceptions of chatbots in health care: cross-sectional web-based survey. J Med Internet Res. Apr 05, 2019;21(4):e12887. [FREE Full text] [CrossRef] [Medline]
- Bickmore TW, Pfeifer LM, Jack BW. Taking the time to care: empowering low health literacy hospital patients with virtual nurse agents. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Presented at: CHI '09; April 4-9, 2009;1265-1274; Boston, MA. URL: https://dl.acm.org/doi/10.1145/1518701.1518891 [CrossRef]
- Mehrabian A, Ferris SR. Inference of attitudes from nonverbal communication in two channels. J Consult Psychol. Jun 1967;31(3):248-252. [CrossRef] [Medline]
- Ruttkay Z, Dormann C, Noot H. Embodied conversational agents on a common ground. In: Ruttkay Z, Pelachaud C, editors. From Brows to Trust: Evaluating Embodied Conversational Agents. Dordrecht, The Netherlands. Springer; 2004;27-66.
- Vardoulakis L, Ring L, Barry B, Sidner C, Bickmore T. Designing relational agents as long term social companions for older adults. In: Proceedings of the 12th International Conference on Intelligent Virtual Agents. Presented at: IVA '12; September 12-14, 2012;289-302; Santa Cruz, CA. URL: https://link.springer.com/chapter/10.1007/978-3-642-33197-8_30 [CrossRef]
- Bickmore TW, Picard RW. Establishing and maintaining long-term human-computer relationships. ACM Trans Comput Hum Interact. Jun 01, 2005;12(2):293-327. [FREE Full text] [CrossRef]
- Ter Stal S, Tabak M, op den Akker H, Beinema T, Hermens H. Who do you prefer? The effect of age, gender and role on users’ first impressions of embodied conversational agents in eHealth. Int J Hum Comput Interact. Dec 16, 2019;36(9):881-892. [FREE Full text] [CrossRef]
- Philip P, Dupuy L, Auriacombe M, Serre F, de Sevin E, Sauteraud A, et al. Trust and acceptance of a virtual psychiatric interview between embodied conversational agents and outpatients. NPJ Digit Med. Jan 07, 2020;3:2. [FREE Full text] [CrossRef] [Medline]
- Stara V, Vera B, Bolliger D, Rossi L, Felici E, Di Rosa M, et al. Usability and acceptance of the embodied conversational agent Anne by people with dementia and their caregivers: exploratory study in home environment settings. JMIR Mhealth Uhealth. Jun 25, 2021;9(6):e25891. [FREE Full text] [CrossRef] [Medline]
- Martínez-Miranda J, Martínez A, Ramos R, Aguilar H, Jiménez L, Arias H, et al. Assessment of users' acceptability of a mobile-based embodied conversational agent for the prevention and detection of suicidal behaviour. J Med Syst. Jun 25, 2019;43(8):246. [CrossRef] [Medline]
- Andersson G. Internet-delivered psychological treatments. Annu Rev Clin Psychol. 2016;12:157-179. [CrossRef] [Medline]
- Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. Aug 07, 2020;22(8):e17158. [FREE Full text] [CrossRef] [Medline]
- Andersson G, Cuijpers P, Carlbring P, Riper H, Hedman E. Guided Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: a systematic review and meta-analysis. World Psychiatry. Oct 2014;13(3):288-295. [FREE Full text] [CrossRef] [Medline]
- Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. Dec 13, 2018;5(4):e64. [FREE Full text] [CrossRef] [Medline]
- Gaffney H, Mansell W, Tai S. Conversational agents in the treatment of mental health problems: mixed-method systematic review. JMIR Ment Health. Oct 18, 2019;6(10):e14166. [FREE Full text] [CrossRef] [Medline]
- Lucas GM, Gratch J, King A, Morency L. It’s only a computer: virtual humans increase willingness to disclose. Comput Human Behav. Aug 2014;37:94-100. [FREE Full text] [CrossRef]
- Lucas GM, Rizzo A, Gratch J, Scherer S, Stratou G, Boberg J, et al. Reporting mental health symptoms: breaking down barriers to care with virtual human interviewers. Front Robot AI. Oct 12, 2017;4:1-9. [FREE Full text] [CrossRef]
- Nadal C, Sas C, Doherty G. Technology acceptability, acceptance and adoption - definitions and measurement. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Presented at: WISH '19; May 4-9, 2019; Glasgow, UK. URL: https://eprints.lancs.ac.uk/id/eprint/131906/1/WISH_extended_abstract.pdf
- Wisdom JP, Chor KH, Hoagwood KE, Horwitz SM. Innovation adoption: a review of theories and constructs. Adm Policy Ment Health. Jul 2014;41(4):480-502. [FREE Full text] [CrossRef] [Medline]
- Rogers EM. Diffusion of Innovations, 4th edition. New York, NY. Free Press; 1995.
- Davis FD. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. Sep 1989;13(3):319-340. [FREE Full text] [CrossRef]
- Davis FD. User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int J Man Mach Stud. Mar 1993;38(3):475-487. [FREE Full text] [CrossRef]
- Venkatesh V, Morris MG, Davis GB, Davis FD. User acceptance of information technology: toward a unified view. MIS Q. Sep 2003;27(3):425-478. [FREE Full text] [CrossRef]
- Venkatesh V, Thong JY, Xu X. Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Q. Mar 2012;36(1):157-178. [FREE Full text] [CrossRef]
- Kijsanayotin B, Pannarunothai S, Speedie SM. Factors influencing health information technology adoption in Thailand's community health centers: applying the UTAUT model. Int J Med Inform. Jun 2009;78(6):404-416. [CrossRef] [Medline]
- Nadarzynski T, Miles O, Cowie A, Ridge D. Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: a mixed-methods study. Digit Health. Aug 21, 2019;5:2055207619871808. [FREE Full text] [CrossRef] [Medline]
- Laumer S, Maier C, Gubler FT. Chatbot acceptance in healthcare: explaining user adoption of conversational agents for disease diagnosis. In: Proceedings of the 27th European Conference on Information Systems. Presented at: ECIS '19; June 8-14, 2019; Stockholm, Sweden. URL: https://aisel.aisnet.org/ecis2019_rp/88/ [CrossRef]
- Prakash AV, Das S. Intelligent conversational agents in mental healthcare services: a thematic analysis of user perceptions. Pac Asia J Assoc Inf Syst. Jun 30, 2020;12(2):1-34. [FREE Full text] [CrossRef]
- Nadarzynski T, Bayley J, Llewellyn C, Kidsley S, Graham CA. Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice. BMJ Sex Reprod Health. Jul 2020;46(3):210-217. [FREE Full text] [CrossRef] [Medline]
- Kim S, Lee KH, Hwang H, Yoo S. Analysis of the factors influencing healthcare professionals' adoption of mobile electronic medical record (EMR) using the unified theory of acceptance and use of technology (UTAUT) in a tertiary hospital. BMC Med Inform Decis Mak. Jan 30, 2016;16:12. [FREE Full text] [CrossRef] [Medline]
- Maillet É, Mathieu L, Sicotte C. Modeling factors explaining the acceptance, actual use and satisfaction of nurses using an electronic patient record in acute care settings: an extension of the UTAUT. Int J Med Inform. Jan 2015;84(1):36-47. [CrossRef] [Medline]
- Fishbein M, Ajzen I. Belief, attitude, intention and behavior: an introduction to theory and research. Reading, MA. Addison-Wesley; 1975.
- Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. Dec 1991;50(2):179-211. [FREE Full text] [CrossRef]
- Thompson RL, Higgins CA, Howell JM. Personal computing: toward a conceptual model of utilization. MIS Q. Mar 1991;15(1):125-143. [FREE Full text] [CrossRef]
- Taylor S, Todd P. Assessing IT usage: the role of prior experience. MIS Q. Dec 1995;19(4):561-570. [FREE Full text] [CrossRef]
- Vallerand RJ. Toward a hierarchical model of intrinsic and extrinsic motivation. Adv Exp Soc Psychol. 1997;29:271-360. [FREE Full text] [CrossRef]
- Compeau D, Higgins CA, Huff S. Social cognitive theory and individual reactions to computing technology: a longitudinal study. MIS Q. Jun 1999;23(2):145-158. [FREE Full text] [CrossRef]
- Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci. Feb 2000;46(2):186-204. [FREE Full text] [CrossRef]
- Cooper H. Integrating Research: A Guide for Literature Reviews. Newbury Park, CA. Sage Publications; 1989.
- Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs. Dec 2005;52(5):546-553. [CrossRef] [Medline]
- Russell CL. An overview of the integrative research review. Prog Transplant. Mar 2005;15(1):8-13. [CrossRef] [Medline]
- Souza MT, Silva MD, Carvalho RD. Integrative review: what is it? How to do it? Einstein (Sao Paulo). Mar 2010;8(1):102-106. [FREE Full text] [CrossRef] [Medline]
- Broome ME. Integrative literature reviews for the development of concepts. In: Rodgers BL, Knafl KA, editors. Concept Development in Nursing: Foundations, Techniques, and Applications. 2nd edition. Philadelphia, PA. W B Saunders; 1993;231-250.
- Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. Mar 29, 2021;372:n71. [FREE Full text] [CrossRef] [Medline]
- McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. Jul 2016;75:40-46. [FREE Full text] [CrossRef] [Medline]
- Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. Jul 21, 2009;6(7):e1000097. [FREE Full text] [CrossRef] [Medline]
- Hong QN, Pluye P, Fàbregues S, Bartlett G, Boardman F, Cargo M, et al. Mixed Methods Appraisal Tool (MMAT), version 2018. McGill University. 2018. URL: http://mixedmethodsappraisaltoolpublic.pbworks.com/w/file/fetch/127916259/MMAT_2018_criteria-manual_2018-08-01_ENG.pdf [accessed 2021-12-21]
- Kowatsch T, Lohse KM, Erb V, Schittenhelm L, Galliker H, Lehner R, et al. Hybrid ubiquitous coaching with a novel combination of mobile and holographic conversational agents targeting adherence to home exercises: four design and evaluation studies. J Med Internet Res. Feb 22, 2021;23(2):e23612. [FREE Full text] [CrossRef] [Medline]
- Potts C, Ennis E, Bond RB, Mulvenna MD, McTear MF, Boyd K, et al. Chatbots to support mental wellbeing of people living in rural areas: can user groups contribute to co-design? J Technol Behav Sci. 2021;6(4):652-665. [FREE Full text] [CrossRef] [Medline]
- Baptista S, Wadley G, Bird D, Oldenburg B, Speight J, My Diabetes Coach Research Group. Acceptability of an embodied conversational agent for type 2 diabetes self-management education and support via a smartphone app: mixed methods study. JMIR Mhealth Uhealth. Jul 22, 2020;8(7):e17038. [FREE Full text] [CrossRef] [Medline]
- Fan X, Chao D, Zhang Z, Wang D, Li X, Tian F. Utilization of self-diagnosis health chatbots in real-world settings: case study. J Med Internet Res. Jan 06, 2021;23(1):e19928. [FREE Full text] [CrossRef] [Medline]
- Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. Nov 23, 2018;6(11):e12106. [FREE Full text] [CrossRef] [Medline]
- Crutzen R, Peters GJ, Portugal SD, Fisser EM, Grolleman JJ. An artificially intelligent chat agent that answers adolescents' questions related to sex, drugs, and alcohol: an exploratory study. J Adolesc Health. May 2011;48(5):514-519. [CrossRef] [Medline]
- Hauser-Ulrich S, Künzli H, Meier-Peterhans D, Kowatsch T. A smartphone-based health care chatbot to promote self-management of chronic pain (SELMA): pilot randomized controlled trial. JMIR Mhealth Uhealth. Apr 03, 2020;8(4):e15806. [FREE Full text] [CrossRef] [Medline]
- Apergi LA, Bjarnadottir MV, Baras JS, Golden BL, Anderson KM, Chou J, et al. Voice interface technology adoption by patients with heart failure: pilot comparison study. JMIR Mhealth Uhealth. Apr 01, 2021;9(4):e24646. [FREE Full text] [CrossRef] [Medline]
- Prochaska JJ, Vogel EA, Chieng A, Kendra M, Baiocchi M, Pajarito S, et al. A therapeutic relational agent for reducing problematic substance use (Woebot): development and usability study. J Med Internet Res. Mar 23, 2021;23(3):e24850. [FREE Full text] [CrossRef] [Medline]
- Edwards KJ, Jones RB, Shenton D, Page T, Maramba I, Warren A, et al. The use of smart speakers in care home residents: implementation study. J Med Internet Res. Dec 20, 2021;23(12):e26767. [FREE Full text] [CrossRef] [Medline]
- Chung J, Bleich M, Wheeler DC, Winship JM, McDowell B, Baker D, et al. Attitudes and perceptions toward voice-operated smart speakers among low-income senior housing residents: comparison of pre- and post-installation surveys. Gerontol Geriatr Med. Mar 26, 2021;7:23337214211005869. [FREE Full text] [CrossRef] [Medline]
- Ter Stal S, Broekhuis M, van Velsen L, Hermens H, Tabak M. Embodied conversational agent appearance for health assessment of older adults: explorative study. JMIR Hum Factors. Sep 04, 2020;7(3):e19987. [FREE Full text] [CrossRef] [Medline]
- Chung K, Cho HY, Park JY. A chatbot for perinatal women's and partners' obstetric and mental health care: development and usability evaluation study. JMIR Med Inform. Mar 03, 2021;9(3):e18607. [FREE Full text] [CrossRef] [Medline]
- Miles O, West R, Nadarzynski T. Health chatbots acceptability moderated by perceived stigma and severity: a cross-sectional survey. Digit Health. Dec 08, 2021;7:20552076211063012. [FREE Full text] [CrossRef] [Medline]
- Vilaro MJ, Wilson-Howard DS, Zalake MS, Tavassoli F, Lok BC, Modave FP, et al. Key changes to improve social presence of a virtual health assistant promoting colorectal cancer screening informed by a technology acceptance model. BMC Med Inform Decis Mak. Jun 22, 2021;21(1):196. [FREE Full text] [CrossRef] [Medline]
- Fadhil A, Wang Y, Reiterer H. Assistive conversational agent for health coaching: a validation study. Methods Inf Med. Jun 2019;58(1):9-23. [CrossRef] [Medline]
- Kim AJ, Yang J, Jang Y, Baek JS. Acceptance of an informational antituberculosis chatbot among Korean adults: mixed methods research. JMIR Mhealth Uhealth. Nov 09, 2021;9(11):e26424. [FREE Full text] [CrossRef] [Medline]
- Boustani M, Lunn S, Visser U, Lisetti C. Development, feasibility, acceptability, and utility of an expressive speech-enabled digital health agent to deliver online, brief motivational interviewing for alcohol misuse: descriptive study. J Med Internet Res. Sep 29, 2021;23(9):e25837. [FREE Full text] [CrossRef] [Medline]
- Cerda Diez M, Cortés DE, Trevino-Talbot M, Bangham C, Winter MR, Cabral H, et al. Designing and evaluating a digital family health history tool for Spanish speakers. Int J Environ Res Public Health. Dec 07, 2019;16(24):4979. [FREE Full text] [CrossRef] [Medline]
- Maeda E, Miyata A, Boivin J, Nomura K, Kumazawa Y, Shirasawa H, et al. Promoting fertility awareness and preconception health using a chatbot: a randomized controlled trial. Reprod Biomed Online. Dec 2020;41(6):1133-1143. [CrossRef] [Medline]
- Amith M, Zhu A, Cunningham R, Lin R, Savas L, Shay L, et al. Early usability assessment of a conversational agent for HPV vaccination. Stud Health Technol Inform. 2019;257:17-23. [FREE Full text] [Medline]
- Nadarzynski T, Puentes V, Pawlak I, Mendes T, Montgomery I, Bayley J, et al. Barriers and facilitators to engagement with artificial intelligence (AI)-based chatbots for sexual and reproductive health advice: a qualitative analysis. Sex Health. Nov 2021;18(5):385-393. [CrossRef] [Medline]
- Bray L, Sharpe A, Gichuru P, Fortune PM, Blake L, Appleton V. The acceptability and impact of the Xploro digital therapeutic platform to inform and prepare children for planned procedures in a hospital: before and after evaluation study. J Med Internet Res. Aug 11, 2020;22(8):e17367. [FREE Full text] [CrossRef] [Medline]
- Dhinagaran DA, Sathish T, Soong A, Theng Y, Best J, Tudor Car L. Conversational agent for healthy lifestyle behavior change: web-based feasibility study. JMIR Form Res. Dec 03, 2021;5(12):e27956. [FREE Full text] [CrossRef] [Medline]
- To QG, Green C, Vandelanotte C. Feasibility, usability, and effectiveness of a machine learning-based physical activity chatbot: quasi-experimental study. JMIR Mhealth Uhealth. Nov 26, 2021;9(11):e28577. [FREE Full text] [CrossRef] [Medline]
- Dhinagaran DA, Sathish T, Kowatsch T, Griva K, Best JD, Tudor Car L. Public perceptions of diabetes, healthy living, and conversational agents in Singapore: needs assessment. JMIR Form Res. Nov 11, 2021;5(11):e30435. [FREE Full text] [CrossRef] [Medline]
- Dennis AR, Kim A, Rahimi M, Ayabakan S. User reactions to COVID-19 screening chatbots from reputable providers. J Am Med Inform Assoc. Nov 01, 2020;27(11):1727-1731. [FREE Full text] [CrossRef] [Medline]
- Jang S, Kim J, Kim S, Hong J, Kim S, Kim E. Mobile app-based chatbot to deliver cognitive behavioral therapy and psychoeducation for adults with attention deficit: a development and feasibility/usability study. Int J Med Inform. Jun 2021;150:104440. [CrossRef] [Medline]
- Oh J, Jang S, Kim H, Kim JJ. Efficacy of mobile app-based interactive cognitive behavioral therapy using a chatbot for panic disorder. Int J Med Inform. Aug 2020;140:104171. [CrossRef] [Medline]
- Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. Jun 06, 2017;4(2):e19. [FREE Full text] [CrossRef] [Medline]
- Klos MC, Escoredo M, Joerin A, Lemos VN, Rauws M, Bunge EL. Artificial intelligence-based chatbot for anxiety and depression in university students: pilot randomized controlled trial. JMIR Form Res. Aug 12, 2021;5(8):e20678. [FREE Full text] [CrossRef] [Medline]
- Mauriello ML, Tantivasadakarn N, Mora-Mendoza MA, Lincoln ET, Hon G, Nowruzi P, et al. A suite of mobile conversational agents for daily stress management (Popbots): mixed methods exploratory study. JMIR Form Res. Sep 14, 2021;5(9):e25294. [FREE Full text] [CrossRef] [Medline]
- Loveys K, Sagar M, Pickering I, Broadbent E. A digital human for delivering a remote loneliness and stress intervention to at-risk younger and older adults during the COVID-19 pandemic: randomized pilot trial. JMIR Ment Health. Nov 08, 2021;8(11):e31586. [FREE Full text] [CrossRef] [Medline]
- Beilharz F, Sukunesan S, Rossell SL, Kulkarni J, Sharp G. Development of a positive body image chatbot (KIT) with young people and parents/carers: qualitative focus group study. J Med Internet Res. Jun 16, 2021;23(6):e27807. [FREE Full text] [CrossRef] [Medline]
- Bahja M, Abuhwaila N, Bahja J. An antenatal care awareness prototype chatbot application using a user-centric design approach. In: Proceedings of the 22nd HCI International Conference (HCII 2020) - Late Breaking Papers: Multimodality and Intelligence. Presented at: HCII '20; July 19-24, 2020:20-31; Copenhagen, Denmark. URL: https://link.springer.com/chapter/10.1007/978-3-030-60117-1_2 [CrossRef]
- Schmidlen T, Schwartz M, DiLoreto K, Kirchner HL, Sturm AC. Patient assessment of chatbots for the scalable delivery of genetic counseling. J Genet Couns. Dec 2019;28(6):1166-1177. [CrossRef] [Medline]
- Magnani JW, Schlusser CL, Kimani E, Rollman BL, Paasche-Orlow MK, Bickmore TW. The atrial fibrillation health literacy information technology system: pilot assessment. JMIR Cardio. Jul 2017;1(2):e7. [FREE Full text] [CrossRef] [Medline]
- Abdullah AS, Gaehde S, Bickmore T. A tablet based embodied conversational agent to promote smoking cessation among veterans: a feasibility study. J Epidemiol Glob Health. Dec 2018;8(3-4):225-230. [FREE Full text] [CrossRef] [Medline]
- Prochaska JJ, Vogel EA, Chieng A, Baiocchi M, Maglalang DD, Pajarito S, et al. A randomized controlled trial of a therapeutic relational agent for reducing substance misuse during the COVID-19 pandemic. Drug Alcohol Depend. Oct 01, 2021;227:108986. [FREE Full text] [CrossRef] [Medline]
- Issom DZ, Hardy-Dessources MD, Romana M, Hartvigsen G, Lovis C. Toward a conversational agent to support the self-management of adults and young adults with sickle cell disease: usability and usefulness study. Front Digit Health. Jan 29, 2021;3:600333. [FREE Full text] [CrossRef] [Medline]
- Koman J, Fauvelle K, Schuck S, Texier N, Mebarki A. Physicians' perceptions of the use of a chatbot for information seeking: qualitative study. J Med Internet Res. Nov 10, 2020;22(11):e15185. [FREE Full text] [CrossRef] [Medline]
- Greer S, Ramo D, Chang Y, Fu M, Moskowitz J, Haritatos J. Use of the chatbot "Vivibot" to deliver positive psychology skills and promote well-being among young people after cancer treatment: randomized controlled feasibility trial. JMIR Mhealth Uhealth. Oct 31, 2019;7(10):e15018. [FREE Full text] [CrossRef] [Medline]
- Arem H, Scott R, Greenberg D, Kaltman R, Lieberman D, Lewin D. Assessing breast cancer survivors' perceptions of using voice-activated technology to address insomnia: feasibility study featuring focus groups and in-depth interviews. JMIR Cancer. May 26, 2020;6(1):e15859. [FREE Full text] [CrossRef] [Medline]
- Rodríguez MD, Beltrán J, Valenzuela-Beltrán M, Cruz-Sandoval D, Favela J. Assisting older adults with medication reminders through an audio-based activity recognition system. Pers Ubiquitous Comput. 2021;25(2):337-351. [FREE Full text] [CrossRef]
- Santini S, Stara V, Galassi F, Merizzi A, Schneider C, Schwammer S, et al. User requirements analysis of an embodied conversational agent for coaching older adults to choose active and healthy ageing behaviors during the transition to retirement: a cross-national user centered design study. Int J Environ Res Public Health. Sep 14, 2021;18(18):9681. [FREE Full text] [CrossRef] [Medline]
- Balasubramanian GV, Beaney P, Chambers R. Digital personal assistants are smart ways for assistive technology to aid the health and wellbeing of patients and carers. BMC Geriatr. Nov 15, 2021;21(1):643. [FREE Full text] [CrossRef] [Medline]
- Bennion MR, Hardy GE, Moore RK, Kellett S, Millings A. Usability, acceptability, and effectiveness of web-based conversational agents to facilitate problem solving in older adults: controlled study. J Med Internet Res. May 27, 2020;22(5):e16794. [FREE Full text] [CrossRef] [Medline]
- Rousseau DM, Sitkin SB, Burt RS, Camerer C. Not so different after all: a cross-discipline view of trust. Acad Manage Rev. Jul 01, 1998;23(3):393-404. [FREE Full text] [CrossRef]
- Araujo T. Living up to the chatbot hype: the influence of anthropomorphic design cues and communicative agency framing on conversational agent and company perceptions. Comput Human Behav. Aug 2018;85:183-189. [FREE Full text] [CrossRef]
- Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf Syst Res. Sep 1991;2(3):192-222. [FREE Full text] [CrossRef]
- McKnight DH, Carter M, Thatcher JB, Clay PF. Trust in a specific technology: an investigation of its components and measures. ACM Trans Manag Inf Syst. Jul 01, 2011;2(2):1-25. [FREE Full text] [CrossRef]
- Heerink M, Kröse B, Evers V, Wielinga B. Assessing acceptance of assistive social agent technology by older adults: the Almere model. Int J Soc Robot. 2010;2(4):361-375. [FREE Full text] [CrossRef]
- Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res. Jan 13, 2021;23(1):e17828. [FREE Full text] [CrossRef] [Medline]
- Waytz A, Cacioppo J, Epley N. Who sees human? The stability and importance of individual differences in anthropomorphism. Perspect Psychol Sci. May 2010;5(3):219-232. [FREE Full text] [CrossRef] [Medline]
- Melián-González S, Gutiérrez-Taño D, Bulchand-Gidumal J. Predicting the intentions to use chatbots for travel and tourism. Curr Issues Tour. 2021;24(2):192-210. [FREE Full text] [CrossRef]
- MacDorman KF, Chattopadhyay D. Categorization-based stranger avoidance does not explain the uncanny valley effect. Cognition. Apr 2017;161:132-135. [CrossRef] [Medline]
- Pfeuffer N, Benlian A, Gimpel H, Hinz O. Anthropomorphic information systems. Bus Inf Syst Eng. May 27, 2019;61(4):523-533. [FREE Full text] [CrossRef]
- Cassell J, Bickmore T. External manifestations of trustworthiness in the interface. Commun ACM. Dec 2000;43(12):50-56. [FREE Full text] [CrossRef]
- Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factors. 2004;46(1):50-80. [CrossRef] [Medline]
- Zhang M, Luo M, Nie R, Zhang Y. Technical attributes, health attribute, consumer attributes and their roles in adoption intention of healthcare wearable technology. Int J Med Inform. Dec 2017;108:97-109. [CrossRef] [Medline]
- Cafaro A, Vilhjálmsson HH, Bickmore T. First impressions in human-agent virtual encounters. ACM Trans Comput Hum Interact. Aug 13, 2016;23(4):1-40. [FREE Full text] [CrossRef]
- Hengstler M, Enkel E, Duelli S. Applied artificial intelligence and trust—the case of autonomous vehicles and medical assistance devices. Technol Forecast Soc Change. Apr 2016;105:105-120. [FREE Full text] [CrossRef]
- Henson P, Wisniewski H, Hollis C, Keshavan M, Torous J. Digital mental health apps and the therapeutic alliance: initial review. BJPsych Open. Jan 2019;5(1):e15. [FREE Full text] [CrossRef] [Medline]
- Wutz M, Walther P, Gsell H. Dialogsystem im Krankenhaus [Dialog system in the hospital]. Health & Care Management. 2020. URL: https://www.hcm-magazin.de/potenziale-von-chatbots-271831 [accessed 2023-09-08]
- Featherman MS, Pavlou PA. Predicting e-services adoption: a perceived risk facets perspective. Int J Hum Comput Stud. Oct 2003;59(4):451-474. [FREE Full text] [CrossRef]
- Martinez-Martin N, Kreitmair K. Ethical issues for direct-to-consumer digital psychotherapy apps: addressing accountability, data protection, and consent. JMIR Ment Health. Apr 23, 2018;5(2):e32. [FREE Full text] [CrossRef] [Medline]
- Martin DJ, Garske JP, Davis MK. Relation of the therapeutic alliance with outcome and other variables: a meta-analytic review. J Consult Clin Psychol. Jun 2000;68(3):438-450. [Medline]
- Nienhuis JB, Owen J, Valentine JC, Winkeljohn Black S, Halford TC, Parazak SE, et al. Therapeutic alliance, empathy, and genuineness in individual adult psychotherapy: a meta-analytic review. Psychother Res. Jul 2018;28(4):593-605. [CrossRef] [Medline]
- Gelso C, Hayes J. The Psychotherapy Relationship: Theory, Research and Practice. New York, NY. John Wiley and Sons; 1998.
- Planalp S, Benson A. Friends' and acquaintances' conversations I: perceived differences. J Soc Pers Relat. Nov 1992;9(4):483-506. [FREE Full text] [CrossRef]
- Gilbertson J, Dindia K, Allen M. Relational continuity constructional units and the maintenance of relationships. J Soc Pers Relat. Nov 1998;15(6):774-790. [FREE Full text] [CrossRef]
- Bickmore T, Schulman D. Practical approaches to comforting users with relational agents. In: Proceedings of the 2007 Extended Abstracts on Human Factors in Computing Systems. Presented at: CHI EA '07; April 28-May 3, 2007:2291-2296; San Jose, CA. URL: https://dl.acm.org/doi/10.1145/1240866.1240996 [CrossRef]
- ter Stal S, Kramer LL, Tabak M, op den Akker H, Hermens H. Design features of embodied conversational agents in eHealth: a literature review. Int J Hum Comput Stud. Jun 2020;138:102409. [FREE Full text] [CrossRef]
- Scholten MR, Kelders SM, Van Gemert-Pijnen JE. Self-guided web-based interventions: scoping review on user needs and the potential of embodied conversational agents to address them. J Med Internet Res. Nov 16, 2017;19(11):e383. [FREE Full text] [CrossRef] [Medline]
- Comendador BE, Francisco BM, Medenilla JS, Nacion SM, Serac TB. Pharmabot: a pediatric generic medicine consultant chatbot. J Autom Control Eng. 2015;3(2):137-140. [FREE Full text] [CrossRef]
- Fang ML, Siden E, Korol A, Demestihas M, Sixsmith J, Sixsmith A. A scoping review exploration of the intended and unintended consequences of eHealth on older people: a health equity impact assessment. Hum Technol. Nov 2018;14(3):297-323. [FREE Full text] [CrossRef]
- Ling EC, Tussyadiah I, Tuomi A, Stienmetz J, Ioannou A. Factors influencing users' adoption and use of conversational agents: a systematic review. Psychol Mark. Apr 08, 2021;38(7):1031-1051. [FREE Full text] [CrossRef]
- Tricco AC, Langlois EV, Straus SE. Rapid reviews to strengthen health policy and systems: a practical guide. Alliance for Health Policy and Systems Research and World Health Organization. 2017. URL: https://apps.who.int/iris/bitstream/handle/10665/258698/9789241512763-eng.pdf [accessed 2022-06-20]
- Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133-139. [CrossRef] [Medline]
- Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. Feb 10, 2012;1:10. [FREE Full text] [CrossRef] [Medline]
- Rogers WA, Fisk AD. Toward a psychological science of advanced technology design for older adults. J Gerontol B Psychol Sci Soc Sci. Nov 2010;65(6):645-653. [FREE Full text] [CrossRef] [Medline]
- Kennedy CM, Powell J, Payne TH, Ainsworth J, Boyd A, Buchan I. Active assistance technology for health-related behavior change: an interdisciplinary review. J Med Internet Res. Jun 14, 2012;14(3):e80. [FREE Full text] [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
CA: conversational agent
HCI: human-computer interaction
IR: integrative review
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
UTAUT: Unified Theory of Acceptance and Use of Technology
UTAUT2: Unified Theory of Acceptance and Use of Technology 2
Edited by T de Azevedo Cardoso, A Mavragani; submitted 15.02.23; peer-reviewed by C Bérubé, L Kremer; comments to author 20.04.23; revised version received 10.05.23; accepted 10.07.23; published 26.09.23.
Copyright © Maximilian Wutz, Marius Hermes, Vera Winter, Juliane Köberlein-Neu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 26.09.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.