Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/49387.
Health Professionals’ Views on the Use of Conversational Agents for Health Care: Qualitative Descriptive Study


Original Paper

1Centre for Research in Integrated Care, University of New Brunswick, Saint John, NB, Canada

2Department of Nursing and Health Sciences, University of New Brunswick, Saint John, NB, Canada

Corresponding Author:

A Luke MacNeill, PhD

Centre for Research in Integrated Care

University of New Brunswick

355 Campus Ring Road

Saint John, NB, E2L 4L5

Canada

Phone: 1 506 648 5777

Email: luke.macneill@unb.ca


Abstract

Background: In recent years, there has been an increase in the use of conversational agents for health promotion and service delivery. To date, health professionals’ views on the use of this technology have received limited attention in the literature.

Objective: The purpose of this study was to gain a better understanding of how health professionals view the use of conversational agents for health care.

Methods: Physicians, nurses, and regulated mental health professionals were recruited using various web-based methods. Participants were interviewed individually using the Zoom (Zoom Video Communications, Inc) videoconferencing platform. Interview questions focused on the potential benefits and risks of using conversational agents for health care, as well as the best way to integrate conversational agents into the health care system. Interviews were transcribed verbatim and uploaded to NVivo (version 12; QSR International, Inc) for thematic analysis.

Results: A total of 24 health professionals participated in the study (19 women, 5 men; mean age 42.75, SD 10.71 years). Participants said that the use of conversational agents for health care could have certain benefits, such as greater access to care for patients or clients and workload support for health professionals. They also discussed potential drawbacks, such as an added burden on health professionals (eg, program familiarization) and the limited capabilities of these programs. Participants said that conversational agents could be used for routine or basic tasks, such as screening and assessment, providing information and education, and supporting individuals between appointments. They also said that health professionals should have some oversight in terms of the development and implementation of these programs.

Conclusions: The results of this study provide insight into health professionals’ views on the use of conversational agents for health care, particularly in terms of the benefits and drawbacks of these programs and how they should be integrated into the health care system. These collective findings offer useful information and guidance to stakeholders who have an interest in the development and implementation of this technology.

J Med Internet Res 2024;26:e49387

doi:10.2196/49387


Introduction

Background

Global health spending has reached record levels and is expected to increase in the coming years [1,2]. Despite this enormous investment, health care systems around the world are under pressure as they struggle to cope with health worker shortages [3,4], health system overuse [5,6], and a growing burden of noncommunicable diseases [7,8], among other challenges. These challenges and the pressure that they place on health care systems can have consequences for people providing and accessing health services. For instance, health professionals face heavy workloads and high levels of exhaustion and burnout [9-12]. In addition, individuals who are seeking care often experience long wait times that can negatively impact their health and well-being [13,14].

To reduce strain on health services and ensure the safe and efficient delivery of care, health care systems need to find accessible and cost-effective ways to supplement and support traditional forms of health care. Conversational agents are an emerging technology that might help. Broadly speaking, conversational agents are automated software programs that are designed to engage in humanlike conversation with people [15,16]. Common examples include text-based chatbots [17], voice-activated virtual assistants [18], and “embodied” conversational agents that appear in virtual or animated bodies [19,20]. Whereas standard computer interfaces support a somewhat rudimentary and impersonal interaction with users, conversational agents provide users with the impression of interacting with a real human. This quality would presumably make these programs well suited for dealing with sensitive issues or topics, such as those that arise within a health context.

In recent years, conversational agents that are used for health care have become more common in clinical and commercial settings [16,21-23]. Although the literature on these programs is still growing, current evidence suggests that they are an effective tool for health promotion and service delivery across a range of health areas [24-26]. For instance, they have been used to promote physical activity [27,28] and weight loss [29,30], support chronic disease management [31,32], provide preconception care [33,34], and improve mental health [35,36]. Research also shows that people who use these programs tend to have positive perceptions and opinions about them, as well as a desire or intention to continue using them in the future [16,37,38].

One aspect of conversational agents that tends to get overlooked in the literature is the views of health professionals toward these programs. Health professionals have considerable experience with the management and delivery of health care and may be able to offer unique insight into the use of this technology for health purposes. To date, only a handful of studies have considered health professionals’ views in this area [39-41]. These studies had a narrow scope in terms of the specific types of health professionals that were assessed (eg, mental health professionals). In addition, much of this research captured participants’ views using structured questionnaires with predefined questions and response scales [40,41]. These questionnaires allowed for standardized and easily aggregated data, but they may not have covered topics or points that would have emerged organically through other means, such as qualitative interviews. Due to these limitations, there is currently a fragmented and incomplete understanding of health professionals’ views surrounding this technology.

Objectives

The purpose of this study was to gain a more comprehensive understanding of health professionals’ views on the use of conversational agents for health care. We recruited health professionals with a variety of professional backgrounds and asked them to share their views via semistructured open-ended interviews. We were interested in their views on 2 specific subjects. First, we wanted to learn more about the potential benefits and drawbacks of using conversational agents for health care. Second, we wanted to gain a better understanding of how to best integrate conversational agents into the health care system. Capturing health professionals’ views on these subjects was expected to provide useful insight into whether and how these programs should be implemented in practical settings.


Methods

Study Design

We used a qualitative descriptive design to explore health professionals’ views on the use of conversational agents for health care. The goal of qualitative description is to provide a rich, detailed description of a phenomenon, emphasizing surface versus more interpretive readings of the available data in an effort to preserve participants’ voices and perspectives [42,43]. It is viewed as a useful approach for answering questions relevant to practitioners and policy makers [42] and therefore well suited for this study. All data for this study were collected using cross-sectional semistructured interviews (details in the Procedure section). The study is reported in accordance with Standards for Reporting Qualitative Research guidelines [44].

Participants and Recruitment

We recruited Canadian health professionals for the study, specifically physicians, nurses, and regulated mental health professionals (eg, clinical psychologists). No prior experience with conversational agents was necessary for participation. Participants were recruited through social media posts, classified websites, e-newsletters, and emails to relevant organizations. The initial recruitment target was 24 participants, divided evenly between physicians, nurses, and regulated mental health professionals. Once these participants were recruited and interviewed, we evaluated the interview data to determine whether additional participants should be recruited. We decided that no further participants were required, as no new themes were identified in the last few interviews (ie, thematic saturation was reached) and we felt that the data set was sufficiently rich to address the research objectives.

Procedure

Prospective participants contacted the research team to learn more about the study and set up a web-based interview. Prior to the interview, participants were emailed a PDF document that provided information about conversational agents and illustrated several conversational agents that are used for health care. The purpose of the PDF document was to familiarize participants with these types of programs before the interview. The document covered conversational agents that support the prevention and treatment of specific health conditions, as well as conversational agents that promote healthy lifestyle habits (physical activity, etc). It included screenshots of both embodied and disembodied conversational agents. Screenshots were drawn from several publications in this area [34,45-50]. In some cases, the images from these publications were replaced with higher-quality versions of the same or similar images that were available from web-based sources (eg, researcher websites).

Participants were interviewed individually using the Zoom (Zoom Video Communications, Inc) videoconferencing platform. Interviews took place between March and September 2021. The same general procedure was used for all interviews. The interviews began with personal introductions and a review of the study information. Next, verbal consent was obtained from participants and the interview questions were administered. Question administration lasted between 10 and 35 minutes (depending on the participant) and followed a semistructured format. Participants were asked about the potential benefits and drawbacks of using conversational agents for health care, as well as the best way to integrate conversational agents into the health care system. See Multimedia Appendix 1 for a full list of the interview questions. At the end of the interviews, participants were given an opportunity to ask any questions that they had.

Data Analysis

A research assistant (SY) transcribed the interview recordings verbatim, and the transcripts were reviewed by the first author for accuracy. The transcripts were uploaded to NVivo (version 12; QSR International, Inc) for thematic analysis. The thematic analysis followed the 6 phases described by Braun and Clarke [51,52], as follows. The first and second authors familiarized themselves with the data set by reading and rereading the transcripts and making preliminary notes (phase 1). Next, they coded the data set via an iterative process (phase 2). They started by reviewing the transcripts of the first 3 interviews to generate preliminary codes and working definitions. This initial analysis guided the coding of the full data set, which was performed by the first author. The second author independently coded 25% of the transcripts and found a high level of agreement with the first author (Cohen κ=0.81; agreement weighted by source size=99.74%). The 2 coders refined the coding guidelines to address the few disagreements and applied the refined guidelines to the full data set. Next, they worked together to generate initial themes (phase 3); further develop and review the themes (phase 4); and refine, define, and name the themes (phase 5). Finally, the first author wrote the initial draft of the manuscript and all authors performed manuscript revisions (phase 6).
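For readers less familiar with the statistic, the reported Cohen κ of 0.81 is a chance-corrected measure of interrater agreement. Using the standard definition, where \(p_o\) is the observed proportion of coding decisions on which the two coders agreed and \(p_e\) is the proportion of agreement expected by chance given each coder's marginal code frequencies:

```latex
% Cohen's kappa: chance-corrected interrater agreement
% p_o = observed agreement proportion; p_e = chance-expected agreement
\kappa = \frac{p_o - p_e}{1 - p_e}
```

Values above 0.80 are conventionally interpreted as strong agreement, consistent with the near-perfect weighted agreement (99.74%) reported here.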

Trustworthiness

Trustworthiness is a broad concept that refers to the credibility, confirmability, dependability, and transferability of research [53,54]. We sought to establish trustworthiness in our study through the use of several strategies, including investigator triangulation (eg, interrater coding), the creation of an audit trail, purposive sampling of different participant groups, the use of verbatim quotes to support the themes, and peer debriefing with digital health researchers in both the academic and commercial spaces. To enhance methodological rigor, the data analysis was conducted by 2 researchers with different professional backgrounds, specifically the first and second authors. The first author is a health researcher with a focus on digital health research. He has led several studies on the use of conversational agents for health care, including an evaluation study and a scoping review. The second author is a health researcher with a more general focus on access to health services. Her knowledge of digital health and digital health research is more limited. Whereas the second author is primarily a qualitative researcher, the first author is a quantitative researcher with some qualitative experience.

Ethical Considerations

This study was approved by the research ethics board at the University of New Brunswick (009-2021). The research ethics board provided a delegated review due to the low-risk nature of the study. Informed consent was obtained from all study participants (details in the Procedure section). Transcripts were deidentified during the transcription process to protect participant confidentiality, and interview files were stored on a secure Microsoft SharePoint site. Participants received an entry into a draw for a CAD $100 (US $72.61) Amazon.ca gift card for their participation in the study.


Results

Participant Characteristics

A total of 24 health professionals completed the study. Participants included 8 physicians, 8 nurses, and 8 regulated mental health professionals (2 clinical psychologists, 2 psychotherapists, 2 counselors, and 2 clinical social workers). The sample consisted of 19 women and 5 men with a mean age of 42.75 (SD 10.71) years. Participants were from 7 Canadian provinces: Alberta, British Columbia, New Brunswick, Nova Scotia, Ontario, Prince Edward Island, and Saskatchewan. All participants reported that they had used conversational agents in the past, although only 2 had used a conversational agent for health care specifically.

Benefits of Conversational Agents

We identified 2 themes related to benefits of using conversational agents for health care: increased access to care and workload support for health professionals (see Table 1). More details on these themes are provided in the following subsections.

Table 1. Outline of themes and subthemes grouped by research objective.

Research objective: theme (subthemes)

Benefits of conversational agents
  • Increased access to care (convenience and availability, rural access to care)
  • Workload support for health professionals

Drawbacks of conversational agents
  • Added burden on health professionals
  • Limited capabilities of conversational agents (technological limitations, inadequate care)

Integration into the health care system
  • Health professional oversight
  • Routine or basic tasks (screening and assessment, information and education, support between appointments)

Increased Access to Care

Participants said that conversational agents could benefit patients or clients by providing increased access to care. Most participants who discussed this theme focused on the convenience and availability of these programs, although some participants discussed rural access to care as well.

Convenience and Availability

According to participants, conversational agents are able to provide patients or clients with convenient access to care. In other words, people can access these programs whenever they want, regardless of the day or time. For instance, one participant pointed out that conversational agents are available “anytime … 24/7” (Participant 18, nurse), and another participant stated: “To have access to something 24 hours a day is certainly beneficial” (Participant 1, nurse). Some participants contrasted conversational agents with health professionals, many of whom adhere to scheduled office hours. For instance, one participant said that conversational agents “would be always on, not dependent on the hours that health care professionals keep” (Participant 13, physician). By providing convenient access to care, these programs would allow patients or clients to avoid the wait times that are often experienced when seeking an appointment with a care provider: “Wait times are certainly an issue for people, being able to access a provider…. Perhaps these particular types of programs … would improve access [to care] for patients that might have difficulty connecting with a health care provider” (Participant 11, nurse).

Rural Access to Care

Participants noted that people who live in rural areas often have more limited access to care than people who live in urban areas, and that conversational agents could be one means to address this disparity. For instance, one participant said, “I think in certain ways it [the use of conversational agents] can be helpful for people if they live in a rural area and they don't have access to a specialist or they don't have a health care provider that they need” (Participant 4, psychotherapist). Other participants reinforced this statement, saying that these programs “might be able to reach more geographical locations that we can’t reach” (Participant 5, clinical social worker), and that they would be useful “if you live way out in the boondocks and can’t get into town” (Participant 21, clinical psychologist). Notably, most of the participants who discussed rural access to care were regulated mental health professionals.

Workload Support for Health Professionals

Participants said that conversational agents could provide health professionals with workload support. Although this point was discussed by all 3 health professional groups, it was particularly common among physicians and nurses. Participants noted that health professionals are often overworked and exhausted, and that conversational agents may be able to ease their workload to a certain extent. As one participant said, “It can take some of the pressure off of actual providers, if people are using these chatbots and whatnot. Maybe some of the demand goes down” (Participant 4, psychotherapist). By easing demand for health services, these programs would allow health professionals to devote more time and resources to people who have greater or more urgent needs. For example, one participant said, “[You] may be able to filter some of the more basic health care concerns, if you will, to this type of platform and then allow for health care providers to spend time with patients that have greater needs or need more time in-office or in-clinic” (Participant 11, nurse). One participant illustrated this point by sharing a story about a patient who was seriously injured after failing to get an appointment for a mental health medication refill:

[The patient] said “I call my family doctor, I don’t get ahold of him. My psychiatrist left.” You know, here’s a guy where, maybe if we have AI taking the burden off the family doctor... maybe the guy calls, and the family doctor’s able to see him and this problem never happens.
[Participant 3, physician]

Drawbacks of Conversational Agents

We identified 2 themes related to drawbacks of using conversational agents for health care: added burden on health professionals and limited capabilities of conversational agents. More details on these themes are provided in the following subsections.

Added Burden on Health Professionals

Although many participants thought that conversational agents could help ease their existing workload, some were wary that these programs could create new burdens that would strain their time and attention. For instance, participants anticipated that they would be receiving questions about conversational agents from their patients or clients, and that answering these questions would take time out of their workday. As one participant said, she expected “more questions from the patients about it, which ends up taking a lot of time for me” (Participant 13, physician). Participants also recognized that they would need to familiarize themselves with these programs so that they could answer patient or client questions, which would require a further time investment: “People may have questions about it, and if you don't take the time to familiarize with everything, you may not be able to answer them” (Participant 12, physician). Integrating conversational agents into their daily workflow was a concern as well. For instance, one participant worried about keeping on top of patient-related reports that might be supplied by these programs: “It would be an extra layer of my daily work that is time sensitive, you know? Like, every day at three, I have to make sure that I've looked at the last 24 hours of these things” (Participant 23, physician). Finally, some participants were concerned that they would have to spend time and resources addressing health issues that arise from the improper or incorrect use of a conversational agent: “If they [patients or clients] self-selected inappropriately to be seen by a virtual interface, and they delayed care and deteriorated … that would make my job a lot more complex” (Participant 13, physician).

Limited Capabilities of Conversational Agents

Participants spent a great deal of time discussing the capabilities of conversational agents. They had many concerns about the technological limitations of these programs, as well as the consequences of these technological limitations, particularly the potential for inadequate care.

Technological Limitations

Participants highlighted several technological limitations of conversational agents. Some participants focused on the emotional limitations of these programs, noting that they are unable to experience or convey genuine empathy or compassion: “When we’re at our most vulnerable moments, we expect compassionate care, and there's a limit to what you can get from a … conversational agent” (Participant 19, nurse). Others touched on this same topic from an interpersonal standpoint, commenting on “that loss of human connectedness, that human element” (Participant 11, nurse). Participants also discussed the sensory limitations of conversational agents, particularly their inability to detect important nonverbal cues from patients or clients: “I would wonder about a chatbot’s ability to notice nuance, to hear a voice choking … or to see eyes welling with tears” (Participant 6, clinical social worker). These nonverbal cues include cues that signal deception or dishonesty: “They [patients or clients] say they’re fine, but they actually look like they just dragged themselves out of bed and they haven’t showered in six days … I don’t know how good computers are at picking up that kind of ‘faking it’ behavior” (Participant 21, clinical psychologist). Finally, participants discussed the conversational limitations of these programs, doubting their ability to handle the complex question and answer exchanges that characterize health care interactions: “There’s no actual ability to have context-driven questioning, follow-up, reflection on what the person is hearing” (Participant 8, clinical psychologist).

Inadequate Care

Participants were concerned that the technological limitations of conversational agents would result in inadequate care for patients or clients. They were especially worried about patients or clients who are dealing with serious health issues. One participant provided an example within a mental health context:

If someone has really mild anxiety or mild depression, I think something like that could be a really great resource for them…. Whereas on the other hand, someone who is dealing with complex trauma or really significant mental illnesses, there’s no possibility that something automated could provide the care for that.
[Participant 4, psychotherapist]

A further point of concern was the possibility that patients or clients might not recognize that they are receiving inadequate care, which could compound their health issues. As one participant said, “They feel like what they're getting is enough and it may not actually be enough” (Participant 14, psychotherapist). One participant speculated about a patient’s thought processes in these situations: “‘Robot nurse Becky told me that I’m totally fine. So I’m just not going to go in to the doctor. I’m not going to call telehealth’ … I think it can have the potential of, I guess, false reassurance” (Participant 10, nurse).

Integration Into the Health Care System

We identified 2 themes related to the integration of conversational agents into the health care system: health professional oversight and routine or basic tasks. More details on these themes are provided in the following subsections.

Health Professional Oversight

Participants said that health professionals should have some oversight of conversational agents that are used for health care. For instance, many participants suggested that health professionals should have input into the development process so that there is some assurance that the program content is up-to-date and correct. As one participant commented, “It needs to have the signoff of a health care professional, somebody that tests it and can ensure all the information is accurate” (Participant 10, nurse). Participants also said that health professionals should have clinical oversight of these programs, which would allow them to evaluate whether a particular conversational agent is suitable for a particular patient or client: “The patient or client should come and see the health care professional and be referred to the program as opposed to just self-referral…. That way, at least someone has evaluated ‘Is this appropriate for that person?’” (Participant 12, physician). Clinical oversight would also allow health professionals to evaluate whether the information supplied by a program is consistent with the information they themselves provide to a patient or client. For example, one participant stated, “I would want to see what kind of information they [patients or clients] were getting, and whether it fit with the way I saw things” (Participant 21, clinical psychologist).

Routine or Basic Tasks

Participants said that conversational agents would be most appropriate for conducting routine or basic tasks. In the words of participants, these tasks would include “tedious jobs” (Participant 15, nurse), “basic things” (Participant 19, nurse), and “routine care” (Participant 13, physician). Although participants mentioned several tasks that would be appropriate for conversational agents, their reports tended to center on 3 general areas: screening and assessment, information and education, and support between appointments.

Screening and Assessment

Participants said that conversational agents could be used for screening and assessment purposes. For instance, these programs could screen patients or clients who have reached a specific service to see if the service is aligned with their needs: “I see it as a front-end thing where … you’re doing some screening, like, ‘Is this person in the right place?’” (Participant 21, clinical psychologist). In a similar manner, conversational agents would be able to screen individuals who are trying to access a particular program to see whether they meet the program qualifications: “In the program that I work with, we … really have to scope it down to those clients who qualify. So I think it might help us a little bit with filtering clients” (Participant 7, nurse). Once patients or clients have reached an appropriate program or service, conversational agents could perform an initial assessment to determine the severity of their health issues and the urgency of their needs. As one participant said, “It’s like that initial step to gather information and assess sort of the immediacy of response” (Participant 23, physician).

Information and Education

Participants said that conversational agents could be used to provide basic health information and education to patients or clients. For example, one participant said, “I can see the role it can play in basic patient education” (Participant 19, nurse), and another participant stated, “I think if people were seeking general health information, then that might be certainly appropriate” (Participant 11, nurse). Several participants provided examples of health areas where conversational agents could be applied. Many of these participants thought that these programs could be used to educate people on the prevention and management of chronic diseases, particularly diabetes, and to provide information to patients prior to and after surgery. One participant touched on both of these points: “In terms of basic health care stuff like diabetic education, pre- and post-op teaching … I think that it would be okay there” (Participant 1, nurse). Participants also thought that conversational agents could be used for psychoeducation and for educating people on health promotion topics such as physical activity and smoking cessation.

Support Between Appointments

Participants said that conversational agents could be used to support patients or clients between appointments. This point was especially common among regulated mental health professionals, who thought that these programs would be useful for “between-session maintenance” (Participant 4, psychotherapist) and “outside-of-session work” (Participant 14, psychotherapist). One participant provided a detailed example of how they might be used:

They’d be great for homework things, I think. So often, you check in with people and they’re like, “Well, I kind of forgot what you told me” or “Oh, I didn’t do it” … Having something that would have reminders, different pieces of homework that would pop up, I think that would be great.
[Participant 9, counselor]

In addition to addressing the topic of between-session work, this participant’s quote highlights the importance of patient or client accountability. Participants thought that conversational agents could be used to remind individuals about any necessary work to be done between appointments with a care provider. For instance, one physician said, “I think that would be helpful, as like that little at-home reminder of ‘This is what I was supposed to do for my exercise,’ or whatever …. Something that keeps them accountable at home, once they leave my office” (Participant 24, physician).


Discussion

Principal Findings

The results of this study provide insight into health professionals’ views on the use of conversational agents for health care, particularly in terms of the benefits and drawbacks of these programs and how they should be integrated into the health care system. Participants said that conversational agents would be useful for increasing access to care for the public, including rural populations that often face access issues. They also said that these programs could provide health professionals with some much-needed workload support. However, participants were wary of any added burden on health professionals, such as the need for program familiarization and having to integrate this technology into their workflow. They were also mindful of the technological limitations of these programs and the implications of these limitations for patients or clients. To help safeguard patient or client welfare, participants said that health professionals should have some oversight in terms of the development and implementation of these programs. They also suggested that the use of conversational agents should be limited to routine or basic health care tasks, such as screening and assessment, providing information and education, and supporting individuals between appointments.

Comparison With Prior Work

The current results reinforce many findings from past studies in this area, despite important differences in the scope and methodologies of these studies. For instance, similar insights were reported by Palanica et al [40], who administered a web-based survey to general practitioners; Sweeney et al [41], who distributed a web-based survey to mental health professionals; and Barnett et al [39], who conducted in-person focus groups with substance use counselors. Across these studies, participants said that conversational agents could benefit patients or clients by improving access to care. They also touched on the limited capabilities of these programs, particularly their inability to understand or convey genuine human emotions. With respect to implementation, the participants in these studies suggested that conversational agents would be best suited for routine tasks, such as providing information or education and supporting people between appointments (eg, sending reminders). All of these topics were major discussion points among the participants in our study.

There were some differences between the current results and past findings as well. Participants in our study emphasized several points that received little attention in previous studies, such as the impact of conversational agents on health professionals’ workloads and the importance of health professional oversight of this technology. At the same time, our participants did not mention certain other points that emerged in past studies, particularly the studies by Palanica et al [40] and Sweeney et al [41]. For instance, participants in these latter studies expressed concern that patients or clients could abuse conversational agents by self-diagnosing too often, and that they might not feel connected to their actual health care providers if they use these programs. Neither of these points was raised by participants in our study. These divergent findings are likely due to methodological factors. Both Palanica et al [40] and Sweeney et al [41] assessed health professionals’ views using structured surveys with predefined questions and response scales. These surveys may have prompted participants to consider topics that they otherwise might not have thought about, while also limiting the scope and substance of their responses. By comparison, our study used semistructured interviews with open-ended questions, which captured participants’ thoughts more organically.

Differences Among Health Professionals

Although the results of this study were fairly consistent across physicians, nurses, and regulated mental health professionals, there were some differences among these participant groups. For instance, mental health professionals were more likely than other participants to discuss how conversational agents could improve rural access to care, an outcome that might reflect challenges in delivering conventional mental health services to rural populations [55-57]. Mental health professionals were also more likely than others to say that conversational agents have the potential to support patients or clients between appointments. Most of their comments focused on the topic of between-session work, which speaks to the importance of this work in mental health treatment [58-60]. Meanwhile, discussion of workload support for health professionals was more common among physicians and nurses than among regulated mental health professionals. This outcome may reflect the considerable occupational demands of physicians and nurses and the importance of managing time pressures within these professions [61-63]. Taken together, these differences suggest that there are profession-specific uses and benefits with respect to conversational agents, in addition to the more general uses and benefits that apply across professions.

Practical Implications

The results of this study offer insight to stakeholders who have an interest in the adoption and use of conversational agents for health care. For instance, policy makers, institutional leaders, and health professionals will be able to draw on these results when weighing the potential benefits and drawbacks of adopting this technology in practice. These individuals can also look to our results for guidance on how to safely implement this technology; namely, by employing programs that perform routine or basic tasks and by ensuring that these programs have appropriate oversight. Researchers and companies that develop conversational agents for health care might find these results useful as well. Based on our findings, developers should include health professionals in the development process and focus their efforts on programs that pose minimal safety risk to users. They should also clearly communicate the limitations of these programs to users to help prevent instances of inadequate care. By promoting the safe and responsible implementation of conversational agents for health care, developers and other stakeholders can increase the adoption and use of this technology, which will ultimately benefit those individuals who are either accessing or providing care.

Limitations and Future Directions

Participants in this study worked within the Canadian health care system, and so their views on the benefits and drawbacks of using conversational agents for health care, as well as the best way to implement this technology in practice, were shaped by their experiences within this particular setting. It is unclear whether these results are transferable to people working in other health care systems. It should be noted that many of the current results are consistent with findings from previous studies involving health professionals from other countries [39-41], although these studies had a narrower scope and may not be indicative of broader views on conversational agents within these health care systems. In any case, researchers may want to replicate this study with health professionals in other countries to see whether their views differ.

Another limitation of this study is that many participants had never used a conversational agent for health care. The views shared by these participants could reflect their inexperience and may lack a certain degree of depth or insight. To their credit, all of these participants did have experience with conversational agents in other areas of their lives (eg, customer service chatbots), and so they likely had a reasonable understanding of how this technology would function and perform in a health context. In addition, study participants were shown several examples of conversational agents that provide health care before their interviews to ensure that they had knowledge of these programs and their capabilities. Regardless, future studies should focus on health professionals who have more direct experience with conversational agents in this domain so that there is some assurance that their understanding of these programs is accurate and comprehensive.

A further limitation of this study concerns the demographics of the sample. Although the participants in this study covered a wide range of ages and professions, most of the sample (19/24, 79%) identified as women. This gender imbalance is not particularly surprising, as women occupy the majority of health care positions in Canada [64,65]. However, a sample with a greater number of men may have produced additional or alternate insights. In the future, researchers in this area may want to emphasize men in their recruitment strategies so that they achieve a more gender-balanced sample of participants.

Conclusions

The results of this study provide insight into how health professionals view the use of conversational agents for health care. The health professionals in this study saw distinct benefits and drawbacks to the use of conversational agents for health care, and they shared several suggestions for safely integrating these programs into the health care system. This collective feedback provides a useful indicator of how conversational agents will be received in health settings in the coming years, and it also offers some guidance on how these programs should be implemented to provide the most benefit to people who are either accessing or providing care.

Acknowledgments

The authors thank Sungmin Yi for her assistance with transcription. This research was supported in part by a Research Professionals Initiative grant from the New Brunswick Innovation Foundation (RPI2020-010).

Data Availability

The data for this study (eg, interview transcripts) are not publicly available as per guidelines from the host institution’s research ethics board. Data access is restricted to help safeguard the privacy and confidentiality of the participants.

Authors' Contributions

ALM, AL, and SD conceived and designed the study. ALM collected the interview data. ALM and LM conducted the analysis. ALM wrote the original draft of the manuscript and all authors contributed to manuscript revisions. No generative artificial intelligence tools were used in the preparation of this manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Interview questions.

DOCX File, 15 KB

  1. Global Burden of Disease Health Financing Collaborator Network. Trends in future health financing and coverage: future health spending and universal health coverage in 188 countries, 2016-40. Lancet. 2018;391(10132):1783-1798. [FREE Full text] [CrossRef] [Medline]
  2. World Health Organization. Global expenditure on health: public spending on the rise? 2021. URL: https://apps.who.int/iris/bitstream/handle/10665/350560/9789240041219-eng.pdf [accessed 2022-08-18]
  3. Boniol M, Kunjumen T, Nair TS, Siyam A, Campbell J, Diallo K. The global health workforce stock and distribution in 2020 and 2030: a threat to equity and 'universal' health coverage? BMJ Glob Health. 2022;7(6):e009316. [FREE Full text] [CrossRef] [Medline]
  4. Crisp N, Chen L. Global supply of health professionals. N Engl J Med. 2014;370(10):950-957. [CrossRef] [Medline]
  5. Brownlee S, Chalkidou K, Doust J, Elshaug AG, Glasziou P, Heath I, et al. Evidence for overuse of medical services around the world. Lancet. 2017;390(10090):156-168. [FREE Full text] [CrossRef] [Medline]
  6. Ellen ME, Wilson MG, Vélez M, Shach R, Lavis JN, Grimshaw JM, et al. Synthesis working group. Addressing overuse of health services in health systems: a critical interpretive synthesis. Health Res Policy Syst. 2018;16(1):48. [FREE Full text] [CrossRef] [Medline]
  7. Bloom DE, Cafiero ET, Jané-Llopis E, Abrahams-Gessel S, Bloom LR, Fathima S, et al. The Global Economic Burden of Non-communicable Diseases. Switzerland. World Economic Forum; 2011.
  8. GBD 2019 Diseases and Injuries Collaborators. Global burden of 369 diseases and injuries in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease study 2019. Lancet. 2020;396(10258):1204-1222. [FREE Full text] [CrossRef] [Medline]
  9. Humphries N, Morgan K, Conry MC, McGowan Y, Montgomery A, McGee H. Quality of care and health professional burnout: narrative literature review. Int J Health Care Qual Assur. 2014;27(4):293-307. [CrossRef] [Medline]
  10. O'Connor K, Muller Neff D, Pitman S. Burnout in mental health professionals: a systematic review and meta-analysis of prevalence and determinants. Eur Psychiatry. 2018;53:74-99. [CrossRef] [Medline]
  11. Shen X, Xu H, Feng J, Ye J, Lu Z, Gan Y. The global prevalence of burnout among general practitioners: a systematic review and meta-analysis. Fam Pract. 2022;39(5):943-950. [CrossRef] [Medline]
  12. Woo T, Ho R, Tang A, Tam W. Global prevalence of burnout symptoms among nurses: a systematic review and meta-analysis. J Psychiatr Res. 2020;123:9-20. [CrossRef] [Medline]
  13. McIntyre D, Chow CK. Waiting time as an indicator for health services under strain: a narrative review. Inquiry. 2020;57:46958020910305. [FREE Full text] [CrossRef] [Medline]
  14. Organisation for Economic Co-operation and Development. Waiting times for health services: next in line. 2020. URL: https://www.oecd-ilibrary.org/social-issues-migration-health/waiting-times-for-health-services_242e3c8c-en [accessed 2022-10-11]
  15. Laranjo L, Dunn AG, Tong HL, Kocaballi AB, Chen J, Bashir R, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25(9):1248-1258. [FREE Full text] [CrossRef] [Medline]
  16. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng YL, et al. Conversational agents in health care: scoping review and conceptual analysis. J Med Internet Res. 2020;22(8):e17158. [FREE Full text] [CrossRef] [Medline]
  17. Rapp A, Curti L, Boldi A. The human side of human-chatbot interaction: a systematic literature review of ten years of research on text-based chatbots. Int J Hum Comput Stud. 2021;151:102630. [CrossRef]
  18. de Barcelos Silva A, Gomes MM, da Costa CA, da Rosa Righi R, Barbosa JLV, Pessin G, et al. Intelligent personal assistants: a systematic literature review. Expert Syst Appl. 2020;147(C):113193. [CrossRef]
  19. Loveys K, Sebaratnam G, Sagar M, Broadbent E. The effect of design features on relationship quality with embodied conversational agents: a systematic review. Int J of Soc Robotics. 2020;12(6):1293-1312. [CrossRef]
  20. ter Stal S, Kramer LL, Tabak M, op den Akker H, Hermens H. Design features of embodied conversational agents in eHealth: a literature review. Int J Hum Comput Stud. 2020;138:102409. [CrossRef]
  21. Jovanovic M, Baez M, Casati F. Chatbots as conversational healthcare services. IEEE Internet Comput. 2021;25(3):44-51. [CrossRef]
  22. Montenegro JLZ, da Costa CA, da Rosa Righi R. Survey of conversational agents in health. Expert Syst Appl. 2019;129(11):56-67. [CrossRef]
  23. Parmar P, Ryu J, Pandya S, Sedoc J, Agarwal S. Health-focused conversational agents in person-centered care: a review of apps. NPJ Digit Med. 2022;5:21. [FREE Full text] [CrossRef] [Medline]
  24. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. 2020;22(7):e16021. [FREE Full text] [CrossRef] [Medline]
  25. Milne-Ives M, de Cock C, Lim E, Shehadeh MH, de Pennington N, Mole G, et al. The effectiveness of artificial intelligence conversational agents in health care: systematic review. J Med Internet Res. 2020;22(10):e20346. [FREE Full text] [CrossRef] [Medline]
  26. Oh YJ, Zhang J, Fang ML, Fukuoka Y. A systematic review of artificial intelligence chatbots for promoting physical activity, healthy diet, and weight loss. Int J Behav Nutr Phys Act. 2021;18:160. [FREE Full text] [CrossRef] [Medline]
  27. Bickmore TW, Silliman RA, Nelson K, Cheng DM, Winter M, Henault L, et al. A randomized controlled trial of an automated exercise coach for older adults. J Am Geriatr Soc. 2013;61(10):1676-1683. [CrossRef] [Medline]
  28. Ellis T, Latham NK, DeAngelis TR, Thomas CA, Saint-Hilaire M, Bickmore TW. Feasibility of a virtual exercise coach to promote walking in community-dwelling persons with Parkinson disease. Am J Phys Med Rehabil. 2013;92(6):472-481. [FREE Full text] [CrossRef] [Medline]
  29. Graham SA, Pitter V, Hori JH, Stein N, Branch OH. Weight loss in a digital app-based diabetes prevention program powered by artificial intelligence. Digit Health. 2022;8:20552076221130619. [FREE Full text] [CrossRef] [Medline]
  30. Stein N, Brooks K. A fully automated conversational artificial intelligence for weight loss: longitudinal observational study among overweight and obese adults. JMIR Diabetes. 2017;2(2):e28. [FREE Full text] [CrossRef] [Medline]
  31. Baptista S, Wadley G, Bird D, Oldenburg B, Speight J, My Diabetes Coach Research Group. Acceptability of an embodied conversational agent for type 2 diabetes self-management education and support via a smartphone app: mixed methods study. JMIR Mhealth Uhealth. 2020;8(7):e17038. [FREE Full text] [CrossRef] [Medline]
  32. Guhl E, Althouse AD, Pusateri AM, Kimani E, Paasche-Orlow MK, Bickmore TW, et al. The atrial fibrillation health literacy information technology trial: pilot trial of a mobile health app for atrial fibrillation. JMIR Cardio. 2020;4(1):e17162. [FREE Full text] [CrossRef] [Medline]
  33. Gardiner P, Hempstead MB, Ring L, Bickmore T, Yinusa-Nyahkoon L, Tran H, et al. Reaching women through health information technology: the Gabby preconception care system. Am J Health Promot. 2013;27(3 Suppl):eS11-eS20. [FREE Full text] [CrossRef] [Medline]
  34. Jack BW, Bickmore T, Yinusa-Nyahkoon L, Reichert M, Julce C, Sidduri N, et al. Improving the health of young African American women in the preconception period using health information technology: a randomised controlled trial. Lancet Digit Health. 2020;2(9):e475-e485. [FREE Full text] [CrossRef] [Medline]
  35. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. [FREE Full text] [CrossRef] [Medline]
  36. Fulmer R, Joerin A, Gentile B, Lakerink L, Rauws M. Using psychological artificial intelligence (Tess) to relieve symptoms of depression and anxiety: randomized controlled trial. JMIR Ment Health. 2018;5(4):e64. [FREE Full text] [CrossRef] [Medline]
  37. Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res. 2021;23(1):e17828. [FREE Full text] [CrossRef] [Medline]
  38. Gabarron E, Larbi D, Denecke K, Årsand E. What do we know about the use of chatbots for public health? Stud Health Technol Inform. 2020;270:796-800. [CrossRef] [Medline]
  39. Barnett A, Savic M, Pienaar K, Carter A, Warren N, Sandral E, et al. Enacting 'more-than-human' care: clients' and counsellors' views on the multiple affordances of chatbots in alcohol and other drug counselling. Int J Drug Policy. 2021;94:102910. [FREE Full text] [CrossRef] [Medline]
  40. Palanica A, Flaschner P, Thommandram A, Li M, Fossat Y. Physicians' perceptions of chatbots in health care: cross-sectional web-based survey. J Med Internet Res. 2019;21(4):e12887. [FREE Full text] [CrossRef] [Medline]
  41. Sweeney C, Potts C, Ennis E, Bond R, Mulvenna MD, O’Neill S, et al. Can chatbots help support a person’s mental health? Perceptions and views from mental healthcare professionals and experts. ACM Trans Comput Healthc. 2021;2(3):25. [CrossRef]
  42. Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334-340. [CrossRef] [Medline]
  43. Sandelowski M. What's in a name? Qualitative description revisited. Res Nurs Health. 2010;33(1):77-84. [CrossRef] [Medline]
  44. O'Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245-1251. [FREE Full text] [CrossRef] [Medline]
  45. Bickmore TW, Mitchell SE, Jack BW, Paasche-Orlow MK, Pfeifer LM, O'Donnell J. Response to a relational agent by hospital patients with depressive symptoms. Interact Comput. 2010;22(4):289-298. [FREE Full text] [CrossRef] [Medline]
  46. Bickmore TW, Schulman D, Sidner C. Automated interventions for multiple health behaviors using conversational agents. Patient Educ Couns. 2013;92(2):142-148. [FREE Full text] [CrossRef] [Medline]
  47. Horsch CHG, Lancee J, Griffioen-Both F, Spruit S, Fitrianie S, Neerincx MA, et al. Mobile phone-delivered cognitive behavioral therapy for insomnia: a randomized waitlist controlled trial. J Med Internet Res. 2017;19(4):e70. [FREE Full text] [CrossRef] [Medline]
  48. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR Mhealth Uhealth. 2018;6(11):e12106. [FREE Full text] [CrossRef] [Medline]
  49. Lisetti C, Amini R, Yasavur U, Rishe N. I can help you change! an empathic virtual agent delivers behavior change health interventions. ACM Trans Manage Inf Syst. 2013;4(4):19. [CrossRef]
  50. Richards D, Caldwell P. Improving health outcomes sooner rather than later via an interactive website and virtual specialist. IEEE J Biomed Health Inform. 2018;22(5):1699-1706. [CrossRef] [Medline]
  51. Braun V, Clarke V. Using thematic analysis in psychology. Qualitative Res Psychol. 2006;3(2):77-101. [CrossRef]
  52. Braun V, Clarke V. Thematic Analysis: A Practical Guide. New York, NY. SAGE Publications; 2022.
  53. Lincoln YS, Guba EG. Naturalistic Inquiry. New York, NY. SAGE Publications; 1985.
  54. Korstjens I, Moser A. Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. 2018;24(1):120-124. [FREE Full text] [CrossRef] [Medline]
  55. Friesen E. The landscape of mental health services in rural Canada. Univ Tor Med J. 2019;96(2):47-52.
  56. Mental Health Commission of Canada. Rural and Remote Mental Health in Canada: Evidence Brief on Best and Promising Practices. Ottawa, Canada. Mental Health Commission of Canada; 2020.
  57. Novik N, McKee B, Pearce K. Social work practice and mental health services outside of urban settings. In: Jeffery B, Novik N, editors. Rural and Northern Social Work Practice: Canadian Perspectives. Regina, Canada. University of Regina; 2022:154-171.
  58. Kazantzis N, Dattilio FM. Definitions of homework, types of homework, and ratings of the importance of homework among psychologists with cognitive behavior therapy and psychoanalytic theoretical orientations. J Clin Psychol. 2010;66(7):758-773. [CrossRef] [Medline]
  59. Kazantzis N, Whittington C, Zelencich L, Kyrios M, Norton PJ, Hofmann SG. Quantity and quality of homework compliance: a meta-analysis of relations with outcome in cognitive behavior therapy. Behav Ther. 2016;47(5):755-772. [CrossRef] [Medline]
  60. Strunk DR. Homework. Cogn Behav Pract. 2022;29(3):560-563. [CrossRef]
  61. Goldsby E, Goldsby M, Neck CB, Neck CP. Under pressure: time management, self-leadership, and the nurse manager. Adm Sci. 2020;10(3):38. [CrossRef]
  62. Gordon CE, Borkan SC. Recapturing time: a practical approach to time management for physicians. Postgrad Med J. 2014;90(1063):267-272. [CrossRef] [Medline]
  63. Leis SJ, Anderson A. Time management strategies for new nurses. Am J Nurs. 2020;120(12):63-66. [CrossRef] [Medline]
  64. Khanam F, Langevin M, Savage K, Uppal S. Women Working in Paid Care Occupations. Canada. Statistics Canada; 2022.
  65. Canadian Institute for Health Information. Health workforce in Canada: overview. 2022. URL: https://www.cihi.ca/en/health-workforce-in-canada-overview [accessed 2024-03-01]

Edited by A Mavragani; submitted 26.05.23; peer-reviewed by Y Xi, S Baptista, S Wei; comments to author 05.01.24; revised version received 01.03.24; accepted 01.06.24; published 25.09.24.

Copyright

©A Luke MacNeill, Lillian MacNeill, Alison Luke, Shelley Doucet. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.09.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.