Viewpoint
Abstract
The integration of artificial intelligence (AI) into health communication systems has introduced a transformative approach to public health management, particularly during public health emergencies, capable of reaching billions through familiar digital channels. This paper explores the utility and implications of generalist conversational artificial intelligence (CAI): advanced AI systems trained on extensive datasets to handle a wide range of conversational tasks across various domains with human-like responsiveness. The specific focus is on the application of generalist CAI within messaging services, emphasizing its potential to enhance public health communication. We highlight the evolution and current applications of AI-driven messaging services, including their ability to provide personalized, scalable, and accessible health interventions. Specifically, we discuss the integration of large language models and generative AI into mainstream messaging platforms, which potentially outperform traditional information retrieval systems in public health contexts. We report a critical examination of the advantages of generalist CAI in delivering health information, illustrated by a case of its operationalization during the COVID-19 pandemic, and propose the strategic deployment of these technologies in collaboration with public health agencies. In addition, we address significant challenges and ethical considerations, such as AI biases, misinformation, privacy concerns, and the required regulatory oversight. We envision a future that leverages generalist CAI in messaging apps, proposing a multiagent approach to enhance the reliability and specificity of health communications.
We hope this commentary initiates the necessary conversations and research toward building evaluation approaches, adaptive strategies, and robust legal and technical frameworks to fully realize the benefits of AI-enhanced communications in public health, aiming to ensure equitable and effective health outcomes across diverse populations.
J Med Internet Res 2025;27:e69007. doi: 10.2196/69007
Introduction
Health communication and information dissemination are essential for global risk mitigation during public health emergencies. The COVID-19 pandemic highlighted the necessity for effective worldwide communication networks in support of public health agencies (eg, the World Health Organization [WHO] and the Global Public Health Intelligence Network) and, furthermore, the need for more integrated and global systems for timely warnings and responses to health crises [
]. For this purpose, messaging services have been vital tools for disseminating information, monitoring disease spread, and promoting informed health decisions [ ]. Historically, SMS- and app-based text messaging platforms have long been recognized for their potential to deliver scalable health interventions to diverse populations, exemplified by randomized controlled trials on substance abuse intervention and on promoting COVID-19 vaccination [ , ]. These examples illustrate the established value of messaging services in engaging populations and promoting health behaviors.

The emergence of artificial intelligence (AI) has brought innovative and personalized methods to health communications, with chatbots and AI-driven messaging services presenting personalized, timely, and internet-based health interventions [
]. More specifically, conversational artificial intelligence (CAI) interventions have exhibited significant positive effects on behavior change, such as smoking cessation, healthy eating, sleep quality, and physical activity [ , ]. CAI models have emerged to complete tasks through natural language using rule-based, hybrid, or unsupervised learning models [ , ]. However, although CAI has been widely used for personalized health interventions, its use from a public health perspective has yet to be studied.

During the pandemic, the emerging value of CAI was observed as a scalable, easy-to-use, and accessible dissemination tool [
- ]. In addition, the long-term value of CAI through chatbot tools and voice assistants in public health management has been emphasized [ , ]. A widely observed implementation was a WHO-deployed WhatsApp (Meta) chatbot that, as a preventative mechanism, disseminated critical health information to millions globally, providing multilingual support and real-time updates to address misinformation [ ]. However, none of the intelligent systems covered earlier anticipated the impact of generative AI at scale, where it can lead to highly accessible, decentralized, and scalable implementations. Recent applications of widely adopted generative AI models, large language models (LLMs), presented evidence of the effectiveness of generated responses in answering public health questions [ , ], contributing to the trend of using AI to augment the impact of public health interventions. Looking to the future, this impact could grow exponentially through wide-scale adoption and implementation across the population via integration into our daily communication tools (ie, messaging apps). We hypothesize that such LLM-based CAI services (namely, “Generalist CAI” from hereon), delivered through messaging apps, the most commonly used tools on a daily basis, may overtake other means of internet-based information seeking and public information dissemination and sharing mechanisms, toward improving public health communications.

Unlike previous implementations of CAI before the wide adoption of LLMs, generalist CAI models rely on generative AI (ie, LLMs) or foundation models that are trained on extensive and diverse datasets to perform a myriad of natural language tasks with human-like interaction and a broad understanding across various domains, including health care. These models transcend the limitations of their predecessors, which were constrained to specific tasks, by exhibiting proficiency in varied domains, from casual dialogue to expert-level engagements.
These models’ broad applicability is underpinned by their capacity to comprehend and respond in a human-like manner across different conversational contexts. As such, generalist CAI models are not only poised to enhance user interactions but are also likely to supplant multiple task-specific agents (multiagents), heralding a shift toward more unified and contextually aware conversational AI systems in diverse applications.
outlines the key features of generalist CAI.

Key features | Descriptions |
Natural language understanding and generation | They can understand a broad range of user prompts that can be expressed in various different ways, allowing a higher level of flexibility and ease-of-use. |
Generalist | They can converse about a large range of topics and transition between them smoothly. |
Dialog management | They can excel in multiturn conversations more effectively than their rule-based alternatives with the ability to respond to follow-up questions. |
Emotional intelligence | They can detect subtle cues in language that indicate a user’s emotional state, allowing the chatbot to provide empathetic responses. |
Personalization | Generalist CAI can extract user profile information and use it to generate tailored responses effectively. |
Multilingual support | They can support multiple languages, making them accessible to a wider audience. |
Scalability | They can handle large volumes of queries simultaneously, providing instant information on a wide range of public health issues. |
The Advantage of Generalist CAI in Messaging Services
AI has already transformed the ways in which individuals seek health information. Today, we observe that generalist CAI or LLM applications (eg, ChatGPT) simplify access to health advice [
], by breaking down barriers associated with traditional web searches.

The next step appears to be messaging tools with generalist CAI assistance, like WhatsApp, Signal (Signal Technology Foundation), iMessage (Apple Inc), and Telegram (Telegram FZ-LLC), which would offer a “familiar” platform to billions of people, with the potential to improve access to health information and services. Messaging apps’ simplicity in terms of user interface and their use of natural language inherently reduce digital literacy and accessibility barriers. Therefore, messaging services with generalist CAI can surpass previous implementations of task-specific CAI services in capabilities, user access, and adoption, lowering barriers to use and providing personalization through self-learning and conversation adaptation (
). For example, instead of just providing preprogrammed answers, a generalist CAI can engage in a dynamic dialogue to understand a user’s specific concerns, clarify ambiguities, and provide tailored advice based on the conversation’s context. Furthermore, mobile device-based dialogue services (eg, automatic speech recognition and text-to-speech) further improve the usability of the service through natural language conversations. This means that, in a public health emergency, the widespread use of apps like iMessage and WhatsApp (with billions of active users daily) can facilitate immediate access to personalized health advice, circumventing barriers to accessing urgent public health information. The integration with generalist CAI, as seen in Meta AI’s LLM-powered generalist CAI or personal assistant, positions these services favorably as a convenient tool for information seeking and health communication, potentially offering equitable access to personalized health information across diverse populations. To further extend these opportunities, small language models (eg, Phi-3) [ ] can be leveraged to build capable generalist CAI for individuals in remote areas or those with limited access to internet bandwidth, who could still benefit from text-based interactions with the CAI.

Envisioning Generalist CAI Use in Public Health Emergencies
As AI tools and applications rapidly grow and become embedded in our daily lives, it is necessary to plan and strategize for the use of generalist CAI services in public health emergencies. Given that current LLMs and CAI are trained on publicly available data at a massive scale, with less attention to information quality or specific domains like health care, we must proceed cautiously in promoting safe and reliable information pipelines over these messaging platforms for public health information sharing, communication, and dissemination. To improve CAI accuracy and reliability in public health communications, a multiagent approach could be used [
, ]. In this approach, specialized AI agents collaborate: some provide general health information, while others ensure compliance with health guidelines. We envision the integration of generalist CAI assistants into public health messaging services through such a multiagent approach to streamline future interventions. illustrates a 2-agent arrangement in which a CAI agent in a messaging app uses the public health information provided by the Centers for Disease Control and Prevention (CDC) to compose its response.
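To make this division of labor concrete, the following minimal sketch (in Python) illustrates one way such a 2-agent arrangement could behave: a guideline agent retrieves only vetted agency-provided guidance, and a conversation agent composes the user-facing reply, deferring to official sources when no vetted content exists. All data, function names, and guidance text here are hypothetical placeholders, not an actual CDC interface.

```python
from typing import Optional

# Hypothetical vetted knowledge base maintained by a public health agency.
VETTED_GUIDANCE = {
    "masking": "Wear a well-fitting mask in crowded indoor settings.",
    "vaccination": "Stay up to date with the recommended vaccine doses.",
}

def guideline_agent(topic: str) -> Optional[str]:
    """Agent 1: return vetted guidance for a topic, or None if none exists."""
    return VETTED_GUIDANCE.get(topic)

def conversation_agent(user_message: str, topic: str) -> str:
    """Agent 2: compose a user-facing reply grounded in vetted guidance."""
    guidance = guideline_agent(topic)
    if guidance is None:
        # Defer rather than improvise when no vetted content is available.
        return "I don't have vetted guidance on that yet; please consult official sources."
    return f"Thanks for asking about {topic}. {guidance}"

reply = conversation_agent("Should I wear a mask on the bus?", "masking")
```

The key design choice this sketch highlights is that the conversational layer never generates health guidance on its own; it only packages content supplied by the vetted agent, which is how a multiagent pipeline can constrain hallucination risk.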
Public Health Emergency Case: the COVID-19 Pandemic
During the COVID-19 pandemic, we observed that the dissemination of comprehensive and accurate information is crucial for educating the public and combating the pandemic [
]. This period highlighted the potential of basic chatbot applications in enhancing public health communications, especially in diverse and low-resource settings. As aforementioned, WHO released a chatbot over WhatsApp providing up-to-date COVID-19 information in multiple local languages across several low- and middle-income countries [ ]. In Nigeria, an SMS-based chatbot, offered by the Nigeria Centre for Disease Control with support from UNICEF (United Nations International Children’s Emergency Fund), provided localized guidance and timely information about the pandemic [ ]. Similarly, India released an app with chatbot features to engage rural and urban residents in COVID-19 safety practices and to check symptoms, receive updates, and access helplines [ ]. These implementations demonstrate the scalability and potential of digital conversational interventions in public health emergencies. With generalist CAI, personalized public health intervention can be achieved, bridging communication gaps and supporting public health agencies during emergencies.

A strategic plan for effective communication during a health crisis is required, leveraging a multiagent approach. In a partnership between public health agencies (eg, WHO and CDC) and technology providers, we may create an information dissemination pipeline during a public health crisis (
). For instance, by adhering to the CDC’s guidelines for communication and dissemination and using only vetted information sources [ ], generalist CAI agents can be designed to deliver messages that are accurate, empathetic, and action oriented. CDC-defined key elements for developing outbreak-related messages include expressing empathy, outlining clear actions, delineating what is known and unknown, explaining public health actions and their rationale, committing to ongoing communication, and guiding the public on where to find reliable information [ ]. For example, regarding the need to “outline clear actions,” a specialized agent could be programmed to ensure that every informational response includes actionable steps the user can take. This framework could be used to prompt the agents and ensure that the generated messages are fine-tuned to deliver the intended messages, not only informing and educating but also building trust and encouraging compliance with health advisories. Furthermore, aligning with WHO’s early action review guidelines, which are designed to optimize early detection of public health emergencies, such agents could be deployed to intervene in a timely manner, tailored to the target audience and in compliance with local or national guidelines across the globe [ ]. Ideally, a trained generalist CAI acting as a public health support agent could be simultaneously activated across populations, including low-resource countries and rural and urban areas, serving users in any preferred language through text or speech. In the long term, such agents can contribute to the knowledge of public health agencies, and the data they provide can help us learn more about the effectiveness of communications with real-time feedback from the public. Future research should focus on developing frameworks that facilitate the integration of multiagent CAI systems into existing public health infrastructures.
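As one illustrative (and deliberately simplistic) sketch of how the CDC key elements could be operationalized, the snippet below encodes them as a hypothetical system prompt and shows a toy compliance check for the “outline clear actions” element. The prompt wording and the keyword heuristic are our own assumptions; a real deployment would require far more robust validation than keyword matching.

```python
# The CDC key message elements, paraphrased into a hypothetical system prompt.
CDC_KEY_ELEMENTS_PROMPT = (
    "When answering public health questions: express empathy; outline clear "
    "actions the reader can take; state what is known and what is unknown; "
    "explain public health actions and their rationale; commit to ongoing "
    "communication; and point the reader to reliable sources of information."
)

# A toy keyword heuristic standing in for a real compliance-checking agent.
ACTION_CUES = ("you can", "please", "wear", "wash", "call", "visit", "schedule")

def outlines_clear_action(draft: str) -> bool:
    """Return True if the draft appears to contain an actionable phrase."""
    text = draft.lower()
    return any(cue in text for cue in ACTION_CUES)
```

In a multiagent pipeline, a check like this would gate the conversation agent’s draft: responses failing the check would be sent back for revision rather than delivered to the user.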
Some viable approaches could include interoperability with electronic health records, collaboration with public health databases and registries, using standardized communication protocols like Fast Healthcare Interoperability Resources (FHIR) [ ], embedding CAI agents into communication platforms preferred by health professionals, and establishing governance mechanisms to ensure ethical and efficient operation [ , ]. See the multimedia appendix for our brief experiments and observations on the opportunities and limitations of current generalist CAI applications in response to some emerging public health problems, including pandemic response, unmet social needs, mental health support, and vaccinations.
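To suggest what standards-based interoperability might look like in practice, the sketch below represents a public health advisory delivered by a CAI agent as an HL7 FHIR R4 Communication resource. The advisory text and the choice of fields shown are illustrative assumptions; only the resource type, status, category code system, and payload structure follow the FHIR specification.

```python
import json

# Hypothetical advisory, shaped as a FHIR R4 "Communication" resource so that
# a CAI interaction could be exchanged with FHIR-based public health systems.
advisory = {
    "resourceType": "Communication",
    "status": "completed",
    "category": [{
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/communication-category",
            "code": "alert",
        }]
    }],
    "payload": [{
        "contentString": "Updated COVID-19 guidance is available from your local health authority.",
    }],
}

# FHIR resources are typically exchanged as JSON over a REST API.
serialized = json.dumps(advisory)
```

Recording CAI-delivered messages in this form would let public health agencies audit what was communicated, to whom, and when, using the same tooling they already apply to other FHIR data.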
Risks and Ethical Considerations
The deployment of generalist CAI assistants in messaging apps presents multifaceted challenges, including risks of bias, misinformation, hallucinations, and ethical pitfalls, which have been observed earlier with Snapchat’s personal assistant [
]. The models behind these CAI assistants pose a critical challenge due to inherent biases arising from skewed or incomplete training data, flawed algorithmic design, or the reinforcement of societal prejudices [ ]. These biases risk distorting health communications and exacerbating health disparities [ , ]. For example, an earlier study [ ] revealed that a widely used health care algorithm exhibited significant racial bias by predicting costs instead of health needs, leading to unequal care allocation for Black patients. Such bias could pose a risk of generalist CAI providing less comprehensive (or even inaccurate) information to individuals from underrepresented racial or ethnic backgrounds. In addition, a culturally sensitive and inclusive design is important to mitigate the risk of inherent biases that may exist in the training data of generalist CAI. These issues underscore the importance of accountability, fairness, equity, and regulatory oversight [ ]. Moreover, ethical concerns extend beyond privacy to include user autonomy and the transparent use of data, necessitating clear guidelines and user consent. To support user autonomy, implementing verification mechanisms and ensuring that source data originate from authoritative health organizations can help maintain the trustworthiness of AI-generated information. Going one step further, a multiagent approach can help control CAI behavior with respect to user location and profile, ensuring that advice and data handling procedures comply with local rules and regulations. Even though messaging services are among the most accessible and widely used communication technologies, the FCC Affordable Connectivity Program or similar global programs can be used to address the digital divide and affordability issues [ ].
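One simple form such a verification mechanism could take is checking that sources cited in a generated reply resolve to an allowlist of authoritative health organizations. The sketch below shows this idea; the allowlist is illustrative, and a real system would maintain it in partnership with public health agencies.

```python
from urllib.parse import urlparse

# Illustrative allowlist of authoritative health organization domains.
AUTHORITATIVE_DOMAINS = {"who.int", "cdc.gov", "nhs.uk"}

def is_authoritative(url: str) -> bool:
    """Return True if the URL's host is an allowlisted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    # Match exact domains and subdomains, but not look-alike hosts.
    return any(host == d or host.endswith("." + d) for d in AUTHORITATIVE_DOMAINS)
```

A compliance agent could apply such a check to every citation before a reply is delivered, flagging or stripping links that fail it.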
While some messaging apps offer end-to-end encryption, the overall lack of governance or medicolegal compliance (eg, HIPAA [Health Insurance Portability and Accountability Act], HITECH [Health Information Technology for Economic and Clinical Health], or GDPR [General Data Protection Regulation] rules on processing sensitive data for public health information), unless integrated with telehealth services, raises significant privacy concerns, similar to those of standard text messaging services. Technical standards and ethical frameworks should be developed to ensure AI systems are transparent and fair [ ]. Further considerations include the following challenges: language diversity, technological infrastructure, internet connectivity, integration with existing systems, data privacy and security, bias and cultural sensitivity, and stakeholder engagement.
The contentious nature of AI governance and accreditation of service providers for multiagent AI services might require a legal infrastructure as much as a technical one, to reduce perceived legal risks and liabilities with government agencies accrediting private sector tools. However, the recent initiative by the Biden Administration to form task forces aimed at shaping policies for AI in health care signals a promising direction for overcoming these hurdles in the United States [
], suggesting that improvements in the medicolegal landscape could pave the way for safer AI implementations in public health communication [ ]. Internationally, regulatory bodies are also beginning to take similar steps, aiming for a cohesive global response to AI challenges [ ]. It is crucial to address how these technologies might disproportionately affect marginalized groups, ensuring inclusive and equitable AI development. The long-term societal impacts, such as the erosion of public trust through AI missteps, must also be considered in developing sustainable AI strategies. Engaging a broad spectrum of stakeholders in AI discussions can enhance the legitimacy and effectiveness of governance structures. This evolving scenario highlights the critical need for a balanced approach to harnessing AI’s potential while addressing its ethical, legal, and social challenges.

Conclusions and Future Directions
AI-enhanced messaging apps hold significant promise for advancing public health by improving access to health information, supporting health behavior change, and addressing diverse community needs. Their scalability and adaptability have already demonstrated impact during public health crises, such as the COVID-19 pandemic, where CAI systems facilitated rapid and multilingual information dissemination. Generalist CAI, with its ability to handle diverse conversational tasks and adapt to user needs with its human-like interaction capability, represents a transformative opportunity to create equitable and accessible public health communication tools. However, realizing their full potential requires addressing critical challenges, including biases in training data, risks of misinformation, ethical concerns around equity and transparency, and the need for robust legal and technical frameworks. To unlock these opportunities, AI messaging services must be developed and continually updated to address evolving requirements on fairness, accountability, and culturally sensitive design, ensuring they uphold the principles of reliability, equity, and justice in public health.
Key research priorities should include evaluating the efficacy of multiagent systems, understanding user interaction and trust dynamics, addressing biases, assessing impacts on health equity, and exploring innovative applications beyond basic information delivery. Observational studies can help identify how individuals engage with CAI systems and guide improvements in public health applications. To ensure equitable access, public health agencies should advocate for culturally sensitive and linguistically diverse CAI systems. Collaborations with local organizations and the establishment of global repositories of vetted health information are critical to minimizing misinformation and aligning CAI outputs with public health guidelines. Eventually, public health agencies and researchers can create a roadmap for leveraging CAI effectively and equitably, paving the way for a transformative approach to public health communication. Continued research on user engagement and the optimization of AI models for public health is essential to fully leverage the capabilities of these ubiquitous tools.
Acknowledgments
We acknowledge Kelly Kelleher for his valuable feedback. We used the generative AI tool (ChatGPT by OpenAI) to refine phrases and improve the clarity of some statements.
Conflicts of Interest
ES is an Associate Editor of the Journal of Medical Internet Research.
Examples of generalist CAI (conversational artificial intelligence) responses to public health issues.
PDF File (Adobe PDF File), 9592 KB

References
- Meckawy R, Stuckler D, Mehta A, Al-Ahdal T, Doebbeling BN. Effectiveness of early warning systems in the detection of infectious diseases outbreaks: a systematic review. BMC Public Health. 2022;22(1):2216. [FREE Full text] [CrossRef] [Medline]
- Hall AK, Cole-Lewis H, Bernhardt JM. Mobile text messaging for health: a systematic review of reviews. Annu Rev Public Health. 2015;36:393-415. [FREE Full text] [CrossRef] [Medline]
- Patel MS, Fogel R, Winegar AL, Horseman C, Ottenbacher A, Habash S, et al. Effect of text message reminders and vaccine reservations on adherence to a health system covid-19 vaccination policy: a randomized clinical trial. JAMA Netw Open. 2022;5(7):e2222116. [FREE Full text] [CrossRef] [Medline]
- Marcolino MS, Oliveira JAQ, D'Agostino M, Ribeiro AL, Alkmim MBM, Novillo-Ortiz D. The impact of mHealth interventions: systematic review of systematic reviews. JMIR Mhealth Uhealth. 2018;6(1):e23. [FREE Full text] [CrossRef] [Medline]
- Lauffenburger JC, Yom-Tov E, Keller PA, McDonnell ME, Crum KL, Bhatkhande G, et al. The impact of using reinforcement learning to personalize communication on medication adherence: findings from the REINFORCE trial. NPJ Digit Med. 2024;7(1):39. [FREE Full text] [CrossRef] [Medline]
- Singh B, Olds T, Brinsley J, Dumuid D, Virgara R, Matricciani L, et al. Systematic review and meta-analysis of the effectiveness of chatbots on lifestyle behaviours. NPJ Digit Med. 2023;6(1):118. [FREE Full text] [CrossRef] [Medline]
- Bendotti H, Lawler S, Chan GCK, Gartner C, Ireland D, Marshall HM. Conversational artificial intelligence interventions to support smoking cessation: a systematic review and meta-analysis. Digit Health. 2023;9:20552076231211634. [FREE Full text] [CrossRef] [Medline]
- Kusal S, Patil S, Choudrie J, Kotecha K, Mishra S, Abraham A. AI-based conversational agents: a scoping review from technologies to future directions. IEEE Access. 2022;10:92337-92356. [FREE Full text] [CrossRef]
- Laymouna M, Ma Y, Lessard D, Schuster T, Engler K, Lebouché B. Roles, users, benefits, and limitations of chatbots in health care: rapid review. J Med Internet Res. 2024;26:e56930. [FREE Full text] [CrossRef] [Medline]
- Amiri P, Karahanna E. Chatbot use cases in the Covid-19 public health response. J Am Med Inform Assoc. 2022;29(5):1000-1010. [FREE Full text] [CrossRef] [Medline]
- Fan Z, Yin W, Zhang H, Wang D, Fan C, Chen Z, et al. COVID-19 information dissemination using the wechat communication index: retrospective analysis study. J Med Internet Res. 2021;23(7):e28563. [FREE Full text] [CrossRef] [Medline]
- Zhou Y, Zhang A, Liu X, Tan X, Miao R, Zhang Y, et al. Protecting public's well-being against the COVID-19 infodemic: the role of trust in information sources and rapid dissemination and transparency of information over time. Front Public Health. 2023;11:1142230. [FREE Full text] [CrossRef] [Medline]
- Sezgin E, Huang Y, Ramtekkar U, Lin S. Readiness for voice assistants to support healthcare delivery during a health crisis and pandemic. NPJ Digit Med. 2020;3:122. [FREE Full text] [CrossRef] [Medline]
- Miner AS, Laranjo L, Kocaballi AB. Chatbots in the fight against the COVID-19 pandemic. NPJ Digit Med. 2020;3:65. [FREE Full text] [CrossRef] [Medline]
- How WhatsApp can help you stay connected during the COVID-19 pandemic. WhatsApp. URL: https://www.whatsapp.com/coronavirus [accessed 2024-09-28]
- Ayers JW, Zhu Z, Poliak A, Leas EC, Dredze M, Hogarth M, et al. Evaluating artificial intelligence responses to public health questions. JAMA Netw Open. 2023;6(6):e2317517. [FREE Full text] [CrossRef] [Medline]
- Sezgin E. Redefining virtual assistants in health care: the future with large language models. J Med Internet Res. 2024;26:e53225. [FREE Full text] [CrossRef] [Medline]
- Haupt CE, Marks M. AI-generated medical advice-GPT and beyond. JAMA. 2023;329(16):1349-1350. [CrossRef] [Medline]
- Phi open models - small language models. URL: https://azure.microsoft.com/en-us/products/phi [accessed 2024-12-20]
- Wu Q, Bansal G, Zhang J, Wu Y, Li B, Zhu E, et al. AutoGen: enabling Next-Gen LLM applications via multi-agent conversation. arXiv:2308.08155. 2023. [FREE Full text]
- Multi-agent Conversation Framework. URL: https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat/ [accessed 2024-09-28]
- NCDC and UNICEF launch chatbot to combat COVID-19 misinformation in Nigeria. URL: https://www.unicef.org/nigeria/press-releases/ncdc-and-unicef-launch-chatbot-combat-covid-19-misinformation-nigeria [accessed 2025-01-07]
- Sehgal P, Jain V. Fighting pandemic: the mobile application way (a case of the Aarogya Setu App). Emerging Economies Cases Journal. 2023;4(6):251660422211470. [FREE Full text] [CrossRef]
- Tumpey AJ, Daigle D, Nowak G. Communicating during an outbreak or public health investigation. In: The CDC Field Epidemiology Manual. Oxford, England. Oxford University Press; 2019:243-260.
- Guidance and tools for conducting an early action review (EAR): rapid performance improvement for outbreak detection and response. WHO. URL: https://www.who.int/publications/i/item/WHO-WPE-HSP-CER-2023.1 [accessed 2024-03-02]
- Overview - FHIR v5.0.0. URL: https://www.hl7.org/fhir/overview.html [accessed 2024-10-01]
- Liao F, Adelaine S, Afshar M, Patterson BW. Governance of clinical AI applications to facilitate safe and equitable deployment in a large health system: key elements and early successes. Front Digit Health. 2022;4:931439. [FREE Full text] [CrossRef] [Medline]
- Boch S, Sezgin E, Lin Linwood S. Ethical artificial intelligence in paediatrics. Lancet Child Adolesc Health. 2022;6(12):833-835. [CrossRef] [Medline]
- McCallum S. Snapchat: Snap AI chatbot 'may risk children's privacy'. BBC News. URL: https://www.bbc.com/news/technology-67027282 [accessed 2024-03-08]
- Haltaufderheide J, Ranisch R. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). NPJ Digit Med. 2024;7(1):183. [FREE Full text] [CrossRef] [Medline]
- Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med. 2023;6(1):113. [FREE Full text] [CrossRef] [Medline]
- Timmons AC, Duong JB, Simo Fiallo N, Lee T, Vo HPQ, Ahle MW, et al. A call to action on assessing and mitigating bias in artificial intelligence applications for mental health. Perspect Psychol Sci. 2023;18(5):1062-1096. [FREE Full text] [CrossRef] [Medline]
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. [CrossRef] [Medline]
- Assistant Secretary for Public Affairs (ASPA). Guiding principles help health care community address potential bias resulting from algorithms. US Department of Health and Human Services. URL: https://www.hhs.gov/about/news/2023/12/15/guiding-principles-help-healthcare-community-address-potential-bias-resulting-from-algorithms.html [accessed 2024-03-02]
- Affordable connectivity program. URL: https://www.fcc.gov/acp [accessed 2024-03-09]
- Ashok M, Madan R, Joha A, Sivarajah U. Ethical framework for artificial intelligence and digital technologies. Int J Inf Manag. 2022;62(2):102433. [FREE Full text] [CrossRef]
- Zhang X, Ogueji K, Ma X, Lin J. Towards best practices for training multilingual dense retrieval models. arXiv:2204.02363. 2022. [FREE Full text]
- Nagarajan R, Kondo M, Salas F, Sezgin E, Yao Y, Klotzman V, et al. Economics and equity of large language models: health care perspective. J Med Internet Res. 2024;26:e64226. [FREE Full text] [CrossRef] [Medline]
- van Zyl C, Badenhorst M, Hanekom S, Heine M. Unravelling 'low-resource settings': a systematic scoping review with qualitative content analysis. BMJ Glob Health. 2021;6(6):e005190. [FREE Full text] [CrossRef] [Medline]
- Prompt caching with Claude. URL: https://www.anthropic.com/news/prompt-caching [accessed 2024-10-01]
- Balch JA, Ruppert M, Loftus T, Guan Z, Ren Y, Upchurch G, et al. Machine learning-enabled clinical information systems using fast healthcare interoperability resources data standards: scoping review. JMIR Med Inform. 2023;11:e48297. [FREE Full text] [CrossRef] [Medline]
- May R, Denecke K. Security, privacy, and healthcare-related conversational agents: a scoping review. Inform Health Soc Care. 2022;47(2):194-210. [CrossRef] [Medline]
- Beatty S. Tiny but mighty: the Phi-3 small language models with big potential. Source. URL: https://news.microsoft.com/source/features/ai/the-phi-3-small-language-models-with-big-potential/ [accessed 2024-10-01]
- Xue J, Wang Y, Wei C, Liu X, Woo J, Kuo C. Bias and fairness in Chatbots: an overview. arXiv:2309.08836. 2023. [FREE Full text]
- Wan Y, Wang W, He P, Gu J, Bai H, Lyu M. BiasAsker: measuring the bias in conversational AI system. ACM; 2023. Presented at: Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering; 2023 Nov 30; San Francisco, CA, USA.
- Landers RN, Behrend TS. Auditing the AI auditors: a framework for evaluating fairness and bias in high stakes AI predictive models. Am Psychol. 2023;78(1):36-49. [CrossRef] [Medline]
- Motta I, Quaresma M. Increasing transparency to design inclusive Conversational Agents (CAs): perspectives and open issues. ACM; 2023. Presented at: Proceedings of the 5th International Conference on Conversational User Interfaces; 2023 July 19-21; Eindhoven, Netherlands.
- Nadarzynski T, Knights N, Husbands D, Graham CA, Llewellyn CD, Buchanan T, et al. Achieving health equity through conversational AI: a roadmap for design and implementation of inclusive chatbots in healthcare. PLOS Digit Health. 2024;3(5):e0000492. [FREE Full text] [CrossRef] [Medline]
- Fact sheet: Biden-Harris administration announces key AI actions following President Biden's landmark Executive Order. The White House. URL: https://www.whitehouse.gov/briefing-room/statements-releases/2024/01/29/fact-sheet-biden-harris-administration-announces-key-ai-actions-following-president-bidens-landmark-executive-order/ [accessed 2024-03-03]
- Chin MH, Afsar-Manesh N, Bierman AS, Chang C, Colón-Rodríguez CJ, Dullabh P, et al. Guiding principles to address the impact of algorithm bias on racial and ethnic disparities in health and health care. JAMA Netw Open. 2023;6(12):e2345050. [FREE Full text] [CrossRef] [Medline]
- Cha S. Towards an international regulatory framework for AI safety: lessons from the IAEA’s nuclear safety regulations. Humanit Soc Sci Commun. 2024;11(1):506. [CrossRef]
Abbreviations
AI: artificial intelligence |
CAI: conversational artificial intelligence |
CDC: Centers for Disease Control and Prevention |
GDPR: General Data Protection Regulation |
HIPAA: Health Insurance Portability and Accountability Act |
HITECH: Health Information Technology for Economic and Clinical Health |
LLM: large language model |
UNICEF: United Nations International Children’s Emergency Fund |
WHO: World Health Organization |
Edited by A Mavragani; submitted 19.11.24; peer-reviewed by B Davis, J John Thayil; comments to author 15.12.24; revised version received 20.12.24; accepted 21.12.24; published 20.01.25.
Copyright©Emre Sezgin, Ahmet Baki Kocaballi. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 20.01.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.