Published on 17.09.2024 in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/65527.
Considerations and Challenges in the Application of Large Language Models for Patient Complaint Resolution

Authors of this article:

Bin Wei1; Xin Hu1; XiaoRong Wu1

Letter to the Editor

The 1st Affiliated Hospital, Jiangxi Medical College, Nanchang University, Nanchang, China

Corresponding Author:

XiaoRong Wu, MD

The 1st Affiliated Hospital

Jiangxi Medical College

Nanchang University

No. 17 Yongwai Zheng Street

Donghu District

Nanchang, 330000

China

Phone: 86 13617093259

Email: wxr98021@126.com



We are writing to express our appreciation for the recent publication of the study titled “Performance of Large Language Models in Patient Complaint Resolution: Web-Based Cross-Sectional Survey” in the Journal of Medical Internet Research [1]. The authors provide an innovative perspective on the application of large language models (LLMs) like ChatGPT in resolving patient complaints, demonstrating their potential to enhance patient satisfaction through thoughtful and well-constructed responses. Although we greatly appreciate this study’s meticulous work and significant contributions, we would like to highlight additional considerations and potential limitations for future research.

First, the decision-making process of LLMs is often a “black box.” While these models can generate reasonable and satisfactory responses, the underlying logic and reasoning remain opaque [2]. This lack of transparency and interpretability could complicate the attribution of legal responsibility within health care institutions and potentially undermine patient trust. Additionally, when artificial intelligence (AI) handles sensitive patient information, even with deidentification measures, ensuring data security and privacy remains a critical concern. Therefore, as the application of LLMs expands, establishing appropriate regulatory frameworks and accountability mechanisms may be necessary.

Second, this study was conducted within Singapore’s English-speaking health care system, but patient complaints and expectations in other regions depend heavily on local cultural, legal, and social contexts. LLMs may struggle to capture and differentiate these subtle cultural nuances, potentially producing responses that are not fully appropriate or that even cause misunderstandings. Moreover, while AI models can generate responses that appear empathetic, such responses are based merely on the model’s imitation of language patterns rather than a true understanding of the patient’s psychological state and emotional needs [3,4]. In the tense environment of doctor-patient relationships in China, if patients perceive that a response is machine generated rather than genuinely empathetic, this could exacerbate their anxiety and dissatisfaction. In certain circumstances, using AI to handle patient complaints may lack the necessary human touch, thereby intensifying conflict between patients and health care providers.

Third, over time, LLMs need continuous updates to reflect the latest medical practices, policy changes, and societal expectations, which undoubtedly increases maintenance costs. If AI models generate responses based on incorrect or outdated medical information, they may unintentionally mislead patients; in critical medical decision-making and health guidance, this could be especially dangerous. Furthermore, levels of health care and approaches to treatment vary considerably across regions, and this variability presents a significant challenge for AI.

In conclusion, while LLMs like ChatGPT show great potential in enhancing patient communication and handling complaints, their practical application requires careful consideration of their suitability and the challenges they may present. Therefore, establishing robust regulatory frameworks, ethical guidelines, and stringent oversight mechanisms is crucial. This approach not only helps build trust between patients and health care providers but also contributes to creating a more patient-centered and efficient health care system.

Conflicts of Interest

None declared.

Editorial Notice

The corresponding author of “Performance of Large Language Models in Patient Complaint Resolution: Web-Based Cross-Sectional Survey” declined to respond to this letter.

  1. Yong LPX, Tung JYM, Lee ZY, Kuan WS, Chua MT. Performance of large language models in patient complaint resolution: web-based cross-sectional survey. J Med Internet Res. Aug 09, 2024;26:e56413. [FREE Full text] [CrossRef] [Medline]
  2. Bender EM, Gebru T, McMillan-Major A, Shmitchell S. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021. Presented at: FAccT '21; March 3-10, 2021; Virtual Event; p. 610-623. [CrossRef]
  3. D'Alfonso S, Santesteban-Echarri O, Rice S, Wadley G, Lederman R, Miles C, et al. Artificial intelligence-assisted online social therapy for youth mental health. Front Psychol. 2017;8:796. [FREE Full text] [CrossRef] [Medline]
  4. Hoermann S, McCabe KL, Milne DN, Calvo RA. Application of synchronous text-based dialogue systems in mental health interventions: systematic review. J Med Internet Res. Jul 21, 2017;19(8):e267. [FREE Full text] [CrossRef] [Medline]


Abbreviations

AI: artificial intelligence
LLM: large language model


Edited by T Leung, L Beri; this is a non–peer-reviewed article. Submitted 18.08.24; accepted 23.08.24; published 17.09.24.

Copyright

©Bin Wei, Xin Hu, XiaoRong Wu. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.09.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.