Original Paper
Abstract
Background: Patient follow-up is an essential part of hospital ward management. With the development of deep learning algorithms, individual follow-up assignments might be completed by artificial intelligence (AI). We developed an AI-assisted follow-up conversational agent that can simulate the human voice and select an appropriate follow-up time for quantitative, automatic, and personalized patient follow-up. Patient feedback and voice information could be collected and converted into text data automatically.
Objective: The primary objective of this study was to compare the cost-effectiveness of AI-assisted follow-up to manual follow-up of patients after surgery. The secondary objective was to compare the feedback from AI-assisted follow-up to feedback from manual follow-up.
Methods: The AI-assisted follow-up system was adopted in the Orthopedic Department of Peking Union Medical College Hospital in April 2019. A total of 270 patients were followed up through this system. Prior to that, 2656 patients were followed up by phone calls manually. Patient characteristics, telephone connection rate, follow-up rate, feedback collection rate, time spent, and feedback composition were compared between the two groups of patients.
Results: There was no statistically significant difference in age, gender, or disease between the two groups. There was no significant difference in telephone connection rate (manual: 2478/2656, 93.3%; AI-assisted: 249/270, 92.2%; P=.50) or successful follow-up rate (manual: 2301/2478, 92.9%; AI-assisted: 231/249, 92.8%; P=.96) between the two groups. The time spent on 100 patients in the manual follow-up group was about 9.3 hours. In contrast, the time spent on the AI-assisted follow-up was close to 0 hours. The feedback rate in the AI-assisted follow-up group was higher than that in the manual follow-up group (manual: 68/2656, 2.5%; AI-assisted: 28/270, 10.3%; P<.001). The composition of feedback was different in the two groups. Feedback from the AI-assisted follow-up group mainly included nursing, health education, and hospital environment content, while feedback from the manual follow-up group mostly included medical consultation content.
Conclusions: The effectiveness of AI-assisted follow-up was not inferior to that of manual follow-up. Human resource costs are saved by AI. AI can help obtain comprehensive feedback from patients, although its depth and pertinence of communication need to be improved.
doi:10.2196/16896
Introduction
Artificial intelligence (AI) is a system that can correctly interpret external data, learn from such data, and flexibly apply the acquired knowledge to achieve specific tasks and goals. With the dramatic improvement in computing power for processing big data, AI has become ubiquitous in our daily lives over the past 50 years. In the past several years, the application prospects of AI in surgery, radiology, dermatology, and oncology have become highly promising [ - ].
Telemedicine is a medical method used to provide clinical and educational services for remote areas; information and communication technology are used to transmit medical information [ , ]. Telemedicine attempts to overcome challenges in health service delivery due to distance, time, and rough terrain by improving the cost-effectiveness and accessibility of health services in both developing and developed regions [ ]. It is an “open and evolving science that incorporates advancements in new technologies and adapts to the changing health demands and social environments” [ ]. With the growth of AI and big data analytics, the scope and capability of telemedicine have expanded in recent years. Current telemedicine applications can be divided into four categories: patient monitoring, medical information technology, AI-assisted diagnosis, and information analysis collaboration [ ]. With the assistance of AI, telemedicine might be an effective method for disease assessment, diagnosis, management, and monitoring [ ], especially for chronic diseases [ - ], skin diseases [ ], and postoperative follow-up care [ ].
Postoperative follow-up is an essential part of orthopedic surgery. Medical institutions can provide service for discharged postoperative patients through follow-up. Traditional methods include phone calls, emails, home visits, and reexamination at the clinic; all of these methods require substantial medical resources. With the development of AI and telemedicine, an increasing share of follow-up work can be completed by computers or mobile phones.
In this paper, we describe an AI-assisted system for the follow-up of patients after surgery. The study’s aim was to compare the cost-effectiveness and feedback composition of the AI-assisted system with a traditional method for postoperative follow-up.
Methods
Ethics Approval and Consent to Participate
This study was approved by the Research and Ethics Institutional Committee of Peking Union Medical College Hospital (PUMCH).
Patient Enrollment and Data Collection
The AI-assisted follow-up system was launched in the Orthopedic Department of PUMCH in 2019. We enrolled 270 patients who had undergone orthopedic surgery at PUMCH from April 2019 to May 2019. This group was defined as the AI-assisted follow-up group because all of their postoperative follow-up was completed by the AI-assisted follow-up system. A total of 2656 patients who had undergone orthopedic surgery at PUMCH from April 2018 to March 2019 were enrolled as the manual follow-up group; their postoperative follow-up was completed by manual phone calls. Patient characteristics for both groups, including gender, age, and disease type, were collected from the hospital information management system.
AI-Assisted Follow-Up System
A block diagram summarizes the workflow of the AI-assisted follow-up system. The system obtained baseline information for each discharged patient, including the ID number, gender, age, discharge date, diagnosis, telephone number, and caller location. The system called patients via automated speech telephony in batches from 8:30 AM to 8:30 PM every day, allowing hundreds of calls to be made daily. Interactions between the system and patients were based on machine learning, speech recognition, spoken language understanding, and human voice simulation technology. Communication contents included patient satisfaction with the hospital environment, nursing, and health education; wound recovery; functional training; postoperative complications; and other surgery-related medical consulting. The system was able to identify dialects from different parts of China via speech recognition technology, and voice information was converted into text in real time. A transcription of a sample dialogue that was automatically converted into text is shown below. A report, shown in the table below, was generated and automatically uploaded to the cloud afterward. Surgeons and nurses could review the report and respond to patient feedback, if necessary.
Transcription of a sample dialogue.
System: Hello, this is Peking Union Medical College Hospital. Could you please take a few minutes to complete a follow-up survey? Thank you!
Patient: OK.
System: Our follow-up questions mainly cover three aspects: the ward environment, health education, and nursing. All of the following questions are rated from 0 to 5, with 5 being the most satisfactory and 1 being the worst. We will record your questions at the end of the follow-up. Would you please help us complete the entire survey?
Patient: Yes, no problem.
System: What is your score for the quiet and tidy environment of the ward? The full score is 5 points.
Patient: Well, give it a five.
System: OK, how satisfied are you with the ward environment introduction, health guidance, and the matters needing attention after discharge given by the nurses during your stay in the hospital? Would you please rate them?
Patient: Well, give it a five.
System: OK, please rate the nurses' sense of responsibility, cooperation, injection technique, regular patrols, response speed, and other nursing work.
Patient: Well, five points.
System: OK, what complaints do you have about this hospitalization?
Patient: Perhaps the most unsatisfactory part is that the doctors are too busy and have too few opportunities to communicate with patients.
System: Do you have any other questions for feedback or consultation?
Patient: No.
System: Thank you very much for your cooperation. We have recorded your problems and will reply to you as soon as possible. Please keep your mobile phone line free and remember to attend your reexamination on time.
Parameter | Information
Basic information | Patient name, ID number, gender, age, discharge date, diagnosis, and caller location^a
Dialogue result | System hang up after dialogue
Number of dials | 1
Call duration (s) | 97
Question 1 score | 5
Question 2 score | 5
Question 3 score | 5
Total score | 15
Feedback question | Doctors are too busy to communicate with patients
Follow-up date | 2019/6/24
^a Basic information is automatically extracted from the hospital information management system.
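The dialogue above follows a fixed survey script (three scored questions plus an open-ended feedback prompt), and each completed call produces a structured report like the one in the table. The following is a minimal sketch of how such a script and report record might be represented; the paper does not describe the system's actual implementation, and all names and fields here are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical survey script mirroring the transcribed dialogue above:
# three 0-5 satisfaction questions followed by an open-ended feedback prompt.
SURVEY_QUESTIONS = [
    "Satisfaction with the quiet and tidy environment of the ward (0-5)",
    "Satisfaction with health education given by the nurses (0-5)",
    "Satisfaction with nursing work: responsibility, injection technique, "
    "regular patrols, response speed (0-5)",
]

@dataclass
class FollowUpReport:
    """One structured report per completed call; fields mirror the report
    table above (the real schema used by the system is not published)."""
    patient_id: str
    dialogue_result: str          # e.g., "System hang up after dialogue"
    number_of_dials: int
    call_duration_s: int
    question_scores: List[int]    # one 0-5 score per survey question
    feedback: Optional[str]       # free-text feedback, if any
    follow_up_date: date

    @property
    def total_score(self) -> int:
        return sum(self.question_scores)

# Example record corresponding to the transcript and report table above.
report = FollowUpReport(
    patient_id="example-001",
    dialogue_result="System hang up after dialogue",
    number_of_dials=1,
    call_duration_s=97,
    question_scores=[5, 5, 5],
    feedback="Doctors are too busy to communicate with patients",
    follow_up_date=date(2019, 6, 24),
)
print(report.total_score)  # 15
```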
Manual Follow-Up
Follow-up with the control group was performed by phone. Calls were made one by one, depending on the availability of human resources. The communication contents were consistent with those of the AI-assisted follow-up group. Reports were recorded and uploaded manually. Feedback from patients was recorded by the operator and reported to surgeons and nurses.
Evaluation Indicators
Evaluation indicators of follow-up included the telephone connection rate, follow-up rate, feedback collection rate, and session duration. Telephone connection rate = (number of effective follow-ups + number of invalid follow-ups) / number of patients called × 100%. Follow-up rate = number of effective follow-ups / (number of effective follow-ups + number of invalid follow-ups) × 100%. Feedback collection rate = number of patients with effective feedback / number of effective follow-ups × 100%. Effective follow-up was defined as complete collection of the data listed in the report table above (excluding the feedback question parameter), while invalid follow-up was defined as absent or incomplete collection of those data. Effective feedback included patient feedback about nursing, health education, the hospital environment, and medical consulting. Moreover, feedback content had to be specific and constructive, such as dissatisfaction with hospital food or postoperative wound rupture.
Statistical Methods
Data analysis was performed using SPSS statistical software (version 23.0; IBM Corp). The Pearson chi-square test or Fisher exact test was used to compare categorical variables (age, gender, medical condition, telephone connection rate, follow-up rate, and feedback collection rate). The normality of continuous variables (session duration) was tested with a modified Kolmogorov-Smirnov test. Unpaired t tests were performed for variables following a normal distribution. All tests were two-sided, and P<.05 was considered statistically significant.
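As an illustration of the indicators and comparisons described above, the sketch below computes the rates from raw counts and applies an uncorrected Pearson chi-square test to a 2x2 contingency table, which closely matches the statistics reported later in the Results. It is not the study's analysis code (the analysis was performed in SPSS), and the function names are hypothetical.

```python
import numpy as np
from scipy import stats

def telephone_connection_rate(n_effective: int, n_invalid: int, n_called: int) -> float:
    """(Effective follow-ups + invalid follow-ups) / patients called."""
    return (n_effective + n_invalid) / n_called

def follow_up_rate(n_effective: int, n_invalid: int) -> float:
    """Effective follow-ups / (effective + invalid follow-ups)."""
    return n_effective / (n_effective + n_invalid)

def feedback_collection_rate(n_feedback: int, n_effective: int) -> float:
    """Patients with effective feedback / effective follow-ups."""
    return n_feedback / n_effective

def compare_proportions(event_a: int, total_a: int, event_b: int, total_b: int):
    """Pearson chi-square on the 2x2 table of event/non-event counts.

    correction=False gives the uncorrected Pearson statistic; the Fisher exact
    test (stats.fisher_exact) would be substituted when expected counts are small.
    """
    table = np.array([[event_a, total_a - event_a],
                      [event_b, total_b - event_b]])
    chi2, p, _, _ = stats.chi2_contingency(table, correction=False)
    return chi2, p

def compare_durations(durations_a, durations_b):
    """Unpaired, two-sided t test for session duration, applied after checking
    approximate normality (the study used a modified Kolmogorov-Smirnov test)."""
    return stats.ttest_ind(durations_a, durations_b)

# Example: follow-up rate comparison, using counts reported in the Results
# (manual: 2301 effective of 2478 connected; AI-assisted: 231 of 249).
print(follow_up_rate(2301, 2478 - 2301))          # ~0.929
print(follow_up_rate(231, 249 - 231))             # ~0.928
print(compare_proportions(2301, 2478, 231, 249))  # chi-square ~0.003, P ~.96
```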
Results
Patient Characteristics
Patient characteristics for the two groups are shown in the table below. Group differences in age, gender, and disease category were not statistically significant.
Characteristics | Manual follow-up group, n (%) | AI-assisted^a follow-up group, n (%) | Chi-square (df) | P value
Number of patients | 2656 (100) | 270 (100) | |
Age (years) | | | 0.8 (3) | .86
  <50 | 805 (30.3) | 77 (28.5) | |
  50-59 | 480 (18.1) | 47 (17.4) | |
  60-69 | 861 (32.4) | 94 (34.8) | |
  ≥70 | 510 (19.2) | 52 (19.3) | |
Gender | | | 1.7 (1) | .19
  Male | 994 (37.4) | 112 (41.5) | |
  Female | 1662 (62.6) | 158 (58.5) | |
Disease category | | | 4.1 (2) | .13
  Spinal disease | 1676 (63.1) | 154 (57.0) | |
  Joint disease | 938 (35.3) | 110 (40.7) | |
  Other | 42 (1.6) | 6 (2.2) | |
^a AI-assisted: artificial intelligence–assisted.
Cost-Effectiveness
As shown in the table below, there was no significant difference in either the telephone connection rate (manual: 2478/2656, 93.3%; AI-assisted: 249/270, 92.2%; P=.50) or the follow-up rate (manual: 2301/2478, 92.9%; AI-assisted: 231/249, 92.8%; P=.96) between the two groups. However, the feedback collection rate in the AI-assisted follow-up group was significantly higher than that in the manual follow-up group (manual: 68/2656, 2.5%; AI-assisted: 28/270, 10.3%; P<.001). The approximate session duration in the manual follow-up group varied from 60 to 180 seconds, according to interviews with the operators. An extra 120 to 180 seconds was also necessary for recording and uploading the material. Therefore, the average time spent on each patient in the manual follow-up group was approximately 3-6 minutes. We recorded the total time that the operators spent following up 100 randomly selected patients (9.3 hours). The average session duration in the AI-assisted follow-up group was 87.7 (SD 39.5) seconds. However, none of this time required human resources, and the AI system generated data reports automatically. In this way, the time spent on AI-assisted follow-up was close to 0 hours. Compared with manual follow-up, the AI-assisted follow-up sessions were also of shorter duration.
Indicators | Manual follow-up | AI-assisted^a follow-up | Chi-square (df) | P value
Telephone connection | | | 0.4 (1) | .50
  Number of effective follow-ups + invalid follow-ups | 2478 | 249 | |
  Number of patients called | 2656 | 270 | |
  Rate, % | 93.3 | 92.2 | |
Follow-up | | | 0.003 (1) | .96
  Number of effective follow-ups | 2301 | 231 | |
  Number of effective follow-ups + invalid follow-ups | 2478 | 249 | |
  Rate, % | 92.9 | 92.8 | |
Feedback collection | | | 47.1 (1) | <.001
  Number of patients with effective feedback | 68 | 28 | |
  Number of patients called | 2656 | 270 | |
  Rate, % | 2.5 | 10.3 | |
Time spent, hours per 100 patients | 9.3 | 0 | N/A^b | N/A
^a AI-assisted: artificial intelligence–assisted.
^b N/A: not applicable.
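The categorical comparisons in the table above can be reproduced from the reported counts. The short check below assumes an uncorrected Pearson chi-square test, which closely matches the reported statistics; the variable names are illustrative.

```python
from scipy.stats import chi2_contingency

# (indicator, (events, total) for the manual group, (events, total) for the AI-assisted group)
comparisons = [
    ("Telephone connection", (2478, 2656), (249, 270)),
    ("Follow-up",            (2301, 2478), (231, 249)),
    ("Feedback collection",  (68, 2656),   (28, 270)),
]

for label, (ev_m, tot_m), (ev_a, tot_a) in comparisons:
    # 2x2 table of event vs non-event counts for the two groups
    table = [[ev_m, tot_m - ev_m], [ev_a, tot_a - ev_a]]
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"{label}: chi-square={chi2:.3f} (df={dof}), P={p:.3g}")

# Approximate output: 0.45 (P=.50), 0.003 (P=.96), 47.1 (P<.001),
# matching the values reported in the table above.
```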
Feedback Composition
The composition of feedback content differed between the two groups. In the manual follow-up group, 87.0% of the feedback was related to medical consultation, including functional training, wound recovery, medication usage, and postoperative complications (29.5%, 42.7%, 4.5%, and 10.3%, respectively). Feedback related to nursing, health education, and the hospital environment accounted for 7.3%, 1.4%, and 4.3%, respectively. In the AI-assisted follow-up group, only 10.7% of the feedback was related to medical consultation; most of the feedback concerned the hospital environment, nursing, and health education (53.6%, 28.6%, and 7.1%, respectively).
Discussion
Principal Findings
In this paper, we put forward a new methodology for collecting postoperative follow-up data from patients who had undergone orthopedic surgery. The AI-assisted follow-up system is intended to improve follow-up efficiency via machine learning, speech recognition, and human voice simulation technology.
In the cost-effectiveness analysis, our study found no significant difference in the telephone connection rate or follow-up rate between the two groups, suggesting that the effectiveness of AI was not inferior to the traditional manual method and that AI-assisted follow-up could replace traditional manual follow-up to some extent. At PUMCH, follow-up was traditionally completed by nurses or telephone operators, and the average time spent on each patient was about 3-6 minutes. The AI-assisted system, however, was able to follow up 5-7 patients simultaneously using telephone relay technology. Thousands of patients could be followed up daily, while the staff time spent on AI-assisted follow-up was close to 0 hours, saving substantial human resources.
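As a rough, illustrative estimate only (the system's actual throughput is not reported), combining the stated calling window (8:30 AM to 8:30 PM), the mean AI-assisted session duration (87.7 seconds), and 5-7 concurrent lines suggests a capacity on the order of thousands of calls per day:

```python
# Back-of-the-envelope estimate only; assumes back-to-back calls with no
# dialing overhead, retries, or unanswered calls.
calling_window_s = 12 * 3600   # 8:30 AM to 8:30 PM
mean_session_s = 87.7          # mean AI-assisted session duration (Results)

for concurrent_lines in (5, 7):
    calls_per_day = concurrent_lines * calling_window_s / mean_session_s
    print(f"{concurrent_lines} lines: ~{calls_per_day:.0f} calls per day")

# ~2460 calls per day with 5 lines, ~3450 with 7 lines
```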
AI-assisted follow-up did improve the feedback collection rate, but the composition of feedback differed between the two groups. Feedback from the AI-assisted follow-up group was mainly related to nursing, health education, and the hospital environment; only 11% was related to medical consulting. Conversely, this proportion was 87% in the manual follow-up group. After interviewing the operators, we found a possible explanation: the telephone operators (usually nurses) were more likely to record feedback that was difficult for them to respond to directly, such as medical consultation. To improve follow-up efficiency, they tended to respond directly to feedback related to nursing, the hospital environment, and health education, and this feedback would not have been recorded in the follow-up materials. This also explains why the feedback collection rate was lower in the manual follow-up group. Another possible explanation is that, compared with AI-assisted follow-up, patients could communicate more naturally and deeply with operators; as a result, they may have been more willing to raise professional medical consultation questions with operators. In the manual follow-up group, medical consulting could be divided into four categories, while in the AI-assisted follow-up group, medical consultation feedback was difficult to classify because it lacked pertinence.
Limitations
Our work has several limitations. First, apart from phone calls, there are other communication methods that could be combined with AI, such as text messages, computer software, and smartphone apps. Anthony et al [ ] developed an automated mobile phone messaging platform for orthopedic trauma patients that improved the response rate after trauma procedures; however, many elderly patients are not familiar with texting, so texting might not be suitable for follow-up. Another option is the chatbot, a computer program or smartphone app based on AI that can communicate with people via audio or text [ ]. Medical chatbots have been used in disease diagnosis [ ], management [ ], and monitoring [ ]. Recent research showed that chatbots are a convenient method to help patients address common concerns after ureteroscopy [ ]. Integrating chatbots into the telemedicine system could help assess disease conditions and provide self-care recommendations for patients [ ]. Therefore, future work may include developing chatbot software or an app for additional medical purposes and to provide better personalized service.
The second limitation of this study is that the AI-assisted follow-up system has been in use for only a short period. It is generally believed that, as machine learning time increases, the AI-assisted system will become more intelligent. Therefore, this system deserves continued attention and evaluation in the future.
Conclusions
In this study, we found that the effectiveness of AI-assisted follow-up was not inferior to that of manual follow-up. Moreover, human resource costs can be saved with the assistance of artificial intelligence. Compared with manual follow-up, AI-assisted follow-up obtains more comprehensive feedback, but the feedback lacks depth and pertinence. Therefore, the application of an AI-assisted follow-up system in hospital ward management has the potential to improve telemedicine follow-up service and patient satisfaction.
Acknowledgments
The AI-assisted follow-up system was designed by Beijing Qiyun Zhida Technology Co, Ltd. The system is currently used in the fields of trade, finance, and medicine.
Conflicts of Interest
None declared.
References
- Bouletreau P, Makaremi M, Ibrahim B, Louvrier A, Sigaux N. Artificial Intelligence: Applications in orthognathic surgery. J Stomatol Oral Maxillofac Surg 2019 Sep;120(4):347-354. [CrossRef] [Medline]
- Smith KC. A discrete speech recognition system for dermatology: 8 years of daily experience in a medical dermatology office. Semin Cutan Med Surg 2002 Sep;21(3):205-208. [CrossRef] [Medline]
- Garg S, Williams NL, Ip A, Dicker AP. Clinical Integration of Digital Solutions in Health Care: An Overview of the Current Landscape of Digital Technologies in Cancer Care. JCO Clin Cancer Inform 2018 Dec;2:1-9 [FREE Full text] [CrossRef] [Medline]
- World Health Organization. A Health Telematics Policy in Support of WHO's Health-For-All Strategy for Global Development. 1997 Presented at: Report of the WHO Group Consultation on Health Telematics; December 11-16; Geneva.
- Dinya E, Tamás T. Health informatics: eHealth and Telemedicine. 2013 Presented at: Semmelweis University Institute of Health Informatics; 2013; Budapest.
- Kuziemsky C, Maeder AJ, John O, Gogia SB, Basu A, Meher S, et al. Role of Artificial Intelligence within the Telehealth Domain. Yearb Med Inform 2019 Aug;28(1):35-40 [FREE Full text] [CrossRef] [Medline]
- Ryu S. Telemedicine: Opportunities and Developments in Member States: Report on the Second Global Survey on eHealth 2009 (Global Observatory for eHealth Series, Volume 2). Healthc Inform Res 2012;18(2):153. [CrossRef]
- Taylor Y, Eliasson A, Andrada T, Kristo D, Howard R. The role of telemedicine in CPAP compliance for patients with obstructive sleep apnea syndrome. Sleep Breath 2006 Sep;10(3):132-138. [CrossRef] [Medline]
- Montani S, Bellazzi R, Larizza C, Riva A, d'Annunzio G, Fiocchi S, et al. Protocol-based reasoning in diabetic patient management. Int J Med Inform 1999 Jan;53(1):61-77. [CrossRef] [Medline]
- Whitlock W, Brown A, Moore K, Pavliscsak H, Dingbaum A, Lacefield D, et al. Telemedicine improved diabetic management. Mil Med 2000;165(8):579-584. [CrossRef]
- Morlion B, Verbandt Y, Paiva M, Estenne M, Michils A, Sandron P, et al. A telemanagement system for home follow-up of respiratory patients. IEEE Eng Med Biol Mag 1999;18(4):71-79. [CrossRef] [Medline]
- Kinne G, Droste C, Fahrenberg J, Roskamm H. Symptomatic myocardial ischemia and everyday life: implications for clinical use of interactive monitoring. J Psychosom Res 1999 Apr;46(4):369-377. [CrossRef] [Medline]
- Elsner P, Bauer A, Diepgen TL, Drexler H, Fartasch M, John SM, et al. Position paper: Telemedicine in occupational dermatology - current status and perspectives. J Dtsch Dermatol Ges 2018 Aug;16(8):969-974. [CrossRef] [Medline]
- Young K, Gupta A, Palacios R. Impact of Telemedicine in Pediatric Postoperative Care. Telemed J E Health 2019 Nov;25(11):1083-1089. [CrossRef] [Medline]
- Anthony CA, Volkmar A, Shah AS, Willey M, Karam M, Marsh JL. Communication with Orthopedic Trauma Patients via an Automated Mobile Phone Messaging Robot. Telemed J E Health 2018 Jul;24(7):504-509. [CrossRef] [Medline]
- Ni L, Lu C, Liu N, Liu J. Towards a smart primary care chatbot application. 2017 Presented at: International Symposium on Knowledge and Systems Sciences; 2017; Singapore p. 38-52. [CrossRef]
- Chung K, Park RC. Chatbot-based healthcare service with a knowledge base for cloud computing. Cluster Comput 2018 Mar 16;22(S1):1925-1937. [CrossRef]
- Piau A, Crissey R, Brechemier D, Balardy L, Nourhashemi F. A smartphone Chatbot application to optimize monitoring of older patients with cancer. Int J Med Inform 2019 Aug;128:18-23. [CrossRef] [Medline]
- Goldenthal SB, Portney D, Steppe E, Ghani K, Ellimoottil C. Assessing the feasibility of a chatbot after ureteroscopy. Mhealth 2019;5:8 [FREE Full text] [CrossRef] [Medline]
- Fadhil A. Beyond patient monitoring: Conversational agents role in telemedicine & healthcare support for home-living elderly individuals. arXiv preprint arXiv:1803.06000. 2018.
Edited by G Eysenbach; submitted 07.11.19; peer-reviewed by V Khanna, D Pförringer; comments to author 10.02.20; revised version received 06.04.20; accepted 08.04.20; published 26.05.20
Copyright©Yanyan Bian, Yongbo Xiang, Bingdu Tong, Bin Feng, Xisheng Weng. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.05.2020.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.