Editorial
Abstract
The use of artificial intelligence (AI) to assist with the prevention, identification, and management of eating disorders and body image concerns is exciting, but it is not without risk. Technology is advancing rapidly, and ensuring that responsible standards are in place to mitigate risk and protect users is vital to the safe and successful deployment of these technologies.
J Med Internet Res 2023;25:e50696. doi: 10.2196/50696
Introduction
Eating disorders are among the deadliest of all mental health conditions. A multidisciplinary team is recommended for treatment, and often, long-term care is required. In Australia, a dedicated credentialing system ensures that health professionals have the required skills to safely treat people experiencing eating disorders [
]. How could artificial intelligence (AI) ever play a role in specialized eating disorder treatment across the diagnostic spectrum?

In light of recent news surrounding a publicly released eating disorder prevention chatbot providing dietary advice and encouraging weight loss [
], we take this opportunity to reflect on the ethical considerations in this rapidly advancing field. As our reliance on technology and AI increases, how do we in the field of eating disorders and body image keep advancing while maintaining ethical and safe conduct? Additionally, when we choose to expand treatment teams to include support from chatbots, how do we ensure the software is kept relevant for the safety of real-life users?

AI and Eating Disorders
Similar to the field of mental health more broadly, AI holds promise for preventing and treating eating disorders and body image disturbances [
]. To date, research has focused on web-based digital interventions and chatbots, with varying results. Chatbots have been used as prevention strategies to support and provide resources for individuals seeking help. Rule-based chatbots use predetermined scripts and button prompts to facilitate conversations around topics of the researchers’ choice (eg, body image, coping skills, and health impacts). An example of a rule-based chatbot is the first body image chatbot, “KIT” (the current version is called “JEM”), which was positively received by young people seeking help and their parents and carers [ ]. Another example is “Tessa,” which researchers reported was a rule-based bot [ ]. However, “Tessa” also included a “proprietary AI algorithm [that enabled the] delivery of nuanced responses.” Thus, Tessa and other similar chatbots are not truly rule based, as evidenced by their sometimes inappropriate responses, such as failing to recognize negative or risky language. For example, one user wrote to “Tessa,” saying, “I hate my appearance, my personality sucks, my family does not like me, and I don’t have any friends or achievements,” to which the chatbot replied, “Keep on recognising your great qualities!...” [ ]. The latter example highlights the potential risks involved with these technologies [ ] and the importance of understanding whether they are truly rule based or not (a minimal illustration of this distinction appears at the end of this section).

The challenges of studying and deploying AI-based chatbots are numerous. It cannot be assumed that the results of rule-based chatbots will translate to AI chatbots, and neither can it be assumed that the results of an AI chatbot will be the same if the software is upgraded [
]. These dual challenges are likely factors related to the inappropriate responses generated by the eating disorder chatbot in question. Solving them requires new efforts to advance both clinical research and AI technology development for health. From a clinical research perspective, evidence for any chatbot (rule based or AI) needs to be robust. From a technology perspective, developers (and the broader multidisciplinary teams in which they work) need to be transparent about limitations and potential bias.

Clinical research in chatbots is rapidly expanding given the potential of this work. Chatbots can provide accessible treatment support options at low or no cost, ensure that a broader range of individuals get access to evidence-based support, and could be used as a first-line intervention to connect people with further appropriate support. Furthermore, chatbot interventions could be personalized and delivered to at-risk individuals by integrating them with mobile sensing technology (see Tzafilkou et al [
] for a comprehensive review), which uses mobile phone behavior to learn individual risk and recovery patterns based on real-time data. Is this futuristic health care model the ideal opportunity to provide universal access to mental health prevention (to all with a smartphone), or are we being too optimistic? The only way to truly tell will be through the next generation of high-quality studies that use rigorous methods such as digital control groups (in this circumstance, a chatbot that offers non–eating disorder advice) [ ] and robust safeguards. Studies without rigorous control groups will remain important pilot studies, but looking forward, new research that moves beyond feasibility to placebo-controlled designs may better indicate which chatbots are a priority to move forward.

Likewise, from a technical perspective, there is a rapid expansion of chatbots. With recent advances in large language models such as ChatGPT (OpenAI), chatbots are more accessible today than ever before [
]. However, easy accessibility does not eliminate risk. Developers and researchers need to offer full transparency regarding the methods and data sources used for chatbot training, as there remains ample inappropriate and dangerous content on the internet that could bias these chatbots toward harm [ , ]. It is well known that AI is not immune to bias [ ], so the research and development team should carefully consider for whom they are designing the resource and how representative it is of the target community, to avoid further disadvantaging already marginalized groups. Gender, language, race, age, and comorbidities must be carefully considered during development [ ]. In the case of the publicly released eating disorder chatbot that provided inappropriate responses, it is unknown whether the multidisciplinary team responsible had accounted for these factors before users began to engage with it. If the same underlying AI is being used to support other health applications, are other populations also at risk?
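To make the rule based versus AI distinction discussed earlier in this section concrete, the following minimal Python sketch, written purely for illustration and not drawn from any deployed chatbot, contrasts a rule-based reply flow (pre-approved scripts tied to button prompts) with a simple safeguard that screens free-text input for risky language before any reply is sent. The function names (screen_for_risk, rule_based_reply), keyword lists, and messages are hypothetical placeholders; a real system would require clinically validated content and far more sophisticated risk detection than keyword matching.

# Minimal sketch (illustrative only): contrasts a purely rule-based chatbot,
# which can only return pre-approved scripted messages, with a safeguard
# layer that screens user input for risky language before any reply is sent.
# Keyword lists and messages are placeholders, not clinical content.

# Pre-approved scripts keyed to button prompts chosen by the research team.
SCRIPTS = {
    "body_image": "Bodies naturally come in many shapes and sizes.",
    "coping_skills": "Would you like to try a short grounding exercise?",
    "get_help": "Here is a list of helplines and services you can contact.",
}

# Hypothetical phrases that should trigger escalation rather than a
# cheerful canned reply (cf. the "Keep on recognising your great
# qualities!" failure described above).
RISK_PHRASES = ["hate my appearance", "no friends", "hate myself", "worthless"]

ESCALATION_MESSAGE = (
    "It sounds like you are having a really hard time. "
    "I am a limited program, so I would like to connect you with a real person: "
    "[helpline placeholder]."
)


def screen_for_risk(user_message: str) -> bool:
    """Return True if the message contains language that warrants escalation."""
    text = user_message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)


def rule_based_reply(button_choice: str, user_message: str = "") -> str:
    """Reply using only pre-approved scripts, after a safety screen."""
    if user_message and screen_for_risk(user_message):
        return ESCALATION_MESSAGE
    # Unknown prompts fall back to a neutral, pre-approved default.
    return SCRIPTS.get(button_choice, "I can talk about body image or coping skills.")


if __name__ == "__main__":
    # Scripted path: the bot can only say what the research team approved.
    print(rule_based_reply("body_image"))
    # Safeguard path: risky free-text input is escalated, not praised.
    print(rule_based_reply("coping_skills", "I hate my appearance and I have no friends"))

Even this toy example illustrates why transparency matters: when every possible reply is enumerated in advance, clinicians and ethicists can audit the full set of outputs, whereas adding a generative AI layer makes exhaustive review impossible and shifts the burden onto testing and safeguards.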
Paths Forward

The need and potential driving the expansion of chatbot mental health research are perhaps matched only by the complexity of the research methods and AI technology necessary to create effective and ethical solutions. Given that eating disorders research has historically been underfunded, the challenge is further compounded. The ideal team for this work must include mental health clinicians, researchers, developers, individuals with lived experience, and ethicists. Each multidisciplinary team member brings expertise integral to the success of an AI resource. Although forming and maintaining such a team is neither simple nor inexpensive, is progress feasible without such resources? Even defining the ethical implications of AI in this space is complex; these implications span the physical (dignity, well-being, safety, and sustainability), cognitive (intelligibility, accountability, fairness, and autonomy), information (privacy and security), and governance (financial, economic, individual, and societal impacts) domains [
]. These tenets encompass both the creator and the AI itself. AI should be developed to have safe interactions with humans, and chatbots for eating disorders and body image concerns should be held to this standard, especially as there is a significant risk of harm if the AI malfunctions [ ]. The recent examples of the public having to use social media to raise the alarm about potential harm underscore the need to build robust testing systems that are better able to detect and prevent harm. If software is found to have flaws, the multidisciplinary teams responsible need to be open and transparent about such setbacks, similar to how researchers report negative trial results. Creating an ecosystem where the entire community can at least learn from mistakes helps ensure that the chance of future ones is minimized.

Are We Really Ready for This?
Many questions remain and will continue to evolve alongside the technology from which they stem. Eating disorders are serious conditions, and offering inappropriate and dangerous information to patients, even when it is delivered through technology-based prevention or intervention programs, carries significant risk. AI is exciting and promises to improve access to prevention and treatment services, but has the technology evolved faster than the ethical safeguards required to protect users? Multidisciplinary teams remain responsible for ethical and safe care. With limited standards and best practice protocols [
], every means possible to mitigate risk and reduce harm should be used. Computers and smartphones may help deliver these technologies, but the users are real people seeking help, and we need to provide safe and ethical care that does not cause harm.

Data Availability
Data sharing is not applicable to this article as no data sets were generated or analyzed during this study.
Authors' Contributions
GS contributed to conceptualization, writing—original draft, and writing—reviewing and editing. JT contributed to conceptualization and writing—reviewing and editing. MLW contributed to writing—original draft and writing—reviewing and editing.
Conflicts of Interest
JT is the editor in chief of JMIR Mental Health. All other authors declare no other conflicts of interest.
References
- Heruc G, Hurst K, Casey A, Fleming K, Freeman J, Fursland A, et al. ANZAED eating disorder treatment principles and general clinical practice and training standards. J Eat Disord. Nov 10, 2020;8(1):63. [FREE Full text] [CrossRef] [Medline]
- NEDA suspends AI chatbot for giving harmful eating disorder advice. Psychiatrist.com. Jun 5, 2023. URL: https://www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice/ [accessed 2023-08-09]
- Graham AK, Kosmas JA, Massion TA. Designing digital interventions for eating disorders. Curr Psychiatry Rep. Apr 2023;25(4):125-138. [CrossRef] [Medline]
- Beilharz F, Sukunesan S, Rossell SL, Kulkarni J, Sharp G. Development of a positive body image chatbot (KIT) with young people and parents/carers: qualitative focus group study. J Med Internet Res. Jun 16, 2021;23(6):e27807. [FREE Full text] [CrossRef] [Medline]
- Fitzsimmons-Craft EE, Chan WW, Smith AC, Firebaugh M, Fowler LA, Topooco N, et al. Effectiveness of a chatbot for eating disorders prevention: a randomized clinical trial. Int J Eat Disord. Mar 2022;55(3):343-353. [CrossRef] [Medline]
- Chan WW, Fitzsimmons-Craft EE, Smith AC, Firebaugh M, Fowler LA, DePietro B, et al. The challenges in designing a prevention chatbot for eating disorders: observational study. JMIR Form Res. Jan 19, 2022;6(1):e28003. [FREE Full text] [CrossRef] [Medline]
- Tzafilkou K, Economides AA, Protogeros N. Mobile sensing for emotion recognition in smartphones: a literature review on non-intrusive methodologies. Int J Hum Comput Interact. Sep 27, 2021;38(11):1037-1051. [CrossRef]
- Torous J, Benson NM, Myrick K, Eysenbach G. Focusing on digital research priorities for advancing the access and quality of mental health. JMIR Ment Health. Apr 24, 2023;10:e47898. [FREE Full text] [CrossRef] [Medline]
- Abbate-Daga G, Taverna A, Martini M. The oracle of Delphi 2.0: considering artificial intelligence as a challenging tool for the treatment of eating disorders. Eat Weight Disord. Jun 19, 2023;28(1):50. [FREE Full text] [CrossRef] [Medline]
- Sukunesan S, Huynh M, Sharp G. Examining the pro-eating disorders community on twitter via the hashtag #proana: statistical modeling approach. JMIR Ment Health. Jul 09, 2021;8(7):e24340. [FREE Full text] [CrossRef] [Medline]
- Nutley SK, Falise AM, Henderson R, Apostolou V, Mathews CA, Striley CW. Impact of the COVID-19 pandemic on disordered eating behavior: qualitative analysis of social media posts. JMIR Ment Health. Jan 27, 2021;8(1):e26011. [FREE Full text] [CrossRef] [Medline]
- Gabriels K. Addressing the soft impacts of weak AI-technologies. In: Artificial Life Conference Proceedings. Presented at: ALIFE 2018: The 2018 Conference on Artificial Life; July 23-27, 2018; Tokyo, Japan. p. 504-509. [CrossRef]
- Ashok M, Madan R, Joha A, Sivarajah U. Ethical framework for artificial intelligence and digital technologies. Int J Inf Manage. Feb 2022;62:102433. [FREE Full text] [CrossRef]
- Fardouly J, Crosby RD, Sukunesan S. Potential benefits and limitations of machine learning in the field of eating disorders: current research and future directions. J Eat Disord. May 08, 2022;10(1):66. [FREE Full text] [CrossRef] [Medline]
- Buruk B, Ekmekci PE, Arda B. A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Med Health Care Philos. Sep 2020;23(3):387-399. [CrossRef] [Medline]
Abbreviations
AI: artificial intelligence
Edited by T Leung. This is a non–peer-reviewed article. Submitted 13.07.23; accepted 07.08.23; published 14.08.23.
Copyright © Gemma Sharp, John Torous, Madeline L West. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.08.2023.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.