Published on 19.06.2023 in Vol 25 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/46448.
Providing Self-Led Mental Health Support Through an Artificial Intelligence–Powered Chat Bot (Leora) to Meet the Demand of Mental Health Care


Viewpoint

Cyberpsychology Research Group, Biomedical Informatics and Digital Health Theme, School of Medical Sciences, The University of Sydney, Sydney, Australia

Corresponding Author:

Emma L van der Schyff, BBHS (Hons)

Cyberpsychology Research Group

Biomedical Informatics and Digital Health Theme, School of Medical Sciences

The University of Sydney

Fisher Road

Sydney, 2006

Australia

Phone: 61 477179944

Email: emma.vanderschyff@sydney.edu.au


Digital mental health services are becoming increasingly valuable for addressing the global public health burden of mental ill-health. There is significant demand for scalable and effective web-based mental health services. Artificial intelligence (AI) has the potential to improve mental health through the deployment of chatbots. These chatbots can provide round-the-clock support and triage individuals who are reluctant to access traditional health care due to stigma. The aim of this viewpoint paper is to consider the feasibility of AI-powered platforms to support mental well-being. Leora is examined as one such model with the potential to provide mental health support. Leora is a conversational agent that uses AI to engage in conversations with users about their mental health and provide support for minimal-to-mild symptoms of anxiety and depression. The tool is designed to be accessible, personalized, and discreet, offering strategies for promoting well-being and acting as a web-based self-care coach. Across AI-powered mental health services, the ethical development and deployment of AI in mental health treatment raises several challenges, including trust and transparency, bias and health inequity, and the potential for negative consequences. To ensure the effective and ethical use of AI in mental health care, researchers must carefully consider these challenges and engage with key stakeholders to provide high-quality mental health support. Validation of the Leora platform through rigorous user testing will be the next step in ensuring the model is effective.

J Med Internet Res 2023;25:e46448

doi:10.2196/46448


Mental health is a significant public health concern. Globally, it is estimated that 1 in every 8 people lives with a mental disorder [1]. Mental health service use for depression was estimated to range from 33% in high-income countries to 8% in low- and middle-income countries [2]. A significant gap exists between mental ill-health and access to clinical services. Further, global estimates have not yet captured the impact of the COVID-19 pandemic. Some preliminary estimates suggest that an additional 76.2 million cases of anxiety disorders could be attributed to COVID-19 [3].

The outbreak of COVID-19 and subsequent stay-at-home requirements from governments across the world were the precursors to a significant increase in the demand for mental health services [4]. Amid uncertainty in health and employment, individuals reported that mental ill-health was exacerbated [4]. In response, the World Health Organization stated, “the scaling-up and reorganization of mental health services that is now needed on a global scale is an opportunity to build a mental health system that is fit for the future” [5].

Web-based mental health services are key to improved service delivery and can be augmented to meet the needs of the changing times ahead [6]. Telehealth appointments, whose reach expanded during the pandemic, are one effective way in which digital health has been deployed. Research suggests that the expansion of telehealth services was generally well-accepted [7]. However, long wait times limit the accessibility of telehealth services, and the demand for clinically effective web-based mental health tools and services far exceeds the supply [8,9]. COVID-19 required a significant rethinking and restructuring of what effective mental health care service delivery looks like. Accordingly, evidence-based, efficacious, and easily scalable digital mental health services are increasingly needed.


Artificial intelligence (AI) is the ability of computer systems to display intelligent behavior by analyzing an environment and taking action with some degree of autonomy [10]. In health care, AI is prominently used in the context of predicting, detecting, and treating illness. This is particularly useful when developing solutions for large-scale problems. Burgeoning examples of this can be found in clinical settings [11] as well as in translational medical research [12,13]. Another way AI is being used is in chatbots, where it has the potential to make an impact in the area of mental health [14,15]. Chatbots are software that engages in dialog with individuals using natural language [16].

These mental health bots have the potential to be useful in several ways. For one, therapeutic chatbots provide a platform for individuals to engage in self-help at any hour of the day. Ready access to digital therapies and tools is important given that distressed individuals may seek help outside business hours. Bots also have the potential to triage individuals who feel stigmatized by the current health care model and who otherwise would not be comfortable accessing treatment. Reluctance to seek help due to the fear of being judged or labeled is a significant barrier to accessing mental health support [17]. Men and some minority groups report stigma as being particularly prevalent in this context [18,19].

Woebot [20] and Wysa [21] are 2 examples of AI chatbots in the field of mental health. Woebot uses cognitive behavioral therapy and was found to significantly reduce symptoms of depression and anxiety in young adults [15]. An evaluation of Wysa likewise found a reduction in self-reported symptoms of depression, with 68% of the cohort finding the app helpful and encouraging [14]. These tools demonstrate the potential of AI mental health chatbots; however, they have been critiqued for the small sample sizes of their validation studies and the lack of research on their long-term efficacy [22]. Given the potential for AI-powered tools to address mental ill-health at scale, further research into the development and validation of this technology is needed.


Providing mental health treatments necessitates ethical scrutiny, especially when incorporating AI into brief assessments and user support. The difficulty of designing ethically robust interventions with AI is that ethical design and review cannot be a one-off exercise, as the technology itself is continuously developing for both users and clinical providers [23]. Transparency about the development and limitations of AI is another important consideration. For example, how an algorithm predicts and learns is especially important to understand in the context of mental health, particularly as it relates to patient safety and the potential provision of clinical advice. Further, confidentiality and anonymity are a particular concern for mental health tool users, given the stigma associated with mental ill-health [24,25], as are data ownership and privacy rights, which are difficult but important elements to manage [26].


In consideration of the aforementioned challenges of AI development for mental health support, the AI platform Leora has adopted a model of development that is transparent, ethical, and user-focused with regard to data privacy. The understanding that AI is currently best suited as a mental health support tool and early service-entry assistant has guided the development of Leora toward its focus as an AI mental health support program.

Leora is a conversational agent that leverages AI to engage in discussions with users about their mental health. Accessible via a web browser and mobile app, this digital toolkit has the capacity to assist individuals with mental ill-health by assessing, monitoring, and managing minimal-to-mild symptoms of anxiety and depression. Further, Leora provides evidence-based strategies to cope with distress and promote well-being. Leora is positioned as a web-based self-care coach and offers easily accessible, personalized, and discreet mental health support.


An interaction with Leora is illustrated in Figure 1. The first step is identifying the individual’s needs. This is an important component of patient-centered care and is associated with more promising patient outcomes [27,28]. AI can gauge the appropriate clinical need via a standardized evaluation tool for anxiety and depression, which are the most common mental health concerns [1]. It can do this by using natural language processing, a set of techniques that allows a computer to understand language as it is spoken and written [29].

Figure 1. Road map for self-assessment of mental ill-health on the Leora platform.

An important ethical consideration in developing and implementing AI digital health solutions in the mental health field is that solutions and services must provide clinically appropriate recommendations [30]. By integrating the standardized measures of the Generalized Anxiety Disorder 7-item scale (GAD-7) and the Patient Health Questionnaire 9-item scale (PHQ-9) into the platform, recommendations and referrals will be clinically valid in terms of criterion, construct, factorial, and procedural validity (see [31,32] for GAD-7 validity testing and [33,34] for PHQ-9 validity testing). Individuals with clinically significant scores on these measures will be directed to mental health services to book an in-person therapy session. Individuals with low or no indication of anxiety and depression symptoms will be directed to psychoeducation and mindfulness strategies, deemed efficacious for those needing to understand their mood and cognitive changes. This approach to care is standardized and widely used by web-based mental health services [35,36]. Individuals with mild depression and anxiety scores will be directed to strategies to manage these symptoms based on mental health first aid techniques grounded in brief cognitive behavioral therapy [37,38] and acceptance and commitment therapy, a mindfulness-based approach [39].
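As an illustration of this score-based routing, the following sketch maps GAD-7 and PHQ-9 totals to the 3 pathways described above. It is a minimal sketch only: it assumes the commonly published severity cut points for these scales (totals of 5 and 10 marking mild and clinically significant symptoms, respectively), and the pathway names are hypothetical rather than Leora's actual configuration.

# Illustrative only: score-based triage using assumed GAD-7/PHQ-9 cut points.
# The pathway names and thresholds are hypothetical, not Leora's production logic.

def triage(gad7_total: int, phq9_total: int) -> str:
    """Map standardized screening totals to a hypothetical support pathway."""
    if gad7_total >= 10 or phq9_total >= 10:
        # Clinically significant symptoms: escalate to booking an in-person session.
        return "refer_to_therapy_booking"
    if gad7_total >= 5 or phq9_total >= 5:
        # Mild symptoms: brief CBT- and ACT-informed self-help strategies.
        return "guided_self_help_strategies"
    # Minimal or no symptoms: psychoeducation and mindfulness content.
    return "psychoeducation_and_mindfulness"

print(triage(gad7_total=7, phq9_total=4))    # guided_self_help_strategies
print(triage(gad7_total=12, phq9_total=15))  # refer_to_therapy_booking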

Figure 2 illustrates a flowchart of the evaluation of mental health support strategies that Leora would offer an individual. If a strategy to reduce the impact of depression and anxiety is not considered helpful by the individual, the platform refers the user to another strategy that may provide improved support. If the strategy is considered effective by the individual, posttechnique support and maintenance sessions may be offered.

Figure 2. Assessing the effectiveness of the mental health support strategies offered by Leora.
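In code, the loop in Figure 2 reduces to a simple try-evaluate-advance pattern. The sketch below is a minimal illustration under assumed strategy names and an assumed user-feedback callback; it is not Leora's implementation.

# Minimal sketch of the Figure 2 loop: offer strategies in turn, keep the first one
# the user rates as helpful, and otherwise suggest stepping up care.
# Strategy names and the feedback mechanism are assumptions for illustration.

ASSUMED_STRATEGIES = [
    "brief_cbt_thought_record",
    "act_mindfulness_exercise",
    "behavioral_activation_plan",
]

def support_cycle(was_helpful) -> str:
    """was_helpful is a callback returning the user's rating of a given strategy."""
    for strategy in ASSUMED_STRATEGIES:
        if was_helpful(strategy):
            # Strategy rated effective: move to posttechnique support and maintenance.
            return f"maintenance_sessions:{strategy}"
    # No self-help strategy rated helpful: offer to book a session with a clinician.
    return "offer_therapy_booking"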

The platform blends AI with authentic, human-first psychological support known as humanistic therapy [40]. The conversational agent can engage in meaningful conversations with the individual, conduct standardized assessments, and book therapy for those who need more support (see Figure 3 for interface examples).

Patient safety is paramount throughout client engagement. Leora has built “escalation pathways” into its patient triage design. An individual whose self-assessment indicates clinically significant levels of anxiety or depression will be referred to appropriate services, such as a psychologist or a local mental health emergency contact (eg, a mental health crisis website or phone service). If the client meets the criteria for referral, the platform assists the user in booking an appointment with nonconfrontational wording, for example, “You might want to get support beyond what I can help you with, and that’s not a problem at all! I know a network of experienced mental health professionals. I can assist you in booking sessions with them.”

Figure 3. Leora platform interface: chatting to the conversational agent, self-assessment, and booking therapy.

Accessibility, equity, clinical effectiveness, user acceptability, and satisfaction, as well as service efficiency, are components of ethical digital health solutions [41] and can be evaluated by data drawn from Leora. As such, data collected from the cycle of support and evaluation of individual users can be assessed and integrated back into the model to improve the AI service. This iterative design process, characteristic of AI and machine learning models, gives Leora scope to improve by integrating the experiences of a variety of users. Independently, qualitative feedback and internal or external user experience testing are additional layers of evidence to cross-reference with the review data of client sessions with Leora.


Leora is stored on Amazon Web Services (AWS) cloud servers and is secured by virtual private cloud networking configurations. Data are protected from exposure to the public-facing internet through the strict configuration of security groups, subnet settings, and the use of identity access management. This establishes the highest degree of privacy for collected data and ensures that it cannot be leveraged for commercial use. Leora’s architecture is protected by the AWS Encryption Software Development Kit (Amazon Web Services, Inc), which uses envelope encryption. This form of multilayered encryption involves first generating a key for the data chunk at the application layer and then wrapping or encrypting this key again at the storage layer, creating a hierarchy of keys with access governance controlled by identity access management [42]. To decrypt the encrypted message, the AWS Encryption Software Development Kit uses the wrapping key to decrypt at least one encrypted data key. It can then decrypt the ciphertext and return a plaintext message. Leora is compliant with the Australian Privacy Policy [43] and gives users of the platform the option of accessing their data (through authorization and verification tokens) and deleting it. While one of the functions of Leora is to build connectivity between users and therapists, it also puts the choice of what data are shared in the hands of users.
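To make the envelope-encryption flow concrete, the following sketch uses the AWS Encryption SDK for Python. The KMS key ARN, region, and encryption context are placeholders, and the snippet illustrates the general pattern rather than Leora's actual code; running it requires valid AWS credentials and an accessible KMS key.

# Illustrative use of the AWS Encryption SDK for Python (aws-encryption-sdk).
# The KMS key ARN and encryption context below are placeholders, not Leora's configuration.
import aws_encryption_sdk
from aws_encryption_sdk import CommitmentPolicy

client = aws_encryption_sdk.EncryptionSDKClient(
    commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
)

# A wrapping key held in AWS KMS; the SDK generates a fresh data key per message
# and encrypts (wraps) that data key under this KMS key, ie, envelope encryption.
key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(
    key_ids=["arn:aws:kms:ap-southeast-2:111122223333:key/example-key-id"]
)

ciphertext, _encrypt_header = client.encrypt(
    source=b"example session note",
    key_provider=key_provider,
    encryption_context={"purpose": "illustration"},
)

# Decryption unwraps the data key with the wrapping key, then returns the plaintext.
plaintext, _decrypt_header = client.decrypt(
    source=ciphertext,
    key_provider=key_provider,
)
assert plaintext == b"example session note"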


Appropriate acknowledgment of the scope of support in digital mental health solutions is a pillar of ethical development and patient care [30]. It is critical for conversational agents to be transparent about the scope of their abilities. Leora is not designed to assist with mental health crises. Clear warnings and prominent postings of this message indicate the scope of the platform to users. Abuse, self-harm, severe mental health conditions that may involve suicidal thoughts, and any other medical emergencies are generally out of the scope of conversational agents. Leora cannot and will not offer medical or clinical advice to individuals, so if presented with a mental health crisis, it will instead direct clients to emergency mental health services.
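A deliberately simplified sketch of this short-circuit behavior follows. It is not how Leora detects crises (that mechanism is not described in this paper); it only illustrates the pattern of bypassing all self-help content and returning emergency contacts when crisis language appears, with the trigger terms and message text as assumptions.

# Illustrative guardrail only: the detection terms and message are assumptions,
# not Leora's crisis-detection mechanism, which is not described in this paper.
ASSUMED_CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")

EMERGENCY_MESSAGE = (
    "I'm not able to help in a crisis, but support is available right now. "
    "Please contact your local mental health crisis line or emergency services."
)

def respond(user_message: str, default_response: str) -> str:
    """Return emergency resources ahead of any other content if crisis language appears."""
    text = user_message.lower()
    if any(term in text for term in ASSUMED_CRISIS_TERMS):
        return EMERGENCY_MESSAGE
    return default_response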


Principal Findings

As the use of AI-powered mental health chatbots becomes more widespread, ethical design and development must be considered. One pillar of the ethical integration of AI into mental health treatment is trust and transparency. Clinicians and patients may be hesitant to trust these platforms if they do not have a basic understanding of how the algorithms work to provide care [44]. To address this issue, it is imperative that the development, testing, and deployment of AI chatbots be transparent to clinicians and consumers. This means clearly explaining the algorithms and processes that the chatbots use to generate responses, as well as the limitations of the technology and any risks to users. With this information, clinicians and patients can make more informed decisions about the advice provided by an AI chatbot and how to use it effectively.

Another important consideration for conversational agents is the potential for bias and health inequity. This is especially pertinent for groups with low health literacy, people who are not native speakers, and people living with a disability. To ensure that chatbots are accessible, developers must design the technology to be user-friendly and easy to understand. This may involve providing clear and concise explanations of the chatbots’ responses as well as offering multiple ways for users to interact with the technology (eg, through text, voice, or other means).

McGreevey et al [30] suggest several research and development considerations for effective and ethical AI conversational agents. Key challenges for researchers going forward include evaluating the effectiveness of different approaches or tones for conversational agents, such as empathetic versus stoic tones, terse versus engaging delivery, and gendered delivery of care. Understanding why some patients stop using conversational agents is also crucial for improving the technology. Longitudinal evaluation of patient outcomes with conversational agents is equally important, as it can help researchers determine the effectiveness of the technology in delivering mental health care across demographics and differing client needs. By addressing these research and development questions, researchers can help ensure that AI-powered conversational agents are used effectively in the introductory phase of this technology and provide the greatest benefit to individuals in the long term.

Further research should also consider the changing needs of patients. Ratheesh and Alvarez-Jimenez [6] note that with improvements in technology and an overall improved comfort with technology, it is possible for technology to change and adapt to the expectations and needs of its users. Given that research suggests that the relationship between the individual and clinician is one of the strongest correlates of successful treatment [45], further research is required to determine patient values and preferences for the role of AI-powered mental health support, such as in providing a pathway to developing successful therapeutic alliances.

Supporting innovation in the development of AI-powered conversational agents for mental health is essential to ensuring that patients have access to the most advanced and effective treatments available. However, it is also important to balance this goal with the need to protect patients. While the ethical development of AI is paramount, conservative approaches to the implementation of cutting-edge technology need to be weighed against support for innovation that has the potential to positively impact individuals on a large scale. One way to support innovation while still protecting patients is to carefully consider the potential risks and benefits of any new technology before it is introduced. This might involve conducting clinical trials to evaluate the safety and effectiveness of the technology, as well as engaging with key stakeholders, such as patients, health care providers, and ethics experts, to ensure that any potential concerns are addressed. Ultimately, supporting innovation in the development of AI-powered conversational agents for mental health will require a careful balance between boundary-pushing and the need to protect patients.

Moving forward, it is imperative that AI-powered mental health support be tested for clinical outcomes in specific mental health disorders. While previous research has tested usefulness, acceptability, and perception [46], clinical effectiveness and reduction in the symptomatology of depression, distress, and stress need to be tested more rigorously. A recent meta-analysis concluded that while conversational agents show promise, the evidence for them is currently weak due to the limited number of clinical trials and the high estimated risk of bias in the current evidence base [47]. Future research should aim to rectify this problem using a combination of participant-centered action research and traditional randomized controlled trials of AI-powered chatbots against a comparative therapeutic approach (eg, human-led solution-focused counseling).

Conclusions

Digital mental health development is an important and necessary component of mental health service delivery. The Leora platform is a promising addition to self-led mental health support that leverages AI and humanistic therapy to support and triage individuals to appropriate mental health services. This is significant given the increasing demand for and limited supply of mental health treatments worldwide. Notably, it meets the agreed pathways and development standards set by the Australian Government’s National Digital Mental Health Framework [48]. The integration of AI-powered chatbots into mental health service delivery is a logical innovation toward increasing equitable and free mental health service access globally. It necessitates certain ethical considerations to foster trust and transparency between client users and service providers. It is therefore essential in these early years of AI mental health support innovation that the development and deployment of AI chatbots are transparent to clinicians and consumers in order to rapidly and safely meet client demand and support best-practice use of the technology as an adjunct to existing clinical services. To do this, layperson explanations of both the algorithms and processes that chatbots use and the limitations of the technology will help ensure a robust model of care. Ideally, this technology should aid clinicians and patients in making informed decisions about mental health treatment and services. Developers should remain vigilant against clinical and user experience bias. Moreover, equity should remain at the forefront of designing these tools to promote user acceptability, particularly for groups with low health literacy, nonnative speakers, or those living with a disability.

As AI for mental health support is refined over the coming years, specifically for triaging clients in mental distress and conducting accurate psychometric testing, it has the potential to significantly benefit the strained traditional mental health system (ie, general practitioners, psychologists, and psychiatrists). By incorporating sophisticated yet clinician-monitored data sharing between AI systems like Leora and digital patient health records, this technology could provide the results of clinically validated longitudinal measures to these health care professionals, which in turn may expedite the treatment and recovery of individuals with complex cases. The next step will be to test the Leora platform’s ability to deliver timely and effective mental health treatment.

Conflicts of Interest

AJC is on the advisory board of Leora to ensure the ethical and safe development of the Leora platform. The remaining authors declare no conflicts of interest.

  1. GBD results. Institute for Health Metrics and Evaluation. URL: https://vizhub.healthdata.org/gbd-results [accessed 2023-01-04]
  2. Moitra M, Santomauro D, Collins PY, Vos T, Whiteford H, Saxena S, et al. The global gap in treatment coverage for major depressive disorder in 84 countries from 2000-2019: a systematic review and Bayesian meta-regression analysis. PLoS Med. 2022;19(2):e1003901. [FREE Full text] [CrossRef] [Medline]
  3. COVID-19 Mental Disorders Collaborators. Global prevalence and burden of depressive and anxiety disorders in 204 countries and territories in 2020 due to the COVID-19 pandemic. Lancet. 2021;398(10312):1700-1712. [accessed Jan 8, 2023] [FREE Full text] [CrossRef] [Medline]
  4. Yu J, Park J, Hyun SS. Impacts of the COVID-19 pandemic on employees’ work stress, well-being, mental health, organizational citizenship behavior, and employee-customer identification. J Hosp Mark Manag. 2021;30(5):529-548. [accessed Jan 8, 2023] [FREE Full text] [CrossRef]
  5. Substantial investment needed to avert mental health crisis. World Health Organization. URL: https://www.who.int/news/item/14-05-2020-substantial-investment-needed-to-avert-mental-health-crisis [accessed 2023-01-08]
  6. Ratheesh A, Alvarez-Jimenez M. The future of digital mental health in the post-pandemic world: evidence-based, blended, responsive and implementable. Aust N Z J Psychiatry. 2022;56(2):107-109. [FREE Full text] [CrossRef] [Medline]
  7. Gentry MT, Puspitasari AJ, McKean AJ, Williams MD, Breitinger S, Geske JR, et al. Clinician satisfaction with rapid adoption and implementation of telehealth services during the COVID-19 pandemic. Telemed J E Health. 2021;27(12):1385-1392. [accessed Jan 8, 2023] [FREE Full text] [CrossRef] [Medline]
  8. Kola L. Global mental health and COVID-19. Lancet Psychiatry. 2020;7(8):655-657. [FREE Full text] [CrossRef] [Medline]
  9. Zima BT, Devaskar SU, Pediatric Policy COUNCIL. Imperative to accelerate research aligning real-time clinical demand with mental health supply. Pediatr Res. 2022;92(4):917-920. [FREE Full text] [CrossRef] [Medline]
  10. Shaping Europe's digital future. Communication artificial intelligence for Europe. European Commission. 2018. URL: https://digital-strategy.ec.europa.eu/en/library/communication-artificial-intelligence-europe [accessed 2023-02-01]
  11. Jha S, Topol EJ. Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA. 2016;316(22):2353-2354. [accessed Feb 12, 2023] [FREE Full text] [CrossRef] [Medline]
  12. Golub TR, Slonim DK, Tamayo P, Huard C, Gaasenbeek M, Mesirov JP, et al. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science. 1999;286(5439):531-537. [CrossRef] [Medline]
  13. Wang Y, Tetko IV, Hall MA, Frank E, Facius A, Mayer KFX, et al. Gene selection from microarray data for cancer classification: a machine learning approach. Comput Biol Chem. 2005;29(1):37-46. [CrossRef] [Medline]
  14. Inkster B, Sarda S, Subramanian V. An empathy-driven, conversational artificial intelligence agent (Wysa) for digital mental well-being: real-world data evaluation mixed-methods study. JMIR mHealth uHealth. 2018;6(11):e12106. [FREE Full text] [CrossRef] [Medline]
  15. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health. 2017;4(2):e19. [FREE Full text] [CrossRef] [Medline]
  16. Dale R. The return of the chatbots. Nat Lang Eng. 2016;22(5):811-817. Cambridge, United Kingdom: Cambridge University Press [FREE Full text] [CrossRef]
  17. Schomerus G, Stolzenburg S, Freitag S, Speerforck S, Janowitz D, Evans-Lacko S, et al. Stigma as a barrier to recognizing personal mental illness and seeking help: a prospective study among untreated persons with mental illness. Eur Arch Psychiatry Clin Neurosci. 2019;269(4):469-479. [CrossRef] [Medline]
  18. Vogel DL, Wester SR, Hammer JH, Downing-Matibag TM. Referring men to seek help: the influence of gender role conflict and stigma. Psychol Men Masc. 2014;15(1):60-67. US: Educational Publishing Foundation. [CrossRef]
  19. Byrow Y, Pajak R, McMahon T, Rajouria A, Nickerson A. Barriers to mental health help-seeking amongst refugee men. Int J Environ Res Public Health. 2019;16(15):2634. [FREE Full text] [CrossRef] [Medline]
  20. Woebot Health. URL: https://woebothealth.com/ [accessed 2023-01-03]
  21. Wysa. URL: https://www.wysa.com/ [accessed 2023-01-03]
  22. D'Alfonso S. AI in mental health. Curr Opin Psychol. 2020;36:112-117. [CrossRef] [Medline]
  23. Peters D, Vold K, Robinson D, Calvo RA. Responsible AI: two frameworks for ethical design practice. IEEE Trans Technol Soc. 2020;1(1):34-47. [FREE Full text] [CrossRef]
  24. Hattingh HL, Knox K, Fejzic J, McConnell D, Fowler JL, Mey A, et al. Privacy and confidentiality: perspectives of mental health consumers and carers in pharmacy settings. Int J Pharm Pract. 2015;23(1):52-60. [FREE Full text] [CrossRef] [Medline]
  25. Pretorius C, Chambers D, Coyle D. Young people's online help-seeking and mental health difficulties: systematic narrative review. J Med Internet Res. 2019;21(11):e13873. [FREE Full text] [CrossRef] [Medline]
  26. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare. London, UK. Academic Press; 2020;295-336.
  27. Shimada T, Nishi A, Yoshida T, Tanaka S, Kobayashi M. Development of an individualized occupational therapy programme and its effects on the neurocognition, symptoms and social functioning of patients with schizophrenia. Occup Ther Int. 2016;23(4):425-435. [FREE Full text] [CrossRef] [Medline]
  28. Zilcha-Mano S, Keefe JR, Chui H, Rubin A, Barrett MS, Barber JP. Reducing dropout in treatment for depression: translating dropout predictors into individualized treatment recommendations. J Clin Psychiatry. 2016;77(12):e1584-e1590. [CrossRef] [Medline]
  29. Chowdhary KR. Natural language processing. In: Fundamentals of Artificial Intelligence. New Delhi, India. Springer India; 2020;603-649.
  30. McGreevey JD, Hanson CW, Koppel R. Clinical, legal, and ethical aspects of artificial intelligence–assisted conversational agents in health care. JAMA. 2020;324(6):552-553. [CrossRef] [Medline]
  31. Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092-1097. [FREE Full text] [CrossRef] [Medline]
  32. Naeinian MR, Shaeiri MR, Sharif M, Hadian M. To study reliability and validity for a brief measure for assessing generalized anxiety disorder (GAD-7). Clin Psychol Pers. 2011;9(1):41-50.
  33. Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613. [FREE Full text] [CrossRef] [Medline]
  34. Beard C, Hsu KJ, Rifkin LS, Busch AB, Björgvinsson T. Validation of the PHQ-9 in a psychiatric sample. J Affect Disord. 2016;193:267-273. [CrossRef] [Medline]
  35. Donker T, Griffiths KM, Cuijpers P, Christensen H. Psychoeducation for depression, anxiety and psychological distress: a meta-analysis. BMC Med. 2009;7(1):79. [FREE Full text] [CrossRef] [Medline]
  36. Taylor-Rodgers E, Batterham PJ. Evaluation of an online psychoeducation intervention to promote mental health help seeking attitudes and intentions among young adults: randomised controlled trial. J Affect Disord. 2014;168:65-71. [CrossRef] [Medline]
  37. Ellis A. Reason and Emotion in Psychotherapy. New York, NY. Lyle Stuart; 1962;442.
  38. Beck AT. Thinking and depression: I. Idiosyncratic content and cognitive distortions. Arch Gen Psychiatry. 1963;9(4):324-333. [CrossRef] [Medline]
  39. Harris R. Embracing your demons: an overview of acceptance and commitment therapy. Psychother Aust. 2006;12(4):70-76. [CrossRef]
  40. Elliott R. The effectiveness of humanistic therapies: a meta-analysis. In: Cain DJ, Seeman J, editors. Humanistic Psychotherapies: Handbook of Research and Practice. Washington, DC. American Psychological Association; 2002;57-81.
  41. Kaplan B. Revisiting health information technology ethical, legal, and social issues and evaluation: telehealth/telemedicine and COVID-19. Int J Med Inform. 2020;143:104239. [FREE Full text] [CrossRef] [Medline]
  42. Envelope encryption. Google Cloud. URL: https://cloud.google.com/kms/docs/envelope-encryption [accessed 2023-02-01]
  43. What is a privacy policy? Office of the Australian Information Commissioner. URL: https://www.oaic.gov.au/privacy/your-privacy-rights/what-is-a-privacy-policy [accessed 2023-01-05]
  44. Henry KE, Kornfield R, Sridharan A, Linton RC, Groh C, Wang T, et al. Human-machine teaming is key to AI adoption: clinicians' experiences with a deployed machine learning system. NPJ Digit Med. 2022;5(1):97. [FREE Full text] [CrossRef] [Medline]
  45. Klein DN, Schwartz JE, Santiago NJ, Vivian D, Vocisano C, Castonguay LG, et al. Therapeutic alliance in depression treatment: controlling for prior change and patient characteristics. J Consult Clin Psychol. 2003;71(6):997-1006. [CrossRef] [Medline]
  46. Abd-Alrazaq AA, Alajlani M, Ali N, Denecke K, Bewick BM, Househ M. Perceptions and opinions of patients about mental health chatbots: scoping review. J Med Internet Res. 2021;23(1):e17828. [FREE Full text] [CrossRef] [Medline]
  47. Abd-Alrazaq AA, Rababeh A, Alajlani M, Bewick BM, Househ M. Effectiveness and safety of using chatbots to improve mental health: systematic review and meta-analysis. J Med Internet Res. 2020;22(7):e16021. [FREE Full text] [CrossRef] [Medline]
  48. National digital mental health framework. Australian Government. Department of Health. 2021. URL: https:/​/www.​health.gov.au/​sites/​default/​files/​documents/​2022/​03/​national-digital-mental-health-framework_0.​pdf [accessed 2023-01-03]


AI: artificial intelligence
AWS: Amazon Web Services
GAD: Generalized Anxiety Disorder
PHQ: Patient Health Questionnaire


Edited by T Leung, T de Azevedo Cardoso; submitted 12.02.23; peer-reviewed by S Machinathu Parambil Gangadharan, E Bunge, ME Chatzimina; comments to author 31.03.23; revised version received 21.04.23; accepted 17.05.23; published 19.06.23.

Copyright

©Emma L van der Schyff, Brad Ridout, Krestina L Amon, Rowena Forsyth, Andrew J Campbell. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 19.06.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.