Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/58939.
Research Into Digital Health Intervention for Mental Health: 25-Year Retrospective on the Ethical and Legal Challenges


Viewpoint

1Institute of Mental Health, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, United Kingdom

2National Institute for Health and Care Research (NIHR) Nottingham Biomedical Research Centre, Institute of Mental Health, University of Nottingham, Nottingham, United Kingdom

3National Institute for Health and Care Research (NIHR) MindTech HealthTech Research Centre, Institute of Mental Health, University of Nottingham, Nottingham, United Kingdom

4Responsible AI UK, School of Computer Science, University of Nottingham, Nottingham, United Kingdom

5School of Health Sciences, Institute of Mental Health, University of Nottingham, Nottingham, United Kingdom

Corresponding Author:

Stefan Rennick-Egglestone, PhD

School of Health Sciences

Institute of Mental Health

University of Nottingham

University of Nottingham Innovation Park

Triumph Road

Nottingham, NG7 2TU

United Kingdom

Phone: 44 11582 ext 30926

Email: stefan.egglestone@nottingham.ac.uk

Abstract

Digital mental health interventions are routinely integrated into mental health services internationally and can contribute to reducing the global mental health treatment gap identified by the World Health Organization. Research teams designing and delivering evaluations frequently invest substantial effort in deliberating on ethical and legal challenges around digital mental health interventions. In this article, we reflect on our own research experience with digital mental health intervention design and evaluation to identify 8 of the most critical challenges that we or others have faced, and that have ethical or legal consequences. These include: (1) harm caused by online recruitment work; (2) monitoring of intervention safety; (3) exclusion of specific demographic or clinical groups; (4) inadequate robustness of effectiveness and cost-effectiveness findings; (5) adequately conceptualizing and supporting engagement and adherence; (6) structural barriers to implementation; (7) data protection and intellectual property; and (8) regulatory ambiguity relating to digital mental health interventions that are medical devices. In describing these challenges, we highlight serious consequences that have occurred or could occur, such as substantial delays to studies if regulations around Software as a Medical Device (SaMD) are not fully understood, or if regulations change substantially during the study lifecycle. Collectively, the challenges we have identified point to a substantial body of knowledge and expertise that study teams must be able to draw on, either within the team or through access to external experts. Ensuring access to knowledge requires careful planning and adequate financial resources (for example, paying public contributors to engage in debate on critical ethical issues or paying for legal opinions on regulatory issues). Access to such resources can be planned for on a per-study basis and enabled through funding proposals. However, organizations regularly engaged in the development and evaluation of digital mental health interventions should consider creating or supporting structures such as advisory groups that can retain necessary competencies, such as in medical device regulation.

J Med Internet Res 2024;26:e58939

doi:10.2196/58939

Introduction



The World Health Organization has identified a treatment gap for mental health [1], which is present across all regions and for all mental health conditions [2]. For some conditions and in some settings, the treatment gap is increasing. For example, the depression treatment gap in the United States increased between 2015 and 2019 due to an increased depression prevalence in younger people, with no commensurate increase in treatment [3].

Digital mental health interventions (DMHIs) are now routinely integrated into mental health services internationally and can extend provision to those not currently using mental health services [4]. DMHIs can be used to support the prevention, assessment, diagnosis, treatment, and management of mental health conditions. In this article, we focus mainly on treatment and management. DMHIs can enable the delivery of treatment outside of psychiatric settings, a factor identified by a World Psychiatric Association member survey as extremely important for improving access to mental health care, particularly for health conditions other than schizophrenia and those experienced in childhood [5]. Even in the case of schizophrenia and mental health conditions experienced in childhood, studies have shown that treatment through DMHIs can be safe and effective in nonpsychiatric settings [6,7]; hence, we would argue that they are relevant transdiagnostically.

DMHIs can be more easily scalable than nondigital treatments, potentially narrowing the global treatment gap, particularly given a rapid increase in access to computing and networking technologies globally, even in low-resource settings [8]. However, despite this potential for scalability, the majority of DMHIs are not widely integrated into health care systems [9], and varying access to mobile technologies continues to be reported as a barrier to their use to narrow the treatment gap in low- and middle-income settings [10]. Furthermore, for the subset of DMHIs that integrate artificial intelligence (AI), a systematic review has highlighted biases in the mental health conditions examined in evaluations and deficits in the methodological quality of research. These examples point to continuing barriers that limit integration into evidence-based practice [11].

Over the past quarter century, the DMHI landscape has progressed swiftly, enabled by rapid technological advances during this period. This has resulted in an ever-increasing range of DMHIs, including those delivered as mobile apps, internet-delivered therapy, AI, extended and virtual reality, and remote measurement technology (to name just a few). The degree of digital complexity of these DMHIs may vary from low (eg, a digital platform used simply as a delivery tool, such as videoconferencing) to high (eg, personalization of therapeutic content through machine learning technologies). The level of human input into these DMHIs may also vary, from fully self-directed (eg, a mental health app with no clinical input), to asynchronous motivational support (eg, a therapist provides motivational texts to encourage engagement with a digital tool), through to fully supported (eg, clinician-delivered therapy in real time via videoconferencing) [12].

Many DMHIs are publicly available, often for free or at low cost, such as through public app stores. Most app store offerings have had no scientific involvement in their development process [13], with the consequence that some can recommend harmful strategies, such as consuming strong alcohol during a bipolar episode [14]. However, DMHIs have been an important focus of mental health research, including high-quality randomized controlled trials (RCTs). For example, in a recent review of trials included in the International Standard Randomized Controlled Trial Number (ISRCTN) registry, we identified 23 DMHI trials that had published findings [15]. Since DMHIs are delivered using digital technologies, their evaluations are frequently delivered using technology as well. In online trials, some or all trial procedures are delivered online, frequently (but not always) on the web, which brings a range of methodological challenges [16]. DMHIs often have a small effect size; a meta-analysis of evaluations of DMHIs for depression in adults found a pooled effect size of Cohen d=0.33 (95% CI 0.11-0.55) in evaluations of unsupervised DMHIs [17]. Since a small anticipated effect size necessitates a large target sample, finding sufficient eligible participants is just one of the challenges facing online trials.
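
To illustrate the arithmetic behind this point, the following minimal sketch (assuming Python with the statsmodels library; the 30% attrition figure is purely illustrative) computes the per-arm sample required to detect d=0.33 in a 2-arm trial at conventional thresholds:

```python
# Hedged sketch: required sample size for a small effect (d=0.33),
# using statsmodels' power analysis for a two-sample t test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.33, alpha=0.05, power=0.80,
                                 alternative="two-sided")
print(f"Participants needed per arm: {n_per_arm:.0f}")  # ~145

# Inflate recruitment for an assumed (illustrative) 30% loss to follow-up
print(f"Recruitment target per arm: {n_per_arm / (1 - 0.30):.0f}")  # ~207
```

Under these assumptions, a 2-arm trial would need roughly 290 retained participants in total, which illustrates why recruitment volume is such a pressing concern for online trials.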

The authors have published and contributed to RCTs evaluating DMHIs, including novel interventions for tics [18,19], attention deficit hyperactivity disorder (ADHD) [20], virtual reality therapy for people with psychosis [7], and the use of mental health recovery narratives as an intervention [21]. We have observed that research teams designing and delivering evaluations frequently invest substantial effort in deliberating on ethical and legal challenges around DMHIs. Conducting our own evaluations ethically and legally has been one of our primary concerns. This focus on ethics and legality can arise because evaluations of DMHIs introduce challenges that are less present in evaluations of more traditional clinician-led interventions, such as reduced or no contact with clinical teams and the impact this has on monitoring participants (for example, to identify adverse events). DMHIs can result in harm through different mechanisms, such as usage by people experiencing a desperate need for mental health improvement who are unsuited to a particular offering [22] but who face a lower barrier to entry to a DMHI in a web-based trial with a self-rated inclusion assessment. Legal challenges can arise when there are rapid changes in international law, such as around data protection arrangements for data transfers between jurisdictions [23], and when regulatory systems are challenged by the rapidly changing landscape of DMHI provision (eg, as AI technologies are integrated) [24].

As we reflect on the past 25 years and the current position, we draw on our own research experience of DMHI design and evaluation and our knowledge of the field to identify ethical and legal concerns that we believe are of critical importance to address. We endeavor to support an informed discourse and guide future research designs to promote the ethically sound and legally resilient integration of DMHIs into mental health care provision.

Harm Caused by Online Recruitment Work

Online evaluations of DMHIs often use online recruitment mechanisms. These can enable the rapid recruitment of participants [16], including people not currently using formal health services, which is important for external validity if DMHIs are intended to extend service reach [4]. However, some forms of online recruitment can cause harm to people with mental health conditions, which means that the design of recruitment strategies is an ethical issue.

Some researchers have chosen to “lurk” on various web-based discussion forums, by joining these forums as a member solely to share recruitment material with other members. Their presence can compromise the safety of these spaces [25]. For example, author Rennick-Egglestone, who is a longstanding member of a mental health support group for people experiencing cyclothymia, has seen peers leave the group permanently when researchers join to share recruitment material, due to a fear that group interactions may be treated as data in research studies. The British Sociological Association has published general guidance on researching web-based forums, which provides a broad overview of thought and practice in this space [26]. The British Psychological Society has provided a guide to how its Code of Human Research Ethics can be interpreted in internet-mediated research [27]. Both highlight that the distinction between public and private space is increasingly blurred in web-based interactions, leading to challenging decisions for researchers about appropriate methods.

Harm can also be caused if potential participants misunderstand online recruitment material, and hence feel misled about the purpose of a study they have chosen to participate in. This may be more likely in web-based evaluations (eg, if study information such as participant information sheets is disseminated over the internet) in a context wherein study team members are not routinely available to explain its contents. We have explored the development of mandatory recruitment principles for online evaluations by developing and using exemplar principles for a web-based trial, which were intended to promote safe practices by trial workers [28]. These included explicit instructions not to lurk but instead to approach the gatekeepers of online groups with requests to share recruitment material and direction on minimum content for each published recruitment message [28].

Frequently, participants identified through web-based recruitment mechanisms are not asked for formal identity verification. This is in keeping with guidance from US [29] and UK [30] regulators on procedures for clinical trials that do not evaluate investigational medicinal products, commonly referred to as non-CTIMPs (non–Clinical Trials of Investigational Medicinal Products). There is developing evidence that “fake participation” is a risk to the integrity of studies recruiting online that do not formally confirm identity. For example, Hydock (2017) reported that a “small but nontrivial portion of participants in online survey studies misrepresented their identity for the chance of financial gain” [19]. How to identify and respond to repeat registration is an ethical consideration, as misrepresentation by participants can distort findings [31]. This could amount to a waste of the effort contributed by genuine trial participants and a misuse of public funds if distorted findings inform subsequent commissioning decisions.

Without formal identity verification, DMHIs may also be accessed by people for whom they are not suitable, which raises both legal and ethical concerns, such as if DMHIs intended for adults are accessed by children. Legally, this is a developing space, with the UK Online Safety Act (2023) and the US Children's Online Privacy Protection Act (COPPA; 1998) both offering children under the age of 13 greater protection in the online space (eg, restricting the data that is allowed to be collected without parental consent [32]). The latter was reinforced in the United States via amendments to the Children's Online Privacy Protection Rule that took effect in 2013 [33]. Studies that inadvertently include data provided by younger children may therefore be challenged on the legality of their processes.

Some DMHIs can contain mental health material with the potential to cause distress [21], which may be unsuitable for people such as children lacking the capacity to process the distress of others. These considerations take place in a context where campaign groups across the globe have called for tighter restrictions in response to the terrible loss of children and young people to suicide attributed to the negative impact of web-based platforms [34].

Monitoring of Intervention Safety

DMHIs can harm people with mental health conditions. The harm can arise through mechanisms such as the physical act of engaging with the intervention (eg, cybersickness in the case of virtual reality) or through interactions between the type of content or information given and the mental health experiences of the technology user. For example, in the United States, the National Eating Disorders Association was forced to remove an AI chatbot servicing a helpline when it began to provide recommendations, such as limiting caloric intake, that might reasonably be expected to exacerbate conditions for people experiencing disordered eating [35]. Most generally, harm due to DMHIs might be experienced as one or more adverse events, some of which can be serious, including hospitalization due to substantial distress associated with intervention usage [21]. A growing body of evidence suggests that, historically, adverse events caused by DMHIs have not been well recorded, either in research evaluations or in postmarket surveillance of DMHI usage [15]. For example, a recent review of 23 DMHI trials included in the ISRCTN registry revealed that although most (69%) had published one or more statements on adverse event identification, few (26%) had reported on the number of adverse events in their main findings [15], and few had examined whether these events were serious (26%) or related to the intervention (17%). Another review of DMHIs showed that only 65% (15/23) of studies (which included trials and postmarket surveillance studies) explored risks to safety. The authors noted that this was an improvement compared with the findings of a review in 2017, in which none of the 9 included papers assessed safety risks [36].

Despite this improvement, there is a clear need to improve the way harms are monitored, both during formal research evaluations and as technologies are implemented into practice. It is critical that harms are understood to inform the benefit-risk ratio and to make informed decisions on whether a product should be implemented in health care settings, as well as on the forms of support required for safe use. Recording adverse events in online evaluations brings new challenges, particularly if there are no routinely planned interactions with health care workers during which adverse events could be enquired about or reported. The likelihood of adverse events being spontaneously reported is limited in online interventions, even where reporting mechanisms are provided [15]. Suggestions to facilitate the identification of harms include ensuring that independent review committees include technology experts [15], statistically comparing adverse events between trial arms, and using information on harms collected during an evaluation to inform postmarket surveillance [37]. There is also potential to integrate features into DMHIs to support the recording of harms, including real-time monitoring of symptoms that raises alerts when participants reach predefined cut-offs; this requires evaluation to understand its acceptability, feasibility, and effectiveness [15].
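
As a sketch of the predefined cut-off alerting described above, the following minimal example (the scale, threshold, and notification behavior are our illustrative assumptions, not features of any evaluated DMHI) shows how a symptom score crossing a cut-off could raise an alert to a trial safety team:

```python
# Hedged sketch: threshold-based symptom monitoring with alerting.
from dataclasses import dataclass, field

@dataclass
class SymptomMonitor:
    cutoff: int = 20                             # illustrative "severe" threshold
    history: list = field(default_factory=list)  # (participant_id, score) pairs

    def record(self, participant_id: str, score: int) -> bool:
        """Store a score; return True if an alert was raised."""
        self.history.append((participant_id, score))
        if score >= self.cutoff:
            self.alert(participant_id, score)
            return True
        return False

    def alert(self, participant_id: str, score: int) -> None:
        # A real deployment might notify the safety team; here we just print.
        print(f"ALERT: {participant_id} scored {score} (cutoff {self.cutoff})")

monitor = SymptomMonitor()
monitor.record("P001", 12)  # below cutoff, no alert
monitor.record("P001", 23)  # alert raised
```

As noted above, any such mechanism would itself require evaluation for acceptability, feasibility, and effectiveness.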

Exclusion of Specific Demographic or Clinical Groups

It is an important ethical principle that access to evidence-based health care is equitable and does not disadvantage particular groups. For example, in the United Kingdom and the United States, over 80% of adults and young people own a smartphone [38-40]. However, access to a mobile data plan is likely to be lower in populations that are typically underserved by research and health care services (with example populations in the United States including Black, Hispanic, older, and less educated populations) [39]; hence, evaluations and implementations of DMHIs risk exacerbating existing health inequalities [41]. Although it is anticipated that digital disparities will reduce over time, the importance of digital literacy should not be overlooked [40]. Some training projects have been developed to address issues around digital literacy, including the Digital Opportunities for Outcomes in Recovery Services (DOORS) program, which provides both in-person and web-based support in digital literacy [42]. The use of “digital navigators,” trained members of the clinic team who provide technology support [43] without placing additional burden on core clinical staff, is a potentially promising way forward and has been implemented successfully in DMHI evaluations [44]. It is also critical that clinicians and health care providers are proficient at integrating technology into practice to increase access to DMHIs, but clinicians may worry about the capacity of clinical systems to respond to risks created by DMHIs [45].

Equally, it is critical that DMHIs are designed and evaluated in the cultural context where they will be used [46]. The majority of research on DMHIs is conducted in Westernized countries with relatively high incomes; studies have suggested that the effectiveness of these interventions may not directly translate to other cultures [47,48]. For example, the concept of “depression” does not translate across all cultures and languages [49], highlighting the critical importance of language and context [50]. Although there is growing interest in culturally adapted DMHIs, to date no research has directly compared culturally adapted and nonadapted interventions to explore whether adaptation improves uptake and effectiveness [48]. Given the potential of DMHIs to provide scalable health care solutions, this requires further research.

Finally, trials of DMHIs often do not include participants who may be at higher risk, such as those experiencing suicidality or severe mental health problems. A recent review found that 13 of 23 trials screened out patients with more severe and enduring mental illness [15]. Another review found that the main method of mitigating risk in DMHI trials was to exclude high-risk populations [37]. Whereas establishing proof-of-concept and initial indications of efficacy and safety in a restricted sample is ethically appropriate, larger pragmatic trials conducted before wide-scale rollout should include samples that reflect the population of interest, including all relevant characteristics and comorbidities. The severity of mental health problems is frequently underrecognized; hence, evaluations should consider that DMHIs may be used by people whose mental health conditions are much more severe than those the intervention was intended for, such as people diagnosed with mild-to-moderate depression who are experiencing a period of severe depression [22].

Some categories of DMHI may embed a risk of discrimination based on personal characteristics that are protected by law. One example is AI technologies, where evidence shows that algorithmic biases can lead to discrimination based on characteristics such as gender and ethnicity [51], which are frequently protected in national legislation [52]. Laws protect individuals against discrimination, but the challenge is often knowing when discrimination has happened, as the algorithms used in interventions can be difficult to interpret and explain [51].

Inadequate Robustness of Effectiveness and Cost-Effectiveness Findings

The adequacy of study designs to fully assess intervention effectiveness and cost-effectiveness is a key challenge for the field. If trial designs are not sufficiently robust to answer the intended question, the reported benefits of an intervention may be inflated, resulting in interventions that are not effective or fit for purpose being recommended for routine use. Consequently, engaging with these interventions may lead users to delay or be unable to access more suitable interventions, and potentially wastes health service resources; hence, the adequacy of evaluation design is inherently an ethical issue.

One significant challenge lies in selecting the most appropriate comparator for RCTs. Potential comparators include active therapeutic interventions delivered nondigitally (eg, face-to-face therapy) or different active interventions (eg, cognitive-behavioral therapy versus exposure therapy) delivered digitally. Alternatively, comparators may be nonactive controls such as waitlists or no intervention. It may be impossible for participants to be masked to treatment allocation in a DMHI evaluation, raising the risk of bias. Reviews have suggested that comparisons to nonactive controls may lead to larger treatment effects than comparisons to active controls [53,54].

Trials of DMHIs frequently encounter high dropout rates (failure to follow participants up to the primary and secondary end points) or fail to recruit the target number of participants, leading to reduced statistical power and a lower probability of identifying effects. Analysis of data from the National Institute for Health and Care Research (NIHR) Health Technology Assessment database revealed that from 2010 to 2020, 60% of DMHI trials did not meet their recruitment targets, and 56% did not meet their follow-up targets [55]. This shortfall may result in recommendations based on insufficient research to draw definitive conclusions. It also raises an ethical concern: the time of participants who have engaged in the research is wasted if no scientific conclusions can be drawn from their contribution.

A significant challenge within DMHIs is the speed at which technology evolves, compared with the slow and careful study designs we use, predicated on pharmaceutical research. By the time an RCT is complete, the technology may already have required significant changes to the intervention. This raises the question of whether we should consider alternative study designs (eg, rapid evaluations of specific therapeutic components, or evaluating the application of therapeutic modalities through different mediums rather than specific interventions).

A systematic review has shown that current study designs often neglect to assess the cost-effectiveness of interventions [56]. Understanding cost-effectiveness is crucial for evaluating an intervention's benefits to the health care system and enabling informed decisions by health care providers regarding investment in digital technology over other treatment modalities. Moreover, if trials do not explore cost-effectiveness, they may fail to record other health resource utilization outside the intervention, which is essential for understanding clinical benefits between treatment groups.

Conceptualizing and Supporting Engagement and Adherence

Most DMHIs require some form of active engagement from their users. Some interventions, such as Beating the Blues [57], are built on an interaction approach in which users are expected to work through a structured program of activities intended to offer therapeutic benefit, often lasting weeks or months. Developing DMHIs with the potential to sustain engagement is a significant challenge; research studies and real-world implementations often report low levels of engagement [55]. Where engagement is low or irregular, and engagement with a structured program of activities is intended, this is sometimes conceptualized by study teams as poor adherence to the intended technology interaction. Poor adherence to structured programs is frequently reported in research evaluations [55].

Challenges around engagement and adherence are ethical issues for research teams. For example, an automated DMHI may present a user experience that is off-putting due to poor design. This might cause harm if a person with a mental health condition perceives their inability to sustain the expected engagement as a personal failing rather than a technology design failure. Whether to include human support in study designs is an important question. Studies have consistently demonstrated that guided interventions, which involve human support through asynchronous messages or real-time interaction, achieve higher adherence and greater clinical effectiveness than unguided interventions [58], but there are settings in which guidance may not be practical (eg, due to resource limitations) or may cause harm, such as where the expected user base is experiencing profound social anxieties [22].

Engagement with a DMHI is often difficult for researchers to quantify. A range of DMHI interaction behaviors may need to be considered, including the frequency or duration of interaction with the DMHI, and metrics of “active” engagement, such as the number of clicks on links, pages visited, or interactive tasks completed [59]. It can be difficult for researchers to determine which metric is best. One approach is to conduct a principal component analysis to create a comprehensive composite measure [59]. Some researchers are beginning to explore more subjective constructs of engagement, including aspects such as knowing that the intervention is available whenever it is needed, and satisfaction and usability [60].
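
A minimal sketch of that composite-measure approach (assuming scikit-learn; the usage data are fabricated for illustration) standardizes several interaction metrics and takes the first principal component as a single engagement score:

```python
# Hedged sketch: PCA-based composite engagement score from usage metrics.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Rows are participants; columns are log-ins, pages visited, tasks completed.
usage = np.array([
    [3, 12, 1],
    [10, 45, 6],
    [1, 4, 0],
    [7, 30, 4],
])

scaled = StandardScaler().fit_transform(usage)  # put metrics on one scale
pca = PCA(n_components=1)
composite = pca.fit_transform(scaled).ravel()   # first principal component

print("Composite engagement scores:", np.round(composite, 2))
print("Variance explained:", round(pca.explained_variance_ratio_[0], 2))
```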

Despite its importance, research evaluations of internet-delivered interventions often neglect to investigate the impact of usage level on intervention outcome. A systematic review found that only 21% of included trials had conducted statistical analyses to account for differences in usage [61], leading to a lack of knowledge on how much engagement with a DMHI is sufficient. Even if an understanding of the necessary level of engagement to provide benefits is established in a tightly controlled research setting, achieving this in a real-world setting can be challenging, as research settings often offer greater levels of participant support than in real-world implementation. For example, when the serious game “SPARX” aimed at supporting low mood in children and young people was nationally rolled out in New Zealand, only between 2% and 5% of young people completed the full program [62], whereas considerably higher completion rates had been found in the trial [63].
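
The kind of usage-outcome analysis that the review found lacking can be as simple as the following sketch (fabricated data, assuming the statsmodels library), which regresses pre-post symptom change on a usage metric to estimate a dose-response relationship:

```python
# Hedged sketch: regressing outcome change on intervention usage.
import numpy as np
import statsmodels.api as sm

minutes_used = np.array([10, 35, 60, 5, 90, 40, 20, 75])    # total platform use
symptom_change = np.array([-1, -4, -6, 0, -8, -3, -2, -7])  # post minus pre score

X = sm.add_constant(minutes_used)       # intercept + usage predictor
model = sm.OLS(symptom_change, X).fit()
print(model.params)   # slope estimates symptom change per extra minute of use
print(model.pvalues)
```

Interpreting such an analysis still requires care, since usage is self-selected rather than randomized.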

There is also a danger of research teams misconceptualizing lack of usage as poor adherence in cases where users have made a sensible choice to stop using a technology that provides no benefit or is actively harming them. This can occur if teams take an egocentric approach, conceptualizing their offerings as of central importance to their users [64]. For a recent trial examining the benefits of receiving access to a collection of mental health recovery narratives [21], we chose not to require a minimum level of usage because our empirically developed change model had established that access to narratives could cause harm; we therefore wanted to enable users to self-manage engagement, which could include no engagement [65,66]. Healthy engagement was supported by a range of safety strategies, including the use of content warnings [66]. Hence, in this example, we did not conceptualize low engagement as a lack of adherence [21].

Structural Barriers to Implementation

DMHIs often struggle to bridge the gap from a research product to implementation in health care services or broader society [67]. This is an ethical issue, as research evaluations regularly require hundreds of participants, large teams of evaluators and support workers, and access to substantial amounts of public funds, an investment that is wasted if this transition is not made. If DMHIs are both clinically effective and cost-effective but not implemented, people with mental health problems may not be receiving optimal care. Currently, several DMHIs that are available for use in health service settings have not undergone independent evaluation [9]. Conversely, several well-validated DMHIs may never be implemented in routine health care settings [68]. This illustrates a possible disconnect between research-led efforts to evaluate interventions and the needs of clinical practice, which is a major barrier to service user benefit.

There is likely to be a broad range of factors underpinning this disconnect, and addressing them is critical to reducing the underuse of research evaluations of DMHIs. The fast-paced nature of technological progress can mean that DMHIs become obsolete or unacceptable to users before they reach the implementation stage [53], given the slower pace of the research lifecycle. Research teams may lack the institutional support or commercial knowledge needed to implement DMHIs that they have evaluated, and work to implement technologies may be at odds with funder or institutional metrics used to evaluate individual academic performance, particularly where the regulatory burden is high. A possible solution is government-funded infrastructure to support the integration of academic, clinical, and industry expertise, but substantial resources may be required.

A clinically grounded challenge is that health care professionals may not have received preregistration training or continuous professional development in DMHIs and how to work with them clinically, and hence may lack the competence or confidence to integrate them into care. For example, an interview study with Norwegian general practitioners interested in computerized cognitive behavioral therapy revealed their concerns that they had insufficient knowledge to effectively recommend these to their service users [69]. We have argued that health care practitioners need training to enable their own psychological safety when providing therapies online through videoconferencing services, including due to the lesser availability of immediate emergency response in the event of suicidality [70].

Data Protection and Intellectual Property

The deployment and evaluation of DMHIs typically involve processing personal and sensitive data, sometimes through third parties such as web-server hosts. In many jurisdictions, these data are protected by legislation. For example, in the European Union (EU), Article 8 of the European Convention on Human Rights and Articles 7 and 8 of the EU Charter of Fundamental Rights enshrine privacy as a fundamental human right, and the General Data Protection Regulation (GDPR), with its national implementations, provides a specific legal framework for data processing.

Web-based DMHIs, in particular, have the potential to be evaluated in one jurisdiction but incorporate components hosted in another, frequently requiring close attention to the navigation of legal requirements, especially when the countries involved have different standards of data protection [71]. Additional challenges are present when computer vision technologies such as augmented reality are used, as data about third parties may be collected without their consent. Service users and clinicians have raised concerns about data security in DMHIs and the potential for unauthorized breaches of sensitive data [71-73]. For health care services, the introduction of these technologies may present too much uncertainty in terms of their legal requirements. This can contribute to challenges with the implementation of DMHIs. If users are encouraged to create and share content as part of working with a DMHI, this introduces additional considerations around how such content is licensed and used [74].

As evaluators, we have all invested substantial effort in data management planning around DMHIs, including identifying how to work ethically and legally with the data generated through our evaluations. This requires access to specific competencies, such as knowledge of data protection laws. These competencies require work to maintain, as regulations around data processing can change rapidly (eg, when the EU Safe Harbor scheme for cross-border data processing was replaced by Privacy Shield [23]). Legal and ethical issues are present even for data that is publicly available. In 2014, the Samaritans released an app that identified tweets associated with suicidal ideation and alerted friends of those tweeting. This was rapidly withdrawn following interventions by privacy campaigners [75]. The UK Information Commissioner's Office subsequently found that it was unlikely to be compatible with UK data protection law [76]. More recently, the US Federal Trade Commission fined BetterHelp, a platform connecting people to therapists, US $7.8 million for sharing IP addresses, email addresses, and sensitive mental health information with third parties [77].

There are similar challenges around intellectual property, as DMHIs may emerge from collaborative efforts involving multiple partners, and they may be built on underlying technologies, such as software libraries produced by third parties, sometimes with restrictive licensing conditions that can preclude commercial use. Successfully addressing these issues requires a proactive stance from the outset of development. We advocate seeking guidance from legal or technology transfer teams early in the conceptual phase to mitigate risks. Moreover, there is a pressing need to enhance researchers' skills in translational research and to foster collaborative endeavors between stakeholders to effectively support the future implementation of DMHIs.

Regulatory Ambiguity Relating to DMHIs That Are Medical Devices

Several organizations across the globe regulate hardware and software artifacts classified as medical devices. These include the European Medicines Agency in the EU, the Medicines and Healthcare products Regulatory Agency (MHRA) in the United Kingdom, the Food and Drug Administration (FDA) in the United States, and the Therapeutic Goods Administration (TGA) in Australia. These organizations typically have legislative powers to ensure that medical devices are safe and effective, which can include requiring manufacturers to submit information on their approach to the safe storage and processing of data. How this is enacted and how medical devices are defined may differ, and some countries remain without regulatory authorities. Where medical devices are regulated, classification is usually based on the “intended use/purpose,” as defined by the manufacturer or developer. For example, the EU's classifications for medical devices are outlined in Textbox 1 [78].

Textbox 1. European Union (EU) classifications for medical devices.

“Medical device” means any instrument, apparatus, appliance, software, implant, reagent, material, or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes:

  • diagnosis, prevention, monitoring, prediction, prognosis, treatment, or alleviation of disease;
  • diagnosis, monitoring, treatment, alleviation of, or compensation for, an injury or disability;
  • investigation, replacement, or modification of the anatomy or of a physiological or pathological process or state;
  • providing information by means of in vitro examination of specimens derived from the human body, including organ, blood, and tissue donations;
  • and which does not achieve its principal intended action by pharmacological, immunological, or metabolic means, in or on the human body, but which may be assisted in its function by such means.

This definition explicitly recognizes the concept of Software as a Medical Device (SaMD), which is of particular relevance to DMHI developers and evaluators. The use of terms such as “disease” and “disability” has often left medical device classification ambiguous for SaMD intended for digital mental health purposes. In some jurisdictions, amendments have been made to reduce this ambiguity. For example, the FDA has separated out a category of “low-risk devices” intended to target wellness or well-being, offering specific internal staff guidance on how they should be regulated [79]. This perhaps creates a risk of DMHI developers classifying their tools as “wellness” devices rather than “treatments” to reduce regulatory burden [40]. In the United Kingdom, the MHRA is engaged in a change program on SaMD regulation, with a component examining the arrangements needed for AI technologies to be effectively regulated [24].

Resolving ambiguity around whether our interventions are medical devices has sometimes taken substantial effort for us as evaluators (eg, through seeking opinions from sponsors and legal experts), and we are aware of at least one major research program that faced a delay of over a year when its intervention was unexpectedly classified as a medical device. We also wonder whether DMHIs are underclassified as medical devices; our review of DMHI trials found that only 2 of 23 interventions had a medical device classification [15]. Given the ambiguity of regulation, there is a potential for the medical status of DMHIs to change between the awarding of funding and the start of evaluation. We have found that regulatory guidance can be more ambiguous where DMHIs are not classified as medical devices, perhaps due to a regulatory perception that harm is less likely. For these DMHIs, authors ADGB and CLH are working to produce adverse event reporting guidelines that complement those of the MHRA.

Regulations that define when a medical device needs to be clinically investigated can also create ambiguity for DMHIs. For example, under MHRA regulation, clinical investigation is not needed if sufficient evidence is available for an equivalent intervention [80]. Equivalency might be straightforward to assess for a medication but is less clear for DMHIs (eg, does equivalency hold if the same therapeutic intervention is delivered through a different device?). Within MHRA regulation, further clinical investigation is needed when there has been a significant change to an intervention. However, current regulations may be unclear for AI-based DMHIs that learn and hence evolve; regulation is therefore currently being examined in this space [24]. Finally, DMHIs are sometimes used outside their intended purpose (eg, if a self-management intervention is evaluated as a treatment for a condition such as depression). These “secondary uses” of digital technologies appear to be relatively common and may produce harms that are not recorded (eg, if participants drop out of an evaluation after being given an inappropriate technology and are not followed up).

Conclusions

Over the past 25 years, the rapid evolution of both DMHI research and the regulatory systems surrounding it has meant that study teams face multifaceted ethical and legal challenges. We have drawn on our own experiences to describe 8 of the most pertinent challenges, but there will be others, and some concerns will apply specifically to particular types or forms of DMHI. Collectively, the challenges we have identified highlight a substantial body of knowledge and expertise that study teams require, either within the team or through access to external experts. The challenges we present also illustrate potentially serious negative consequences if sufficient resources are not available to study teams, including challenges to the legality of data collection procedures that inadvertently collect data from children, harm caused to members of the public through unethical recruitment methods, and substantial delays to studies that discover medical device legislation applies to their intervention. Our focus was predominantly on treatment; however, a similar reflection on the challenges of DMHIs for prevention, assessment, or diagnosis is also warranted.

In many cases, understanding how to respond to these challenges requires collaboration between industry representatives, academics, public contributors, and clinicians. Each of these stakeholder groups can bring unique expertise and perspectives, and by leveraging their collective insights, innovative solutions can be developed to promote the legal and ethical use of DMHIs. To support success, such collaborations need to be planned early, with relevant collaborators and financial resources integrated into funding proposals or other mechanisms for assembling study resources. For example, resources to pay for legal opinions are frequently not included in proposals but may be necessary to support the successful negotiation of regulatory processes.

Organizations conducting health research might consider how to create structures that develop and maintain the necessary competence to resolve the challenges that we have identified. For example, organizations that are frequently engaged in DMHI evaluation might consider developing specific competence on medical device regulations, including the capacity to conduct horizon-scanning activities to identify possible changes (given that regulations can change between proposal submission and evaluation conclusion). Organizations might also consider developing communities of public members interested in public involvement in research. Collectively, we have found discussions with public members essential to resolve the most challenging ethical issues, such as around the management of narrator identity in our trial of a DMHI integrating mental health recovery narratives [66].

Acknowledgments

Authors CLH and SRE were supported by the UK National Institute for Health and Care Research (NIHR) through the NIHR Nottingham Biomedical Research Centre (NIHR203310). CLH was also supported through the NIHR MindTech HealthTech Research Centre. AGB was supported by the Engineering and Physical Sciences Research Council through Responsible AI UK (grant EP/Y009800/1). The views expressed here are those of the authors and not necessarily those of the NIHR or the UK Department of Health and Social Care.

Conflicts of Interest

None declared.

  1. Kohn R, Saxena S, Levav I, Saraceno B. The treatment gap in mental health care. Bull World Health Organ. Nov 2004;82(11):858-866. [FREE Full text] [Medline]
  2. Demyttenaere K, Bruffaerts R, Posada-Villa J, Gasquet I, Kovess V, Lepine JP, et al. Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys. JAMA. Jun 2, 2004;291(21):2581-2590. [CrossRef] [Medline]
  3. Goodwin RD, Dierker LC, Wu M, Galea S, Hoven CW, Weinberger AH. Trends in US depression prevalence From 2015 to 2020: the widening treatment gap. Am J Prev Med. Nov 2022;63(5):726-733. [FREE Full text] [CrossRef] [Medline]
  4. Rennick-Egglestone S, Newby C, Robinson C, Yeo C, Ng F, Elliott RA, et al. Differences between online trial participants who have used statutory mental health services and those who have not: analysis of baseline data from 2 pragmatic trials of a digital health intervention. J Med Internet Res. Jun 27, 2023;25:e44687. [FREE Full text] [CrossRef] [Medline]
  5. Patel V, Maj M, Flisher AJ, De Silva MJ, Koschorke M, Prince M, et al. WPA Zonal and Member Society Representatives. Reducing the treatment gap for mental disorders: a WPA survey. World Psychiatry. Oct 2010;9(3):169-176. [FREE Full text] [Medline]
  6. Lockwood J, Williams L, Martin JL, Rathee M, Hill C. Effectiveness, user engagement and experience, and safety of a mobile app (Lumi Nova) delivering exposure-based cognitive behavioral therapy strategies to manage anxiety in children via immersive gaming technology: preliminary evaluation study. JMIR Ment Health. Jan 24, 2022;9(1):e29008. [FREE Full text] [CrossRef] [Medline]
  7. Freeman D, Lambe S, Kabir T, Petit A, Rosebrock L, Yu L, et al. gameChange Trial Group. Automated virtual reality therapy to treat agoraphobic avoidance and distress in patients with psychosis (gameChange): a multicentre, parallel-group, single-blind, randomised, controlled trial in England with mediation and moderation analyses. Lancet Psychiatry. May 2022;9(5):375-388. [FREE Full text] [CrossRef] [Medline]
  8. The mobile economy: Sub-Saharan Africa. GSM Association. London, UK; 2020. URL: https://www.gsma.com/mobileeconomy/wp-content/uploads/2020/09/GSMA_MobileEconomy2020_SSA_Eng.pdf [accessed 2024-07-27]
  9. Schueller SM, Torous J. Scaling evidence-based treatments through digital mental health. Am Psychol. Nov 2020;75(8):1093-1104. [FREE Full text] [CrossRef] [Medline]
  10. Rodriguez-Villa E, Naslund J, Keshavan M, Patel V, Torous J. Making mental health more accessible in light of COVID-19: Scalable digital health with digital navigators in low and middle-income countries. Asian J Psychiatr. Dec 2020;54:102433. [CrossRef] [Medline]
  11. Tornero-Costa R, Martinez-Millana A, Azzopardi-Muscat N, Lazeri L, Traver V, Novillo-Ortiz D. Methodological and quality flaws in the use of artificial intelligence in mental health research: systematic review. JMIR Ment Health. Feb 02, 2023;10:e42045. [FREE Full text] [CrossRef] [Medline]
  12. Sanderson C, Kouzoupi N, Hall CL. Technology Matters: The human touch in a digital age - a blended approach in mental healthcare delivery with children and young people. Child Adolesc Ment Health. May 27, 2020;25(2):120-122. [CrossRef] [Medline]
  13. Neary M, Schueller SM. State of the field of mental health apps. Cogn Behav Pract. Nov 2018;25(4):531-537. [FREE Full text] [CrossRef] [Medline]
  14. Nicholas J, Larsen ME, Proudfoot J, Christensen H. Mobile apps for bipolar disorder: a systematic review of features and content quality. J Med Internet Res. 2015;17(8):e198. [FREE Full text] [CrossRef] [Medline]
  15. Gómez Bergin AD, Valentine AZ, Rennick-Egglestone S, Slade M, Hollis C, Hall CL. Identifying and categorizing adverse events in trials of digital mental health interventions: narrative scoping review of trials in the International Standard Randomized Controlled Trial Number Registry. JMIR Ment Health. Feb 22, 2023;10:e42501. [FREE Full text] [CrossRef] [Medline]
  16. Murray E, Khadjesari Z, White IR, Kalaitzaki E, Godfrey C, McCambridge J, et al. Methodological challenges in online trials. J Med Internet Res. 2009;11(2):e9. [FREE Full text] [CrossRef] [Medline]
  17. Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry. 2019;10:759. [FREE Full text] [CrossRef] [Medline]
  18. Hollis C, Hall CL, Jones R, Marston L, Novere ML, Hunter R, et al. Therapist-supported online remote behavioural intervention for tics in children and adolescents in England (ORBIT): a multicentre, parallel group, single-blind, randomised controlled trial. Lancet Psychiatry. Oct 2021;8(10):871-882. [FREE Full text] [CrossRef] [Medline]
  19. Hall CL, Davies EB, Andrén P, Murphy T, Bennett S, Brown BJ, et al. ORBIT Trial team. Investigating a therapist-guided, parent-assisted remote digital behavioural intervention for tics in children and adolescents-'Online Remote Behavioural Intervention for Tics' (ORBIT) trial: protocol of an internal pilot study and single-blind randomised controlled trial. BMJ Open. Jan 03, 2019;9(1):e027583. [FREE Full text] [CrossRef] [Medline]
  20. Hall CL, Walker GM, Valentine AZ, Guo B, Kaylor-Hughes C, James M, et al. Protocol investigating the clinical utility of an objective measure of activity and attention (QbTest) on diagnostic and treatment decision-making in children and young people with ADHD-'Assessing QbTest Utility in ADHD' (AQUA): a randomised controlled trial. BMJ Open. Dec 01, 2014;4(12):e006838. [FREE Full text] [CrossRef] [Medline]
  21. Slade M, Rennick-Egglestone S, Elliott RA, Newby C, Robinson C, Gavan SP, et al. NEON study group. Effectiveness and cost-effectiveness of online recorded recovery narratives in improving quality of life for people with non-psychotic mental health problems: a pragmatic randomized controlled trial. World Psychiatry. Feb 2024;23(1):101-112. [FREE Full text] [CrossRef] [Medline]
  22. Rennick-Egglestone S, Knowles S, Toms G, Bee P, Lovell K, Bower P. Health technologies 'in the wild': experiences of engagement with computerised CBT. ACM; 2016. Presented at: 2016 CHI Conference on Human Factors in Computing Systems; May 7-12; San Jose. [CrossRef]
  23. Weiss M, Archick K. US-EU data privacy: from safe harbor to privacy shield. Congressional Research Service. Washington, USA. Congressional Research Service; May 2016. URL: https://sgp.fas.org/crs/misc/R44257.pdf [accessed 2024-09-03]
  24. Software and AI as a Medical Device Change Programme - Roadmap. Gov.UK. 2023. URL: https://tinyurl.com/yckwd47w [accessed 2024-09-02]
  25. Eysenbach G, Till JE. Ethical issues in qualitative research on internet communities. BMJ. Nov 10, 2001;323(7321):1103-1105. [FREE Full text] [CrossRef] [Medline]
  26. Sugiura L. Researching online forums: ethics case study. British Sociological Association. Durham, UK. British Sociological Association; 2016. URL: https://www.britsoc.co.uk/media/24834/j000208_researching_online_forums_-cs1-_v3.pdf [accessed 2024-09-02]
  27. Hewson C, Buchanan T. Ethics guidelines for internet-mediated research. British Psychological Society. London, UK. The British Psychological Society; 2013. URL: https://www.bps.org.uk/guideline/ethics-guidelines-internet-mediated-research [accessed 2024-09-03]
  28. Rennick-Egglestone S. Principles for the production and dissemination of recruitment material for three clinical trials of an online intervention. Trials. Jul 09, 2021;22(1):441. [FREE Full text] [CrossRef] [Medline]
  29. Office of the Commissioner, Office of Clinical Policy and Programs, Office of Clinical Policy, Office of Good Clinical Practice, Center for Drug Evaluation and Research, Center for Devices and Radiological Health. Use of Electronic Informed Consent in Clinical Investigations: Questions and Answers. Food and Drug Administration. Washington, D.C. Food and Drug Administration; 2016. URL: https://www.fda.gov/media/116850/download [accessed 2024-09-04]
  30. Health Research Authority, Medicines and Healthcare Products Regulatory Agency. Joint statement on seeking consent by electronic methods v1.2. Health Research Authority. Sep 2018. URL: https://www.hra.nhs.uk/about-us/news-updates/hra-and-mhra-publish-joint-statement-seeking-and-documenting-consent-using-electronic-methods-econsent/ [accessed 2024-09-02]
  31. Robinson C, Newby C, Rennick-Egglestone S, Llewellyn-Beardsley J, Ng F, Elliott RA, et al. Statistical analysis plans for two randomised controlled trials of the Narrative Experiences Online (NEON) Intervention: impact of receiving recorded mental health recovery narratives on quality of life in people experiencing psychosis (NEON) and people experiencing non-psychosis mental health problems (NEON-O). Trials. May 20, 2023;24(1):343. [FREE Full text] [CrossRef] [Medline]
  32. Song H, Holzer A. Online safety laws by country. J Responsible Innov. 2023;3(3):3. [FREE Full text]
  33. Children's Online Privacy Protection Rule ("COPPA"). Federal Trade Commission. Federal Trade Commission; 1998. URL: https://www.ftc.gov/legal-library/browse/rules/childrens-online-privacy-protection-rule-coppa [accessed 2024-09-03]
  34. Bell J. Harmful or helpful? The role of the internet in self-harming and suicidal behaviour in young people. Ment Health Rev J. Mar 05, 2014;19(1):61-71. [CrossRef]
  35. NEDA suspends AI chatbot for giving harmful eating disorder advice. Psychiatrist.com. URL: https://www.psychiatrist.com/news/neda-suspends-ai-chatbot-for-giving-harmful-eating-disorder-advice/ [accessed 2024-09-02]
  36. Bell IH, Lim MH, Rossell SL, Thomas N. Ecological momentary assessment and intervention in the treatment of psychotic disorders: a systematic review. Psychiatr Serv. Nov 01, 2017;68(11):1172-1181. [CrossRef] [Medline]
  37. Taher R, Hsu C, Hampshire C, Fialho C, Heaysman C, Stahl D, et al. The safety of digital mental health interventions: systematic review and recommendations. JMIR Ment Health. Oct 09, 2023;10:e47433. [FREE Full text] [CrossRef] [Medline]
  38. Bergin A, Davies EB. Technology Matters: mental health apps - separating the wheat from the chaff. Child Adolesc Ment Health. Feb 2020;25(1):51-53. [FREE Full text] [CrossRef] [Medline]
  39. Roberts ET, Mehrotra A. Assessment of disparities in digital access among medicare beneficiaries and implications for telemedicine. JAMA Intern Med. Oct 01, 2020;180(10):1386-1389. [CrossRef] [Medline]
  40. Torous J, Bucci S, Bell IH, Kessing LV, Faurholt-Jepsen M, Whelan P, et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry. Oct 2021;20(3):318-335. [FREE Full text] [CrossRef] [Medline]
  41. Helsper EJ. A corresponding fields model for the links between social and digital exclusion. Commun Theor. Oct 15, 2012;22(4):403-426. [CrossRef]
  42. Hoffman L, Wisniewski H, Hays R, Henson P, Vaidyam A, Hendel V, et al. Digital Opportunities for Outcomes in Recovery Services (DOORS): a pragmatic hands-on group approach toward increasing digital health and smartphone competencies, autonomy, relatedness, and alliance for those with serious mental illness. J Psychiatr Pract. Mar 2020;26(2):80-88. [CrossRef] [Medline]
  43. Connolly SL, Kuhn E, Possemato K, Torous J. Digital clinics and mobile technology implementation for mental health care. Curr Psychiatry Rep. May 07, 2021;23(7):38. [FREE Full text] [CrossRef] [Medline]
  44. Carpenter-Song E, Acquilano SC, Noel V, Al-Abdulmunem M, Torous J, Drake RE. Individualized intervention to support mental health recovery through implementation of digital tools into clinical care: feasibility study. Community Ment Health J. Jan 2022;58(1):99-110. [FREE Full text] [CrossRef] [Medline]
  45. Roe J, Brown S, Yeo C, Rennick-Egglestone S, Repper J, Ng F, et al. Opportunities, enablers, and barriers to the use of recorded recovery narratives in clinical settings. Front Psychiatry. Oct 30, 2020;11:589731. [FREE Full text] [CrossRef] [Medline]
  46. Groen G, Jörns-Presentati A, Dessauvagie A, Seedat S, van den Heuvel LL, Suliman S, et al. Development of a mobile application for detection of adolescent mental health problems and feasibility assessment with primary health care workers. Issues Ment Health Nurs. Nov 2022;43(11):1046-1055. [CrossRef] [Medline]
  47. Karyotaki E, Ebert DD, Donkin L, Riper H, Twisk J, Burger S, et al. Do guided internet-based interventions result in clinically relevant changes for patients with depression? An individual participant data meta-analysis. Clin Psychol Rev. Jul 2018;63:80-92. [CrossRef] [Medline]
  48. Spanhel K, Balci S, Feldhahn F, Bengel J, Baumeister H, Sander LB. Cultural adaptation of internet- and mobile-based interventions for mental disorders: a systematic review. NPJ Digit Med. Aug 25, 2021;4(1):128. [FREE Full text] [CrossRef] [Medline]
  49. Kokanovic R, Dowrick C, Butler E, Herrman H, Gunn J. Lay accounts of depression amongst Anglo-Australian residents and East African refugees. Soc Sci Med. Jan 2008;66(2):454-466. [CrossRef] [Medline]
  50. Loewenthal D, Mohamed A, Mukhopadhyay S, Ganesh K, Thomas R. Reducing the barriers to accessing psychological therapies for Bengali, Urdu, Tamil and Somali communities in the UK: some implications for training, policy and practice. Br J Guid Counc. Feb 2012;40(1):43-66. [CrossRef]
  51. Bartoletti I, Xenidis R. Study on the impact of artificial intelligence systems, their potential for promoting equality, including gender equality, and the risks they may cause in relation to non-discrimination. Council of Europe; 2023. URL: https://tinyurl.com/yaethk4n [accessed 2024-09-03]
  52. Kotera Y, Rennick-Egglestone S, Ng F, Llewellyn-Beardsley J, Ali Y, Newby C, et al. Assessing diversity and inclusivity is the next frontier in mental health recovery narrative research and practice. JMIR Ment Health. Apr 17, 2023;10:e44601. [FREE Full text] [CrossRef] [Medline]
  53. Hollis C, Falconer CJ, Martin JL, Whittington C, Stockton S, Glazebrook C, et al. Annual Research Review: Digital health interventions for children and young people with mental health problems: a systematic and meta-review. J Child Psychol Psychiatry. Dec 10, 2016. [CrossRef] [Medline]
  54. Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry. 2019;10. [FREE Full text] [CrossRef] [Medline]
  55. Hall CL, Sanderson C, Brown BJ, Andrén P, Bennett S, Chamberlain LR, et al. Opportunities and challenges of delivering digital clinical trials: lessons learned from a randomised controlled trial of an online behavioural intervention for children and young people. Trials. Dec 09, 2020;21(1):1011. [FREE Full text] [CrossRef] [Medline]
  56. Valentine AZ, Brown BJ, Groom MJ, Young E, Hollis C, Hall CL. A systematic review evaluating the implementation of technologies to assess, monitor and treat neurodevelopmental disorders: A map of the current evidence. Clin Psychol Rev. Aug 2020;80:101870. [FREE Full text] [CrossRef] [Medline]
  57. Gilbody S, Littlewood E, Hewitt C, Brierley G, Tharmanathan P, Araya R, et al. REEACT Team. Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial. BMJ. Nov 11, 2015;351:h5627. [FREE Full text] [CrossRef] [Medline]
  58. Lehtimaki S, Martic J, Wahl B, Foster KT, Schwalbe N. Evidence on digital mental health interventions for adolescents and young people: systematic overview. JMIR Ment Health. Apr 29, 2021;8(4):e25847. [FREE Full text] [CrossRef] [Medline]
  59. Khan K, Hollis C, Hall CL, Murray E, Davies EB, Andrén P, et al. Fidelity of delivery and contextual factors influencing children’s level of engagement: process evaluation of the online remote behavioral intervention for tics trial. J Med Internet Res. Jun 21, 2021;23(6):e25470. [CrossRef]
  60. Graham AK, Kwasny MJ, Lattie EG, Greene CJ, Gupta NV, Reddy M, et al. Targeting subjective engagement in experimental therapeutics for digital mental health interventions. Internet Interv. Sep 2021;25:100403. [FREE Full text] [CrossRef] [Medline]
  61. Koneska E, Appelbe D, Williamson PR, Dodd S. Usage metrics of web-based interventions evaluated in randomized controlled trials: systematic review. J Med Internet Res. Apr 16, 2020;22(4):e15474. [FREE Full text] [CrossRef] [Medline]
  62. Malatest International. Evaluation of SPARX. New Zealand Ministry of Health. URL: https://www.health.govt.nz/publications/evaluation-of-sparx [accessed 2024-09-02]
  63. Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, Lucassen MFG. The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. BMJ. Apr 18, 2012;344:e2598. [FREE Full text] [CrossRef] [Medline]
  64. Rennick-Egglestone S, Mawson S. Homes of stroke survivors are a challenging environment for rehabilitation technologies. JMIR Rehabil Assist Technol. Jun 17, 2021;8(2):e12029. [FREE Full text] [CrossRef] [Medline]
  65. Rennick-Egglestone S, Ramsay A, McGranahan R, Llewellyn-Beardsley J, Hui A, Pollock K, et al. The impact of mental health recovery narratives on recipients experiencing mental health problems: Qualitative analysis and change model. PLoS One. 2019;14(12):e0226201. [FREE Full text] [CrossRef] [Medline]
  66. Slade M, Rennick-Egglestone S, Llewellyn-Beardsley J, Yeo C, Roe J, Bailey S, et al. Recorded mental health recovery narratives as a resource for people affected by mental health problems: development of the Narrative Experiences Online (NEON) intervention. JMIR Form Res. May 27, 2021;5(5):e24417. [FREE Full text] [CrossRef] [Medline]
  67. Liu M, Schueller SM. Moving evidence-based mental health interventions into practice: implementation of digital mental health interventions. Curr Treat Options Psych. Oct 03, 2023;10(4):333-345. [CrossRef]
  68. Bergin AD, Vallejos EP, Davies EB, Daley D, Ford T, Harold G, et al. Preventive digital mental health interventions for children and young people: a review of the design and reporting of research. NPJ Digit Med. 2020;3:133. [FREE Full text] [CrossRef] [Medline]
  69. Wilhelmsen M, Høifødt RS, Kolstrup N, Waterloo K, Eisemann M, Chenhall R, et al. Norwegian general practitioners' perspectives on implementation of a guided web-based cognitive behavioral therapy for depression: a qualitative study. J Med Internet Res. 2014;16(9):e208. [FREE Full text] [CrossRef] [Medline]
  70. Rennick-Egglestone S. Health workers need tech training for themselves and their patients. Times Higher Education. 2022. URL: https://www.timeshighereducation.com/campus/health-workers-need-tech-training-themselves-and-their-patients [accessed 2024-09-02]
  71. Gooding P. Mapping the rise of digital mental health technologies: Emerging issues for law and society. Int J Law Psychiatry. 2019;67:101498. [CrossRef] [Medline]
  72. Lustgarten SD, Garrison YL, Sinnard MT, Flynn AW. Digital privacy in mental healthcare: current issues and recommendations for technology use. Curr Opin Psychol. Dec 2020;36:25-31. [FREE Full text] [CrossRef] [Medline]
  73. Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. Mar 24, 2021;23(3):e24387. [FREE Full text] [CrossRef] [Medline]
  74. Perez Vallejos E, Koene A, Carter CJ, Hunt D, Woodard C, Urquhart L, et al. Accessing online data for youth mental health research: meeting the ethical challenges. Philos Technol. Oct 12, 2017;32(1):87-110. [CrossRef]
  75. Hsin H, Torous J, Roberts L. An adjuvant role for mobile health in psychiatry. JAMA Psychiatry. Feb 2016;73(2):103-104. [CrossRef] [Medline]
  76. Information Commissioner's Annual Report and Financial Statements 2014/15. Information Commissioner's Office; 2015. URL: https://ico.org.uk/media/about-the-ico/documents/1431982/annual-report-2014-15.pdf [accessed 2024-09-02]
  77. Fair L. FTC says online counseling service BetterHelp pushed people into handing over health information and broke its privacy promises. Federal Trade Commission; Mar 3, 2023. URL: https://tinyurl.com/kfxtkmzp [accessed 2024-09-02]
  78. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on Medical Devices, Amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009, and Repealing Council Directives 90/385/EEC and 93/42/EEC. EUR-Lex, European Union. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32017R0745 [accessed 2024-09-02]
  79. General wellness: policy for low risk devices. Guidance for industry and Food and Drug Administration staff. US Food and Drug Administration. URL: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/general-wellness-policy-low-risk-devices [accessed 2024-09-02]
  80. Keutzer L, Simonsson US. Medical device apps: an introduction to regulatory affairs for developers. JMIR Mhealth Uhealth. Jun 26, 2020;8(6):e17567. [FREE Full text] [CrossRef] [Medline]


ADHD: attention deficit hyperactivity disorder
AI: artificial intelligence
CTIMP: Clinical Trial of an Investigational Medicinal Product
DMHI: digital mental health intervention
EU: European Union
FDA: Food and Drug Administration
ISRCTN: International Standard Randomized Controlled Trial Number
MHRA: Medicines and Healthcare products Regulatory Agency
NIHR: National Institute for Health and Care Research
RCT: randomized controlled trial
SaMD: Software as a Medical Device
TGA: Therapeutic Goods Administration


Edited by G Eysenbach; submitted 28.03.24; peer-reviewed by J Adler, A Smith; comments to author 11.05.24; revised version received 15.05.24; accepted 16.07.24; published 09.09.24.

Copyright

©Charlotte L Hall, Aislinn D Gómez Bergin, Stefan Rennick-Egglestone. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 09.09.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.