Published in Vol 25 (2023)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/43727.
Assessing Patient Adherence to and Engagement With Digital Interventions for Depression in Clinical Trials: Systematic Literature Review


Review

1Otsuka Pharmaceutical Development & Commercialization, Inc, Princeton, NJ, United States

2Oxford PharmaGenesis Inc, Newtown, PA, United States

Corresponding Author:

Ainslie Forbes, PhD

Otsuka Pharmaceutical Development & Commercialization, Inc

508 Carnegie Center Dr

Princeton, NJ, 08540

United States

Phone: 1 301 956 2702

Email: ainslie.forbes@otsuka-us.com


Background: New approaches to the treatment of depression are necessary for patients who do not respond to current treatments or lack access to them because of barriers such as cost, stigma, and provider shortage. Digital interventions for depression are promising; however, low patient engagement could limit their effectiveness.

Objective: This systematic literature review (SLR) assessed how participant adherence to and engagement with digital interventions for depression have been measured in the published literature, what levels of adherence and engagement have been reported, and whether higher adherence and increased engagement are linked to increased efficacy.

Methods: We focused on a participant population of adults (aged ≥18 years) with depression or major depressive disorder as the primary diagnosis and included clinical trials, feasibility studies, and pilot studies of digital interventions for treating depression, such as digital therapeutics. We screened 756 unique records from Ovid MEDLINE, Embase, and Cochrane published between January 1, 2000, and April 15, 2022; extracted data from and appraised the 94 studies meeting the inclusion criteria; and performed a primarily descriptive analysis. Otsuka Pharmaceutical Development & Commercialization, Inc (Princeton, New Jersey, United States) funded this study.

Results: This SLR encompassed results from 20,111 participants in studies using 47 unique web-based interventions (an additional 10 web-based interventions were not described by name), 15 mobile app interventions, 5 app-based interventions that are also accessible via the web, and 1 CD-ROM. Adherence was most often measured as the percentage of participants who completed all available modules. Less than half (44.2%) of the participants completed all the modules; however, the average dose received was 60.7% of the available modules. Although engagement with digital interventions was measured differently in different studies, it was most commonly measured as the number of modules completed, the mean of which was 6.4 (means ranged from 1.0 to 19.7) modules. The mean amount of time participants engaged with the interventions was 3.9 (means ranged from 0.7 to 8.4) hours. Most studies of web-based (34/45, 76%) and app-based (8/9, 89%) interventions found that the intervention group had substantially greater improvement for at least 1 outcome than the control group (eg, care as usual, waitlist, or active control). Of the 14 studies that investigated the relationship between engagement and efficacy, 9 (64%) found that increased engagement with digital interventions was significantly associated with improved participant outcomes. The limitations of this SLR include publication bias, which may overstate engagement and efficacy, and low participant diversity, which reduces the generalizability.

Conclusions: Patient adherence to and engagement with digital interventions for depression have been reported in the literature using various metrics. Arriving at more standardized ways of reporting adherence and engagement would enable more effective comparisons across different digital interventions, studies, and populations.

J Med Internet Res 2023;25:e43727

doi:10.2196/43727


Background

Depression has a substantial global humanistic burden [1] and is associated with lower academic performance [2,3], reduced adherence to treatments for other medical conditions [4], impaired quality of life [5,6], mental and somatic comorbidities [7], and premature death [1]. Worldwide, 279.6 million (5%) adults had depression in 2019 [8,9], including 19.4 million adults in the United States who experienced a major depressive episode [10]. Depression prevalence in the United States increased more than 3-fold during the COVID-19 pandemic; individuals with lower income and greater exposure to stress have had an even higher risk of depression during the pandemic [11].

In addition to the high humanistic costs, depression has considerable economic costs. Compared with people without depression, those with depression have higher excess direct medical costs and indirect costs of lower productivity (absence from work and not fully functioning when at work) [12,13]. In the United States, major depressive disorder (MDD) cost >US $326 billion and afflicted 17.5 million (7.1%) adults in 2018 [13]. Patients with MDD incur even higher medical costs and health care resource use when they experience relapse or recurrence [14].

Approximately 1 in 3 adults with MDD in the United States does not receive treatment [15]. Many obstacles prevent people from receiving treatment, including cost [16], a shortage of mental health care providers [16], an uneven distribution of providers [16,17], long wait times for appointments [18], side effects of medication [19], and stigma about mental illness [16].

Furthermore, among patients who do receive treatment, approximately half do not adhere to antidepressants for MDD [20], and between 40% and 60% of patients with depression do not respond to first-line antidepressant treatment (ADT) with adequate symptom relief [21,22]. Similarly, dropout is common in psychotherapy for depression [23,24], and approximately half of the patients with depression across 5 trials did not respond to cognitive behavioral therapy [25]. These patients have a heightened risk of impaired functioning, lower quality of life, comorbid conditions, and suicidal behavior [26-28]. Given the barriers to access and the limited response to existing treatments, there is an unmet need for new and more personalized treatments for depression [29-31].

Digital health technologies may help address many of the treatment needs of patients with depression [32,33]. These tools offer increased access to treatment (including asynchronous access outside the clinic) [34-36], can reduce stigma concerns [35], and have the potential to be cost-effective [34,37,38]. Digital technologies such as chatbots, telehealth, smartphone apps, and virtual reality are now more widely used in mental health care than before the COVID-19 pandemic [32,34,39]. Smartphones are being used to deliver on-demand mental health interventions based on cognitive and behavioral therapies [32]. The most popular depression and anxiety apps average 10,000 downloads each month, with thousands of daily users [40]. However, the effect of apps may be limited if users do not sustain engagement [41]. Indeed, not all apps are used for long; in general, 65% of people who download a smartphone app stop using it within 1 week [42]. Furthermore, not all apps are used equally; a study of the top 50 hits for depression apps and the top 50 hits for anxiety apps in the Google app store found that 6 apps accounted for 90% of active use, whereas most of the remaining apps had no monthly active users [40]. User engagement data from various studies indicate challenges with initial and sustained engagement [43-45]. For example, 1 real-world study of 12 depression and anxiety apps found that after downloading the apps, approximately two-thirds of the users stopped engaging within 1 week, with half of the users discontinuing within 1 day [43]. Another real-world study of 93 Android mental health apps found that only 3.9% of the users engaged with the apps 15 days later, and only 3.3% of the users sustained engagement on day 30 [44].

To design mental health apps that people will use, it is critical to understand how and why the most successful apps sustain user engagement [32,40,46]. Six core design principles have been identified by the professional services network PwC as essential to the success of these technologies: (1) integrating into the care and lives of patients; (2) fitting in with other relevant systems, such as health records and software; (3) delivering data to patients and providers in an intelligent and actionable way; (4) connecting with patients, providers, and payers; (5) measuring efficacy outcomes; and (6) being engaging so that patients use the technology regularly [47]. A recent analysis of 1000 digital health apps indicated that only 17.8% (n=178) had been studied scientifically, and only 5.6% (n=56) satisfied the criteria for being engaging (by incorporating gamification into the app design), with a mere 0.4% (n=4) that met all 6 aforementioned design principles [48]. There may be a difference in engagement and efficacy between a specific category of digital health tools called digital therapeutics (DTx) and other categories of digital health tools, although this is yet to be determined. DTx are evidence-based interventions delivered by high-quality software intended to prevent, manage, or treat specific diseases [37,49,50]. Unlike many currently available health and wellness apps, DTx must submit efficacy and safety data to be reviewed or cleared by regulatory bodies before reaching consumers [37,50,51]. This manuscript focuses on evidence-based digital interventions, rather than the broader pool of digital health tools.

Adherence and Engagement

To ultimately improve mental health apps, it is important to first select meaningful criteria for measuring adherence and engagement. However, there is no universally accepted definition for either term; both are used interchangeably in the literature, and each word has been used when defining the other [46,52-55]. Li et al [56] defined engagement as “the degree to which a patient adheres to an intervention,” citing Christensen et al [57]. However, rather than engagement, Christensen et al [57] defined adherence as “the extent to which individuals experience the content of the Internet intervention.” Flett et al [58], also citing Christensen et al [57], defined adherence as “whether individuals access the content and use it in the manner it was designed to be optimally effective.” In a systematic literature review (SLR) of the definitions of adherence to eHealth technology, Sieverink et al [55] concluded that the term adherence is often incorrectly applied to variables that simply measure the amount of use of a technology, arguing that a true metric of adherence requires 3 components: use, intended use, and a justification for how the level of intended use was determined. Only 8% of the studies they reviewed included all 3 components, whereas 37% included the first 2 components [55]. For this SLR, we adopted a less stringent version of this definition, including only the first 2 components (actual use and recommended use), because very few studies reported the third component justifying the level of recommended use (Textbox 1).

Similar to adherence, engagement has also been conceptualized in several ways. In human-computer interaction research, it has been interpreted as having cognitive (eg, attention and effort), emotional (eg, feeling interested or bored), and behavioral (eg, participation and action) dimensions [59]. In this SLR, we use a behavioral conceptualization of engagement, defining it as the extent to which patients interacted with an intervention (eg, number of hours used, modules used, log-ins, and days used).

Few studies have investigated the association between user adherence or engagement and the efficacy of digital interventions [60]; however, these studies suggest a complicated relationship. Some studies found a dose-response relationship (where users with higher engagement with the technology achieved better outcomes) [61,62], whereas others either found no such relationship [53,63,64] or indicated that users can be impacted by interventions that they do not fully complete [55]. To gain insight into user adherence and engagement and—where possible—their relationship with efficacy, we conducted an SLR of clinical trials, feasibility studies, and pilot studies of digital interventions for depression delivered via mobile apps, the web, or both. We sought to better understand the levels of adherence and engagement reported in the literature and how these are affected by factors such as the method of intervention delivery, access to psychotherapy, and the reception of human support with delivery. By synthesizing this information, we aimed to support research on digital interventions by identifying ways of standardizing the collection and reporting of adherence and engagement data.

Textbox 1. Key terms used in this systematic literature review.

Adherence

  • Actual digital intervention use compared with intended use (ie, the average number of modules used divided by the total number of modules available)

Engagement

  • The extent to which patients interact with an intervention (eg, number of hours used, modules used, log-ins, and days used)

Care as usual

  • Unrestricted access to psychiatrists, medication, psychotherapy, and primary care physicians

Active control group

  • Use of a depression treatment other than the digital intervention under investigation (eg, in-person therapy or web-based progressive muscle relaxation)

Searches

Our search covered interventions, therapies, and cognitive training programs that were computer assisted, internet assisted, smartphone based, or digital in some other way; designed to treat MDD or depression; and reported engagement outcomes. We searched for articles published between January 1, 2000, and April 15, 2022, in Ovid MEDLINE In-Process & Other Non-Indexed Citations, Embase, and the Cochrane Library (CDSR, DARE, CENTRAL, CMR, and NHS EED) using the search string in Textbox 2. We also included references cited by relevant review articles [44,57,65-68]. We excluded studies published before 2000 because digital therapies for depression were rare before then. The first round of screening was of the titles and abstracts, and the second round was of the full text. Multiple people were involved in the screening, with each article screened by 1 person at each round. When a reviewer was uncertain whether an article should be included or excluded, 2 other reviewers were consulted, and consensus was reached. We used DistillerSR (version 2.39.0; DistillerSR Inc) to deduplicate the searches and help coordinate the screening and extraction.

Textbox 2. Ovid search string used to conduct this systematic literature review.

(Cognitive behavio?r therapy technology OR Cognitive remediation technology OR Cognitive training technology OR Computer-assisted cognitive behavio?r therapy OR Computer-assisted cognitive remediation OR Computer-assisted cognitive training OR Computer-assisted intervention OR Computer-assisted psychotherapy OR Computer-assisted therapy OR Computerized cognitive behavio?r therapy OR Computerized cognitive remediation OR Computerized cognitive training OR Computerized intervention OR Computerized psychotherapy OR Computerized therapy OR Digital care OR Digital cognitive behavio?r therapy OR Digital cognitive remediation OR Digital cognitive training OR Digital health intervention OR Digital intervention OR Digital psychotherapy OR Digital therapy OR Digital therapeutic OR Internet cognitive behavio?r therapy OR Internet cognitive remediation OR Internet cognitive training OR Internet intervention OR Internet psychotherapy OR Internet therapy OR Internet-assisted cognitive behavio?r therapy OR Internet-assisted cognitive remediation OR Internet-assisted cognitive training OR Internet-assisted intervention OR Internet-assisted psychotherapy OR Internet-assisted therapy OR Mobile cognitive behavio?r therapy OR Mobile cognitive remediation OR Mobile cognitive training OR Mobile health application OR Mobile health intervention OR Mobile intervention OR Mobile psychotherapy OR Mobile therapy OR Online cognitive behavio?r therapy OR Online cognitive remediation OR Online cognitive training OR Online intervention OR Online psychotherapy OR Online therapy OR Psychotherapy technology OR Smartphone health application OR Smartphone health intervention OR Smartphone intervention OR Smartphone psychotherapy OR Smartphone therapy OR Therapy technology OR Web-based cognitive behavio?r therapy OR Web-based cognitive remediation OR Web-based cognitive training OR Web-based intervention OR Web-based psychotherapy OR Web-based therapy) AND (Major Depressive Disorder OR Depression) AND 
(Adherence OR Compliance OR Engagement OR Completion OR Complete OR Finish OR Download OR Log in OR Sign in OR Visit OR View OR Time)

Study Selection Criteria

We included studies that were of adults with depression or MDD as the primary diagnosis; were written in English; and described the results of randomized controlled trials (RCTs), pilot studies, or feasibility studies of digital interventions to treat depression, such as DTx and mobile mental health apps. Studies using medication- or telemedicine-based interventions were excluded, as were those lacking the evaluation of participant adherence or engagement outcomes. Full inclusion and exclusion criteria are described in Table 1, and the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist can be found in Multimedia Appendix 1. The review was not registered, and a separate protocol was not prepared; all methods are described herein.

Table 1. PICOSa criteria for the inclusion and exclusion of studies.
Population
  Included: Adults with MDDb or at least mild depression
  Excluded:
    • Adults with a primary mental health diagnosis other than MDD (eg, bipolar disorder)

Interventions
  Included: Digital intervention
  Excluded:
    • Medication
    • Digital systems that include medication (eg, digital medicine)
    • Telemedicine

Comparators
  Included: Not restricted
  Excluded: N/Ac

Outcomes
  Included: Adherence metrics defined based on quantifiable data about participants’ engagement with a digital product (which may be referred to as adherence, compliance, or engagement). Examples include the following: number or percentage of participants who completed the study or treatment, number or percentage of times participants logged into or started the intervention, duration of use or mean time spent on the intervention, number of modules used or activities and assignments completed (either from the total program [if fixed amount] or over the course of the study), number of recommended modules or assignments completed, and number and types of web pages visited within the intervention
  Excluded:
    • Economic outcomes
    • Studies that do not report compliance, adherence, or engagement outcomes or metrics
    • Studies that do not report the effects of digital mental health interventions on adherence, compliance, or engagement

Study designs
  Included: Clinical trials (including RCTsd as well as non-RCTs, such as pilot or feasibility studies, and protocols)
  Excluded:
    • Nonhuman studies
    • Preclinical studies
    • Short-term studies (study length <10 days)
    • Studies interrupted or prematurely terminated
    • Systematic reviews and meta-analyses
    • Observational studies
    • Real-world studies

aPICOS: patients, intervention, comparator, outcomes, study design.

bMDD: major depressive disorder.

cN/A: not applicable.

dRCT: randomized controlled trial.

Extraction

We extracted data from the articles that passed the full-text screening based on the inclusion and exclusion criteria listed in Table 1. The following information was recorded from each study: (1) study design (RCT, feasibility, or pilot); (2) the primary diagnosis of participants (depression or MDD); (3) the metric used to diagnose depression for inclusion in the study; (4) the number of participants included in the study; (5) participant demographics (age, sex or gender, race, and ethnicity); (6) the type of digital intervention used (web based, app based, both, or CD-ROM); (7) the name of the digital intervention; (8) the number of days participants were allotted to use the intervention; (9) whether the intervention was unguided or delivered with human support; (10) whether other forms of treatment for depression, such as psychotherapy and antidepressants, were permitted during the study period; (11) how care as usual (CAU) was defined; (12) the type of control group used as a comparator to the digital intervention group (active, waitlist, CAU, or none); (13) the adherence and engagement metrics used; (14) the level of adherence and engagement reported; (15) the primary efficacy outcome and any other efficacy outcomes, if reported; and (16) the relationship of adherence and engagement with clinical outcomes, if reported.

We appraised the quality of the included studies using the tool developed by Hawker et al [69], which provides a rubric for assigning a score of “good,” “fair,” “poor,” or “very poor” to the clarity of the abstract, introduction, methods, and results; the sampling strategy; the rigor of the data analysis; the discussion of ethical issues and bias; and the generalizability and usefulness of the study. For each study, the same individual extracted the data and rated the quality of the study, and a second individual performed a quality control check of the extracted data to ensure accuracy.

Analysis

The analysis in this SLR is primarily descriptive, whereby we calculated the overall averages of the means and median values reported by the studies and visually presented the topline findings. We analyzed the studies as a whole group and as subgroups formed according to the following factors: mode of intervention delivery (web based vs app based), access to psychotherapy (yes vs no), and reception of human support with intervention delivery (yes vs no). Studies were included in any analysis for which they had appropriate data and were excluded from analyses for which they were missing data. Because interventions varied in the number of modules available, we calculated the mean dose received (an adherence metric) by dividing the mean number of modules completed by the total number of available modules for each of the 38 studies in which this information was available. In addition, we calculated the Pearson correlation coefficient for the relationship between the number of hours participants spent engaging with digital interventions and the allotted treatment duration in the studies.
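The two calculations described above (the dose-received adherence metric and the Pearson correlation between engagement hours and allotted treatment duration) can be sketched in Python as follows. This is a minimal illustration only: the per-study values below are invented for demonstration and are not data from the reviewed studies.

```python
from statistics import mean

# Hypothetical per-study summary data (NOT values from this review).
studies = [
    {"mean_modules_completed": 4.2, "modules_available": 8,
     "mean_hours_engaged": 3.1, "allotted_days": 56},
    {"mean_modules_completed": 6.0, "modules_available": 7,
     "mean_hours_engaged": 5.4, "allotted_days": 84},
    {"mean_modules_completed": 2.5, "modules_available": 10,
     "mean_hours_engaged": 1.2, "allotted_days": 28},
]

# Dose received (adherence): mean modules completed divided by the
# total number of modules available, calculated per study and averaged.
doses = [s["mean_modules_completed"] / s["modules_available"] for s in studies]
mean_dose_pct = 100 * mean(doses)

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Correlation between hours of engagement and allotted treatment duration.
r = pearson([s["mean_hours_engaged"] for s in studies],
            [s["allotted_days"] for s in studies])
```

In practice, studies missing either variable would simply be dropped from the relevant calculation, mirroring the inclusion rule stated above.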


Studies Selected

The search yielded 1181 records, and an additional 590 records were identified by manually searching the reference lists of relevant review articles [57,65-68,70] (Figure 1). After removing the duplicates, 756 records were screened. During abstract screening, we excluded 566 (74.9%) of the 756 records, most often when the study pertained to a diagnosis other than depression (488/756, 64.6%) and when the study design (107/756, 14.2%) or intervention (11/756, 1.5%) was deemed to be outside the scope of this review (Figure 1 and Multimedia Appendix 2). A total of 190 articles underwent a full-text screening for eligibility. At this step of screening, we excluded 27.9% (53/190) of articles that did not have depression as a primary diagnosis, 9.5% (18/190) of articles that did not have engagement outcomes, 7.9% (15/190) of articles that had a study design beyond the scope of this review (eg, SLR, meta-analysis, case study, or study of an adolescent patient population), and 5.3% (10/190) of articles that had an intervention outside the scope of this review (eg, an intervention delivered via the web in real time by a therapist or an intervention to help patients taper off antidepressants; Multimedia Appendix 2). Of 190 studies, we included the remaining 94 (49.5%) studies in this review [61-64,71-161]: 65/94 (69.1%) identified by database search and 29/94 (30.9%) identified from relevant review articles (Multimedia Appendix 3 [61-64,71-161]). Of the 94 studies, 12 (13%) were protocols for clinical trials, and although they did not report results, we included them in this review to gauge how they planned to assess participant engagement and adherence. A summary of the main features of the articles can be found in Multimedia Appendix 4 [61-64,71-85,88-107,109-161].

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) diagram. MDD: major depressive disorder.

Study Characteristics and Types of Digital Interventions

Of the 94 studies included in this SLR, the majority were RCTs (n=69, 73%), 18 (19%) were pilot studies, and 7 (7%) were feasibility studies (Multimedia Appendix 4). The number of studies reporting on adherence to or engagement with digital interventions for depression increased over time: of the 94 studies covered in this review, 8 (9%) were published between 2006 and 2010, another 30 (32%) were published between 2011 and 2015, another 41 (44%) were published between 2016 and 2020, and 15 (16%) were published in 2021 and early 2022 (Figure S1 in Multimedia Appendix 5). Most interventions were delivered with human support, and most participants were female; in the studies reporting data on race, the majority of the participants were White. A total of 77 (82%) out of 94 studies used depression symptoms, as assessed by clinically validated scales, as the primary diagnosis, whereas only 17 (18%) specifically used MDD as an inclusion criterion. In total, the studies covered at least 68 different interventions: 47 (69%) different web-based interventions (with 10 additional web-based interventions not mentioned by name in the publications), 15 (22%) app-based interventions, 5 (7%) app-based interventions that were also accessible via the web, and 1 (1%) CD-ROM intervention. Overall, 14 (21%) out of 68 digital interventions were delivered in a language other than English: 5 (7%) in Spanish; 4 (6%) in German; and 1 (1%) each in Chinese, Swedish, Bahasa Indonesia, Norwegian, and Portuguese.

Of the total 94 studies, 10 (11%) studies had an active control group (as defined by the use of a depression treatment other than the digital intervention under investigation, such as in-person therapy or web-based progressive muscle relaxation); 50 (53%) studies had a CAU control group (the definition of CAU varied, but CAU generally included unrestricted access to psychiatrists, medication, psychotherapy, and primary care physicians); and 43 (46%) studies had a waitlist control (27 of which reported pairing with CAU). Moreover, 14 (15%) out of 94 studies mentioned providing financial incentives to the study participants for completing various tasks, such as trial questionnaires, follow-up assessments, and the study itself. The duration for which participants were given to access the intervention ranged from 14 to 365 days, with a mean of 77 days. Appraisal of the studies indicated that of the 94 studies, 45 (48%) were of high quality, 44 (47%) were of medium quality, and 5 (5%) were of low quality. The published clinical trial protocols could not achieve the highest appraisal score because they did not have results to rate.

Criteria Used to Assess Depression

The most common depression criterion for inclusion in these studies was having a certain minimum score on the Patient Health Questionnaire-9 (PHQ-9; 40/94, 43% of studies), followed by the Beck Depression Inventory-II (10/94, 11% of studies) and the Center for Epidemiologic Studies Depression Scale (CES-D; 9/94, 10% of studies; Figure S2 in Multimedia Appendix 5). Studies using the same tool for assessing depression often used different depression severity cutoffs for inclusion. For example, of the 36 studies using the PHQ-9 score (38% of the 94 studies), 15 (42%) set the minimum inclusion threshold at a score of at least 5 (mild depression); 16 (44%) used a threshold score of at least 10 (moderate depression); and the rest (5/36, 14%) used scores of 4 (minimal depression), 9 (mild depression), or 15 (moderately severe depression). The 17 studies with inclusion criteria requiring a diagnosis of MDD used the Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition: n=7, 41%; Fifth Edition: n=1, 6%) or the Mini International Neuropsychiatric Interview (version 5.0.0: n=4, 24%; Spanish version 6.0.0: n=1, 6%; Swedish version 6.0.0b: n=1, 6%; or unspecified version: n=3, 18%).

Participant Demographics

In total, the studies reviewed in this SLR included results from 20,111 participants. The mean number of participants included in each study was 314.9 (median 150; range 8-7884). The median age of the participants was 40.0 (mean age ranged from 20.9 to 69.6) years. Of the studies reporting data on participant sex or gender, all but 1 study (80/81, 99%) had a majority female population, with a median of 72.8% (range 48.7%-88.0%) female participants. Only 28% (23/82) of the studies reporting results included data on race, and of those studies, 91% (21/23) had a majority White participant population (median 76.7%).

Most Reported Adherence and Engagement Metrics

Two primary adherence outcomes were reported: the most common was the percentage of participants who completed the entire intervention (finished all available modules), reported by 35% (33/94) of the studies (Figure S3 in Multimedia Appendix 5). The second most common adherence metric, reported by 15% (14/94) of the studies, was the percentage of participants who completed the recommended number of modules, which, in some studies, was less than the total number of modules available.

The most common engagement outcome reported was the number of modules used or completed by participants, which was described in 43 (46%) of the 94 studies (Figure S3 in Multimedia Appendix 5). This engagement metric was also the earliest reported in the literature, along with study attrition, in 2006 (Figure S4 in Multimedia Appendix 5). Other commonly reported measures of engagement were the duration of the use of the intervention (number of hours or days used; 34/94, 36%), number of log-ins (19/94, 20%), percentage of users beginning or completing the first module (18/94, 19%), and attrition from the study (14/94, 15%); the definition of attrition varied, such as cases where participants did not complete the follow-up assessments, withdrew from the study, or did not complete any sessions of the intervention. Less frequently used measures of engagement included the percentage of participants who logged in at least once, the number of page visits, the number of activities completed, the percentage of users completing the last module of the intervention, and homework completion.

Efficacy Metrics Most Commonly Used

Efficacy metrics were often presented as change from baseline; however, some studies reported statistics comparing the scores of the intervention group with those of the control group at a specific time point without describing the change from baseline. The PHQ-9 score was the most common metric for assessing efficacy, used in 43 (46%) out of 94 studies (Figure S5 in Multimedia Appendix 5), followed by the Beck Depression Inventory (26/94, 28%), General Anxiety Disorder-7 (16/94, 17%), CES-D (10/94, 11%), EQ-5D (9/94, 10%), Hamilton Depression Rating Scale (9/94, 10%), and remission (7/94, 7%). Other efficacy outcomes reported in multiple studies included scores on the Quick Inventory of Depressive Symptomatology, Hospital Anxiety and Depression Scale, Montgomery-Asberg Depression Rating Scale, 12-Item Short Form Survey, Automatic Thoughts Questionnaire, Kessler Psychological Distress Scale, Dysfunctional Attitude Scale, Beck Anxiety Inventory, Work and Social Adjustment Scale, and Patient Health Questionnaire-8. The most common primary efficacy outcomes were the PHQ-9 and Beck Depression Inventory, assessed in 35 (37%) and 23 (24%) out of 94 studies, respectively.

Analysis of Adherence and Engagement Levels

The most common adherence metric was the percentage of participants who completed all the modules in the intervention; across studies, the mean was 44.2% (mean range 1.8%-94%; Figure 2). A similar adherence metric was the percentage of participants who completed the recommended number of modules: a mean of 56.3% of the participants completed the recommended number (3-7 modules), ranging from 36% to 80% across the 14 (15%) of the 94 studies that reported this metric. We calculated the average dose received by dividing the mean number of modules used by the number of available modules. For the 38 (40%) studies reporting these 2 variables, the mean dose-received adherence metric was 60.7% (range 13%-100%). Across the 15 (16%) studies that found a statistically significant effect of the digital intervention on the primary outcome, the mean dose received was 66.4%.
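As a concrete illustration, the dose-received calculation described above (mean number of modules used divided by the number of available modules, averaged across studies) can be sketched in a few lines of code. The study values below are hypothetical examples, not data from the reviewed trials.

```python
# Sketch of the dose-received adherence metric: for each study, divide the mean
# number of modules used by the number of available modules, then average
# across studies. The values below are hypothetical, for illustration only.
studies = [
    {"mean_modules_used": 5.2, "modules_available": 8},
    {"mean_modules_used": 3.0, "modules_available": 12},
    {"mean_modules_used": 6.4, "modules_available": 10},
]

def dose_received(mean_used: float, available: int) -> float:
    """Percentage of available modules used, on average, in one study."""
    return 100 * mean_used / available

doses = [dose_received(s["mean_modules_used"], s["modules_available"]) for s in studies]
mean_dose = sum(doses) / len(doses)  # pooled dose received across studies
```

Note that this pooling weights each study equally rather than by sample size, matching the descriptive approach used in this review.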

On average, participants used or completed 6.4 modules (mean range 1.0-19.7 modules; Figure 2). In the 27% (25/94) of studies cataloging the number of hours spent engaging with the intervention, participants spent a mean total of 3.9 (mean range 0.7-8.4) hours using the digital interventions. In the 20% (19/94) of studies that tracked the average number of user log-ins, participants averaged 39.6 (mean range 3.0-191.4) log-ins.

Study attrition ranged from 1% to 67.1% (mean 29.8%). The average number of days between when participants first started and when they stopped using the intervention ranged from 6.4 to 79.1 (mean 36.0) days. The average percentage of participants beginning the first module ranged from 51.7% to 96%. Similarly, the average percentage of participants completing the first module ranged from 44.1% to 100%. A total of 10% (9/94) of studies reported the mean length of time participants spent on a session, which ranged from 1.4 to 40.5 minutes (mean 19.7 min). A total of 5% (5/94) of studies reported the percentage of participants who logged in at least once to use the intervention, which ranged from 75.8% to 100% (mean 84.3%).

Analysis across the studies indicated that participants engaged with the intervention more when given a longer period to use it (ie, the number of hours participants spent engaging with digital interventions increased with the number of weeks allotted for treatment in the studies; Pearson correlation coefficient r=0.56; P=.01; n=19; Figure S6 in Multimedia Appendix 5).
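For readers who wish to reproduce this kind of check on their own extracted study-level data, a Pearson correlation of the sort reported above can be computed from paired per-study values. The paired values below are illustrative, not data from the review.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-study values: weeks allotted for treatment vs mean hours of use.
weeks = [4, 6, 8, 8, 10, 12, 12, 16]
hours = [1.2, 2.0, 3.1, 2.8, 4.0, 4.5, 5.2, 7.9]
r = pearson_r(weeks, hours)  # positive r indicates more use with longer access
```

A P value for r would additionally require the sample size (here, the number of studies), for example via a t test with n−2 degrees of freedom.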

Figure 2. Engagement and adherence outcomes for commonly reported metrics. This graph displays the average values for the most reported engagement and adherence metrics, calculated from the means reported by the studies. The dose-received metric was calculated from the studies (38/94, 40%) that reported the number of modules used and the number of available modules (by dividing the former by the latter). For each metric, the highest and lowest means reported are also displayed, as well as the number of studies from which the metric was calculated (note: some studies reported means from multiple digital interventions, accounted for in the footnotes). a42 values from 38 studies, b37 values from 33 studies, c51 values from 43 studies, d20 values from 19 studies, e12 values from 9 studies, f7 values from 5 studies.

Efficacy

Overall, 21% (20/94) of studies did not use a waitlist, CAU, or active control group; these included single-arm trials and studies in which all groups received the digital intervention but differed in some other way, such as the degree of support received or the mode of intervention delivery. Of the 79% (74/94) of studies that did have a control group, 74% (55/74) reported results comparing the control group with the intervention group, and 78% (43/55) of these found the digital intervention to be effective for at least 1 outcome. Participants in the digital intervention group had significantly greater improvement in at least 1 outcome than participants in the control group in 91% (30/33) of the studies with waitlist control groups versus 79% (30/38) of the studies with CAU control groups. Broken down further by control group type, superior efficacy of the digital intervention was found in 100% (9/9) of the studies with waitlist-only groups, 90% (19/21) of the studies with groups receiving CAU while on the waitlist, 55% (6/11) of the studies with CAU-only groups, 71% (5/7) of the studies with enhanced CAU (eg, CAU plus 15-min weekly phone check-ins, a psychoeducation information session, and updated training on depression for patients’ primary care physicians), and 67% (4/6) of the studies with active control groups. Of the 55 studies that reported results comparing the control group with the intervention group, 13 (24%) found no differences in outcomes between the digital intervention and control groups. Narrowing efficacy to just the primary outcome, 65% (36/55) of the studies found the digital intervention to be significantly more effective than the control; of these, 61% (22/36) were appraised as high-quality studies, compared with 58% (25/43) of the studies reporting any efficacy outcome that was significantly better in the digital intervention group than in the control group.

Comparison of the Use of Digital Interventions Between Studies That Allowed Psychotherapy and Those That Did Not

Only 1% (1/94) of studies listed ADT as an exclusion criterion. Some studies (33/94, 35%) allowed ADT while prohibiting other forms of treatment for depression, most commonly excluding participants from receiving psychotherapy. Most of the studies (60/94, 64%) used the digital intervention in conjunction with other forms of treatment, allowing for ADT, psychotherapy, and other forms of treatment such as inpatient care and distress call center lines. Some studies were explicit about permitting the use of other depression treatments, whereas others merely did not list the current use of psychotherapy or ADT as an exclusion criterion.

The percentage of studies finding efficacy for at least 1 outcome appeared to be slightly higher in the 18 studies of interventions used in a narrower context in which ADT was the only other depression treatment permitted (n=15, 83%) than in the 37 studies of interventions used in a more lenient context allowing a broader range of other treatments (n=28, 76%; Figure 3A).

In addition to slightly higher efficacy, interventions delivered in a setting with fewer permissible depression treatments had greater adherence and engagement. In studies that excluded psychotherapy compared with those that allowed a broader range of depression treatments, participants completed a higher dose of treatment (75.6% of the modules on average in 11 studies excluding psychotherapy vs 57% of the modules on average in 24 studies allowing a broader range of treatments) and were more likely to complete the entire digital intervention (54.6% vs 35.9%, n=16 and n=17 studies, respectively; Figure 3B). Furthermore, users in studies excluding psychotherapy logged in slightly more often (43.9 log-ins on average in 9 studies excluding psychotherapy vs 35.7 log-ins in 10 studies allowing a broader range of treatments) and had lower study attrition (18.6% vs 31.9%, n=5 and n=9 studies, respectively), although they spent fewer hours using the intervention (3.5 vs 4.0 hours, n=8 and n=16 studies, respectively; Figure 3B).

Figure 3. Summary of efficacy and engagement across the studies based on access to psychotherapy, intervention delivery with support, and mode of digital delivery. (A) Efficacy based on whether or not participants had access to psychotherapy. (B) Participant adherence and engagement based on whether access to psychotherapy was permitted. (C) Efficacy based on whether or not the digital interventions were delivered with support. (D) Participant adherence and engagement based on whether or not the interventions were delivered with support. (E) Efficacy based on whether the interventions were delivered via the web or via apps (the latter included interventions that were exclusively app-based and those that were app-based but could also be accessed via the web). (F) Participant adherence and engagement based on whether the interventions were delivered via the web or via apps. aEfficacy refers to at least 1 outcome in which the digital intervention group experienced significantly greater improvement than the control group, b30 values from 24 studies, c12 values from 11 studies, d21 values from 17 studies, e10 values from 9 studies, f31 values for 27 studies, g39 values from 33 studies, h3 values from 2 studies, i39 values from 32 studies, j9 values from 8 studies.

Comparison of the Use of Digital Interventions Delivered With Support and Those Delivered Without Support

In most of the studies (77/94, 82%), the digital intervention was delivered with support (eg, an onboarding call, weekly check-in calls from coaches, written feedback after completing each lesson, or adherence reminder calls), whereas in 18% (17/94) of studies, delivery was unguided. The percentage of studies demonstrating that the intervention had efficacy for at least 1 participant outcome (such as depression severity or quality of life) appeared to be similar whether the digital intervention was delivered with human support (34/44, 77%) or without support (9/11, 82%; Figure 3C). More participants completed all the modules when given support (47% of participants on average, n=27 studies) than when unguided (29% of participants, n=6 studies; Figure 3D); however, they spent slightly less time using the intervention on average (3.8 vs 4.2 hours, n=18 and n=6 studies, respectively; Figure 3D). Because only 2 studies of unguided interventions reported enough information to calculate dose-received values, we refrained from interpreting that comparison.

Comparison of the Use of Web-Based Versus App-Based Interventions

We investigated the difference between interventions that were web based and those that were app based (which included interventions that were exclusively app based and those that were app based but could also be accessed via the web). Studies of web-based interventions were published, on average, 3 years earlier than studies of app-based interventions (2015 vs 2018; Figure S1 in Multimedia Appendix 5). When assessed by delivery type, the percentage of studies finding efficacy for at least 1 outcome was higher for app-based interventions, with 76% (34/45) of the studies using web-based interventions and 89% (8/9) of the studies using app-based interventions finding the digital intervention to be significantly more effective than the control group (Figure 3E). However, this finding should be interpreted with caution because the sample was heterogeneous, and the number of app-based interventions was small (n=9). The mean number of hours of use of the intervention was higher in the web-based studies than in the app-based studies (4.8 hours in 16 studies of web-based interventions vs 2.0 hours in 7 studies of app-based interventions), as was the dose received (62.9% vs 55.2%; Figure 3F). The web-based studies had a lower mean number of log-ins (21.0 log-ins, n=11 studies) than the app-based studies (60.2 log-ins, n=8 studies; Figure 3F), perhaps owing to differences in intervention design.

Relationship of Efficacy With Adherence and Engagement

Of the 94 studies, only 14 (15%) reported on a dose-response relationship, investigating an association between the level of adherence to or engagement with the digital intervention and efficacy outcomes (change in depression, anxiety, or stress scores; Table 2). Of these studies, 9 (64%) detected a significant relationship between adherence or engagement and at least 1 efficacy outcome.

Donkin et al [61] found that participants who improved by ≥5 points on the PHQ-9 had completed slightly more activities per log-in and spent slightly more time per log-in than those who did not. Similarly, MacLean et al [86] and Wright et al [73] reported a correlation between the PHQ-9 score and the number of modules completed [73,86]. Bur et al [71] noted a positive correlation between a composite adherence score (averaged z scores of the time spent on the program and the numbers of clicks, topics worked on, and exercises completed) and improvement in the PHQ-9 score.
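A composite adherence score of this kind, averaging standardized values of several usage metrics, can be sketched as follows. The per-participant values are invented for illustration and do not reproduce the data of Bur et al.

```python
from statistics import mean, pstdev

def z_scores(values):
    """Standardize a list of values: subtract the mean, divide by the SD."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Hypothetical engagement metrics for 4 participants.
clicks    = [120, 80, 200, 150]
topics    = [3, 2, 6, 4]
exercises = [5, 1, 9, 6]
minutes   = [90, 40, 160, 110]

# Composite adherence score: each participant's mean z score across the metrics.
standardized = [z_scores(metric) for metric in (clicks, topics, exercises, minutes)]
composite = [mean(participant) for participant in zip(*standardized)]
```

Standardizing before averaging keeps any single metric (eg, clicks, which has a much larger scale than topics) from dominating the composite.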

Moreover, Bolier et al [113] indicated that participants who completed >1 lesson had significantly greater improvement in their Hospital Anxiety and Depression Scale anxiety subscale scores at 2-month follow-up than those who completed ≤1 lesson. Mohr et al [62] reported that several engagement metrics were significantly associated with improvement in the PHQ-9 score: the number of days on which participants logged in, the number of lessons they viewed, the total number of tools they used, and the variety of tools they used. A subsequent study by Kelders et al [105] discovered that the completion of all modules and the lesson reached were associated with improvement in CES-D and Hospital Anxiety and Depression Scale anxiety subscale scores after the intervention and at follow-up. Krämer and Köhler [132] did not directly investigate the relationship between engagement and efficacy outcomes; however, the effect size of treatment on CES-D score improvement was greater in the 52% of participants who completed at least 5 modules than in the sample as a whole.

Moberg et al [88] found that participants who used the digital intervention to record more of their thoughts in the app had greater reductions in anxiety and in the Depression Anxiety and Stress Scales-21 stress subscale score from the posttreatment time point to follow-up; however, the same participants showed a smaller reduction in stress from baseline to the posttreatment time point.

Of the 14 studies, 5 (36%) did not find a dose-response relationship. Of these 5 studies, 4 (80%) found the digital intervention to be effective for at least 1 outcome compared with the control, without finding efficacy to be correlated with engagement or adherence [63,64,87,102]. Batterham et al [78] reported that there was no relationship between module completion (none vs some) and either the PHQ-9 or General Anxiety Disorder-7 score; however, they also found no effect of the intervention on depression or anxiety overall.

Table 2. Summary of the studies that investigated a relationship between engagement and participant outcomes.
| Intervention type | Study | Digital intervention | Treatment duration | Engagement outcome | Efficacy outcome | Is there a relationship between engagement and efficacy? |
| --- | --- | --- | --- | --- | --- | --- |
| Web based | Donkin et al [61], 2013 | E-couch | 12 weeks | Number of activities completed per log-in | Participants whose PHQ-9a score improved by ≥5 points compared with those whose score did not | Yes (mean difference 0.20 activities, P=.01) |
| Web based | Donkin et al [61], 2013 | E-couch | 12 weeks | Time spent per log-in | Participants whose PHQ-9 score improved by ≥5 points compared with those whose score did not | Yes (mean difference 3.26 min, P=.01) |
| Web based | Bolier et al [113], 2013 | Psyfit | 2 months | Participants completing >1 lesson compared with those completing ≤1 lesson | HADS-Ab at 2-month follow-up | Yes (Cohen d=0.16 vs Cohen d=0.43, P=.03) |
| Web based | Bolier et al [113], 2013 | Psyfit | 2 months | Participants completing >1 lesson compared with those completing ≤1 lesson | CES-Dc at 6-month follow-up | No (but there was a trend; Cohen d=0.47 vs Cohen d=0.87, P=.07) |
| Web based | Bolier et al [113], 2013 | Psyfit | 2 months | Number of lessons completed | Change in score on MHC-SFd, WHO-5e, CES-D, HADS-A, and MOS SFf | No (no clear relationship between adherence and effect size for any of the scales) |
| Web based | MacLean et al [86], 2020 | The Journal | 12 weeks | Number of lessons completed | PHQ-9 score at week 12 | Yes (r=−0.436, P=.002) |
| Web based | Mohr et al [62], 2013 | moodManager | 12 weeks | Number of log-in days | Improvement in PHQ-9 score | Yes (β=.14, P=.02) |
| Web based | Mohr et al [62], 2013 | moodManager | 12 weeks | Number of lessons viewed | Improvement in PHQ-9 score | Yes (β=.40, P=.01) |
| Web based | Mohr et al [62], 2013 | moodManager | 12 weeks | Total tool use | Improvement in PHQ-9 score | Yes (β=.01, P=.01) |
| Web based | Mohr et al [62], 2013 | moodManager | 12 weeks | Variety of tools used | Improvement in PHQ-9 score | Yes (β=1.21, P=.03) |
| Web based | Kelders et al [105], 2015 | Living to the full | 12 weeks | Participants completing 0 to 5 lessons compared with those completing all 9 lessons | Change in CES-D score from baseline to follow-up | Yes (Cohen d=0.64 for participants who completed 0 to 5 lessons vs Cohen d=1.20 for participants who completed all 9 lessons, P<.001) |
| Web based | Kelders et al [105], 2015 | Living to the full | 12 weeks | Participants completing 0 to 5 lessons compared with those completing all 9 lessons | Change in HADS-A score from baseline to follow-up | Yes (Cohen d=0.33 for participants who completed 0 to 5 lessons vs Cohen d=1.12 for participants who completed all 9 lessons, P<.001) |
| Web based | Kelders et al [105], 2015 | Living to the full | 12 weeks | Adherence and lesson reached | CES-D and HADS-A at the postintervention time point and follow-up | Yes (all regression analyses were significant, with P<.001 and β=.242 to .422, supporting a dose-response relationship; specific values not reported) |
| App based | Moberg et al [88], 2019 | Pacifica | 30 days | Number of times a participant made a thought record | Anxiety reduction from the posttreatment time point to follow-up | Yes (the more thought records a participant made, the more their anxiety score decreased; β=−.10, P=.02) |
| App based | Moberg et al [88], 2019 | Pacifica | 30 days | Number of times a participant made a thought record | Stress reduction from baseline to the posttreatment time point and from the posttreatment time point to follow-up | Yes (the more thought records a participant made, the smaller the reduction in their stress from the pretreatment to posttreatment time points [β=−.30, P<.01] but the greater the stress reduction from the posttreatment time point to follow-up [β=−.47, P<.05]) |
| App based | Moberg et al [88], 2019 | Pacifica | 30 days | Number of times a participant made a thought record | Change in depression composite score, based on a composite of scores on the PHQ-8g and DASS-21h depression subscale | No (P>.13) |
| Web based | Bur et al [71], 2022 | HERMES | 8 weeks | Composite adherence score (averaged z scores of the number of clicks, number of topics worked on, number of completed exercises, and time spent on the program) | Change in PHQ-9 score | Yes (Kendall τ=0.11, P=.03) |
| Web based | Krämer and Köhler [132], 2021 | GET.ON Mood Enhancer | 7 weeks | CES-D | Change in CES-D score | Yes (the authors did not directly investigate; however, the effect size of the intervention on CES-D scores was Cohen d=0.55 for all participants and larger, Cohen d=0.75, for participants who completed ≥5 modules) |
| Web based | Wright et al [73], 2022 | Good Days Ahead | 12 weeks | Number of modules completed | Change in PHQ-9 score | Yes (estimate −0.85, P=.009) |
| Web based | Meyer et al [102], 2015 | Deprexis | 3 months | Mean minutes users engaged with the program (use time) | Change in PHQ-9 score | No (P>.20) |
| Web based | Moritz et al [63], 2012 | Deprexis | 8 weeks | Number of sessions completed | Change in BDIi score | No (r<0.11, P>.30) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Number of app sessions | Change in PHQ-9 score | No (r=−0.03; 95% CI −0.22 to 0.15) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Time until last use | Change in PHQ-9 score | No (r=−0.14; 95% CI −0.31 to 0.05) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Number of days used | Change in PHQ-9 score | No (r=−0.05; 95% CI −0.23 to 0.14) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Number of app sessions | Change in GAD-7j score | No (r=0.01; 95% CI −0.17 to 0.19) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Time until last use | Change in GAD-7 score | No (r=−0.04; 95% CI −0.22 to 0.13) |
| App based | Graham et al [87], 2020 | IntelliCare suite | 8 weeks | Number of days used | Change in GAD-7 score | No (r=0.02; 95% CI −0.15 to 0.20) |
| App based | Stiles-Shields et al [64], 2019 | Boost Me and Thought Challenger | 6 weeks | App use | Change in PHQ-9 score | No (P>.05) |
| Web based | Batterham et al [78], 2021 | myCompass 2 | 7 weeks | Module completion (0 vs 1-14 modules) | PHQ-9 score | No (P=.74) |
| Web based | Batterham et al [78], 2021 | myCompass 2 | 7 weeks | Module completion (0 vs 1-14 modules) | GAD-7 score | No (P=.87) |

aPHQ-9: Patient Health Questionnaire-9.

bHADS-A: Hospital Anxiety and Depression Scale anxiety subscale.

cCES-D: Center for Epidemiological Studies Depression Scale.

dMHC-SF: Mental Health Continuum-Short Form.

eWHO-5: 5-item World Health Organization Well-being Index.

fMOS SF: Medical Outcomes Study-Short Form.

gPHQ-8: Patient Health Questionnaire-8.

hDASS-21: Depression Anxiety and Stress Scales-21.

iBDI: Beck Depression Inventory.

jGAD-7: General Anxiety Disorder-7.


Principal Findings

Overview

This SLR of participant adherence to and engagement with digital interventions for depression was comprehensive, including 94 publications and 20,111 participants. Of the 55 publications that reported results on the comparison of a digital intervention group with a control group, 78% (n=43) found that the digital intervention group had greater improvement in at least 1 efficacy outcome. Participant adherence and engagement varied widely in how they were defined and measured, as well as in the levels reported. Although acknowledging that the field lacks universal definitions of adherence to and engagement with digital interventions [46,52-55], we categorized adherence metrics as those that involved a comparison of intended use with actual use of the intervention [55] and engagement metrics as those that involved an assessment of the extent to which participants interacted with an intervention (eg, number of hours used, modules used, log-ins, days used). Only 6% (6/94) of studies of engagement with digital interventions for depression were published by 2010, with the majority (56/94, 60%) published in 2016 or after. As the number of such studies continues to increase, it is important for clinical trials of digital interventions to align with a common set of core adherence and engagement metrics. This alignment will encourage the consistent reporting of user engagement to make comparisons of digital interventions across studies more meaningful.

Engagement With Digital Interventions for Depression

Measuring DTx engagement is complicated. Studies of digital interventions often use metrics of engagement that do not describe the quality of the interaction with the intervention material [58]. In the studies reviewed in this SLR, the most common engagement metric was the number of modules used. However, this metric does not account for the varying lengths and qualities of the modules across digital interventions, the degree of attention the users paid to the modules as they proceeded, or their level of retention of module content. It also does not distinguish between users who were repeating modules and those who moved sequentially through the intervention, nor does it reveal task time analytics. Moreover, for this SLR, the number of modules completed was not an ideal metric for comparing different types of digital interventions because the number of available modules varied considerably across interventions (from 4 to 20).

To address this issue, we calculated a dose-received metric of adherence from the 40% (38/94) of studies that made available both the mean number of modules used and the total number of available modules. This is not a perfect comparative metric because it does not factor in a module’s quality, length, or difficulty or the effort required to complete the module, which would require researchers to track and report more variables than have been published to date. Furthermore, not all digital interventions can be measured by this dose-received metric, such as chatbots and mood trackers. Thus, dose received is a rough but practical comparison tool. The dose-received metric calculated in this SLR revealed that the participants received an average dose of 60.7% of the modules available in the digital interventions.

Related adherence metrics were the percentage of participants who completed the recommended number of modules (56.3%, reported by 14/94, 15% of studies) and the percentage of participants who completed all available modules (44.2%, reported by 33/94, 35% of studies). These values were within the range of the 43% to 99% completion rates reported by other clinical studies of digital interventions [46] and higher than the 0.5% to 28.6% completion rates reported in the real-world use of depression and anxiety apps [46]. As a rough comparison with the real-world use of other depression treatments, 35% of patients were classified as adherent to antidepressants, defined as a proportion of days covered of at least 80% [162]; furthermore, although 12 to 20 sessions of cognitive behavioral therapy are recommended for depression [163], one study reported a median of 5 psychotherapy sessions attended [164], and another found that most patients attended only 1 session [165].

It is possible that the quality of the digital interventions studied in clinical trials is higher than that of apps released to the public without being tested for efficacy. Although 78% (43/55) of the studies reviewed here that reported a comparison with a control group found the digital intervention to be effective for at least 1 participant outcome, most depression apps on the market do not have any efficacy data [32].

Adherence, Engagement, and Efficacy Based on Whether the Studies Allowed Participants to Access Psychotherapy

The effect size of a digital intervention can be impacted by the type of control group used for comparison [166,167]. A meta-analysis of internet-delivered cognitive behavioral therapy studies revealed that interventions had higher effect sizes when compared with a waitlist control versus a CAU group [166]. Similarly, in a meta-review of meta-analyses examining the outcomes of RCTs of app-based interventions, Goldberg et al [167] reported that digital interventions had a small but significant effect on improving depression symptoms compared with inactive controls but not compared with active controls. In line with these conclusions, in this SLR, the percentage of studies that found the digital intervention group to have a significantly greater improvement in at least 1 outcome depended on the type of control group used as a comparator, with a greater percentage of waitlist control studies finding efficacy than active control studies. This indicates that app-based interventions for depression may have the highest efficacy when used before initiating treatment, such as for patients on waitlists or those without feasible access to mental health care. Furthermore, it underscores the importance of noting whether the digital intervention was used as a monotherapy or an adjunctive treatment and of carefully considering what type of control group is used when analyzing studies of digital interventions.

All 94 studies reviewed here except 1 (1%) allowed antidepressant use along with the digital intervention. Just over one-third (33/94, 35%) of the studies excluded psychotherapy. The remaining studies allowed for the digital intervention to be used as an adjunct to medication, psychotherapy, and other forms of treatment. Compared with participants in studies allowing broad access to treatment, participants in studies where psychotherapy was not allowed had higher adherence and engagement, receiving a higher dose, completing the intervention at higher rates, and logging in more often (although spending less overall time using the intervention). There may have been more of an incentive to engage with the intervention when psychotherapy was not permitted as a treatment. Thus, digital interventions may be more valuable when psychotherapy is inaccessible or not the patient’s preference. However, there is no consensus in the field yet on whether digital interventions are most useful before beginning psychotherapy or best when used as stand-alone treatments [54,68,86,168,169]. Although some studies have indicated that digital interventions can decrease symptoms and benefit waitlisted patients before starting therapy [168,170], others have found no benefit [86,171].

A missing piece is the understanding of which patients will benefit most from digital interventions. Levin et al [172] found high satisfaction ratings from students using an app while on a waitlist for a college counseling center; however, recruitment was slow, and researchers concluded that the app may only interest a select subsample of the population. Prior research has indicated that adherence to digital interventions is affected by age, symptom severity, and gender, but the direction of the effects has differed from study to study [54]. Karyotaki et al [168] have called for future studies to include more participants from disadvantaged backgrounds (who may encounter issues using digital interventions owing to poverty or education level) and to methodically investigate factors that could impact the effectiveness of a digital treatment, including the duration of depression symptoms, comorbidities, the number of prior depressive episodes, and demographics. Such research is crucial for designing digital interventions for patients who may benefit the most from them and ensuring access for these populations.

The Diverse Support Offered With Digital Interventions

Previous research has indicated that digital interventions are more successful when support is provided [168]; however, the appropriate amount of guidance to offer has yet to be established [54]. Most of the studies reviewed in this SLR offered some form of support, and the form varied widely. Support was sometimes marginal, such as a 10-minute phone call in the second week of the intervention [90], minimal email contact with psychologists [133], or an optional onboarding phone call [131]. Support could also be intensive, such as a psychiatrist appointment, 12 weekly 30- to 60-minute phone calls with a coach, and written contact between appointments [161]. One study used intense monitoring, facilitated early intervention, and assisted with personal crisis management [100].

Common support measures offered were weekly phone calls with a coach [62,64,73,74,80,81,84,86,94,103,109,116,119,132,134,139,148,159,161] and adherence reminders [61,63,95,107,112,122,129,144,153,158]. Many studies provided coach feedback to participants after each lesson [72,75,77,99,106,108,111,122,124,130,135,141,144,145,154,155,157,158]. A few studies integrated the digital intervention with face-to-face meetings, either with a therapist [104,112] or in a group counseling session [85,125]. Moreover, 1 (1%) study ensured equal access by giving participants phones with phone plans for the duration of the study [116].

Interestingly, efficacy did not appear to be considerably impacted by whether support was provided. The interventions delivered with support were as likely to have efficacy for at least 1 participant outcome as those delivered without support. The literature on this topic may be less conclusive than previously thought. Although multiple studies have shown that interventions delivered with support have larger effect sizes than unguided interventions, it has been suggested that newer unguided digital interventions with features such as engagement reminders may have similar efficacy to interventions delivered with clinician support [173]. The patient population may also modulate the effect of guidance: Karyotaki et al [168] found that guidance did not impact the efficacy of a digital intervention for patients with a PHQ-9 score of 5 to 9; however, patients with higher PHQ-9 scores had better outcomes when the intervention was delivered with support.

Although support did not appear to impact efficacy in the reviewed studies, it did play a role in adherence. Studies that provided support had greater adherence than those that did not, with more participants completing all the modules in the former. It is not clear whether support improves adherence directly or through indirect mechanisms such as hope induction [54], nor is it clear what degree or form of support is best or whether this differs among patients. Only 1 (1%) study differentiated support on the basis of baseline participant characteristics by offering email support from a therapist to participants with a PHQ-9 score >10 [143]. It is important to determine which patients will benefit most from support and to direct future resources to these patients to optimize personalized treatment.

Web-Based Versus App-Based Interventions

Engagement with web-based interventions for depression has been studied for longer than engagement with app-based interventions; however, the number of publications on the latter has been steadily increasing since 2016.

Although a lower proportion of studies of web-based interventions than of app-based interventions reported efficacy, participant adherence and engagement were higher for web-based interventions: participants spent a greater number of hours, on average, using the web-based intervention and averaged a higher dose. However, the number of studies with engagement data from apps was relatively low (n=9); thus, it is unknown whether such a pattern would hold for a larger sample size. It is also possible that web-based interventions may have been designed to take more time, on average, than mobile apps. One small study investigated the same intervention in both web-based and app-based forms and did not find a statistically significant difference in the number of participants completing all lessons (14/20, 70%, vs 10/15, 67%) or a difference in efficacy in terms of preintervention to postintervention scores on the PHQ-9, Beck Depression Inventory-II, or Kessler Psychological Distress Scale [156]. Future research should consider whether delivering a digital intervention via the web or an app is better for patient engagement and efficacy.

The Relationship of Adherence and Engagement With Efficacy

Many studies of digital interventions assume a linear dose-response relationship, in which greater engagement with the intervention yields greater efficacy, although such a relationship does not always exist or may not be linear [55,58]. As a result, the level of engagement recommended to achieve efficacy from digital interventions is often not justified with data [58]. Some patients may stop engaging with a digital intervention early because they are feeling better [174]. Further, an effective use pattern may differ from user to user [55]. Using machine learning and a data set of >54,000 adults using an internet-based cognitive behavioral therapy platform for depression and anxiety, Chien et al [175] identified 5 discrete subtypes of users based on engagement and concluded that the level of engagement was not always proportional to clinical improvements. They found that 1 subtype of users engaged more with core modules and mood tracking, whereas another engaged with relaxation and mindfulness tools; different forms of engagement still produced results, with each of the 5 subtypes experiencing an average reduction in the PHQ-9 score of at least 4.4 after 14 weeks [175]. Therefore, it may be useful to investigate >1 metric of engagement when measuring a dose-response relationship because the choice of metric alone may affect the ability to detect such a relationship [55].

In this SLR, the PHQ-9 was the most frequently used tool for both determining participant eligibility for the trial and reporting the efficacy of the intervention. Most of the studies with a control group found the digital intervention to be effective for at least 1 outcome compared with the control. However, only 14 (15%) of the 94 studies reported having analyzed the relationship between participant adherence or engagement and efficacy; approximately two-thirds (9/14, 64%) indicated such a relationship, with each finding increased efficacy with increased adherence or engagement.

The relationship between engagement and efficacy is complex. As reviewed here, it is not a unanimous finding in the existing literature that the more patients engage with the intervention, the better their outcomes. Therefore, it is difficult to determine the extent of engagement necessary to improve depressive symptoms.

Underreporting of Participant Race and Ethnicity Data

Sociocultural factors influence the acceptability and efficacy of digital interventions [176,177], yet fewer than a third of the studies (23/82, 28%) in this SLR reported on the race or ethnicity of the participants. Despite the importance of diversity, many publications from clinical trials do not report race and ethnicity data [178,179]. Even after the 2017 requirement to submit race and ethnicity data with trial results to the clinical trial database registry, only 62.4% of the studies have included these data in their publications [178]. In a review of 342 RCTs on interventions for depression from 1981 to 2016, Polo et al [179] found that only 43.3% (n=148) of the studies reported on the participants’ race or ethnicity. The studies of digital interventions for depression reviewed here fell far below this average, with only 28% (23/82) including race and ethnicity data. Among those that did report such data, the lack of diversity was striking. Polo et al [179] reported that a mere 16.7% (n=57) of the studies included at least 50% ethnic minority participants; in this review, it was only 8.7% (n=8). Presumably, a few more of the studies we reviewed, such as those conducted in Latin America, did include at least 50% ethnic minority participants; however, because these studies did not report race or ethnicity data, their representation cannot be quantified. It is crucial that studies of digital interventions report race and ethnicity data and include underrepresented populations.

Considerations for Clinicians and Patients

There are several factors for patients and clinicians to consider when deciding whether to use a digital intervention for depression. First, apps should be evaluated for their efficacy and quality [163]. Several organizations and tools can help evaluate digital interventions, including the American Psychiatric Association’s App Advisor [180], One Mind’s PsyberGuide [181], the Organisation for the Review of Care and Health Apps’ App Library [182], the Mobile App Rating Scale [183], the mobile health app trustworthiness checklist [184], the Framework to Assist Stakeholders in Technology Evaluation for Recovery [185], and the M-Health Index and Navigation Database [186]. A second factor to consider is patient traits. An SLR of 208 articles on digital mental health interventions concluded that the patients who were most likely to engage with digital interventions were women; had high digital health literacy; and had friends, family, and health care providers who supported their use of digital interventions [187]. Extroversion, fatigue, and more severe depression symptoms were barriers to engagement [187]. The same review pointed to program design as another important factor. Patients were more engaged with interventions that enabled them to connect with other users; had content that was credible, customizable, and relevant; and instilled a sense of privacy [187]. Additional considerations include patients’ commitment to engaging with a digital intervention for enough time to gain benefits, whether they prefer a guided or an unguided intervention, and whether they prefer to use the intervention as an adjunct to other depression treatments or as a stand-alone treatment.

Limitations and Strengths

One of the limitations of this SLR is publication bias; on average, studies with statistically significant results and larger effect sizes are more likely to be published than those with negative results or small effect sizes [188]. In addition, there is a potential for bias toward studies with higher adherence and engagement. Furthermore, studies could have reported only some exploratory results, but not others, and some reported on only participants who completed the intervention. Considering all these factors, this review may overestimate the effects of and engagement with digital interventions.

Additional limitations include that each article was screened by 1 reviewer and that, because this is an SLR rather than a meta-analysis, our results are descriptive rather than statistical. Another limitation is that fewer than half of the studies reported both the total number and average number of modules used; therefore, the dose-received metric we calculated was drawn from a limited number of studies (38/94, 40%), making it less generalizable. Furthermore, it is possible that the relationship between engagement and efficacy was examined in an exploratory or a post hoc analysis in some studies that did not report a negative or nonsignificant finding, which would overinflate the positive findings in the literature. Although this SLR includes data from >20,000 participants, it is possible that some of these individuals were not unique if they participated in >1 of the studies.

Yet another limitation is that these studies were biased toward a White participant population, which restricts the generalizability of the findings. Other important demographic information, such as socioeconomic status, level of education, health care coverage and accessibility, and geographic location, was rarely, if ever, reported. This lack of reported demographic data can and must be addressed in future studies if we are to move toward equitable access and personalized medicine.
In addition, studies using only the PHQ-9 as a screening tool may have unintentionally included some individuals experiencing a bipolar depressive episode. Other covariates that were not accounted for included concurrent medication, other health comorbidities, and the severity of depression. Further, these findings may not reflect more recent modes of treatment that were outside the scope of this review, such as chatbots.
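To make the dose-received metric discussed above concrete, the following is a minimal sketch; it assumes (as a simplifying assumption, since the formal definition appears in the Methods) that dose received is the average number of modules participants completed divided by the total number of modules the intervention offered, and the numbers used are hypothetical:

```python
def dose_received(avg_modules_completed: float, total_modules: int) -> float:
    """Proportion of the intervention received, under the assumed definition:
    average modules completed divided by total modules offered."""
    if total_modules <= 0:
        raise ValueError("total_modules must be positive")
    return avg_modules_completed / total_modules

# Hypothetical study: participants completed 4.5 of 8 modules on average.
print(dose_received(4.5, 8))  # 0.5625
```

Because this calculation requires both the average and the total number of modules, it could be computed for only the minority of studies (38/94, 40%) that reported both values.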

Despite these limitations, this SLR highlights many current problems that, if addressed, will strengthen the field. The strengths of this review include the relatively large number of articles analyzed (n=94) and the cumulative number of participants included in the studies (n=20,111).

Conclusions

These findings have different implications for different stakeholders. For digital intervention developers, a key takeaway is that even in the controlled environments of research trials, participants with depression used the interventions, on average, for only 3.9 hours in total (and app-based interventions, on average, for only 2.0 hours in total). Thus, it would be prudent for developers to front-load the most important content into the beginning modules. For mental health care providers, it may be helpful to conceive of digital interventions as short-term rather than long-term treatments, particularly for patients on waitlists. For patients, considerations include whether they are willing and able to make the time commitment involved in using a digital intervention long enough to receive the recommended dose, whether they would like to use a guided intervention, and whether they prefer to use the intervention as an adjunct to their standard-of-care treatment. For the research field, improvements could be made by using consistent metrics to report adherence (eg, dose received) and engagement (eg, hours spent using the intervention), through regular inclusion of control groups and patients of diverse backgrounds in studies, by always reporting race and ethnicity data in publications, by investigating the interplay of socioeconomic factors and the efficacy of digital interventions, and by measuring the dose-response relationship to make data-informed decisions about dose recommendations.

Acknowledgments

The authors would like to acknowledge Molly Hoke for her involvement in conceiving the study, developing the search strategy and protocol, and providing valuable feedback on an earlier draft of this manuscript. This study was funded by Otsuka Pharmaceutical Development & Commercialization, Inc, Princeton, New Jersey, United States. Editorial services were provided by Oxford PharmaGenesis Inc, Newtown, Pennsylvania, United States, and funded by Otsuka Pharmaceutical Development & Commercialization, Inc.

Data Availability

The data sets generated during this study are available from the corresponding author upon reasonable request.

Authors' Contributions

AF, FD, and MV conceived the project and designed the search parameters. MRK and MV screened the references, and FD resolved discrepancies. MRK analyzed the data and drafted the manuscript. All authors reviewed and approved the final version of the manuscript.

Conflicts of Interest

AF and FD are employees of Otsuka Pharmaceutical Development & Commercialization, Inc. MRK and MV are consultants for Otsuka Pharmaceutical Development & Commercialization, Inc.

Multimedia Appendix 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 item checklist.

DOCX File , 27 KB

Multimedia Appendix 2

Outcomes for each reference excluded during abstract and full-text screening.

DOCX File , 157 KB

Multimedia Appendix 3

Sources and study types of each reference included in the systematic literature review.

DOCX File , 48 KB

Multimedia Appendix 4

Study characteristics of references included in this systematic literature review.

DOCX File , 36 KB

Multimedia Appendix 5

Visualizations of adherence and engagement metrics.

DOCX File , 378 KB

  1. Herrman H, Kieling C, McGorry P, Horton R, Sargent J, Patel V. Reducing the global burden of depression: a Lancet-world psychiatric association commission. Lancet. Jun 15, 2019;393(10189):e42-e43. [CrossRef] [Medline]
  2. Awadalla S, Davies EB, Glazebrook C. A longitudinal cohort study to explore the relationship between depression, anxiety and academic performance among Emirati university students. BMC Psychiatry. Sep 11, 2020;20(1):448. [FREE Full text] [CrossRef] [Medline]
  3. Wallin E, Norlund F, Olsson EM, Burell G, Held C, Carlsson T. Treatment activity, user satisfaction, and experienced usability of internet-based cognitive behavioral therapy for adults with depression and anxiety after a myocardial infarction: mixed-methods study. J Med Internet Res. Mar 16, 2018;20(3):e87. [FREE Full text] [CrossRef] [Medline]
  4. Gold SM, Köhler-Forsberg O, Moss-Morris R, Mehnert A, Miranda JJ, Bullinger M, et al. Comorbid depression in medical diseases. Nat Rev Dis Primers. Aug 20, 2020;6(1):69. [CrossRef] [Medline]
  5. Saragoussi D, Christensen MC, Hammer-Helmich L, Rive B, Touya M, Haro JM. Long-term follow-up on health-related quality of life in major depressive disorder: a 2-year European cohort study. Neuropsychiatr Dis Treat. May 22, 2018;14:1339-1350. [FREE Full text] [CrossRef] [Medline]
  6. Gao K, Su M, Sweet J, Calabrese JR. Correlation between depression/anxiety symptom severity and quality of life in patients with major depressive disorder or bipolar disorder. J Affect Disord. Feb 01, 2019;244:9-15. [CrossRef] [Medline]
  7. Steffen A, Nübel J, Jacobi F, Bätzing J, Holstiege J. Mental and somatic comorbidity of depression: a comprehensive cross-sectional analysis of 202 diagnosis groups using German nationwide ambulatory claims data. BMC Psychiatry. Mar 30, 2020;20(1):142. [FREE Full text] [CrossRef] [Medline]
  8. GBD 2019 Mental Disorders Collaborators. Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Psychiatry. Feb 2022;9(2):137-150. [FREE Full text] [CrossRef] [Medline]
  9. Depression. World Health Organization. Sep 13, 2021. URL: https://www.who.int/health-topics/depression#tab=tab_1 [accessed 2021-10-11]
  10. Results from the 2019 National Survey on Drug Use and Health (NSDUH): key substance use and mental health indicators in the United States. Substance Abuse and Mental Health Services Administration. Sep 2020. URL: https://tinyurl.com/bddcnc5e [accessed 2022-08-03]
  11. Ettman CK, Abdalla SM, Cohen GH, Sampson L, Vivier PM, Galea S. Prevalence of depression symptoms in US adults before and during the COVID-19 pandemic. JAMA Netw Open. Sep 01, 2020;3(9):e2019686. [FREE Full text] [CrossRef] [Medline]
  12. König H, König HH, Konnopka A. The excess costs of depression: a systematic review and meta-analysis. Epidemiol Psychiatr Sci. Apr 05, 2019;29:e30. [FREE Full text] [CrossRef] [Medline]
  13. Greenberg PE, Fournier AA, Sisitsky T, Simes M, Berman R, Koenigsberg SH, et al. The economic burden of adults with major depressive disorder in the United States (2010 and 2018). Pharmacoeconomics. Jun 2021;39(6):653-665. [FREE Full text] [CrossRef] [Medline]
  14. Gauthier G, Mucha L, Shi S, Guerin A. Economic burden of relapse/recurrence in patients with major depressive disorder. J Drug Assess. May 24, 2019;8(1):97-103. [FREE Full text] [CrossRef] [Medline]
  15. Hasin DS, Sarvet AL, Meyers JL, Saha TD, Ruan WJ, Stohl M, et al. Epidemiology of adult DSM-5 major depressive disorder and its specifiers in the United States. JAMA Psychiatry. Apr 01, 2018;75(4):336-346. [FREE Full text] [CrossRef] [Medline]
  16. Mongelli F, Georgakopoulos P, Pato MT. Challenges and opportunities to meet the mental health needs of underserved and disenfranchised populations in the United States. Focus (Am Psychiatr Publ). Jan 2020;18(1):16-24. [FREE Full text] [CrossRef] [Medline]
  17. Andrilla CH, Patterson DG, Garberson LA, Coulthard C, Larson EH. Geographic variation in the supply of selected behavioral health providers. Am J Prev Med. Jun 2018;54(6 Suppl 3):S199-S207. [FREE Full text] [CrossRef] [Medline]
  18. Ofonedu ME, Belcher HM, Budhathoki C, Gross DA. Understanding barriers to initial treatment engagement among underserved families seeking mental health services. J Child Fam Stud. Mar 2017;26(3):863-876. [FREE Full text] [CrossRef] [Medline]
  19. Marasine NR, Sankhi S. Factors associated with antidepressant medication non-adherence. Turk J Pharm Sci. Apr 20, 2021;18(2):242-249. [FREE Full text] [CrossRef] [Medline]
  20. Semahegn A, Torpey K, Manu A, Assefa N, Tesfaye G, Ankomah A. Psychotropic medication non-adherence and its associated factors among patients with major psychiatric disorders: a systematic review and meta-analysis. Syst Rev. Jan 16, 2020;9(1):17. [FREE Full text] [CrossRef] [Medline]
  21. MacQueen G, Santaguida P, Keshavarz H, Jaworska N, Levine M, Beyene J, et al. Systematic review of clinical practice guidelines for failed antidepressant treatment response in major depressive disorder, dysthymia, and subthreshold depression in adults. Can J Psychiatry. Jan 2017;62(1):11-23. [FREE Full text] [CrossRef] [Medline]
  22. Rush AJ, Trivedi MH, Wisniewski SR, Nierenberg AA, Stewart JW, Warden D, et al. Acute and longer-term outcomes in depressed outpatients requiring one or several treatment steps: a STAR*D report. Am J Psychiatry. Nov 2006;163(11):1905-1917. [CrossRef] [Medline]
  23. Swift JK, Greenberg RP. A treatment by disorder meta-analysis of dropout from psychotherapy. J Psychother Integr. 2014;24(3):193-207. [FREE Full text] [CrossRef]
  24. Linardon J, Fitzsimmons-Craft EE, Brennan L, Barillaro M, Wilfley DE. Dropout from interpersonal psychotherapy for mental health disorders: a systematic review and meta-analysis. Psychother Res. Oct 2019;29(7):870-881. [CrossRef] [Medline]
  25. Amick HR, Gartlehner G, Gaynes BN, Forneris C, Asher GN, Morgan LC, et al. Comparative benefits and harms of second generation antidepressants and cognitive behavioral therapies in initial treatment of major depressive disorder: systematic review and meta-analysis. BMJ. Dec 08, 2015;351:h6019. [FREE Full text] [CrossRef] [Medline]
  26. Kraus C, Kadriu B, Lanzenberger R, Zarate Jr CA, Kasper S. Prognosis and improved outcomes in major depression: a review. Transl Psychiatry. Apr 03, 2019;9(1):127. [FREE Full text] [CrossRef] [Medline]
  27. Lye MS, Tey YY, Tor YS, Shahabudin AF, Ibrahim N, Ling KH, et al. Predictors of recurrence of major depressive disorder. PLoS One. Mar 19, 2020;15(3):e0230363. [FREE Full text] [CrossRef] [Medline]
  28. Mauskopf JA, Simon GE, Kalsekar A, Nimsch C, Dunayevich E, Cameron A. Nonresponse, partial response, and failure to achieve remission: humanistic and cost burden in major depressive disorder. Depress Anxiety. 2009;26(1):83-97. [CrossRef] [Medline]
  29. Lake J, Turner MS. Urgent need for improved mental health care and a more collaborative model of care. Perm J. 2017;21:17-24. [FREE Full text] [CrossRef] [Medline]
  30. Ormel J, Cuijpers P, Jorm A, Schoevers RA. What is needed to eradicate the depression epidemic, and why. Ment Health Prev. Mar 2020;17:200177. [FREE Full text] [CrossRef]
  31. Shah RV, Grennan G, Zafar-Khan M, Alim F, Dey S, Ramanathan D, et al. Personalized machine learning of depressed mood using wearables. Transl Psychiatry. Jun 09, 2021;11(1):338. [FREE Full text] [CrossRef] [Medline]
  32. Torous J, Bucci S, Bell IH, Kessing LV, Faurholt-Jepsen M, Whelan P, et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry. Oct 2021;20(3):318-335. [FREE Full text] [CrossRef] [Medline]
  33. Lord SE, Campbell AN, Brunette MF, Cubillos L, Bartels SM, Torrey WC, et al. Workshop on implementation science and digital therapeutics for behavioral health. JMIR Ment Health. Jan 28, 2021;8(1):e17662. [FREE Full text] [CrossRef] [Medline]
  34. Dang A, Arora D, Rane P. Role of digital therapeutics and the changing future of healthcare. J Family Med Prim Care. May 31, 2020;9(5):2207-2213. [FREE Full text] [CrossRef] [Medline]
  35. Friis-Healy EA, Nagy GA, Kollins SH. It is time to REACT: opportunities for digital mental health apps to reduce mental health disparities in racially and ethnically minoritized groups. JMIR Ment Health. Jan 26, 2021;8(1):e25456. [FREE Full text] [CrossRef] [Medline]
  36. Porras-Segovia A, Díaz-Oliván I, Gutiérrez-Rojas L, Dunne H, Moreno M, Baca-García E. Apps for depression: are they ready to work? Curr Psychiatry Rep. Feb 05, 2020;22(3):11. [CrossRef] [Medline]
  37. Patel NA, Butte AJ. Characteristics and challenges of the clinical pipeline of digital therapeutics. NPJ Digit Med. Dec 11, 2020;3(1):159. [FREE Full text] [CrossRef] [Medline]
  38. Powell AC, Torous JB, Firth J, Kaufman KR. Generating value with mental health apps. BJPsych Open. Feb 05, 2020;6(2):e16. [FREE Full text] [CrossRef] [Medline]
  39. Baghaei N, Chitale V, Hlasnik A, Stemmet L, Liang HN, Porter R. Virtual reality for supporting the treatment of depression and anxiety: scoping review. JMIR Ment Health. Sep 23, 2021;8(9):e29681. [FREE Full text] [CrossRef] [Medline]
  40. Wasil AR, Gillespie S, Shingleton R, Wilks CR, Weisz JR. Examining the reach of smartphone apps for depression and anxiety. Am J Psychiatry. May 01, 2020;177(5):464-465. [CrossRef] [Medline]
  41. Nwosu A, Boardman S, Husain MM, Doraiswamy PM. Digital therapeutics for mental health: is attrition the Achilles heel? Front Psychiatry. Aug 5, 2022;13:900615. [FREE Full text] [CrossRef] [Medline]
  42. Sigg S, Lagerspetz E, Peltonen E, Nurmi P, Tarkoma S. Exploiting usage to predict instantaneous app popularity: trend filters and retention rates. ACM Trans Web. May 31, 2019;13(2):1-25. [FREE Full text] [CrossRef]
  43. Lattie EG, Schueller SM, Sargent E, Stiles-Shields C, Tomasino KN, Corden ME, et al. Uptake and usage of IntelliCare: a publicly available suite of mental health and well-being apps. Internet Interv. May 2016;4(2):152-158. [FREE Full text] [CrossRef] [Medline]
  44. Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res. Sep 25, 2019;21(9):e14567. [FREE Full text] [CrossRef] [Medline]
  45. Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry. Nov 13, 2019;10:759. [FREE Full text] [CrossRef] [Medline]
  46. Fleming T, Bavin L, Lucassen M, Stasiak K, Hopkins S, Merry S. Beyond the trial: systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res. Jun 06, 2018;20(6):e199. [FREE Full text] [CrossRef] [Medline]
  47. Wasden C, Williams A. Owning the disease: a new transformational business model for healthcare. PricewaterhouseCoopers. 2012. URL: https://www.pwc.com/il/en/pharmaceuticals/assets/owning-the-disease.pdf [accessed 2022-07-13]
  48. Hong JS, Wasden C, Han DH. Introduction of digital therapeutics. Comput Methods Programs Biomed. Sep 2021;209:106319. [CrossRef] [Medline]
  49. Dang A, Dang D, Rane P. The expanding role of digital therapeutics in the post-COVID-19 era. Open COVID J. May 21, 2021;1(1):32-37. [FREE Full text] [CrossRef]
  50. Digital Therapeutics Alliance. Digital therapeutics definition and core principles. Nov 2019. URL: https://dtxalliance.org/wp-content/uploads/2021/01/DTA_DTx-Definition-and-Core-Principles.pdf [accessed 2022-03-29]
  51. How to study and market your device. United States Food and Drug Administration. Jun 07, 2022. URL: https://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/how-study-and-market-your-device [accessed 2021-12-10]
  52. Pham Q, Graham G, Carrion C, Morita PP, Seto E, Stinson JN, et al. A library of analytic indicators to evaluate effective engagement with consumer mHealth apps for chronic conditions: scoping review. JMIR Mhealth Uhealth. Jan 18, 2019;7(1):e11941. [FREE Full text] [CrossRef] [Medline]
  53. Kelders SM, Kip H, Greeff J. Psychometric evaluation of the TWente Engagement with Ehealth Technologies Scale (TWEETS): evaluation study. J Med Internet Res. Oct 09, 2020;22(10):e17757. [FREE Full text] [CrossRef] [Medline]
  54. Oehler C, Scholze K, Driessen P, Rummel-Kluge C, Görges F, Hegerl U. How are guide profession and routine care setting related to adherence and symptom change in iCBT for depression? - an explorative log-data analysis. Internet Interv. Nov 02, 2021;26:100476. [FREE Full text] [CrossRef] [Medline]
  55. Sieverink F, Kelders SM, van Gemert-Pijnen JE. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J Med Internet Res. Dec 06, 2017;19(12):e402. [FREE Full text] [CrossRef] [Medline]
  56. Li Y, Guo Y, Hong YA, Zeng Y, Monroe-Wise A, Zeng C, et al. Dose-response effects of patient engagement on health outcomes in an mHealth intervention: secondary analysis of a randomized controlled trial. JMIR Mhealth Uhealth. Jan 04, 2022;10(1):e25586. [FREE Full text] [CrossRef] [Medline]
  57. Christensen H, Griffiths KM, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res. Apr 24, 2009;11(2):e13. [FREE Full text] [CrossRef] [Medline]
  58. Flett JA, Fletcher BD, Riordan BC, Patterson T, Hayne H, Conner TS. The peril of self-reported adherence in digital interventions: a brief example. Internet Interv. Dec 2019;18:100267. [FREE Full text] [CrossRef] [Medline]
  59. Doherty K, Doherty G. Engagement in HCI: conception, theory and measurement. ACM Comput Surv. Jan 23, 2019;51(5):1-39. [FREE Full text] [CrossRef]
  60. Molloy A, Anderson PL. Engagement with mobile health interventions for depression: a systematic review. Internet Interv. Sep 11, 2021;26:100454. [FREE Full text] [CrossRef] [Medline]
  61. Donkin L, Hickie IB, Christensen H, Naismith SL, Neal B, Cockayne NL, et al. Rethinking the dose-response relationship between usage and outcome in an online intervention for depression: randomized controlled trial. J Med Internet Res. Oct 17, 2013;15(10):e231. [FREE Full text] [CrossRef] [Medline]
  62. Mohr DC, Duffecy J, Ho J, Kwasny M, Cai X, Burns MN, et al. A randomized controlled trial evaluating a manualized TeleCoaching protocol for improving adherence to a web-based intervention for the treatment of depression. PLoS One. Aug 21, 2013;8(8):e70086. [FREE Full text] [CrossRef] [Medline]
  63. Moritz S, Schilling L, Hauschildt M, Schröder J, Treszl A. A randomized controlled trial of internet-based therapy in depression. Behav Res Ther. Aug 2012;50(7-8):513-521. [CrossRef] [Medline]
  64. Stiles-Shields C, Montague E, Kwasny MJ, Mohr DC. Behavioral and cognitive intervention strategies delivered via coached apps for depression: pilot trial. Psychol Serv. May 2019;16(2):233-238. [FREE Full text] [CrossRef] [Medline]
  65. Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv. Jul 01, 2019;70(7):538-544. [FREE Full text] [CrossRef] [Medline]
  66. Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. Jun 2017;7(2):254-267. [FREE Full text] [CrossRef] [Medline]
  67. Torous J, Michalak EE, O'Brien HL. Digital health and engagement-looking behind the measures and methods. JAMA Netw Open. Jul 01, 2020;3(7):e2010918. [FREE Full text] [CrossRef] [Medline]
  68. Wu A, Scult MA, Barnes ED, Betancourt JA, Falk A, Gunning FM. Smartphone apps for depression and anxiety: a systematic review and meta-analysis of techniques to increase engagement. NPJ Digit Med. Feb 11, 2021;4(1):20. [FREE Full text] [CrossRef] [Medline]
  69. Hawker S, Payne S, Kerr C, Hardey M, Powell J. Appraising the evidence: reviewing disparate data systematically. Qual Health Res. Nov 2002;12(9):1284-1299. [CrossRef] [Medline]
  70. Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res. Aug 05, 2011;13(3):e52. [FREE Full text] [CrossRef] [Medline]
  71. Bur OT, Krieger T, Moritz S, Klein JP, Berger T. Optimizing the context of support of web-based self-help in individuals with mild to moderate depressive symptoms: a randomized full factorial trial. Behav Res Ther. May 2022;152:104070. [FREE Full text] [CrossRef] [Medline]
  72. Karyotaki E, Klein AM, Ciharova M, Bolinski F, Krijnen L, de Koning L, et al. Guided internet-based transdiagnostic individually tailored cognitive behavioral therapy for symptoms of depression and/or anxiety in college students: a randomized controlled trial. Behav Res Ther. Mar 2022;150:104028. [FREE Full text] [CrossRef] [Medline]
  73. Wright JH, Owen J, Eells TD, Antle B, Bishop LB, Girdler R, et al. Effect of computer-assisted cognitive behavior therapy vs usual care on depression among adults in primary care: a randomized clinical trial. JAMA Netw Open. Feb 01, 2022;5(2):e2146716. [FREE Full text] [CrossRef] [Medline]
  74. Heim E, Ramia JA, Hana RA, Burchert S, Carswell K, Cornelisz I, et al. Step-by-step: feasibility randomised controlled trial of a mobile-based intervention for depression among populations affected by adversity in Lebanon. Internet Interv. Mar 04, 2021;24:100380. [FREE Full text] [CrossRef] [Medline]
  75. Rahmadiana M, Karyotaki E, Schulte M, Ebert DD, Passchier J, Cuijpers P, et al. Transdiagnostic internet intervention for Indonesian university students with depression and anxiety: evaluation of feasibility and acceptability. JMIR Ment Health. Mar 05, 2021;8(3):e20036. [FREE Full text] [CrossRef] [Medline]
  76. Grinberg A. Mobile cognitive training for the cognitive symptoms of depression in young adults: a double-blind, randomized pilot study with active control [dissertation]. The City University of New York. Sep 2020. URL: https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=5061&context=gc_etds [accessed 2022-03-22]
  77. Krämer LV, Grünzig SD, Baumeister H, Ebert DD, Bengel J. Effectiveness of a guided web-based intervention to reduce depressive symptoms before outpatient psychotherapy: a pragmatic randomized controlled trial. Psychother Psychosom. 2021;90(4):233-242. [CrossRef] [Medline]
  78. Batterham PJ, Calear AL, Sunderland M, Kay-Lambkin F, Farrer LM, Christensen H, et al. A brief intervention to increase uptake and adherence of an internet-based program for depression and anxiety (enhancing engagement with psychosocial interventions): randomized controlled trial. J Med Internet Res. Jul 27, 2021;23(7):e23029. [FREE Full text] [CrossRef] [Medline]
  79. Pérez JC, Fernández O, Cáceres C, Carrasco ÁE, Moessner M, Bauer S, et al. An adjunctive internet-based intervention to enhance treatment for depression in adults: randomized controlled trial. JMIR Ment Health. Dec 16, 2021;8(12):e26814. [FREE Full text] [CrossRef] [Medline]
  80. Moskowitz JT, Addington EL, Shiu E, Bassett SM, Schuette S, Kwok I, et al. Facilitator contact, discussion boards, and virtual badges as adherence enhancements to a web-based, self-guided, positive psychological intervention for depression: randomized controlled trial. J Med Internet Res. Sep 22, 2021;23(9):e25922. [FREE Full text] [CrossRef] [Medline]
  81. Quiñonez-Freire C, Vara MD, Herrero R, Mira A, García-Palacios A, Botella C, et al. Cultural adaptation of the Smiling is Fun program for the treatment of depression in the Ecuadorian public health care system: a study protocol for a randomized controlled trial. Internet Interv. Nov 28, 2020;23:100352. [FREE Full text] [CrossRef] [Medline]
  82. Lukas CA, Eskofier B, Berking M. A gamified smartphone-based intervention for depression: randomized controlled pilot trial. JMIR Ment Health. Jul 20, 2021;8(7):e16643. [FREE Full text] [CrossRef] [Medline]
  83. Oehler C, Görges F, Rogalla M, Rummel-Kluge C, Hegerl U. Efficacy of a guided web-based self-management intervention for depression or dysthymia: randomized controlled trial with a 12-month follow-up using an active control condition. J Med Internet Res. Jul 14, 2020;22(7):e15361. [FREE Full text] [CrossRef] [Medline]
  84. Salamanca-Sanabria A, Richards D, Timulak L, Connell S, Mojica Perilla M, Parra-Villa Y, et al. A culturally adapted cognitive behavioral internet-delivered intervention for depressive symptoms: randomized controlled trial. JMIR Ment Health. Jan 31, 2020;7(1):e13392. [FREE Full text] [CrossRef] [Medline]
  85. Gili M, Castro A, García-Palacios A, Garcia-Campayo J, Mayoral-Cleries F, Botella C, et al. Efficacy of three low-intensity, internet-based psychological interventions for the treatment of depression in primary care: randomized controlled trial. J Med Internet Res. Jun 05, 2020;22(6):e15845. [FREE Full text] [CrossRef] [Medline]
  86. MacLean S, Corsi DJ, Litchfield S, Kucharski J, Genise K, Selaman Z, et al. Coach-facilitated web-based therapy compared with information about web-based resources in patients referred to secondary mental health care for depression: randomized controlled trial. J Med Internet Res. Jun 09, 2020;22(6):e15001. [FREE Full text] [CrossRef] [Medline]
  87. Graham AK, Greene CJ, Kwasny MJ, Kaiser SM, Lieponis P, Powell T, et al. Coached mobile app platform for the treatment of depression and anxiety among primary care patients: a randomized clinical trial. JAMA Psychiatry. Sep 01, 2020;77(9):906-914. [FREE Full text] [CrossRef] [Medline]
  88. Moberg C, Niles A, Beermann D. Guided self-help works: randomized waitlist controlled trial of Pacifica, a mobile app integrating cognitive behavioral therapy and mindfulness for stress, anxiety, and depression. J Med Internet Res. Jun 08, 2019;21(6):e12556. [FREE Full text] [CrossRef] [Medline]
  89. Görges F, Oehler C, von Hirschhausen E, Hegerl U, Rummel-Kluge C. GET.HAPPY - acceptance of an internet-based self-management positive psychology intervention for adult primary care patients with mild and moderate depression or dysthymia: a pilot study. Internet Interv. Mar 07, 2018;12:26-35. [FREE Full text] [CrossRef] [Medline]
  90. Lambert JD, Greaves CJ, Farrand P, Price L, Haase AM, Taylor AH. Web-based intervention using behavioral activation and physical activity for adults with depression (the eMotion study): pilot randomized controlled trial. J Med Internet Res. Jul 16, 2018;20(7):e10112. [FREE Full text] [CrossRef] [Medline]
  91. Thase ME, Wright JH, Eells TD, Barrett MS, Wisniewski SR, Balasubramani GK, et al. Improving the efficiency of psychotherapy for depression: computer-assisted versus standard CBT. Am J Psychiatry. Mar 01, 2018;175(3):242-250. [FREE Full text] [CrossRef] [Medline]
  92. Caplan S, Sosa Lovera AS, Reyna Liberato PR. A feasibility study of a mental health mobile app in the Dominican Republic: the untold story. Int J Ment Health. 2019;47(4):311-345. [FREE Full text] [CrossRef]
  93. Pratap A, Renn BN, Volponi J, Mooney SD, Gazzaley A, Arean PA, et al. Using mobile apps to assess and treat depression in Hispanic and Latino populations: fully remote randomized clinical trial. J Med Internet Res. Aug 09, 2018;20(8):e10130. [FREE Full text] [CrossRef] [Medline]
  94. Tomasino KN, Lattie EG, Ho J, Palac HL, Kaiser SM, Mohr DC. Harnessing peer support in an online intervention for older adults with depression. Am J Geriatr Psychiatry. Oct 2017;25(10):1109-1119. [FREE Full text] [CrossRef] [Medline]
  95. Beevers CG, Pearson R, Hoffman JS, Foulser AA, Shumake J, Meyer B. Effectiveness of an internet intervention (Deprexis) for depression in a United States adult sample: a parallel-group pragmatic randomized controlled trial. J Consult Clin Psychol. Apr 2017;85(4):367-380. [CrossRef] [Medline]
  96. Klein JP, Späth C, Schröder J, Meyer B, Greiner W, Hautzinger M, et al. Time to remission from mild to moderate depressive symptoms: one year results from the EVIDENT-study, an RCT of an internet intervention for depression. Behav Res Ther. Oct 2017;97:154-162. [CrossRef] [Medline]
  97. Mohr DC, Tomasino KN, Lattie EG, Palac HL, Kwasny MJ, Weingardt K, et al. IntelliCare: an eclectic, skills-based app suite for the treatment of depression and anxiety. J Med Internet Res. Jan 05, 2017;19(1):e10. [FREE Full text] [CrossRef] [Medline]
  98. Smith J, Newby JM, Burston N, Murphy MJ, Michael S, Mackenzie A, et al. Help from home for depression: a randomised controlled trial comparing internet-delivered cognitive behaviour therapy with bibliotherapy for depression. Internet Interv. May 18, 2017;9:25-37. [FREE Full text] [CrossRef] [Medline]
  99. Kenter RM, Cuijpers P, Beekman A, van Straten A. Effectiveness of a web-based guided self-help intervention for outpatients with a depressive disorder: short-term results from a randomized controlled trial. J Med Internet Res. Mar 31, 2016;18(3):e80. [FREE Full text] [CrossRef] [Medline]
  100. Kordy H, Wolf M, Aulich K, Bürgy M, Hegerl U, Hüsing J, et al. Internet-delivered disease management for recurrent depression: a multicenter randomized controlled trial. Psychother Psychosom. 2016;85(2):91-98. [FREE Full text] [CrossRef] [Medline]
  101. Montero-Marín J, Araya R, Pérez-Yus MC, Mayoral F, Gili M, Botella C, et al. An internet-based intervention for depression in primary care in Spain: a randomized controlled trial. J Med Internet Res. Aug 26, 2016;18(8):e231. [FREE Full text] [CrossRef] [Medline]
  102. Meyer B, Bierbrodt J, Schröder J, Berger T, Beevers CG, Weiss M, et al. Effects of an internet intervention (Deprexis) on severe depression symptoms: randomized controlled trial. Internet Interv. Mar 2015;2(1):48-59. [FREE Full text] [CrossRef]
  103. Gilbody S, Littlewood E, Hewitt C, Brierley G, Tharmanathan P, Araya R, et al. REEACT Team. Computerised cognitive behaviour therapy (cCBT) as treatment for depression in primary care (REEACT trial): large scale pragmatic randomised controlled trial. BMJ. Nov 11, 2015;351:h5627. [FREE Full text] [CrossRef] [Medline]
  104. Høifødt RS, Mittner M, Lillevoll K, Katla SK, Kolstrup N, Eisemann M, et al. Predictors of response to web-based cognitive behavioral therapy with high-intensity face-to-face therapist guidance for depression: a Bayesian analysis. J Med Internet Res. Sep 02, 2015;17(9):e197. [FREE Full text] [CrossRef] [Medline]
  105. Kelders SM, Bohlmeijer ET, Pots WT, van Gemert-Pijnen JE. Comparing human and automated support for depression: fractional factorial randomized controlled trial. Behav Res Ther. Sep 2015;72:72-80. [CrossRef] [Medline]
  106. Richards D, Timulak L, O'Brien E, Hayes C, Vigano N, Sharry J, et al. A randomized controlled trial of an internet-delivered treatment: its potential as a low-intensity community intervention for adults with symptoms of depression. Behav Res Ther. Dec 2015;75:20-31. [CrossRef] [Medline]
  107. Hallgren M, Kraepelien M, Öjehagen A, Lindefors N, Zeebari Z, Kaldo V, et al. Physical exercise and internet-based cognitive-behavioural therapy in the treatment of depression: randomised controlled trial. Br J Psychiatry. Sep 2015;207(3):227-234. [CrossRef] [Medline]
  108. Geraedts AS, Kleiboer AM, Wiezer NM, Cuijpers P, van Mechelen W, Anema JR. Feasibility of a worker-directed web-based intervention for employees with depressive symptoms. Internet Interv. Jul 2014;1(3):132-140. [FREE Full text] [CrossRef]
  109. Schneider J, Sarrami Foroushani P, Grime P, Thornicroft G. Acceptability of online self-help to people with depression: users' views of MoodGYM versus informational websites. J Med Internet Res. Mar 28, 2014;16(3):e90. [FREE Full text] [CrossRef] [Medline]
  110. Kivi M, Eriksson MC, Hange D, Petersson EL, Vernmark K, Johansson B, et al. Internet-based therapy for mild to moderate depression in Swedish primary care: short term results from the PRIM-NET randomized controlled trial. Cogn Behav Ther. 2014;43(4):289-298. [FREE Full text] [CrossRef] [Medline]
  111. Geraedts AS, Kleiboer AM, Twisk J, Wiezer NM, van Mechelen W, Cuijpers P. Long-term results of a web-based guided self-help intervention for employees with depressive symptoms: randomized controlled trial. J Med Internet Res. Jul 09, 2014;16(7):e168. [FREE Full text] [CrossRef] [Medline]
  112. Høifødt RS, Lillevoll KR, Griffiths KM, Wilsgaard T, Eisemann M, Waterloo K, et al. The clinical effectiveness of web-based cognitive behavioral therapy with face-to-face therapist support for depressed primary care patients: randomized controlled trial. J Med Internet Res. Aug 05, 2013;15(8):e153. [FREE Full text] [CrossRef] [Medline]
  113. Bolier L, Haverman M, Kramer J, Westerhof GJ, Riper H, Walburg JA, et al. An internet-based intervention to promote mental fitness for mildly depressed adults: randomized controlled trial. J Med Internet Res. Sep 16, 2013;15(9):e200. [FREE Full text] [CrossRef] [Medline]
  114. Kelders SM, Bohlmeijer ET, Van Gemert-Pijnen JE. Participants, usage, and use patterns of a web-based intervention for the prevention of depression within a randomized controlled trial. J Med Internet Res. Aug 20, 2013;15(8):e172. [FREE Full text] [CrossRef] [Medline]
  115. Williams AD, Blackwell SE, Mackenzie A, Holmes EA, Andrews G. Combining imagination and reason in the treatment of depression: a randomized controlled trial of internet-based cognitive-bias modification and internet-CBT for depression. J Consult Clin Psychol. Oct 2013;81(5):793-799. [FREE Full text] [CrossRef] [Medline]
  116. Burns MN, Begale M, Duffecy J, Gergle D, Karr CJ, Giangrande E, et al. Harnessing context sensing to develop a mobile intervention for depression. J Med Internet Res. Aug 12, 2011;13(3):e55. [FREE Full text] [CrossRef] [Medline]
  117. Levin W, Campbell DR, McGovern KB, Gau JM, Kosty DB, Seeley JR, et al. A computer-assisted depression intervention in primary care. Psychol Med. Jul 2011;41(7):1373-1383. [CrossRef] [Medline]
  118. Berger T, Hämmerli K, Gubser N, Andersson G, Caspar F. Internet-based treatment of depression: a randomized controlled trial comparing guided with unguided self-help. Cogn Behav Ther. 2011;40(4):251-266. [CrossRef] [Medline]
  119. Mohr DC, Duffecy J, Jin L, Ludman EJ, Lewis A, Begale M, et al. Multimodal e-mental health treatment for depression: a feasibility trial. J Med Internet Res. Dec 19, 2010;12(5):e48. [FREE Full text] [CrossRef] [Medline]
  120. Vernmark K, Lenndin J, Bjärehed J, Carlsson M, Karlsson J, Oberg J, et al. Internet administered guided self-help versus individualized e-mail therapy: a randomized trial of two versions of CBT for major depression. Behav Res Ther. May 2010;48(5):368-376. [CrossRef] [Medline]
  121. Ruwaard J, Schrieken B, Schrijver M, Broeksteeg J, Dekker J, Vermeulen H, et al. Standardized web-based cognitive behavioural therapy of mild to moderate depression: a randomized controlled trial with a long-term follow-up. Cogn Behav Ther. 2009;38(4):206-221. [CrossRef] [Medline]
  122. Perini S, Titov N, Andrews G. Clinician-assisted Internet-based treatment is effective for depression: randomized controlled trial. Aust N Z J Psychiatry. Jun 2009;43(6):571-578. [CrossRef] [Medline]
  123. de Graaf LE, Gerhards SA, Arntz A, Riper H, Metsemakers JF, Evers SM, et al. Clinical effectiveness of online computerised cognitive-behavioural therapy without support for depression in primary care: randomised trial. Br J Psychiatry. Jul 2009;195(1):73-80. [FREE Full text] [CrossRef] [Medline]
  124. Warmerdam L, van Straten A, Twisk J, Riper H, Cuijpers P. Internet-based treatment for adults with depressive symptoms: randomized controlled trial. J Med Internet Res. Nov 20, 2008;10(4):e44. [FREE Full text] [CrossRef] [Medline]
  125. Lukas CA, Berking M. Blending group-based psychoeducation with a smartphone intervention for the reduction of depressive symptoms: results of a randomized controlled pilot study. Pilot Feasibility Stud. Feb 24, 2021;7(1):57. [FREE Full text] [CrossRef] [Medline]
  126. Gräfe V, Berger T, Hautzinger M, Hohagen F, Lutz W, Meyer B, et al. Health economic evaluation of a web-based intervention for depression: the EVIDENT-trial, a randomized controlled study. Health Econ Rev. Jun 07, 2019;9(1):16. [FREE Full text] [CrossRef] [Medline]
  127. Wilkinson ST, Ostroff RB, Sanacora G. Computer-assisted cognitive behavior therapy to prevent relapse following electroconvulsive therapy. J ECT. Mar 2017;33(1):52-57. [FREE Full text] [CrossRef] [Medline]
  128. Anguera JA, Jordan JT, Castaneda D, Gazzaley A, Areán PA. Conducting a fully mobile and randomised clinical trial for depression: access, engagement and expense. BMJ Innov. Jan 2016;2(1):14-21. [FREE Full text] [CrossRef] [Medline]
  129. Robertson L, Smith M, Castle D, Tannenbaum D. Using the internet to enhance the treatment of depression. Australas Psychiatry. Dec 2006;14(4):413-417. [FREE Full text] [CrossRef]
  130. Krämer LV, Mueller-Weinitschke C, Zeiss T, Baumeister H, Ebert DD, Bengel J. Effectiveness of a web-based behavioural activation intervention for individuals with depression based on the health action process approach: protocol for a randomised controlled trial with a 6-month follow-up. BMJ Open. Jan 24, 2022;12(1):e054775. [FREE Full text] [CrossRef] [Medline]
  131. Fitzsimmons-Craft EE, Taylor CB, Newman MG, Zainal NH, Rojas-Ashe EE, Lipson SK, et al. Harnessing mobile technology to reduce mental health disorders in college populations: a randomized controlled trial study protocol. Contemp Clin Trials. Apr 2021;103:106320. [FREE Full text] [CrossRef] [Medline]
  132. Krämer R, Köhler S. Evaluation of the online-based self-help programme "Selfapy" in patients with unipolar depression: study protocol for a randomized, blinded parallel group dismantling study. Trials. Apr 09, 2021;22(1):264. [FREE Full text] [CrossRef] [Medline]
  133. Lopes RT, Meyer B, Berger T, Svacina MA. Effectiveness of an internet-based self-guided program to treat depression in a sample of Brazilian users: a study protocol. Braz J Psychiatry. 2020;42(3):322-328. [FREE Full text] [CrossRef] [Medline]
  134. Addington EL, Cheung EO, Bassett SM, Kwok I, Schuette SA, Shiu E, et al. The MARIGOLD study: feasibility and enhancement of an online intervention to improve emotion regulation in people with elevated depressive symptoms. J Affect Disord. Oct 01, 2019;257:352-364. [FREE Full text] [CrossRef] [Medline]
  135. Karyotaki E, Klein AM, Riper H, de Wit L, Krijnen L, Bol E, et al. Examining the effectiveness of a web-based intervention for symptoms of depression and anxiety in college students: study protocol of a randomised controlled trial. BMJ Open. May 14, 2019;9(5):e028739. [FREE Full text] [CrossRef] [Medline]
  136. Mira A, Soler C, Alda M, Baños R, Castilla D, Castro A, et al. Exploring the relationship between the acceptability of an internet-based intervention for depression in primary care and clinical outcomes: secondary analysis of a randomized controlled trial. Front Psychiatry. May 10, 2019;10:325. [FREE Full text] [CrossRef] [Medline]
  137. Kraepelien M, Blom K, Lindefors N, Johansson R, Kaldo V. The effects of component-specific treatment compliance in individually tailored internet-based treatment. Clin Psychol Psychother. May 2019;26(3):298-308. [FREE Full text] [CrossRef] [Medline]
  138. Motter JN, Grinberg A, Lieberman DH, Iqnaibi WB, Sneed JR. Computerized cognitive training in young adults with depressive symptoms: effects on mood, cognition, and everyday functioning. J Affect Disord. Feb 15, 2019;245:28-37. [CrossRef] [Medline]
  139. Antle BF, Owen JJ, Eells TD, Wells MJ, Harris LM, Cappiccie A, et al. Dissemination of computer-assisted cognitive-behavior therapy for depression in primary care. Contemp Clin Trials. Mar 2019;78:46-52. [CrossRef] [Medline]
  140. Dahne J, Collado A, Lejuez CW, Risco CM, Diaz VA, Coles L, et al. Pilot randomized controlled trial of a Spanish-language behavioral activation mobile app (¡Aptívate!) for the treatment of depressive symptoms among United States Latinx adults with limited English proficiency. J Affect Disord. May 01, 2019;250:210-217. [FREE Full text] [CrossRef] [Medline]
  141. Grünzig SD, Baumeister H, Bengel J, Ebert D, Krämer L. Effectiveness and acceptance of a web-based depression intervention during waiting time for outpatient psychotherapy: study protocol for a randomized controlled trial. Trials. May 22, 2018;19(1):285. [FREE Full text] [CrossRef] [Medline]
  142. Löbner M, Pabst A, Stein J, Dorow M, Matschinger H, Luppa M, et al. Computerized cognitive behavior therapy for patients with mild to moderately severe depression in primary care: a pragmatic cluster randomized controlled trial (@ktiv). J Affect Disord. Oct 01, 2018;238:317-326. [CrossRef] [Medline]
  143. Fuhr K, Schröder J, Berger T, Moritz S, Meyer B, Lutz W, et al. The association between adherence and outcome in an internet intervention for depression. J Affect Disord. Mar 15, 2018;229:443-449. [CrossRef] [Medline]
  144. Weisel KK, Zarski AC, Berger T, Schaub MP, Krieger T, Moser CT, et al. Transdiagnostic tailored internet- and mobile-based guided treatment for major depressive disorder and comorbid anxiety: study protocol of a randomized controlled trial. Front Psychiatry. Jul 04, 2018;9:274. [FREE Full text] [CrossRef] [Medline]
  145. Richards D, Duffy D, Blackburn B, Earley C, Enrique A, Palacios J, et al. Digital IAPT: the effectiveness and cost-effectiveness of internet-delivered interventions for depression and anxiety disorders in the improving access to psychological therapies programme: study protocol for a randomised control trial. BMC Psychiatry. Mar 02, 2018;18(1):59. [FREE Full text] [CrossRef] [Medline]
  146. Mehrotra S, Sudhir P, Rao G, Thirthalli J, Srikanth TK. Development and pilot testing of an internet-based self-help intervention for depression for Indian users. Behav Sci (Basel). Mar 22, 2018;8(4):36. [FREE Full text] [CrossRef] [Medline]
  147. Williams C, McClay CA, Martinez R, Morrison J, Haig C, Jones R, et al. Online CBT life skills programme for low mood and anxiety: study protocol for a pilot randomized controlled trial. Trials. Apr 27, 2016;17(1):220. [FREE Full text] [CrossRef] [Medline]
  148. Brabyn S, Araya R, Barkham M, Bower P, Cooper C, Duarte A, et al. The second Randomised Evaluation of the Effectiveness, cost-effectiveness and Acceptability of Computerised Therapy (REEACT-2) trial: does the provision of telephone support enhance the effectiveness of computer-delivered cognitive behaviour therapy? A randomised controlled trial. Health Technol Assess. Nov 2016;20(89):1-64. [FREE Full text] [CrossRef] [Medline]
  149. Birney AJ, Gunn R, Russell JK, Ary DV. MoodHacker mobile web app with email for adults to self-manage mild-to-moderate depression: randomized controlled trial. JMIR Mhealth Uhealth. Jan 26, 2016;4(1):e8. [FREE Full text] [CrossRef] [Medline]
  150. Rickhi B, Kania-Richmond A, Moritz S, Cohen J, Paccagnan P, Dennis C, et al. Evaluation of a spirituality informed e-mental health tool as an intervention for major depressive disorder in adolescents and young adults - a randomized controlled pilot trial. BMC Complement Altern Med. Dec 24, 2015;15:450. [FREE Full text] [CrossRef] [Medline]
  151. Littlewood E, Duarte A, Hewitt C, Knowles S, Palmer S, Walker S, et al. REEACT Team. A randomised controlled trial of computerised cognitive behaviour therapy for the treatment of depression in primary care: the Randomised Evaluation of the Effectiveness and Acceptability of Computerised Therapy (REEACT) trial. Health Technol Assess. Dec 2015;19(101):viii-v171. [FREE Full text] [CrossRef] [Medline]
  152. Roepke AM, Jaffee SR, Riffle OM, McGonigal J, Broome R, Maxwell B. Randomized controlled trial of Superbetter, a smartphone-based/internet-based self-help tool to reduce depressive symptoms. Games Health J. Jun 2015;4(3):235-246. [CrossRef] [Medline]
  153. Santucci LC, McHugh RK, Elkins RM, Schechter B, Ross MS, Landa CE, et al. Pilot implementation of computerized cognitive behavioral therapy in a university health setting. Adm Policy Ment Health. Jul 2014;41(4):514-521. [CrossRef] [Medline]
  154. Richards D, Timulak L, Doherty G, Sharry J, Colla A, Joyce C, et al. Internet-delivered treatment: its potential as a low-intensity community intervention for adults with symptoms of depression: protocol for a randomized controlled trial. BMC Psychiatry. May 21, 2014;14:147. [FREE Full text] [CrossRef] [Medline]
  155. Wagner B, Horn AB, Maercker A. Internet-based versus face-to-face cognitive-behavioral intervention for depression: a randomized controlled non-inferiority trial. J Affect Disord. Jan 2014;152-154:113-121. [CrossRef] [Medline]
  156. Watts S, Mackenzie A, Thomas C, Griskaitis A, Mewton L, Williams A, et al. CBT for depression: a pilot RCT comparing mobile phone vs. computer. BMC Psychiatry. Feb 07, 2013;13:49. [FREE Full text] [CrossRef] [Medline]
  157. Andersson G, Hesser H, Veilord A, Svedling L, Andersson F, Sleman O, et al. Randomised controlled non-inferiority trial with 3-year follow-up of internet-delivered versus face-to-face group cognitive behavioural therapy for depression. J Affect Disord. Dec 2013;151(3):986-994. [CrossRef] [Medline]
  158. Johansson R, Sjöberg E, Sjögren M, Johnsson E, Carlbring P, Andersson T, et al. Tailored vs. standardized internet-based cognitive behavior therapy for depression and comorbid symptoms: a randomized controlled trial. PLoS One. 2012;7(5):e36905. [FREE Full text] [CrossRef] [Medline]
  159. Choi I, Zou J, Titov N, Dear BF, Li S, Johnston L, et al. Culturally attuned internet treatment for depression amongst Chinese Australians: a randomised controlled trial. J Affect Disord. Feb 2012;136(3):459-468. [CrossRef] [Medline]
  160. Titov N, Andrews G, Davies M, McIntyre K, Robinson E, Solley K. Internet treatment for depression: a randomized controlled trial comparing clinician vs. technician assistance. PLoS One. Jun 08, 2010;5(6):e10939. [FREE Full text] [CrossRef] [Medline]
  161. Hatcher S, Whittaker R, Patton M, Miles WS, Ralph N, Kercher K, et al. Web-based therapy plus support by a coach in depressed patients referred to secondary mental health care: randomized controlled trial. JMIR Ment Health. Jan 23, 2018;5(1):e5. [FREE Full text] [CrossRef] [Medline]
  162. Keyloun KR, Hansen RN, Hepp Z, Gillard P, Thase ME, Devine EB. Adherence and persistence across antidepressant therapeutic classes: a retrospective claims analysis among insured US patients with major depressive disorder (MDD). CNS Drugs. May 2017;31(5):421-432. [FREE Full text] [CrossRef] [Medline]
  163. Wright JH, Mishkind M. Computer-assisted CBT and mobile apps for depression: assessment and integration into clinical care. Focus (Am Psychiatr Publ). Apr 2020;18(2):162-168. [FREE Full text] [CrossRef] [Medline]
  164. Gibbons MB, Gallop R, Thompson D, Gaines A, Rieger A, Crits-Christoph P. Predictors of treatment attendance in cognitive and dynamic therapies for major depressive disorder delivered in a community mental health setting. J Consult Clin Psychol. Aug 2019;87(8):745-755. [FREE Full text] [CrossRef] [Medline]
  165. Renn BN, Hoeft TJ, Lee HS, Bauer AM, Areán PA. Preference for in-person psychotherapy versus digital psychotherapy options for depression: survey of adults in the U.S. NPJ Digit Med. Feb 11, 2019;2:6. [FREE Full text] [CrossRef] [Medline]
  166. Andrews G, Basu A, Cuijpers P, Craske MG, McEvoy P, English CL, et al. Computer therapy for the anxiety and depression disorders is effective, acceptable and practical health care: an updated meta-analysis. J Anxiety Disord. Apr 2018;55:70-78. [FREE Full text] [CrossRef] [Medline]
  167. Goldberg SB, Lam SU, Simonsson O, Torous J, Sun S. Mobile phone-based interventions for mental health: a systematic meta-review of 14 meta-analyses of randomized controlled trials. PLOS Digit Health. 2022;1(1):e0000002. [FREE Full text] [CrossRef] [Medline]
  168. Karyotaki E, Efthimiou O, Miguel C, Bermpohl FM, Furukawa TA, Cuijpers P, Individual Patient Data Meta-Analyses for Depression (IPDMA-DE) Collaboration; et al. Internet-based cognitive behavioral therapy for depression: a systematic review and individual patient data network meta-analysis. JAMA Psychiatry. Apr 01, 2021;78(4):361-371. [FREE Full text] [CrossRef] [Medline]
  169. Weisel KK, Fuhrmann LM, Berking M, Baumeister H, Cuijpers P, Ebert DD. Standalone smartphone apps for mental health-a systematic review and meta-analysis. NPJ Digit Med. Dec 02, 2019;2:118. [FREE Full text] [CrossRef] [Medline]
  170. Duffy D, Enrique A, Connell S, Connolly C, Richards D. Internet-delivered cognitive behavior therapy as a prequel to face-to-face therapy for depression and anxiety: a naturalistic observation. Front Psychiatry. Jan 09, 2019;10:902. [FREE Full text] [CrossRef] [Medline]
  171. Kolovos S, Kenter RM, Bosmans JE, Beekman AT, Cuijpers P, Kok RN, et al. Economic evaluation of internet-based problem-solving guided self-help treatment in comparison with enhanced usual care for depressed outpatients waiting for face-to-face treatment: a randomized controlled trial. J Affect Disord. Aug 2016;200:284-292. [CrossRef] [Medline]
  172. Levin ME, Hicks ET, Krafft J. Pilot evaluation of the stop, breathe and think mindfulness app for student clients on a college counseling center waitlist. J Am Coll Health. Jan 2022;70(1):165-173. [CrossRef] [Medline]
  173. Moshe I, Terhorst Y, Philippi P, Domhardt M, Cuijpers P, Cristea I, et al. Digital interventions for the treatment of depression: a meta-analytic review. Psychol Bull. Aug 2021;147(8):749-786. [CrossRef] [Medline]
  174. Anton MT, Greenberger HM, Andreopoulos E, Pande RL. Evaluation of a commercial mobile health app for depression and anxiety (AbleTo Digital+): retrospective cohort study. JMIR Form Res. Sep 21, 2021;5(9):e27570. [FREE Full text] [CrossRef] [Medline]
  175. Chien I, Enrique A, Palacios J, Regan T, Keegan D, Carter D, et al. A machine learning approach to understanding patterns of engagement with internet-delivered mental health interventions. JAMA Netw Open. Jul 01, 2020;3(7):e2010791. [FREE Full text] [CrossRef] [Medline]
  176. Perski O, Short CE. Acceptability of digital health interventions: embracing the complexity. Transl Behav Med. Jul 29, 2021;11(7):1473-1480. [FREE Full text] [CrossRef] [Medline]
  177. Jonassaint C, Edmunds M, Shaikh A. Health equity: diversity is a critical ingredient for innovation: promoting greater racial/ethnic representation in digital health. Outlook, Society of Behavioral Medicine. 2020. URL: https://tinyurl.com/mr355xr6 [accessed 2021-12-10]
  178. Fain KM, Nelson JT, Tse T, Williams RJ. Race and ethnicity reporting for clinical trials in ClinicalTrials.gov and publications. Contemp Clin Trials. Feb 2021;101:106237. [FREE Full text] [CrossRef] [Medline]
  179. Polo AJ, Makol BA, Castro AS, Colón-Quintana N, Wagstaff AE, Guo S. Diversity in randomized clinical trials of depression: a 36-year review. Clin Psychol Rev. Feb 2019;67:22-35. [CrossRef] [Medline]
  180. The app evaluation model. American Psychiatric Association. 2021. URL: https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/the-app-evaluation-model [accessed 2021-06-21]
  181. Neary M, Schueller SM. State of the field of mental health apps. Cogn Behav Pract. Nov 2018;25(4):531-537. [FREE Full text] [CrossRef] [Medline]
  182. What is the health of mental health apps? The Organisation for the Review of Care and Health Apps. 2020. URL: https://orchahealth.com/health-of-mental-health-apps/ [accessed 2021-06-21]
  183. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth. Mar 11, 2015;3(1):e27. [FREE Full text] [CrossRef] [Medline]
  184. van Haasteren A, Gille F, Fadda M, Vayena E. Development of the mHealth app trustworthiness checklist. Digit Health. Nov 21, 2019;5:2055207619886463. [FREE Full text] [CrossRef] [Medline]
  185. Agarwal S, Jalan M, Wilcox H, Sharma R, Hill R, Pantalone E. Evaluation of mental health mobile applications. Agency for Healthcare Research and Quality. URL: https://www.ncbi.nlm.nih.gov/books/NBK580948/ [accessed 2023-03-22]
  186. Lagan S, Sandler L, Torous J. Evaluating evaluation frameworks: a scoping review of frameworks for assessing health apps. BMJ Open. Mar 19, 2021;11(3):e047001. [FREE Full text] [CrossRef] [Medline]
  187. Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. Mar 24, 2021;23(3):e24387. [FREE Full text] [CrossRef] [Medline]
  188. Page MJ, Sterne JA, Higgins JP, Egger M. Investigating and dealing with publication bias and other reporting biases in meta-analyses of health research: a review. Res Synth Methods. Mar 2021;12(2):248-259. [CrossRef] [Medline]


ADT: antidepressant treatment
CAU: care as usual
CES-D: Center for Epidemiologic Studies Depression Scale
DTx: digital therapeutics
MDD: major depressive disorder
PHQ-9: Patient Health Questionnaire-9
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial
SLR: systematic literature review


Edited by T de Azevedo Cardoso; submitted 24.10.22; peer-reviewed by M Doraiswamy, S Parikh, C Coumoundouros, L Magoun; comments to author 21.02.23; revised version received 24.04.23; accepted 28.06.23; published 11.08.23.

Copyright

©Ainslie Forbes, Madeline Rose Keleher, Michael Venditto, Faith DiBiasi. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 11.08.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.