Published in Vol 18, No 6 (2016): June

Changing Mental Health and Positive Psychological Well-Being Using Ecological Momentary Interventions: A Systematic Review and Meta-analysis


Original Paper

1Health, Medical and Neuropsychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands

2Clinical Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands

3Department of Psychiatry, Leiden University Medical Center, Leiden, Netherlands

Corresponding Author:

Anke Versluis, MSc

Health, Medical and Neuropsychology Unit

Institute of Psychology

Leiden University

Wassenaarseweg 52

Leiden, 2333 AK

Netherlands

Phone: 31 715276343

Fax: 31 71 527 3619

Email: a.versluis@fsw.leidenuniv.nl


Background: Mental health problems are highly prevalent, and there is need for the self-management of (mental) health. Ecological momentary interventions (EMIs) can be used to deliver interventions in the daily life of individuals using mobile devices.

Objectives: The aim of this study was to systematically assess and meta-analyze the effect of EMI on 3 highly prevalent mental health outcomes (anxiety, depression, and perceived stress) and positive psychological outcomes (eg, acceptance).

Methods: PsycINFO and Web of Science were searched for relevant publications, and the last search was done in September 2015. Three concepts were used to find publications: (1) mental health, (2) mobile phones, and (3) interventions. A total of 33 studies (using either a within- or between-subject design) including 43 samples that received an EMI were identified (n=1301), and relevant study characteristics were coded using a standardized form. Quality assessment was done with the Cochrane Collaboration tool.

Results: Most of the EMIs focused on a clinical sample, used an active intervention (that offered exercises), and in over half of the studies, additional support by a mental health professional (MHP) was given. The EMI lasted on average 7.48 weeks (SD=6.46), with 2.80 training episodes per day (SD=2.12) and 108.25 total training episodes (SD=123.00). Overall, 27 studies were included in the meta-analysis, and after removing 6 outliers, a medium effect was found on mental health in the within-subject analyses (n=1008), with g=0.57 and 95% CI (0.45-0.70). This effect did not differ as function of outcome type (ie, anxiety, depression, perceived stress, acceptance, relaxation, and quality of life). The only moderator for which the effect varied significantly was additional support by an MHP (MHP-supported EMI, g=0.73, 95% CI: 0.57-0.88; stand-alone EMI, g=0.45, 95% CI: 0.22-0.69; stand-alone EMI with access to care as usual, g=0.38, 95% CI: 0.11-0.64). In the between-subject studies, 13 studies were included, and a small to medium effect was found (g=0.40, 95% CI: 0.22-0.57). Yet, these between-subject analyses were at risk for publication bias and were not suited for moderator analyses. Furthermore, the overall quality of the studies was relatively low.

Conclusions: Results showed that there was a small to medium effect of EMIs on mental health and positive psychological well-being and that the effect was not different between outcome types. Moreover, the effect was larger with additional support by an MHP. Future randomized controlled trials are needed to further strengthen the results and to determine potential moderator variables. Overall, EMIs offer great potential for providing easy and cost-effective interventions to improve mental health and increase positive psychological well-being.

J Med Internet Res 2016;18(6):e152

doi:10.2196/jmir.5642




One in every 3 individuals worldwide will be affected by one or more mental health problems during their lives [1]. Yet, only a small portion of those individuals is receiving help for their problems (with numbers varying from 7% to 25% in industrialized countries) [2,3]. To help those in need, new strategies for enhancing access to and quality of care are needed, and this is recognized in a new policy of the World Health Organization [4]. This newly introduced policy requests methods to increase self-management or self-care of health by, for instance, using electronic and mobile devices. In line with this, Wanless [5] argues that health care productivity can be increased using self-care and that this can have cost-effective benefits. All in all, there appears to be a future for the self-management of (mental) health.

One method that can be used to enhance health self-management is ecological momentary interventions (EMIs) [6]. The key to these interventions is that they can be tailored to the individual and be implemented in real time (ie, daily life). Mobile or electronic devices can be used to provide these interventions in the daily lives of individuals. With a Web-based survey, Proudfoot et al [7] showed that 76% of the general population is interested in using mobile technology for either self-monitoring or self-management of health (ie, if the service was free). Using EMIs has numerous advantages such as the ability to reach large populations at lower costs [8,9].

Training people in situ could be highly relevant for learning new, healthy behaviors, considering that people under stress typically switch from goal-directed behavior to habit behavior [10-13]. In other words, when a person experiences stress, that person is more likely to rely on the “old” behavior routine than display the newly learned behavior routine. In line with this, it might make more sense to learn a new behavioral routine in daily life compared with an artificial surrounding (eg, the therapist’s office) that generally does not resemble daily life. Indeed, research shows that although new behaviors can be effectively learned in artificial surroundings, this knowledge does not always generalize to real-life settings [14]. According to Neal et al [15], this is understandable, given that the association between context and the maladaptive behavior may still be in place after traditional treatment. As a consequence, the context (eg, setting or time of day) can still trigger the maladaptive behavior. Therefore, EMIs may provide a more effective way to train people in daily life than conventional treatment, by training people in the very context in which the maladaptive behavior occurs. As a result, this could lead to the (faster) formation of a new and more adaptive association between context and behavior.

Given that the number of worldwide mobile phone users is immense and continues to expand [16], it is not surprising that EMI is considered to be the future for therapeutic interventions [17]. Numerous authors highlight that EMI is a relatively new research field, and that the field is constantly evolving due to improvements in mobile technology [17-19]. It is therefore important to know the current state of affairs in this field. Current reviews suggest that EMIs can be effective, but these reviews are limited for different reasons. First, some reviews focus on a specific intervention [20] or on a specific target population [21]. Second, their sole or main focus is the effect of EMIs on health behaviors (eg, physical activity, smoking cessation, diabetes management) and not mental health [18,22,23]. Third, the current reviews are outdated, especially considering the developmental pace of EMIs (eg, [19]). A more recent review has been conducted by Donker et al [24]; however, it included only studies that investigated directly downloadable apps. This substantially limited the number of included studies (n=8). Fourth, the effect of EMIs on positive psychological well-being (eg, relaxation, acceptance) has not yet been reviewed, although these outcome types have been included as dependent variables in previous studies [25,26]. Considering that a person’s well-being is not equal to the absence of disease and is associated with increased positive cognitions and even physical health, it is important to also study these positive experiences [27]. To conclude, an up-to-date comprehensive overview or a meta-analysis of the effect of EMIs on mental health, including positive health outcomes, is missing.

This systematic review and meta-analysis therefore attempts to expand the current knowledge by including both mental health outcomes (ie, perceived stress, anxiety, or depressive symptoms) and positive psychological outcomes (eg, positive affect or acceptance). For this quantitative analysis, randomization and the presence of a control group were optional. Although the absence of randomization and the lack of a control group may weaken the design and thus the ensuing conclusions, these criteria are necessary to ensure that the presented overview of EMI studies is complete. This is considered critical because an extensive overview is currently lacking. It should be noted that study design was used in the moderator analyses.

Considering that the access to care needs improvement and EMIs can be used for this, it is important to investigate for whom these technologies can be appropriate and what EMI characteristics are associated with increased effects. Therefore, potentially promising moderators of effect size were investigated. Specifically, sample, type of training, how the training was triggered (ie, automatically or on-demand), support of mental health professional (MHP), and dosage were included because these can be considered key intervention components [28]. Including moderators allows us, for example, to investigate whether an EMI in its own right is effective or whether additional support by an MHP is necessary to accomplish change. In addition, the design of the study, sample size, and the quality of the study were studied to determine whether the effect size varied as a function of study characteristics. In short, we examined whether mobile technology provides an effective platform for mental health interventions and under which circumstances.


The preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines were followed [29].

Search Strategies

To find relevant publications concerning EMIs that target mental health, a database search was conducted in both PsycINFO and Web of Science (Core Collection). The search strings that were used consisted of 3 groups of words, namely words related to: (1) mental health, (2) mobile phones, and (3) interventions. See Multimedia Appendix 1 for the complete search strings. In both the databases, the search was limited to English publications that were peer reviewed. The search strategy was not restricted based on publication year as we aimed to provide a comprehensive overview of how mobile technology can be used to improve mental health. Naturally, the technologies that are used in more recent publications may be more advanced compared with earlier publications, but the idea of repeatedly training people in their daily lives is equal in older and newer publications. The last search was conducted on September 17, 2015. In addition, 2 other search strategies were used. First, the reference lists of previous reviews in the field of EMI were screened for relevant publications. Second, the reference lists of our primary selected papers were examined.

To ensure that no relevant publications were missed with the aforementioned search strategies, an extra search with a similar search string was conducted in the PubMed database on November 2, 2015. This resulted in 3505 publications, and the first 10% was screened to determine whether potentially relevant studies had been missed. However, no relevant publications—that had not already been identified in the other databases—were found, indicating that the used search strategies were sufficient.

Study Selection

Titles and abstracts of publications were first screened for eligibility, and if insufficient information was described in the abstract, the full-text papers were obtained. When a full-text paper was not available, a request was sent to the authors. A number of inclusion criteria were used for both within- and between-subject studies, which were established by authors AV, BV, and JB. First, publications were included when an EMI was studied (eg, via smartphone or personal digital assistant)—either as a stand-alone intervention or in combination with other treatment components. Second, the EMI had to be automated and operate independently of a therapist. Thus, studies were excluded when the therapist administered the therapy—for instance—via mobile phone or conference call. This criterion was chosen because of our interest in how new technologies could be used to deliver cost-effective treatments in daily life, which precluded interventions that required conventional therapist effort. Third, a mental health–related outcome had to be targeted (eg, anxiety, depression, or positive psychological well-being, and not a health-related outcome such as physical activity). Fourth, the EMI had to be studied in an ambulatory setting and not in standard therapy sessions. Publications were excluded if a mental health–related outcome was included but the training was not directly focused on improving mental health (eg, psychoeducation for health behaviors or hypertension management). Moreover, studies that did not report post-intervention outcome data, studies without a baseline measure, methodological papers, case studies, reviews, non–peer-reviewed papers, and non-English papers were excluded. Three publications were additionally excluded because their samples were already reported in other included publications. If a study included a control group—in addition to the group that received the EMI—it was coded as a between-subject study (see Coding for further details). The screening was conducted by author AV, and uncertainty about the potential inclusion or exclusion of a paper was resolved with authors BV and JB.

Coding

To collect the relevant study characteristics from each publication, a standardized form was used. Using this form, the following data were collected: (1) first author and publication year, (2) design, (3) sample characteristics (clinical characteristics, age, gender, and sample size), (4) outcome type, (5) information on the EMI (training type, training trigger, number of training episodes, and whether training was supported by an MHP), and (6) type of control condition and sample size. When a publication reported on more than 1 EMI, information was extracted separately for each described EMI, and all EMIs were included separately in the within-subject analyses. For the between-subject analyses, however, only 1 EMI was included thereby ensuring that each participant is represented only once in the analyses [30]. The EMI that was included in the between-subject analyses was the most “complete” intervention. In the case of Grassi et al [25], the Vnar intervention was chosen because it included both video and audio components compared with a video- or audio-only intervention. For both the studies by Repetto et al [31] and Pallavicini et al [32], the virtual reality intervention with biofeedback was chosen above the intervention using only virtual reality.

In the meta-analysis, the primary outcome of interest was “mental health.” Mental health encompasses an anxiety, depression, or stress outcome. Per publication, a set of guidelines was used to determine which specific questionnaire was used to represent this primary outcome. If a study reported 1 primary outcome, this measure was chosen as an indicator of mental health. When no or multiple primary outcomes were defined, a measure was chosen that was most likely to be affected given the aim of the training. For example, if the training focused on reducing anxiety, then, an anxiety questionnaire was preferred over a questionnaire measuring depression. In this process of selecting questionnaires, comprehensive questionnaires were chosen over restricted questionnaires (if there was such a choice), and the most valid questionnaire was chosen (idem). In addition to the coding of the primary outcome for each publication, the different outcome types per study were also coded. Thus, all questionnaires measuring anxiety, depression, perceived stress, and positive psychological well-being outcomes were listed per publication. A questionnaire was considered to represent positive psychological well-being, when it specifically identified positive emotions or processes that were targeted with the intervention. The only positive psychological well-being outcomes that were identified in the publications were acceptance, feelings of relaxation and quality of life; positive affect, for instance, was not studied in the included publications. By listing all the questionnaires that measured mental health and positive psychological well-being, it was possible to examine whether the effectiveness of EMI differed per outcome type (eg, anxiety or depression).

With regard to the information on the EMI, it was reported whether the training was active or passive. A training was labeled as active when participants had to carry out an exercise, for instance, a relaxation exercise [33]. In contrast, a passive training supplied information to the participants (eg, suggestions or tips) but did not require an immediate action from the participant. For example, participants are given messages that would support self-management [34]. Furthermore, when a trigger (using the EMI device) reminds participants to do the training at a specific moment, the training was coded as “triggered.” If participants could do the training whenever they preferred, the triggering of the training was said to be “on-demand.” Moreover, it was reported whether the EMI was used as a stand-alone intervention (coded as stand-alone EMI) or was part of a treatment package and was thus supported by an MHP (coded as MHP-supported EMI). This treatment package could consist of either an EMI in combination with therapy (eg, group therapy or exposure therapy) or an EMI with continued feedback (eg, feedback on homework exercises or messages to improve adherence). An introductory or kickoff session at the start of the intervention was not coded as support. When the effect of an EMI was studied in a population that had access to care as usual (eg, inpatient or outpatient setting), but this (additional) care was not the focus of the study or was not specifically related to the EMI, the EMI was coded as a stand-alone intervention in combination with care as usual. However, these studies often did not specify whether this available care was used by individuals or what this care specifically entailed. Finally, if a study included a control condition and was therefore eligible for the between-subject analyses, the type of control condition was reported (waitlist, placebo, or active treatment). Specifically, if more than 1 control condition was used, a placebo condition was chosen over a waitlist condition, and an active treatment control condition was chosen over both the placebo and waitlist condition. When multiple active treatment control conditions were included in the study, the condition was chosen that had the closest resemblance with the EMI condition, but without its “target ingredient.” This way it was possible to more precisely determine the added value of mobile technology when delivering interventions. Although it is possible to include all reported control conditions using multiple pairwise comparisons (eg, intervention group vs placebo and intervention group vs waitlist), this yields problems in the analyses as the same group is overrepresented (eg, twice). Therefore, in the case of the studies of Kenardy et al [35] and Newman et al [36], the 6-session cognitive behavioral therapy (CBT) was chosen to represent the control condition because it better resembled the EMI condition (6 sessions of computer-assisted CBT) compared with the 12-session CBT condition. Review author (AV) extracted all the relevant study characteristics from the included publications. To check the inter-rater reliability, a second reviewer (MvdP) assessed data from a subset of the selected papers (ie, 20%) [37]. For the nominal variables, the average Cohen’s kappa was .86 indicating strong agreement between the 2 raters. The other variables had an 88% (37/42) agreement, which demonstrates a high consistency among raters.

Quality Assessment

The risk of bias in individual studies was assessed using the Cochrane Collaboration tool [38]. This assessment tool uses 6 different domains for determining the quality of randomized trials: (1) selection bias concerns the method used to generate and conceal the allocation sequence (random sequence generation and allocation concealment, respectively); (2) performance bias deals with ways in which participants and personnel are blinded from knowing condition allocation; (3) detection bias relates to measures that are taken to blind the outcome assessment from knowledge of which intervention participants received; (4) attrition bias refers to whether the study attrition and exclusions from analysis are reported; (5) reporting bias is whether selective outcome reporting is examined and discussed; (6) other bias refers to any other problems or concerns that are not addressed by previous points. For each publication, the domains are rated with either a “high” or “low” risk. If insufficient information is provided in the paper, then, the level of risk is labeled “unclear.” Higgins et al [38] argues that within the domain “other bias,” the sources of bias should be prespecified. In this case, no other biases were specified in advance; therefore, this domain was omitted from the current quality assessment.

The quality assessment was done by the first author (AV), and a 20% sample was assessed by a second reviewer (MvdP). Inter-rater reliability, as assessed with Cohen’s kappa, indicated that there was moderate agreement between raters (ie, average kappa of .69).

Data Analysis

Hedges’ g was used as the estimate of the effect size. This estimate was calculated using the mean, SD, and sample size at post-intervention, as reported in the paper or as obtained through contact with the authors. Moreover, to compute a within-subject effect size, a correlation coefficient is needed that represents the correlation between the repeated measures of the outcome parameter. As this within-subject correlation was rarely reported, the correlation was set at .50 for all studies [39]. For interpreting the effect size, the guidelines for Cohen’s d were used because the two indices are approximately comparable [40]. According to these guidelines, a value of 0.20 is small, 0.50 is medium, and 0.80 is large. Effect sizes are based on a random effect model because we expect the true effect to differ between studies.
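As an illustration of this step, the sketch below computes Hedges' g and its variance for a single pre-post (within-subject) sample under a common formulation for paired designs, with the pre-post correlation fixed at .50 as described above. This is not the authors' code, and the input values are hypothetical.

```r
# Minimal sketch (not the authors' code): Hedges' g for one within-subject
# (pre-post) sample, using a common paired-design formulation with the
# pre-post correlation r fixed at .50, as in this meta-analysis.
hedges_g_prepost <- function(m_pre, m_post, sd_pre, sd_post, n, r = 0.50) {
  sd_diff   <- sqrt(sd_pre^2 + sd_post^2 - 2 * r * sd_pre * sd_post)
  sd_within <- sd_diff / sqrt(2 * (1 - r))       # standardizer on the raw-score metric
  d    <- (m_pre - m_post) / sd_within           # positive value = symptom reduction
  v_d  <- (1 / n + d^2 / (2 * n)) * 2 * (1 - r)  # variance of d for paired data
  j    <- 1 - 3 / (4 * (n - 1) - 1)              # small-sample correction factor
  list(g = j * d, v = j^2 * v_d)
}

# Hypothetical pre- and post-intervention questionnaire scores for n = 24
hedges_g_prepost(m_pre = 25.1, m_post = 18.3, sd_pre = 9.2, sd_post = 8.7, n = 24)
```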

To estimate the effect of EMI from pre intervention to postintervention, analyses were first run with all within-subject data. Furthermore, to determine whether this effect differed from a control condition, between-subject analyses were run. In both the within- and between-subject analyses, it was determined whether there was an effect on the primary outcome “mental health” (as measured with a single questionnaire). Second, it was investigated whether the effect differed per outcome type. That is, was the effect of EMI different for anxiety, depression, perceived stress, or positive psychological outcomes (acceptance, relaxation, and quality of life). To determine the effectiveness per outcome type, all relevant outcome types per publication were included in the analysis. When a study used multiple questionnaires to assess an outcome type (eg, anxiety), an overall mean was created by combining these different questionnaires. By combining multiple questionnaires per study, the data are unlikely to be independent, and this increases the type II error. Therefore, these analyses are only used to explore whether there are potential differences in effects between the outcome types. In addition, for the primary outcome “mental health,” subgroup analyses are done to determine whether the effect differed as a function of design (randomized controlled trial [RCT] or pre-post), sample (healthy or clinical), age, gender, sample size, training type (active or passive), training trigger (triggered, on-demand, or unspecified), daily training episodes (number), total training episodes (number), support by MHP (stand-alone EMI, MHP-supported EMI, or stand-alone EMI with access to care as usual), and quality assessment (0-6). Year of publication was not included as a moderator because there was little variation in this variable (ie, 25 of the 32 publications were published in 2010 or later). Moreover, type of control condition was not included as a moderator because only 13 studies had a between-subject design.

As a measure of heterogeneity, the Q and I2 statistics were used. A significant Q-statistic indicates that there is variation in the true effect size, and I2 reflects the amount of real variance—specifically, values of 25%, 50%, and 75% can be considered small, medium, and large, respectively [41]. Moreover, the risk for publication bias was examined using different techniques [30]. First, the distribution in the funnel plot was visually inspected as a preliminary indication of publication bias. This plot represents the effect size against the standard error of the study. Generally, studies with a large sample size are represented at the top of the plot around the mean, and studies with a smaller sample size are located at the bottom of the plot with a wider distribution around the mean. In the case of publication bias, studies with a small sample size are more likely to fall to the right of the mean (indicating a positive effect size). In other words, when the distribution of studies becomes asymmetrical, there is an indication of publication bias. To quantify the amount of bias, Egger’s test of the intercept was used. In this approach, the amount of bias is captured in the intercept value, and a significant intercept indicates that there is significant publication bias. Furthermore, to correct for the missing studies (to the left of the mean), Duval and Tweedie’s trim and fill method was used. This method calculates where missing studies are most likely to fall and adds these studies to the analysis. The recomputed effect size and CI are thereby corrected for the missing studies and are assumed to be unbiased [30].
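The sketch below illustrates these quantities on made-up effect sizes: a DerSimonian-Laird random-effects pooled estimate, Cochran's Q, I2, and Egger's regression intercept. The authors used Comprehensive Meta-Analysis for these computations; this is only an illustrative reimplementation of the same statistics, not their code.

```r
# Minimal sketch (not the authors' code): DerSimonian-Laird random-effects
# pooling with Cochran's Q, I^2, and Egger's regression test.
pool_random <- function(g, v) {
  w    <- 1 / v                                     # fixed-effect (inverse-variance) weights
  q    <- sum(w * (g - sum(w * g) / sum(w))^2)      # Cochran's Q
  df   <- length(g) - 1
  i2   <- if (q > 0) max(0, (q - df) / q) * 100 else 0
  tau2 <- max(0, (q - df) / (sum(w) - sum(w^2) / sum(w)))
  w_re <- 1 / (v + tau2)                            # random-effects weights
  g_re <- sum(w_re * g) / sum(w_re)
  se   <- sqrt(1 / sum(w_re))
  c(g = g_re, ci_lb = g_re - 1.96 * se, ci_ub = g_re + 1.96 * se,
    Q = q, p_Q = pchisq(q, df, lower.tail = FALSE), I2 = i2)
}

# Egger's test: regress the standardized effect on precision; a non-zero
# intercept indicates funnel-plot asymmetry (small-study effects).
egger_intercept <- function(g, v) {
  summary(lm(I(g / sqrt(v)) ~ I(1 / sqrt(v))))$coefficients[1, ]
}

g <- c(0.62, 0.41, 0.88, 0.35, 0.57)   # hypothetical Hedges' g values
v <- c(0.04, 0.09, 0.12, 0.05, 0.07)   # hypothetical variances
print(pool_random(g, v))
print(egger_intercept(g, v))
```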

Outliers were identified using the value of the standardized residual in both the within- and between-subject analyses. Studies whose standardized residual was significant (ie, exceeding ±1.96) were excluded from the analyses.
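A simple way to apply this criterion is sketched below; the exact residual definition used by the meta-analysis software may differ slightly, and the inputs are hypothetical.

```r
# Minimal sketch (not the authors' code): flag studies whose standardized
# residual against the pooled random-effects estimate exceeds +/- 1.96.
# g_pool and tau2 would come from the random-effects model fitted earlier.
flag_outliers <- function(g, v, g_pool, tau2) {
  z <- (g - g_pool) / sqrt(v + tau2)   # residual scaled by its approximate SE
  which(abs(z) > 1.96)                 # indices of studies treated as outliers
}

flag_outliers(g = c(0.62, 0.41, 1.90), v = c(0.04, 0.09, 0.05),
              g_pool = 0.57, tau2 = 0.03)
```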

The software Comprehensive Meta-Analysis version 3.3.070 (Biostat) was used for all the described analyses, including the calculation of effect sizes with 95% CIs. The forest plots were made using the metafor package in R (version 3.0.3) [42].


A total of 2611 publications were identified with the search strategies after removing duplicates (see Figure 1) [29]. After screening the titles and abstracts, 127 full-text publications were screened for eligibility. Most of these publications were excluded because no (mobile phone) intervention was studied, the intervention was not automated (ie, not independent from therapist), or no outcome data were discussed (methodological paper). A total of 32 publications were considered relevant and were included in the analysis (see Tables 1 and 2). In these 32 publications, 33 different studies were reported using 43 samples that received an EMI (n=1301). The included study by Huffziger et al [26] was technically an ecological momentary assessment study (with an experimental manipulation) and not an EMI. However, considering that the manipulation that was used (mindfulness attention induction) can be seen as an intervention, the study was included.

For the meta-analysis, 5 publications were excluded because no means and SDs to calculate the effect size were reported or obtained after contacting the authors [43-47]. Therefore, 27 publications (27 studies) with 33 samples that received an EMI were included in the meta-analysis (n=1156).

Table 1. Characteristics of the ecological momentary intervention studies (part 1).
Study^a | Design^b | Sample | Age (years) | Gender (% female) | n^c | Mental health measure^d | Outcome type(s)
Included in meta-analysis
Agyapong et al, 2012^e | RCT | Clinical | 48.00 | 54 | 24 | BDI | Depression
Ahtinen et al, 2013 | Prepost | Healthy | | 60 | 14 | Stress single-item | Stress, acceptance, quality of life
Aikens et al, 2015^f (all pooled subjects) | Prepost | Clinical | 51.40 | 79 | 221 | PHQ-8 | Depression
Askins et al, 2009 | RCT | Healthy | 36.30 | 100 | 64 | POMS | Depression
Ben-Zeev et al, 2014 | Prepost | Clinical | 45.90 | 39 | 32 | BDI | Depression
Burns et al, 2011^e | Prepost | Clinical | 37.40 | 88 | 7 | QIDS-C | Depression, anxiety
Carissoli et al, 2015 | RCT | Healthy | 38.11 | 57 | 20 | MSP | Stress
Dagöö et al, 2014^g (mCBT) | RCT | Clinical | 34.70 | 48 | 24 | LSAS-SR | Depression, anxiety, quality of life
Dagöö et al, 2014^g (mIPT) | RCT | Clinical | 39.08 | 56 | 19 | LSAS-SR | Depression, anxiety, quality of life
Depp et al, 2015 | RCT | Clinical | 46.90 | 54 | 41 | MADRS | Depression
Enock et al, 2014 | RCT | Clinical | 34.80 | 48 | 120 | SIAS | Depression, anxiety
Granholm et al, 2012 | Prepost | Clinical | 48.70 | 31 | 41 | BDI | Depression
Grassi et al, 2007 (Vnar) | Prepost^h | Healthy | 23.27 | 50 | 30 | STAI-state | Anxiety, relaxation
Grassi et al, 2007 (Nnar) | Prepost^h | Healthy | 23.27 | 50 | 30 | STAI-state | Anxiety, relaxation
Grassi et al, 2007^e (MP3) | Prepost^h | Healthy | 23.27 | 50 | 30 | STAI-state | Anxiety, relaxation
Harrison et al, 2011 | Prepost | Clinical | 38.20 | 71 | 28 | DASS total score | Depression, anxiety
Huffziger et al, 2013^i | Prepost | Healthy | 22.90 | 60 | 46 | Valence 2-items | Depression, relaxation
Kenardy et al, 2003^e | RCT | Clinical | 36.80 | 76 | 41 | Anxiety composite score | Anxiety
Lappalainen et al, 2013 | RCT | Clinical | 47.10 | 0 | 11 | GSI | Depression, acceptance, quality of life
Ly et al, 2014^e (behavioral activation) | RCT | Clinical | 36.60 | 70 | 36 | BDI | Depression, anxiety, acceptance, quality of life
Ly et al, 2014 (mindfulness) | RCT | Clinical | 35.60 | 71 | 36 | BDI | Depression, anxiety, acceptance, quality of life
Ly et al, 2012 | Prepost | Healthy | 29.50 | 36 | 11 | DASS stress | Depression, anxiety, stress, quality of life
Newman et al, 2014 | RCT | Clinical | 42.45 | 55 | 11 | STAI-trait | Anxiety
Newman et al, 1997 | RCT | Clinical | 38.00 | 83 | 9 | FQ total score | Anxiety
Pallavicini et al, 2009 (VRMB) | Prepost^h | Clinical | 41.25 | | 4 | GAD7 | Anxiety
Pallavicini et al, 2009 (VRM) | Prepost^h | Clinical | 48.50 | | 4 | GAD7 | Anxiety
Proudfoot et al, 2013 | RCT | Clinical | 39.00 | 70 | 126 | DASS total score | Depression, anxiety, stress
Repetto et al, 2013 (VRMB) | Prepost^h | Clinical | | 64 | 7 | BAI | Anxiety
Repetto et al, 2013 (VRM) | Prepost^h | Clinical | | 64 | 9 | BAI | Anxiety
Rizvi et al, 2011 | Prepost | Clinical | 33.86 | 82 | 22 | BSI | Depression
Shapiro et al, 2010 | Prepost | Clinical | 26.30 | 100 | 14 | BDI | Depression
Watts et al, 2013^e | RCT | Clinical | 41.00 | 80 | 10 | BDI | Depression, stress
Wenze et al, 2014 | Prepost | Clinical | 40.86 | 71 | 14 | QIDS-C | Depression
Not included in meta-analysis
Gorini et al, 2010 (VRMB) | Prepost^h | Clinical | | | 8 | BAI | Anxiety
Gorini et al, 2010 (VRM) | Prepost^h | Clinical | | | 4 | BAI | Anxiety
Grassi et al, 2011 (Vnar) | Prepost^h | Healthy | 20.86 | 100 | 15 | STAI-state | Anxiety, relaxation
Grassi et al, 2011 (MP3) | Prepost^h | Healthy | 20.86 | 100 | 15 | STAI-state | Anxiety, relaxation
Preziosa et al, 2009 (Vnar; study 1) | Prepost | Healthy | 23.48 | 100 | 6 | STAI-state | Anxiety, depression
Preziosa et al, 2009 (MP3; study 1) | Prepost | Healthy | 23.48 | 100 | 6 | STAI-state | Anxiety, depression
Preziosa et al, 2009 (study 2) | RCT | Healthy | 23.48 | 50 | 30 | STAI-state | Anxiety, depression, relaxation
Riva et al, 2006 | RCT | Healthy | 23.82 | 48 | 11 | STAI-state | Anxiety, depression, relaxation
Zautra et al, 2012 (mindfulness) | RCT | Clinical | 54.05 | 82 | 25 | Depression 3-items | Depression, stress
Zautra et al, 2012 (mastery-control) | RCT | Clinical | 54.05 | 82 | 25 | Depression 3-items | Depression, stress

^a Studies are ordered by inclusion in the meta-analysis. Behind the study’s year of publication, between brackets, the sample (or condition) that received the ecological momentary intervention is specified. mCBT: mobile cognitive behavioral therapy; mIPT: mobile interpersonal psychotherapy; MP3: audio only condition; Nnar: video only condition; VRMB: virtual reality and mobile condition with biofeedback; VRM: virtual reality with mobile condition; Vnar: video narrative condition.
^b Design of study is labeled either randomized controlled trial (RCT) or prepost design.
^c Sample size at post-intervention in the condition receiving the ecological momentary intervention.
^d The specific questionnaire that was used to represent the primary outcome “mental health” is listed. BDI: Beck Depression Inventory; PHQ-8: Personal Health Questionnaire Depression scale; POMS: Profile of Mood States; QIDS-C: Quick Inventory of Depressive Symptoms-Clinician rated; MSP: Mesure du Stress Psychologique; LSAS-SR: Liebowitz Social Anxiety Scale Self-Report; MADRS: Montgomery–Åsberg Depression Rating Scale; SIAS: Social Interaction Anxiety Scale; BAI: Beck Anxiety Inventory; STAI: State-Trait Anxiety Inventory; DASS: Depression Anxiety Stress Scales; GSI: General Symptom Index; FQ: Fear Questionnaire; GAD7: Generalized Anxiety Disorder 7-item; BSI: Brief Symptom Inventory.
^e Study is considered an outlier in the within-subject analyses.
^f The data used for the analyses consist of all pooled participants; the outcome questionnaire at pre-intervention is compared with the last outcome questionnaire that the participant completed.
^g The intervention could be accessed using mobile phone, tablet, and computer.
^h Study is labeled as a prepost design because it is unclear whether participants were randomized across conditions.
^i The study technically is an ecological momentary assessment study with an experimental manipulation.

Table 2. Characteristics of the ecological momentary intervention studies (part 2).
Study^a | Intervention technique | Training type (+ type of MHP^b support^c) | Training trigger | No. of training sessions^d | Control (n)^e
Included in meta-analysis
Agyapong et al, 2012^f | Self-management and monitoring | Passive (stand-alone + CAU) | Triggered | 168 (2) | Waitlist (n=28)
Ahtinen et al, 2013 | Acceptance and commitment therapy | Active | On-demand | |
Aikens et al, 2015^g (all pooled subjects) | Self-management and monitoring | Passive (+MHP) | Triggered | 26 (1) |
Askins et al, 2009 | Self-management and monitoring | Active (+MHP) | ... | ... |
Ben-Zeev et al, 2014 | Self-management and monitoring | Active (stand-alone + CAU) | Triggered | 90 (3) |
Burns et al, 2011^f | Behavioral activation | Active (+MHP) | Triggered | 280 (5) |
Carissoli et al, 2015 | Mindfulness | Active | On-demand | 36 (2) | Placebo (n=18)
Dagöö et al, 2014^h (mCBT^b) | Cognitive behavioral therapy | Active (+MHP) | ... | ... |
Dagöö et al, 2014^h (mIPT^b) | Interpersonal therapy | Active (+MHP) | ... | ... |
Depp et al, 2015 | Self-management and monitoring | Passive (+MHP) | Triggered | 140 (2) | Paper and pencil version (n=41)
Enock et al, 2014 | Cognitive bias modification | Active | Triggered | 84 (3) | Placebo (n=104)
Granholm et al, 2012 | Cognitive behavioral therapy | Active (stand-alone + CAU) | Triggered | 216 (3) |
Grassi et al, 2007 (Vnar^b) | Relaxation | Active | ... | 4 (2) | Waitlist (n=30)
Grassi et al, 2007 (Nnar^b) | Relaxation | Active | ... | 4 (2) |
Grassi et al, 2007^f (MP3^b) | Relaxation | Active | ... | 4 (2) |
Harrison et al, 2011 | Self-management and monitoring | Passive | On-demand | ... |
Huffziger et al, 2013^i | Mindfulness | Passive | Triggered | 10 (10) |
Kenardy et al, 2003^f | Cognitive behavioral therapy | Active (+MHP) | Triggered | 420 (5) | CBT6 (n=44)
Lappalainen et al, 2013 | Cognitive behavioral therapy and acceptance and commitment therapy | Active (+MHP) | On-demand | ... | Waitlist (n=12)
Ly et al, 2014^f (behavioral activation) | Behavioral activation | Active (+MHP) | ... | ... |
Ly et al, 2014 (mindfulness) | Mindfulness | Active (+MHP) | ... | ... |
Ly et al, 2012 | Acceptance and commitment therapy | Active | On-demand | ... |
Newman et al, 2014 | Cognitive behavioral therapy | Active (+MHP) | Triggered | 112 (4) | CBT6 (n=14)
Newman et al, 1997 | Cognitive behavioral therapy | Active (+MHP) | Triggered | 336 (4) | CBT12 (n=9)
Pallavicini et al, 2009 (VRMB^b) | Relaxation | Active (+MHP) | On-demand | ... | Waitlist (n=4)
Pallavicini et al, 2009 (VRM^b) | Relaxation | Active (+MHP) | On-demand | ... |
Proudfoot et al, 2013 | Self-management and monitoring | Passive | On-demand | ... | Placebo (n=195)
Repetto et al, 2013 (VRMB) | Relaxation | Active (+MHP) | On-demand | ... | Waitlist (n=8)
Repetto et al, 2013 (VRM) | Relaxation | Active (+MHP) | On-demand | ... |
Rizvi et al, 2011 | Dialectical behavior therapy | Active (stand-alone + CAU) | On-demand | ... |
Shapiro et al, 2010 | Self-management and monitoring | Passive (+MHP) | | 168 (1) |
Watts et al, 2013^f | Cognitive behavioral therapy | Active (+MHP) | On-demand | ... | Computer version (n=15)
Wenze et al, 2014 | Cognitive behavioral therapy | Passive (stand-alone + CAU) | Triggered | 28 (2) |
Not included in meta-analysis
Gorini et al, 2010 (VRMB) | Relaxation | Active (+MHP) | On-demand | ... | Waitlist (n=8)
Gorini et al, 2010 (VRM) | Relaxation | Active (+MHP) | On-demand | ... |
Grassi et al, 2011 (Vnar) | Relaxation | Active | ... | 6 (1) | Waitlist (n=15)
Grassi et al, 2011 (MP3^b) | Relaxation | Active | ... | 6 (1) |
Preziosa et al, 2009 (Vnar; study 1) | Relaxation | Active | ... | 6 (1) | Waitlist (n=6)
Preziosa et al, 2009 (MP3; study 1) | Relaxation | Active | ... | 6 (1) |
Riva et al, 2006 | Relaxation | Active | ... | 4 (2) | Placebo (n=30)
Preziosa et al, 2009 (study 2) | Relaxation | Active | ... | 4 (2) | Placebo (n=11)
Zautra et al, 2012 (mindfulness) | Mindfulness | Active | Triggered | 27 (1) | Placebo (n=23)
Zautra et al, 2012 (mastery-control) | Behavioral activation | Active | Triggered | 27 (1) |

^a Studies are ordered by inclusion in the meta-analysis. Behind the study’s year of publication, between brackets, the sample (or condition) that received the EMI is specified.
^b mCBT: mobile cognitive behavioral therapy; mIPT: mobile interpersonal psychotherapy; MP3: audio only condition; MHP: mental health professional; Nnar: video only condition; Vnar: video narrative condition; VRMB: virtual reality and mobile condition with biofeedback; VRM: virtual reality with mobile condition.
^c Following the type of training, the type of support by the mental health professional is reported between brackets. +MHP=mental health professional–supported EMI; stand-alone + CAU=stand-alone EMI with access to care as usual. No information is displayed when the EMI was stand-alone.
^d The maximum number of total training sessions is reported. The maximum number of daily training sessions is reported between brackets.
^e Control condition (and sample size at post-intervention) is listed if the study was included in the between-subject analyses. If the control condition is an active treatment, it is specified which specific active treatment condition is used to calculate the effect size. CBT6=6 sessions of cognitive behavioral therapy; CBT12=12 sessions of cognitive behavioral therapy.
^f Study is considered an outlier in the within-subject analyses.
^g The data used for the analyses consist of all pooled participants; the outcome questionnaire at pre-intervention is compared with the last outcome questionnaire that the participant completed.
^h The intervention could be accessed using mobile phone, tablet, and computer.
^i The study is technically an ecological momentary assessment study with an experimental manipulation.

Figure 1. PRISMA flow diagram for study inclusion.

Study Characteristics

Of the 33 studies that were included, 17 had a prepost design, and 16 were RCTs. Of the total number of studies, 10 included healthy individuals [25,26,33,44,48-51] (studies 1 and 2 [45]), and the remaining studies focused on a clinical sample. Specifically, the focus of 8 studies was on anxiety disorders [31,32,35,36,43,52-54], 6 on depressive symptoms (ranging from mild symptoms to major depressive disorder) [34,47,55-58], 1 on perceived stress [59], 2 on anxiety, depression, and stress [60,61], 2 on bipolar disorder [62,63], 2 on schizophrenia [50,64], 1 on borderline personality disorder [65], and 1 on bulimia nervosa [66]. No study had positive psychological well-being as the primary outcome. Across the studies, the average age ranged from 20.86 to 54.05 years, with a mean of 37.33 (SD=9.37). Only female participants were included in 4 studies [44,48,66] (study 1 [45]), whereas 1 study included only males [59]; overall, the mean percentage of female participants was 64.79 (SD=22.72).

Intervention Characteristics

A range of different intervention techniques were studied: CBT [35,36,50,52,54,58,59,63], acceptance and commitment therapy [33,51,59], mindfulness [26,47,49,57], behavioral activation [47,56,57], relaxation [25,31,32,43-46], interpersonal therapy [52], dialectical behavior therapy [65], cognitive bias modification [53], and self-management and/or monitoring strategies [34,48,55,60-62,64,66]. The EMI was offered in combination with therapy in 10 studies (30%). Four studies combined the EMI with CBT [35,36,54,66], 3 with virtual reality including both relaxation and exposure [31,32,43], 1 with a problem-skill training [48], 1 with psychoeducation [62], and one with meetings including mindfulness and acceptance exercises [59]. In 5 studies, the EMI was a stand-alone intervention in combination with care as usual. This care focused on bipolar disorder [63], schizophrenia or schizoaffective disorder [50,64], major depressive disorder, and alcohol dependency [55], or on borderline personality disorder and substance abuse [65]. The other 18 studies investigated whether the use of an individual EMI can be effective without face-to-face therapy confounding the effect. Nevertheless, support by an MHP was included in 5 of these 18 studies. The MHP was for instance used to support the participant in the first phase of the intervention [58], to give feedback on the homework using Internet or email [52,57] or to increase adherence by telephone [34,56]. As can be seen in Table 2, 13 studies (39%) did not include support by an MHP after starting the EMI. In addition to the EMI and the potential support offered by the MHP, 6 of the 33 studies used a website for psychoeducation [51,57] or for providing therapy modules [56,59-61]. Most of the EMIs under investigation were “active” (25/33, 76%), meaning that participants had to carry out an exercise as part of the intervention. The EMIs in the remaining studies were classified as passive and only provided the participant with information.

On average, the EMI lasted for 7.47 weeks (SD=6.46), but this varied considerably. For example, the studies with the shortest EMI lasted only 1 or 2 days [25,26,46] (study 2 [45]), whereas the study with the longest EMI lasted for 26 weeks [34]. However, these numbers may be only modestly informative considering that the number of training episodes that people received (per day) varied highly across the studies. To explain, the study with the shortest length of training actually had the highest number of training episodes per day [26], whereas the study with the longest training length only trained people once a week [34]. Therefore, it may be more valuable to examine how many training episodes participants received per day and in total. Unfortunately, 13 studies did not specify the number of training episodes (per day or in total). Across the 20 other studies, the average number of training episodes was 2.80 per day (SD=2.12), ranging from 1 to 10, and 108.25 in total (SD=123.00), ranging from 4 to 420. The number of training episodes not only varied across studies but likely also varied across individuals within a given study. Fifteen of the 33 studies (ie, 45%) reported (some) information about compliance with the training, but the information used to represent compliance differed across studies. The average compliance with the sessions or treatment modules was 73.88% (SD=16.73) [26,47,50,52,53,57,58,60,62,63,66]. Burns et al [56] reported that the number of training sessions was on average 15.30 (SD=8.30) in the first week and that this decreased to 9.00 (SD=6.50) in the final week. In the study by Ben-Zeev et al [64], participants used the training on 86.50% of the days and completed on average 5.19 sessions on those days. Participants in the study by Aikens et al [34] took part for a median of 25 of the 26 weeks. Finally, Lappalainen et al [59] reported that all participants tried at least 3 of the 6 available tools; however, no data on the frequency of use were reported.

The training episodes were automatically triggered by the device in 13 studies, and in 11 studies, the training episodes were not specifically triggered, and participants could complete the training whenever they wanted. Nine studies did not report whether the training was triggered or whether it was accessed on-demand.

Quality Assessment

The quality assessment of the studies is summarized in Table 3; the average overall grade was 2.29 (SD=1.42, on a scale from 0 to 6), which can be considered low. Nine studies had a pre-intervention to post-intervention design, so the quality domain “selection bias”—as indexed by “random sequence generation” and “allocation concealment”—was not applicable (quality domain 1, see the previous section) [33,50,51,56,60,63-66]. Only 5 studies had a low risk of bias on this domain [52,57,58,61,62], with 5 other studies having a low risk of bias on “random sequence generation” and an unclear or high risk on “allocation concealment” [26,31,32,48,55]. In the remaining 14 studies, the risk was either unclear or high. The blinding of personnel (domain 2) was achieved in only 2 studies [61,62]. Moreover, most studies used self-report questionnaires, with only 2 studies using clinician-rated interviews (domain 3)—however, clinicians were not blinded to the condition of the participants [56,63]. There was a high risk for attrition (domain 4; ie, ≥20%) in 8 studies [48,50,53,58,60-62,66], and attrition (in the EMI group) was not disclosed in 7 studies [25,35,43,44,46] (studies 1 and 2 [45]). Finally, 7 studies failed to report the results for all prespecified outcome types (domain 5) [25,32,43,44,46] (studies 1 and 2 [45]).

Table 3. Quality assessment of the individual studies using the Cochrane Collaboration’s tool.
Study | Random sequence generation^a | Allocation concealment^a | Performance bias^b | Detection bias | Attrition bias^c | Reporting bias^d | Overall grade^e
Agyapong et al, 2012+++3
Ahtinen et al, 2013N/AN/A++4
Aikens et al, 2015++2
Askins et al, 2009+?+2
Ben-Zeev et al, 2014N/AN/A++4
Burns et al, 2011N/AN/A?++4
Carissoli et al, 2015??++2
Dagöö et al, 2014++++4
Depp et al, 2015++++4
Enock et al, 2014???+1
Gorini et al, 2010f???0
Granholm et al, 2012N/AN/A+3
Grassi et al, 2011f???0
Grassi et al, 2007???0
Harrison et al, 2011N/AN/A+3
Huffziger et al, 2013+?++3
Kenardy et al, 2003???+1
Lappalainen et al, 2013??++2
Ly et al, 2014++++4
Ly et al, 2012N/AN/A++4
Newman et al, 2014??++2
Newman et al, 1997??++2
Pallavicini et al, 2009+?+2
Preziosa et al, 2009f (studies 1 and 2)???0
Proudfoot et al, 2013++++4
Repetto et al, 2013+?+2
Riva et al, 2006f???0
Rizvi et al, 2011N/AN/A++4
Shapiro et al, 2010N/AN/A+3
Watts et al. 2013+++3
Wenze et al, 2014N/AN/A?++4
Zautra et al, 2012f??++2








^a The label “not applicable” (N/A) is used in 1-armed studies.
^b The risk for performance bias is rated low if personnel are blinded, irrespective of whether participants were blinded.
^c The bias for attrition is considered high when the attrition from pre-intervention to post-intervention is 20% or more.
^d The bias for selective reporting is labeled low if all prespecified outcomes are reported; it is not necessary that all statistical information is reported per outcome (eg, means, standard deviation, CI, P values).
^e The overall grade is determined by summing the number of low-risk categories and the number of N/A categories. +=low risk of bias; −=high risk of bias; ?=unclear risk of bias.
^f Study is not included in the meta-analysis.

Within-Subject Analyses

A total of 27 publications, including 33 EMI groups (n=1156), were included in the within-subject analyses, and these studies showed significant heterogeneity, Q (32)=188.80, P<.001. The I2 statistic showed that the observed variance was high (I2=83.05). This further supports the use of a random effect model in the analyses.

The average effect on mental health from pre-intervention to post-intervention was g=0.73, 95% CI (0.56-0.90), P<.001 (see Figure 2 and Table 4), indicating a medium to large effect. To determine whether there was a risk for publication bias, the distribution in the funnel plot was examined. As can be seen in Figure 3, most of the studies (white circles) are centered at the top of the plot and are distributed to the right side of the mean as the sample size decreases. This reflects the presence of a publication bias, and an Egger’s test of intercept was used as a method to quantify the amount of bias. In this case, the intercept was 1.89, 95% CI (0.28-3.51), with t (31)=2.392 and 1-sided P=.01. In other words, there was a significant risk for bias. To correct for the missing studies to the left of the mean, the trim and fill method was used. Figure 3 shows that 2 studies (black circles) were added and the corrected effect size was g=0.70, 95% CI (0.52-0.87). The corrected effect is virtually identical to the unadjusted effect, which suggests that the reported findings are quite robust and are not simply due to publication bias.

The standardized residual identified 6 studies as outliers, and these were removed from the analyses [35,55,56,58] (MP3 condition [25]) (BA condition [57]). Removal of these studies resulted in a decrease in effect and heterogeneity (g=0.57, 95% CI: 0.45-0.70, P<.001; Q (26)=74.46, I2=65.08). Nevertheless, the effect was still medium for the 27 included EMI groups (n=1008), and the studies were significantly heterogeneous.

It was explored whether the effect was different per outcome type. Depressive symptoms were assessed in 17 studies; anxiety in 15 studies; quality of life in 6 studies; stress in 5 studies; acceptance in 4 studies, and relaxation in 3 studies. As can be seen in Table 5, there was evidence for an effect on anxiety (g=0.47, 95% CI: 0.32-0.63, P<.001), depression (g=0.48, 95% CI: 0.34-0.61, P<.001), perceived stress (g=0.40, 95% CI: 0.23-0.57, P<.001), acceptance (g=0.36, 95% CI: 0.13-0.59, P=.002), and quality of life (g=0.38, 95% CI: 0.19-0.56, P<.001). No effect was found on relaxation with g=0.28, 95% CI (−0.46 to 1.01), P=.46. However, there was no evidence that the effect differed significantly per outcome type with Q (5)=1.74, P=.88.

Furthermore, subgroup analyses were done to see whether the effect varied by moderator. Table 4 shows that “support by an MHP” was the only moderator for which the effect varied significantly, Q (2)=6.77, P=.03. Specifically, the effect was medium to large when the EMI included support by an MHP (g=0.73, 95% CI: 0.57-0.88), small to medium for the stand-alone EMI (g=0.45, 95% CI: 0.22-0.69), and small for those individuals who received a stand-alone EMI in combination with care as usual (g=0.38, 95% CI: 0.11-0.64).

Table 4. Effect sizes (Hedges’ g) of ecological momentary intervention on mental health by study and intervention characteristics (within-subject analyses)^a.

Outcome | k^b | n^c | g (95% CI)^d | Q^e | I²^e | Q^f (test of difference)
Mental health | 27 | 1008 | 0.57 (0.45-0.70)^g | 74.46^g | 65.08 |
Design | | | | | | 1.03
  RCT^h | 11 | 481 | 0.65 (0.48-0.82)^g | 24.10^i | 58.50 |
  Pre-post | 16 | 527 | 0.52 (0.33-0.71)^g | 47.34^g | 68.32 |
Sample | | | | | | 1.79
  Clinical | 20 | 793 | 0.63 (0.50-0.76)^g | 39.32^i | 51.68 |
  Healthy | 7 | 215 | 0.40 (0.10-0.71)^j | 26.76^g | 77.58 |
Age^k, years | | | | | | 2.19
  ≤38.15 | 12 | 426 | 0.61 (0.36-0.86)^g | 54.38^g | 79.77 |
  >38.15 | 12 | 552 | 0.51 (0.37-0.64)^g | 17.64^l | 37.65 |
  Unspecified | 3 | 30 | 0.80 (0.41-1.18)^g | 0.40 | 0.00 |
Gender^k | | | | | | 1.96
  ≤60% female | 14 | 450 | 0.49 (0.28-0.70)^g | 51.25^g | 74.63 |
  >60% female | 11 | 550 | 0.67 (0.53-0.81)^g | 15.94 | 37.26 |
  Unspecified | 2 | 8 | 0.55 (−0.08 to 1.17)^l | 1.12 | 10.43 |
Sample size^k | | | | | | 1.18
  ≤22 participants | 13 | 158 | 0.67 (0.46-0.87)^g | 17.24 | 30.39 |
  >22 participants | 14 | 850 | 0.52 (0.36-0.69)^g | 56.36^g | 76.93 |
Training type | | | | | | 0.32
  Active | 20 | 518 | 0.60 (0.42-0.78)^g | 57.51^g | 66.96 |
  Passive | 7 | 490 | 0.53 (0.34-0.71)^g | 16.65^j | 63.97 |
Training trigger | | | | | | 1.65
  Triggered | 9 | 535 | 0.52 (0.33-0.71)^g | 26.96^i | 70.45 |
  On-demand | 11 | 256 | 0.49 (0.37-0.62)^g | 9.41 | 0.00 |
  Unspecified | 7 | 217 | 0.76 (0.38-1.14)^g | 35.69^g | 83.19 |
No. of daily training episodes^k | | | | | | 0.53
  ≤2 | 7 | 370 | 0.55 (0.24-0.87)^i | 32.65^g | 81.62 |
  >2 | 6 | 259 | 0.51 (0.20-0.82)^i | 22.81^g | 78.08 |
  Unspecified | 14 | 379 | 0.63 (0.49-0.77)^g | 17.48 | 25.62 |
No. of total training episodes^k | | | | | | 0.92
  ≤84 | 7 | 481 | 0.48 (0.21-0.75)^i | 36.62^g | 83.62 |
  >84 | 6 | 148 | 0.62 (0.27-0.97)^i | 17.77^i | 71.86 |
  Unspecified | 14 | 379 | 0.63 (0.49-0.77)^g | 17.48 | 25.62 |
Support MHP^m | | | | | | 6.77^j
  MHP-supported EMI | 14 | 474 | 0.73 (0.57-0.88)^g | 20.67^l | 37.10 |
  Stand-alone EMI | 9 | 425 | 0.45 (0.22-0.69)^g | 35.81^j | 77.66 |
  Stand-alone EMI with access to care as usual | 4 | 109 | 0.38 (0.11-0.64)^i | 5.37 | 43.97 |
Quality assessment^k | | | | | | 0.01
  ≤3 | 17 | 781 | 0.57 (0.39-0.76)^g | 57.68^j | 72.26 |
  >3 | 10 | 227 | 0.59 (0.42-0.76)^g | 16.78^l | 46.38 |

^a Outliers were excluded from the presented moderation analyses (ie, 6 studies).
^b k=number of studies.
^c n=number of participants.
^d g=effect size Hedges’ g with 95% CI.
^e Q and I²=heterogeneity statistics.
^f Q=contrast between subgroups.
^g P<.001.
^h RCT=randomized controlled trial.
^i P<.01.
^j P<.05.
^k Data were categorized based on the median.
^l P<.10.
^m MHP=mental health professional.

Table 5. Effect sizes (Hedges’ g) of ecological momentary intervention by outcome type (within-subject analyses)^a.

Outcome | k^b | n^c | g (95% CI)^d | Q^e | I²^e | Q^f (test of difference)
Overall | 50 | 1830 | | | | 1.74
Anxiety | 15 | 468 | 0.47 (0.32-0.63)^g | 28.28^h | 50.49 |
Depression | 17 | 870 | 0.48 (0.34-0.61)^g | 46.48^g | 65.58 |
Perceived stress | 5 | 199 | 0.40 (0.23-0.57)^g | 4.59 | 12.79 |
Relaxation | 3 | 106 | 0.28 (−0.46 to 1.01) | 25.28^g | 92.09 |
Acceptance | 4 | 72 | 0.36 (0.13-0.59)^i | 2.79 | 0.00 |
Quality of life | 6 | 115 | 0.38 (0.19-0.56)^g | 4.25 | 0.00 |

^a Outliers were excluded from the presented moderation analyses (ie, 6 studies).
^b k=number of studies.
^c n=number of participants.
^d g=effect size Hedges’ g with 95% confidence interval.
^e Q and I²=heterogeneity statistics.
^f Q=contrast between subgroups.
^g P<.001.
^h P<.05.
^i P<.01.

Figure 2. Forest plot showing the effect of ecological momentary interventions (EMIs) on mental health complaints for all within-subject studies. The EMI sample (or condition) is reported after the year of publication when multiple EMI samples were included in a publication.
Figure 3. Funnel plot of standard error by Hedges’ g with imputed values based on Duval and Tweedie’s trim and fill method (within-subject studies).

Between-Subject Analyses

In the between-subject analyses, only 1 EMI group per study was included (see “Coding”). A total of 13 studies were included with 454 participants in the EMI condition and 522 participants in a control condition (waitlist, placebo, or active treatment control). The included studies were not significantly heterogeneous, Q (12)=17.17, P=.14. Moreover, the observed true variance was small (I2=30.13). A small value of I2 indicates that a large part of the variance is the result of random error. If one tries to explain this variance (with subgroup analyses), one tries to find an explanation for something that is in essence random [30]. Therefore, no attempt will be made to explain the variance in effect by testing differences due to outcome types and other moderators. Still, a random effect model was adopted because we do not assume a common effect size (despite the lack of statistical significant variance between studies) [30].

The effect for EMI in between-subject studies was g=0.40, 95% CI (0.22-0.57), P<.001 (see Figure 4). This effect can be considered small to medium. The funnel plot (see Figure 5) shows that there is an indication of publication bias; the distribution of effects is asymmetrical as the sample size decreases. Specifically, effect sizes are more likely to fall to the right side of the mean when the sample size is small. Furthermore, the Egger’s test of intercept is significant, indicating that there is a risk for bias (intercept is 1.50, 95% CI: 0.28-2.72, with t (11)=2.708, 1-sided P=.01). The trim and fill method was used to account for the missing studies. Six studies were added to the left of the mean (black circles in Figure 5), and the corrected effect size was g=0.23, 95% CI (0.04-0.42). The corrected effect is considerably smaller than the uncorrected effect, which indicates that the uncorrected effect may be subject to publication bias and needs to be interpreted carefully. On the basis of the standardized residuals, no study was identified as an outlier.

Figure 4. Forest plot showing the effect of ecological momentary interventions (EMIs) on mental health complaints for all between-subject studies. The EMI sample (or condition) that was used to represent the active treatment condition is reported after the year of publication.
View this figure
Figure 5. Funnel plot of standard error by Hedges’ g with imputed values based on Duval and Tweedie’s trim and fill method (between-subject studies).

Principal Findings

This systematic review and meta-analysis was a first attempt to examine whether mobile technologies can be used to provide effective interventions for mental health and, if so, under which circumstances. A total of 33 studies (n=1301) were used to answer this question, and the included studies varied considerably in terms of study and intervention characteristics. The quality assessment indicated that the reported study quality was generally low. Specifically, the studies were at risk for bias caused by attrition, reliance on self-report measures, and failure to blind personnel. Moreover, only a few studies reported using strategies to randomly allocate participants to conditions.

In the within-subject studies (n=1008), a significant medium effect size (Hedges' g) of 0.57 was found. The estimated effect size did not differ significantly per outcome type (ie, anxiety, depression, perceived stress, acceptance, relaxation, and quality of life), although no significant effect was found for relaxation. Moderation analysis suggested that the effect on mental health was 62% larger when the EMI was part of a treatment package that included support from an MHP compared with a stand-alone EMI. Moreover, this moderation analysis showed that the effect of EMI was smaller, but still significant, in the population that had access to care as usual while using the EMI (eg, inpatient or outpatient setting). It is possible to speculate about what caused this difference in effect; however, a clear comparison is complicated by the fact that the groups (and included studies) are very diverse. More specifically, the group that received EMIs while also having access to care as usual consisted largely of patients with severe complaints who might be less susceptible to change (eg, schizophrenia or schizoaffective disorders, borderline personality disorder, and substance abuse).

With regard to the between-subject studies (n=454), the estimated effect size was 0.40. The effect was, however, subject to publication bias, and the corrected effect was considered small, but significant (g=0.23).

Both the within- and the between-subject analyses indicate that mobile technologies can be used effectively to deliver interventions for mental health. When interpreting this effect, it must be acknowledged that the effects were considerably smaller in the between-subject studies than in the within-subject studies. A larger effect in within-subject studies is frequently observed. However, within-subject studies are limited because causality generally cannot be inferred from them. Moreover, these studies have an increased risk of type II errors, which implies that conclusions from within-subject studies must be interpreted with caution [67]. Nevertheless, both study types provide a first, and positive, insight into how mobile technology can be used to improve mental health.

The finding that the effect of EMIs was stronger when support by an MHP was included is in line with findings from research on Internet interventions (eg, [68,69]). Therefore, although fully automated EMIs can have a positive effect on mental health, it is additionally beneficial to include contact between researcher (or therapist) and participant. This contact could help increase adherence and motivation, which in turn could result in a stronger effect. Unfortunately, it is currently unknown what level of support is needed to optimize the effectiveness of EMIs. Future studies should differentiate what kind of contact is necessary for improvement. Not only is it important to learn how much contact is required, but the when (eg, at the beginning of or during the intervention), how (eg, via mobile phone, email, or face-to-face), and what (eg, should support focus on adherence or on the intervention) questions are also worth asking when developing evidence-based interventions [69]. In addition, it is worthwhile to consider which individuals stand to benefit from support and whether support is necessary for everyone. To specify, EMIs can be a valuable (first) step to treat the “worried well” and individuals with mild symptoms. Using EMIs to treat this group could be economically efficient, as mild problems constitute a major part of all reported mental health problems [70]. Treating this group with the cost-effective EMI methodology frees resources (such as therapists) for individuals who are in greater need of more intensive interventions. Moreover, it could help improve the access to and quality of psychological care. Ideally, the progress of individuals using EMIs would be monitored so that alternative intervention options can be recommended when an EMI fails to be effective. Such alternatives could entail extra support (while using the EMI), an Internet intervention, or a face-to-face intervention. Incorporating EMIs in a stepped-care program could help provide intensive intervention only when needed [71].

Apart from the moderator “support by an MHP,” no moderation effects were found for the other study or intervention characteristics. The intervention was, for example, equally effective for healthy and clinical individuals. Taken at face value, the absence of significant moderators implies that any form of EMI, irrespective of, for instance, the type of training or the number of training episodes, is equally effective for all individuals. Obviously, this assumption is implausible, and it is more likely that the null findings are the result of the relatively small number of studies that specifically reported the intervention characteristics (eg, number of training episodes and whether training was triggered) [72]. Considering that the research field of EMIs is relatively new, it is understandable that limited information is available on which characteristics of an intervention are effective (or active). It does, however, highlight the need for research that determines what the active features of an intervention are [73]. Potential questions relate to the frequency and duration of the intervention (eg, is daily practice required, and if so, how many times a day?). Although initial research suggests that (daily) repetition is necessary to learn a new behavior [74], this should be further investigated using RCTs with EMIs. Another potential research question is whether a training session should be offered on demand or triggered automatically. A meta-analysis investigating the use of prompts to stimulate engagement with digital interventions found preliminary support for the use of technology (eg, text messages or emails) to improve engagement [75]. This result is interesting, as mobile interventions make it easy to trigger a training session, but more studies are needed to establish whether this effect holds. Altogether, it is important that future research focuses on identifying the most potent feature(s) of an intervention.

Limitations

This meta-analysis is limited by the low reported study quality (ie, 2.29 on a scale from 0 to 6). When reported study quality is low, a study may suffer from weaknesses in the experimental setup or from problems in the processing of the data. These shortcomings can distort the estimated effect and lead to an over- or underestimation of the true effect [38]. However, reported study quality must not be confused with the actual quality of the study: studies may have used excellent setups but may have failed to adequately report their precise procedure. Indeed, most of the studies failed, on one or more occasions, to provide sufficient information to establish whether there was a risk of bias. To enable correct quality assessments, it is recommended that authors of future studies follow publication guidelines such as the CONSORT statement for RCTs [76].

In line with the previous limitation, it is also important that sufficient intervention details are described so that other researchers can fully comprehend what the intervention entailed. In the included studies, the content of the intervention was described, yet other important intervention components, as suggested by Davidson et al [28], were not always disclosed. For instance, 10 of the 33 studies (30%) failed to report how the intervention was triggered, and more than half of the studies did not report compliance with the intervention. It is imperative that studies describe the full details of the intervention used and compliance with it; the guidelines by Davidson et al [28] can be used for this purpose. This information can ultimately be used to determine which interventions (or intervention characteristics) are the most effective.

Another limitation is that the majority of the included studies used a within-subject design. Although this design can yield valuable information, RCTs (which use a between-subject design) are considered superior when evaluating interventions because they can establish a causal relation. Moreover, some of the included studies (both within- and between-subject) had small sample sizes. Studies with small samples may be statistically underpowered to detect an effect and have lower validity [72,77]. To further strengthen the body of knowledge on the effectiveness of EMIs, adequately powered RCTs are needed.

Conclusions

To conclude, the meta-analysis found a small to medium effect of EMIs on mental health, and this effect did not differ across the outcome types. Furthermore, the effect appeared to be larger when the EMI was supported by an MHP. It is important that future research determines how support by an MHP can best be implemented and whether this support is necessary for everyone. In addition, new research should investigate what the active features of an EMI are. Overall, the use of EMIs for improving mental health is supported; EMIs offer great potential for providing easy and cost-effective strategies to improve mental health and positive psychological well-being in the population.

Acknowledgments

This work was supported by the “Top”-grant of the Netherlands Organisation for Health Research and Development (ZON-MW) to Jos F. Brosschot, under grant number 40-0081 2-98-1 I 029.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Specific search strings used to find publications.

PDF File (Adobe PDF File), 6KB

  1. Kessler RC, Angermeyer M, Anthony JC, Demyttenaere K, Gasquet I, Gluzman S, et al. Lifetime prevalence and age-of-onset distributions of mental disorders in the World Health Organization's World Mental Health Survey Initiative. World Psychiatry 2007 Oct;6(3):168-176 [FREE Full text] [Medline]
  2. Alonso J, Angermeyer MC, Bernert S, Bruffaerts R, Brugha TS, Bryson H, ESEMeD/MHEDEA 2000 Investigators‚ European Study of the Epidemiology of Mental Disorders (ESEMeD) Project. Use of mental health services in Europe: results from the European Study of the Epidemiology of Mental Disorders (ESEMeD) project. Acta Psychiatr Scand Suppl 2004(420):47-54. [CrossRef] [Medline]
  3. Bijl RV, de Graaf R, Hiripi E, Kessler RC, Kohn R, Offord DR, et al. The prevalence of treated and untreated mental disorders in five countries. Health Aff (Millwood) 2003;22(3):122-133 [FREE Full text] [Medline]
  4. World Health Organization. 2013 May 27. Comprehensive mental health action plan   URL: http://apps.who.int/gb/ebwha/pdf_files/WHA66/A66_R8-en.pdf?ua=1 [WebCite Cache]
  5. Wanless D. Securing our Future Health: Taking a Long-Term View.   URL: http://si.easp.es/derechosciudadania/wp-content/uploads/2009/10/4.Informe-Wanless.pdf [accessed 2016-02-03] [WebCite Cache]
  6. Mehl R, Conner T. Handbook of research methods for studying daily life. New York: Guilford Press; 2012.
  7. Proudfoot J, Parker G, Hadzi PD, Manicavasagar V, Adler E, Whitton A. Community attitudes to the appropriation of mobile phones for monitoring and managing depression, anxiety, and stress. J Med Internet Res 2010;12(5):e64 [FREE Full text] [CrossRef] [Medline]
  8. Olff M. Mobile mental health: a challenging research agenda. Eur J Psychotraumatol 2015;6:27882 [FREE Full text] [Medline]
  9. Price M, Yuen EK, Goetter EM, Herbert JD, Forman EM, Acierno R, et al. mHealth: a mechanism to deliver more accessible, more effective mental health care. Clin Psychol Psychother 2014;21(5):427-436 [FREE Full text] [CrossRef] [Medline]
  10. Schwabe L, Wolf OT. Stress prompts habit behavior in humans. J Neurosci 2009 Jun 3;29(22):7191-7198 [FREE Full text] [CrossRef] [Medline]
  11. Soares JM, Sampaio A, Ferreira LM, Santos NC, Marques F, Palha JA, et al. Stress-induced changes in human decision-making are reversible. Transl Psychiatry 2012;2:e131 [FREE Full text] [CrossRef] [Medline]
  12. Schwabe L, Wolf OT. Socially evaluated cold pressor stress after instrumental learning favors habits over goal-directed action. Psychoneuroendocrinology 2010 Aug;35(7):977-986. [CrossRef] [Medline]
  13. Otto AR, Raio CM, Chiang A, Phelps EA, Daw ND. Working-memory capacity protects model-based learning from stress. Proc Natl Acad Sci U S A 2013 Dec 24;110(52):20941-20946 [FREE Full text] [CrossRef] [Medline]
  14. Kalichman S, Stein JA, Malow R, Averhart C, Dévieux J, Jennings T, et al. Predicting protected sexual behaviour using the Information-Motivation-Behaviour skills model among adolescent substance abusers in court-ordered treatment. Psychol Health Med 2002;7(3):327-338 [FREE Full text] [CrossRef] [Medline]
  15. Neal DT, Wood W, Quinn JM. Habits: a repeat performance. Curr Dir Psychol Sci 2006 Aug;15(4):198-202. [CrossRef]
  16. International Telecommuncation Union. 2005-2015 ICT data for the world, by geographic regions and by level of development   URL: http://www.itu.int/en/ITU-D/Statistics/Documents/statistics/2015/ITU_Key_2005-2015_ICT_data.xls [accessed 2016-02-03] [WebCite Cache]
  17. Kazdin AE, Blase SL. Rebooting Psychotherapy Research and Practice to Reduce the Burden of Mental Illness. Perspect Psychol Sci 2011 Jan;6(1):21-37. [CrossRef] [Medline]
  18. Kaplan RM, Stone AA. Bringing the laboratory and clinic to the community: mobile technologies for health promotion and disease prevention. Annu Rev Psychol 2013;64:471-498. [CrossRef] [Medline]
  19. Heron KE, Smyth JM. Ecological momentary interventions: incorporating mobile technology into psychosocial and health behaviour treatments. Br J Health Psychol 2010 Feb;15(Pt 1):1-39 [FREE Full text] [CrossRef] [Medline]
  20. Serino S, Triberti S, Villani D, Cipresso P, Gaggioli A, Riva G. Toward a validation of cyber-interventions for stress disorders based on stress inoculation training: a systematic review. Virtual Reality 2013 Oct 26;18(1):73-87. [CrossRef]
  21. Marsch LA. Leveraging technology to enhance addiction treatment and recovery. J Addict Dis 2012;31(3):313-318 [FREE Full text] [CrossRef] [Medline]
  22. Krishna S, Boren SA, Balas EA. Healthcare via cell phones: a systematic review. Telemed J E Health 2009 Apr;15(3):231-240. [CrossRef] [Medline]
  23. Ehrenreich B, Righter B, Rocke DA, Dixon L, Himelhoch S. Are mobile phones and handheld computers being used to enhance delivery of psychiatric treatment? A systematic review. J Nerv Ment Dis 2011 Nov;199(11):886-891. [CrossRef] [Medline]
  24. Donker T, Petrie K, Proudfoot J, Clarke J, Birch M, Christensen H. Smartphones for smarter delivery of mental health programs: a systematic review. J Med Internet Res 2013;15(11):e247 [FREE Full text] [CrossRef] [Medline]
  25. Grassi A, Preziosa A, Villani D, Riva G. A relaxing journey: the use of mobile phones for well-being improvement. Annual Review of Cybertherapy and Telemedicine. 2007. p. 123-131   URL: https:/​/fe083ca7-a-62cb3a1a-s-sites.​googlegroups.com/​site/​unicatt-it-arctt-teams-edition-backup/​volume-5--summer-2007/​Grassi_ARCTT_2007.​pdf?attachauth=ANoY7cqPfJtdFEuboV2VVsvqkJ-AobMxbhlRWZ3tG5pI8spMXlsfOO2LthN3vKefal8wYT- rk2jTdw54BDaQ8geXQdqxdJlmUTHOCE3LNfw8K2a3Q8VFiNAX3C5yUH3SwxitlwHoaPt3P2UuhG_ m8C9iUbnh98X3oI3gF4p4j_1OxrN1FXg7QxaFaJccjYFQgsET6a7YF7jzScSldGubRaXvnnL4pboQW8U0LtZo5MovbDOBW8LOd_ oTTavyazhV3fiHSl9RPqLlHc7WGsHDoE5NTbaP-9LXR-_ugfIE2t4xmY5NLeQqZbI%3D&attredirects=0 [WebCite Cache]
  26. Huffziger S, Ebner-Priemer U, Eisenbach C, Koudela S, Reinhard I, Zamoscik V, et al. Induced ruminative and mindful attention in everyday life: an experimental ambulatory assessment study. J Behav Ther Exp Psychiatry 2013 Sep;44(3):322-328 [FREE Full text] [CrossRef] [Medline]
  27. Huppert FA. Psychological well-being: evidence regarding its causes and consequences. Appl Psychol Health Well-Being 2009;1(2):137-164. [CrossRef]
  28. Davidson KW, Goldstein M, Kaplan RM, Kaufmann PG, Knatterud GL, Orleans CT, et al. Evidence-based behavioral medicine: what is it and how do we achieve it? Ann Behav Med 2003 Dec;26(3):161-171. [Medline]
  29. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
  30. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to meta-analysis. Oxford: Wiley; 2009.
  31. Repetto C, Gaggioli A, Pallavicini F, Cipresso P, Raspelli S, Riva G. Virtual reality and mobile phones in the treatment of generalized anxiety disorders: a phase-2 clinical trial. Pers Ubiquit Comput 2013 Feb;17(2):253-260. [CrossRef]
  32. Pallavicini F, Algeri D, Repetto C, Gorini A, Riva G. Biofeedback virtual reality and mobile phones in the treatment of generalized anxiety disorder (GAD): a phase-2 controlled clinical trial. J Cyber Ther Rehabil. 2009. (4) p. 315-327   URL: https:/​/www.​researchgate.net/​publication/​289089225_Biofeedback_virtual_reality_and_mobile_phones_in _the_treatment_of_generalized_anxiety_disorder_gad_A_phase-2_controlled_clinical_trial [WebCite Cache]
  33. Ahtinen A, Mattila E, Välkkynen P, Kaipainen K, Vanhala T, Ermes M, et al. Mobile mental wellness training for stress management: feasibility and design implications based on a one-month field study. JMIR Mhealth Uhealth 2013;1(2):e11 [FREE Full text] [CrossRef] [Medline]
  34. Aikens JE, Trivedi R, Heapy A, Pfeiffer PN, Piette JD. Potential Impact of Incorporating a Patient-Selected Support Person into mHealth for Depression. J Gen Intern Med 2015 Jun;30(6):797-803. [CrossRef] [Medline]
  35. Kenardy JA, Dow MG, Johnston DW, Newman MG, Thomson A, Taylor CB. A comparison of delivery methods of cognitive-behavioral therapy for panic disorder: an international multicenter trial. J Consult Clin Psychol 2003 Dec;71(6):1068-1075. [CrossRef] [Medline]
  36. Newman MG, Przeworski A, Consoli AJ, Taylor CB. A randomized controlled trial of ecological momentary intervention plus brief group therapy for generalized anxiety disorder. Psychotherapy (Chic) 2014 Jun;51(2):198-206 [FREE Full text] [CrossRef] [Medline]
  37. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb) 2012;22(3):276-282 [FREE Full text] [Medline]
  38. Higgins JPT, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Cochrane Bias Methods Group, Cochrane Statistical Methods Group. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ 2011;343:d5928 [FREE Full text] [Medline]
  39. Follmann D, Elliott P, Suh I, Cutler J. Variance imputation for overviews of clinical trials with continuous response. J Clin Epidemiol 1992 Jul;45(7):769-773. [Medline]
  40. Cohen J. A power primer. Psychol Bull 1992 Jul;112(1):155-159. [Medline]
  41. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003 Sep 6;327(7414):557-560 [FREE Full text] [CrossRef] [Medline]
  42. Viechtbauer W. Conducting meta-analysis in R with the metafor package. J Stat Softw Aug 2010;36(3):1-48. [CrossRef]
  43. Gorini A, Pallavicini F, Algeri D, Repetto C, Gaggioli A, Riva G. Virtual reality in the treatment of generalized anxiety disorders. Stud Health Technol Inform 2010;154:39-43. [Medline]
  44. Grassi A, Gaggioli A, Riva G. New technologies to manage exam anxiety. Stud Health Technol Inform 2011;167:57-62. [Medline]
  45. Preziosa A, Grassi A, Gaggioli A, Riva G. Therapeutic applications of the mobile phone. British Journal of Guidance & Counselling 2009 Aug;37(3):313-325. [CrossRef]
  46. Riva G, Preziosa A, Grassi A, Villani D. Stress management using UMTS cellular phones: a controlled trial. Stud Health Technol Inform 2006;119:461-463. [Medline]
  47. Zautra AJ, Davis MC, Reich JW, Sturgeon JA, Arewasikporn A, Tennen H. Phone-based interventions with automated mindfulness and mastery messages improve the daily functioning for depressed middle-aged community residents. Journal of Psychotherapy Integration 2012;22(3):206-228. [CrossRef]
  48. Askins MA, Sahler OJ, Sherman SA, Fairclough DL, Butler RW, Katz ER, et al. Report from a multi-institutional randomized clinical trial examining computer-assisted problem-solving skills training for English- and Spanish-speaking mothers of children with newly diagnosed cancer. J Pediatr Psychol 2009 Jun;34(5):551-563 [FREE Full text] [CrossRef] [Medline]
  49. Carissoli C, Villani D, Riva G. Does a meditation protocol supported by a mobile application help people reduce stress? Suggestions from a controlled pragmatic trial. Cyberpsychol Behav Soc Netw 2015 Jan;18(1):46-53. [CrossRef] [Medline]
  50. Granholm E, Ben-Zeev D, Link PC, Bradshaw KR, Holden JL. Mobile Assessment and Treatment for Schizophrenia (MATS): a pilot trial of an interactive text-messaging intervention for medication adherence, socialization, and auditory hallucinations. Schizophr Bull 2012 May;38(3):414-425 [FREE Full text] [CrossRef] [Medline]
  51. Ly KH, Dahl J, Carlbring P, Andersson G. Development and initial evaluation of a smartphone application based on acceptance and commitment therapy. Springerplus 2012;1:11 [FREE Full text] [CrossRef] [Medline]
  52. Dagöö J, Asplund RP, Bsenko HA, Hjerling S, Holmberg A, Westh S, et al. Cognitive behavior therapy versus interpersonal psychotherapy for social anxiety disorder delivered via smartphone and computer: a randomized controlled trial. J Anxiety Disord 2014 May;28(4):410-417. [CrossRef] [Medline]
  53. Enock PM, Hofmann SG, McNally RJ. Attention Bias Modification Training Via Smartphone to Reduce Social Anxiety: A Randomized, Controlled Multi-Session Experiment. Cogn Ther Res 2014 Mar 4;38(2):200-216. [CrossRef]
  54. Newman MG, Kenardy J, Herman S, Taylor CB. Comparison of palmtop-computer-assisted brief cognitive-behavioral treatment to cognitive-behavioral treatment for panic disorder. J Consult Clin Psychol 1997 Feb;65(1):178-183. [Medline]
  55. Agyapong VI, Ahern S, McLoughlin DM, Farren CK. Supportive text messaging for depression and comorbid alcohol use disorder: single-blind randomised trial. J Affect Disord 2012 Dec 10;141(2-3):168-176. [CrossRef] [Medline]
  56. Burns MN, Begale M, Duffecy J, Gergle D, Karr CJ, Giangrande E, et al. Harnessing context sensing to develop a mobile intervention for depression. J Med Internet Res 2011;13(3):e55 [FREE Full text] [CrossRef] [Medline]
  57. Ly KH, Trüschel A, Jarl L, Magnusson S, Windahl T, Johansson R, et al. Behavioural activation versus mindfulness-based guided self-help treatment administered through a smartphone application: a randomised controlled trial. BMJ Open 2014;4(1):e003440 [FREE Full text] [CrossRef] [Medline]
  58. Watts S, Mackenzie A, Thomas C, Griskaitis A, Mewton L, Williams A, et al. CBT for depression: a pilot RCT comparing mobile phone vs. computer. BMC Psychiatry 2013;13:49 [FREE Full text] [CrossRef] [Medline]
  59. Lappalainen P, Kaipainen K, Lappalainen R, Hoffrén H, Myllymäki T, Kinnunen M, et al. Feasibility of a personal health technology-based psychological intervention for men with stress and mood problems: randomized controlled pilot trial. JMIR Res Protoc 2013;2(1):e1 [FREE Full text] [CrossRef] [Medline]
  60. Harrison V, Proudfoot J, Wee PP, Parker G, Pavlovic DH, Manicavasagar V. Mobile mental health: review of the emerging field and proof of concept study. J Ment Health 2011 Dec;20(6):509-524. [CrossRef] [Medline]
  61. Proudfoot J, Clarke J, Birch M, Whitton AE, Parker G, Manicavasagar V, et al. Impact of a mobile phone and web program on symptom and functional outcomes for people with mild-to-moderate depression, anxiety and stress: a randomised controlled trial. BMC Psychiatry 2013;13:312 [FREE Full text] [CrossRef] [Medline]
  62. Depp CA, Ceglowski J, Wang VC, Yaghouti F, Mausbach BT, Thompson WK, et al. Augmenting psychoeducation with a mobile intervention for bipolar disorder: a randomized controlled trial. J Affect Disord 2015 Mar 15;174:23-30. [CrossRef] [Medline]
  63. Wenze SJ, Armey MF, Miller IW. Feasibility and Acceptability of a Mobile Intervention to Improve Treatment Adherence in Bipolar Disorder: A Pilot Study. Behav Modif 2014 Jan 8;38(4):497-515. [CrossRef] [Medline]
  64. Ben-Zeev D, Brenner CJ, Begale M, Duffecy J, Mohr DC, Mueser KT. Feasibility, acceptability, and preliminary efficacy of a smartphone intervention for schizophrenia. Schizophr Bull 2014 Nov;40(6):1244-1253. [CrossRef] [Medline]
  65. Rizvi SL, Dimeff LA, Skutch J, Carroll D, Linehan MM. A pilot study of the DBT coach: an interactive mobile phone application for individuals with borderline personality disorder and substance use disorder. Behav Ther 2011 Dec;42(4):589-600. [CrossRef] [Medline]
  66. Shapiro JR, Bauer S, Andrews E, Pisetsky E, Bulik-Sullivan B, Hamer RM, et al. Mobile therapy: Use of text-messaging in the treatment of bulimia nervosa. Int J Eat Disord 2010 Sep;43(6):513-519. [CrossRef] [Medline]
  67. Charness G, Gneezy U, Kuhn MA. Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior & Organization 2012 Jan;81(1):1-8. [CrossRef]
  68. Andersson G, Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn Behav Ther 2009;38(4):196-205. [CrossRef] [Medline]
  69. Johansson R, Andersson G. Internet-based psychological treatments for depression. Expert Rev Neurother 2012 Jul;12(7):861-9; quiz 870. [CrossRef] [Medline]
  70. Frances A. The past, present and future of psychiatric diagnosis. World Psychiatry 2013 Jun;12(2):111-112 [FREE Full text] [CrossRef] [Medline]
  71. Davison GC. Stepped care: doing more with less? J Consult Clin Psychol 2000 Aug;68(4):580-585. [Medline]
  72. Hunter J, Schmidt F. Methods of meta-analysis: correcting error and bias in research findings. Thousand Oaks, California: Sage; 2004.
  73. Kazdin AE. Mediators and mechanisms of change in psychotherapy research. Annu Rev Clin Psychol 2007;3:1-27. [CrossRef] [Medline]
  74. Lally P, van Jaarsveld CHM, Potts HWW, Wardle J. How are habits formed: Modelling habit formation in the real world. Eur. J. Soc. Psychol 2010 Oct 16;40(6):998-1009. [CrossRef]
  75. Alkhaldi G, Hamilton FL, Lau R, Webster R, Michie S, Murray E. The Effectiveness of Prompts to Promote Engagement With Digital Interventions: A Systematic Review. J Med Internet Res 2016;18(1):e6 [FREE Full text] [CrossRef] [Medline]
  76. Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials. BMC Med 2010;8:18 [FREE Full text] [CrossRef] [Medline]
  77. Moher D, Dulberg CS, Wells GA. Statistical power, sample size, and their reporting in randomized controlled trials. JAMA 1994 Jul 13;272(2):122-124. [Medline]


CBT: cognitive behavioral therapy
EMI: ecological momentary intervention
MHP: mental health professional
N/A: not applicable
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RCT: randomized controlled trial


Edited by G Eysenbach; submitted 12.02.16; peer-reviewed by U Ebner-Priemer, J Inauen, J Apolinário-Hagen; comments to author 10.03.16; revised version received 04.04.16; accepted 21.04.16; published 27.06.16

Copyright

©Anke Versluis, Bart Verkuil, Philip Spinhoven, Melanie M van der Ploeg, Jos F Brosschot. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 27.06.2016.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.