Published on 01.11.2024 in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51875.
The Relation Between Passively Collected GPS Mobility Metrics and Depressive Symptoms: Systematic Review and Meta-Analysis


Review

1Department of Clinical Psychology and Psychotherapy, Institute of Psychology and Education, University Ulm, Ulm, Germany

2Department of Psychology, Ludwig Maximilian University of Munich, Munich, Germany

3German Center for Mental Health (DZPG), Partner Site Munich-Augsburg, Munich, Germany

4Department of Clinical Child and Adolescent Psychology and Psychotherapy, Institute of Psychology, University of Wuppertal, Wuppertal, Germany

Corresponding Author:

Yannik Terhorst, BSc, MSc

Department of Clinical Psychology and Psychotherapy

Institute of Psychology and Education

University Ulm

Lise-Meitner-Str. 16

Ulm, 89081

Germany

Phone: 49 7315032820

Email: yannik.terhorst@uni-ulm.de


Background: The objective, unobtrusively collected GPS features (eg, homestay and distance) from everyday devices like smartphones may offer a promising augmentation to current assessment tools for depression. However, to date, there is no systematic and meta-analytical evidence on the associations between GPS features and depression.

Objective: This study aimed to investigate the between-person and within-person correlations between GPS mobility and activity features and depressive symptoms, and to critically review the quality and potential publication bias in the field.

Methods: We searched MEDLINE, PsycINFO, Embase, CENTRAL, ACM, IEEE Xplore, PubMed, and Web of Science from December 6, 2022, to March 24, 2023, to identify eligible articles on the correlations between GPS features and depression. Inclusion and exclusion criteria were applied in a 2-stage inclusion process conducted by 2 independent reviewers (YT and JK). To be eligible, studies needed to report correlations between wearable-based GPS variables (eg, total distance) and depression symptoms measured with a validated questionnaire. Studies including underage participants or participants with mental disorders other than depression were excluded. Between- and within-person correlations were analyzed using random effects models. Study quality was determined by comparing studies against the STROBE (Strengthening the Reporting of Observational studies in Epidemiology) guidelines. Publication bias was investigated using the Egger test and funnel plots.

Results: A total of k=19 studies involving N=2930 participants were included in the analysis. The mean age was 38.42 (SD 18.96) years, with 59.64% (SD 22.99%) of participants being female. Significant between-person correlations between GPS features and depression were identified: distance (r=–0.25, 95% CI –0.29 to –0.21), normalized entropy (r=–0.17, 95% CI –0.29 to –0.04), location variance (r=–0.17, 95% CI –0.26 to –0.06), entropy (r=–0.13, 95% CI –0.23 to –0.04), number of clusters (r=–0.11, 95% CI –0.18 to –0.03), and homestay (r=0.10, 95% CI 0.00 to 0.19). Studies reporting within-person correlations (k=3) were too heterogeneous to conduct a meta-analysis. A deficiency in study quality and research standards was identified: all studies followed exploratory observational designs, but no study referenced or fully adhered to the international guidelines for reporting observational studies (STROBE). A total of 79% (k=15) of the studies were underpowered to detect a small correlation (r=.20). Results showed evidence of potential publication bias.

Conclusions: Our results provide meta-analytical evidence for between-person correlations of GPS mobility and activity features and depression. Hence, depression diagnostics may benefit from adding GPS mobility and activity features as an integral part of future assessment and expert tools. However, confirmatory studies for between-person correlations and further research on within-person correlations are needed. In addition, the methodological quality of the evidence needs to improve.

Trial Registration: OSF Registries cwder; https://osf.io/cwder

J Med Internet Res 2024;26:e51875

doi:10.2196/51875


Introduction

Depressive disorders are one of the most prevalent mental disorders worldwide. The global prevalence rate is estimated to be 4.4% [1,2]. Associated consequences of depression are severe not only for affected individuals (eg, globally, 43 million years lived with disability in 2017) but also for the economy and society [3-5]. Costs are even higher and long-term health consequences are more severe if the disorder is not properly diagnosed or treated [6,7]. To target the global burden caused by depression and to initiate effective treatment, timely diagnosis is a key bottleneck in health care [8-10].

Various diagnostic approaches like structured clinical interviews [11], clinician-rated screening scales [12,13], and self-report screening assessments [14] are available. However, their reliability and accuracy are often hindered by social desirability, recall, or confirmation biases [15-17]. Therefore, augmenting existing approaches with objective data sources could make an important contribution to improving the diagnostic process for depression and other mental disorders.

Facing the ever-growing digitalization of daily living, the availability of various sensor data in smartphones, wearable devices, cars, and smart home devices may provide a way toward timely, objective diagnosis and sensor-informed diagnostic support tools for clinical personnel [18-21]. In the context of medical apps, the approach of using sensor data to infer mental health is referred to as smart sensing or digital phenotyping [18,19]. In short, raw data from software-based (eg, screen status of the smartphone [on or off]) and hardware-based (eg, GPS coordinates) sensors are processed to derive higher-level features (eg, total smartphone usage time, total distance, and circadian rhythm), which are then linked to clinical symptoms or diagnoses [18,19].
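To make this pipeline concrete, the following R sketch derives one such higher-level feature, total distance, from simulated raw GPS fixes using the haversine formula. It is an illustration only: the data frame, column names, and simulated coordinates are hypothetical and do not reproduce the feature definitions of any included study.

```r
# Illustrative sketch: from raw GPS samples to a higher-level mobility feature.
# Hypothetical data; column names (timestamp, lat, lon) are assumptions.

haversine_km <- function(lat1, lon1, lat2, lon2, r = 6371) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  2 * r * asin(pmin(1, sqrt(a)))   # great-circle distance in km
}

# Simulated raw sensor data: one GPS fix every 10 minutes over 6 hours
set.seed(1)
gps <- data.frame(
  timestamp = as.POSIXct("2024-01-01 08:00:00", tz = "UTC") + seq(0, 3600 * 6, by = 600),
  lat = 48.40 + cumsum(rnorm(37, 0, 0.001)),
  lon = 9.98  + cumsum(rnorm(37, 0, 0.001))
)
gps <- gps[order(gps$timestamp), ]

# Higher-level feature: total distance across consecutive fixes
n <- nrow(gps)
total_distance_km <- sum(haversine_km(gps$lat[-n], gps$lon[-n], gps$lat[-1], gps$lon[-1]))
total_distance_km
```

In a real sensing framework, such features are typically computed per day or per assessment window and then related to questionnaire scores such as the PHQ-9.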

Earlier studies highlight the potential of smart sensing [22]: for instance, Opoku Asare et al [23] found an area under the curve (AUC) of the receiver operating characteristic (ROC) curve of 94.69% to 99.06% in the prediction of depression status (depressed or not depressed) using a supervised machine learning model on behavioral markers from the smartphone (ie, app usage, screen usage, and network usage features). Furthermore, studies have shown that smart sensing data can improve prediction and increase explained variance over self-report ratings such as ecological momentary assessments [24,25]. In line with these findings, various descriptive reviews underline the potential shown in the present literature [26-33].

While there is a broad array of sensors in the context of depression, in particular, GPS-based mobility and activity features (eg, total distance, places visited, and time spent at home) could be promising candidates for objective and unobtrusive measurement of core symptoms of depression such as reduced activity, fatigue, loss of functioning, and diminished interest [24,27,34,35]. Individual studies find medium to high correlations between depressive symptoms and GPS mobility and activity features (eg, number of location clusters: r=–0.38, circadian movement: r=–0.34, home stay: r=0.22 [34]; eg, [24,35-37]). However, the heterogeneity in the findings and in the methodology between studies is substantial [27,31]. In addition, a mean sample size of 23.1 (SD 27.9) was reported in the systematic review of Cornet and Holden [26], clearly highlighting the issue of underpowered trials in the field. Meta-analytical evidence is highly needed to provide an estimate of the relationship between GPS mobility and activity features and depressive symptoms to determine their applicability for clinical settings and to transfer research from exploratory investigations to confirmatory studies. However, to date, no quantitative meta-analysis has been conducted on the associations between GPS mobility and activity features and depression.

Therefore, this study systematically reviewed and meta-analyzed the current evidence on the associations between GPS mobility and activity features and depression. In addition, core aspects of study quality (eg, adherence to international reporting guidelines, preregistration, and the presence of a priori and post hoc power analyses [31,38,39]) and potential publication bias in the field were investigated. The research questions were as follows: (1) What is the pooled between-person correlation of GPS mobility and activity features and depression severity? (2) What is the pooled within-person correlation of GPS mobility and activity features and depression severity? (3) To what extent are international reporting guidelines (ie, STROBE [Strengthening the Reporting of Observational studies in Epidemiology] [38]) reflected in the studies? and (4) Are small study effects, as an indicator of potential publication bias, observable in the literature?


Methods

Study Design

This study is a systematic review and meta-analysis. All procedures were preregistered in the Open Science Framework (OSF Registries cwder; https://osf.io/cwder). Reporting is conducted in accordance with the updated PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines for the reporting of systematic reviews [40] (Multimedia Appendix 1).

Search and Eligibility Criteria

Given the interdisciplinary nature of the field, we searched MEDLINE, PsycINFO, Embase, CENTRAL, ACM, IEEE Xplore, PubMed, and Web of Science. The database-specific search strings can be found in Multimedia Appendix 2. After the removal of duplicates by the automatic tools of the databases, all identified articles were screened in a 2-stage process: (1) title and abstract screening and (2) full-text screening, both performed by 2 independent reviewers (YT and JK). If duplicates were encountered in this screening process, they were removed by hand. In addition to the database search, we searched all reference lists of the included studies for further eligible studies. This search and inclusion process was started on December 6, 2022 (database searches from December 6 until June 13), and completed by March 24, 2023 (last reference search). Any disagreements between the reviewers were resolved in discussions.

To be eligible, the following inclusion criteria had to be met: (1) any kind of GPS sensor data collection, (2) the data were collected by a wearable device such as a smartphone or smartwatch, (3) an assessment of depressive symptoms was conducted either by self-report or by clinical diagnostic scales, and (4) reported outcomes included correlations between the collected GPS sensor data and depressive symptoms. Studies that (1) comprised participants younger than 18 years of age or (2) included participants with disorders other than depression (eg, bipolar disorder) were excluded.

Measured Variables and Coding

All data were extracted by 2 independent reviewers (JK and YT). The following study characteristics and empirical data points were obtained.

Study Characteristics

To describe the included studies, we extracted the authors’ names, publication year, measures of depression (eg, Patient Health Questionnaire-9 [PHQ-9]), GPS sensor features (ie, mobility and activity features such as total distance), study design, study setting, sensing framework (app name), and sample characteristics (age, gender, population, and country).

Quantitative and Empirical Data

For the statistical analysis, correlation coefficients (both between-person and within-person) and sample sizes were extracted (see further analysis details below). Between-person correlations were defined as the interindividual associations between 2 variables (eg, do individuals with higher time spent at home tend to show higher depression?). Within-person correlations were defined as intraindividual associations of 2 variables across time (eg, does an individual tend to show higher depression in weeks with more time spent at home?). Corresponding authors of eligible studies with insufficiently reported information for meta-analysis (eg, reporting of P values without correlation coefficients) were contacted for the missing information (ie, repeated emails containing a study description and an extraction template for the needed information). Furthermore, all included studies were compared against the international guidelines for reporting observational studies (STROBE [38]). In addition, the presence of a preregistration and of a priori or post hoc power analyses were rated as additional criteria (the assessment of research standards and small study effects is described below). The rating was performed independently by 2 researchers (JK and YT). Disagreements were resolved in discussion. Study designs other than exploratory observational intensive longitudinal designs were planned to be compared against the corresponding guidelines of the EQUATOR Network but were not present in the included studies.
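The following minimal R sketch illustrates the between- versus within-person distinction on simulated data (the variable names id, week, homestay, and phq are hypothetical and not taken from any included study): the between-person correlation relates person means across individuals, whereas the within-person correlation relates person-mean-centered values across repeated assessments.

```r
# Simulated long-format data: 30 participants, 8 weekly assessments each
set.seed(1)
d <- data.frame(
  id = rep(1:30, each = 8),
  week = rep(1:8, times = 30)
)
person_mean_homestay <- rnorm(30, mean = 60, sd = 15)        # stable individual level
d$homestay <- person_mean_homestay[d$id] + rnorm(nrow(d), 0, 8)
d$phq <- 5 + 0.05 * person_mean_homestay[d$id] +             # between-person association
  0.10 * (d$homestay - person_mean_homestay[d$id]) +         # within-person association
  rnorm(nrow(d), 0, 2)

# Between-person correlation: correlate person means across individuals
means <- aggregate(cbind(homestay, phq) ~ id, data = d, FUN = mean)
cor(means$homestay, means$phq)

# Within-person correlation: correlate person-mean-centered values across time
d$homestay_c <- d$homestay - ave(d$homestay, d$id)
d$phq_c <- d$phq - ave(d$phq, d$id)
cor(d$homestay_c, d$phq_c)
```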

Statistical Analysis

Meta-Analysis of Correlation

We conducted a random effects meta-analysis of correlations. Pooling was based on inverse variance weighting [41]. Maximum likelihood was used as the estimator for the between-study variance. Following the Cochrane handbook for meta-analysis, the minimum required number of studies reporting on the correlations between a feature (eg, homestay) and depression to run a meta-analysis was 2 [42]. For all significance tests and CIs, the significance level (α) was set to 5%. Heterogeneity of effect sizes was evaluated by I2 [43]. For heterogeneity, we defined an I2 of 25% as the threshold for low, 50% for moderate, and 75% for high heterogeneity in the present meta-analysis. 95% CIs and prediction intervals were calculated alongside the pooled correlation estimates.
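As a minimal sketch of this pooling approach, the code below uses metacor() from the meta package (the package named in the Software section). The correlations and sample sizes are invented for illustration (the actual data are available via the OSF project, Multimedia Appendix 4), and the Fisher z transformation (sm = "ZCOR") is an assumption of this sketch rather than a setting stated by the authors.

```r
# Random effects meta-analysis of correlations with inverse-variance weighting,
# maximum likelihood tau^2 estimation, and a prediction interval.
library(meta)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),   # hypothetical studies
  r = c(-0.30, -0.22, -0.10, -0.28),                       # between-person correlations (made up)
  n = c(48, 206, 71, 1046)                                 # sample sizes (made up)
)

m <- metacor(
  cor = r, n = n, studlab = study, data = dat,
  sm = "ZCOR",          # Fisher z transformation before pooling
  method.tau = "ML",    # maximum likelihood estimator for between-study variance
  prediction = TRUE     # also report the prediction interval
)
summary(m)   # pooled r (back-transformed), 95% CI, prediction interval, I2
```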

Assessment of Research Standards and Small Study Effects

As a general criterion for the adherence to international reporting guidelines, we assessed the reference and adherence to the STROBE guidelines [38] of the included observational studies. The STROBE checklist was rated for each study by 2 independent researchers (YT and JK). Disagreements were resolved in discussion.

Facing the replication crisis in research [39,44], we additionally investigated the percentage of preregistrations and the presence of a priori and post hoc power calculations. Potential publication bias was investigated using (1) funnel plots and (2) the Egger test [45,46]. The Egger test was only conducted in analyses with at least 10 studies [47]. Funnel plots display the effect sizes reported in studies (x-axis) against their SEs (y-axis). The core assumption is that, in the absence of publication bias, all points in the funnel plot should be distributed symmetrically around the average effect, with high-precision studies (low SE) at the top of the figure and close to the average, and low-precision studies (high SE) spread more broadly around the average effect. In contrast, a scenario with publication bias (eg, small studies are only published if they show large effects, while large studies with smaller effects are published either way due to the number of participants) would lead to an asymmetric funnel plot in which an association between SE and reported effect is visible. Capitalizing on this idea, the Egger test is a regression model investigating whether the SE significantly influences the reported effect. Both asymmetries in funnel plots and significant Egger tests indicate potential publication bias and, hence, potentially biased meta-analytic results (eg, systematic overestimation of the true effect due to unpublished high-SE studies with null effects). For a more in-depth introduction, refer to previously published work (eg, [41]).
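A minimal sketch of these small-study-effect checks with the meta package is shown below, using simulated study data (12 hypothetical studies, so the 10-study threshold mentioned above is met); whether the authors used exactly these functions is an assumption.

```r
# Funnel plot and Egger-type regression test on simulated meta-analytic data
library(meta)
set.seed(42)

k <- 12
dat <- data.frame(
  study = paste("Study", 1:k),
  n = round(runif(k, min = 20, max = 400)),                       # made-up sample sizes
  r = pmin(pmax(rnorm(k, mean = -0.20, sd = 0.10), -0.9), 0.9)    # made-up correlations
)

m <- metacor(cor = r, n = n, studlab = study, data = dat,
             sm = "ZCOR", method.tau = "ML")

funnel(m)                             # effect size (x-axis) vs standard error (y-axis)
metabias(m, method.bias = "linreg")   # Egger regression test (labeled "Egger" in newer meta versions)
```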

Software

All analyses were conducted in R (R Foundation for Statistical Computing). The meta package was used as the core package for the analyses [48]. For an overview of all loaded packages and versions, see Multimedia Appendix 3. Both the analysis code and the dataset containing the correlations are available under a CC-BY 4.0 license at the Open Science Framework (Multimedia Appendix 4).


Results

Study Selection

We identified a total of k=9499 unique records in the systematic literature search. One additional study was screened and included after being forwarded by researchers contacted due to insufficient data reported in an identified study. In the following screening and inclusion process conducted by 2 independent reviewers (YT and JK), k=19 studies were finally included [24,34-37,49-62]. In total, 2 of the studies were eligible for both between- and within-person correlation meta-analysis [34,53]. For further details, please refer to the PRISMA flowchart in Figure 1.

Figure 1. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flowchart. Records identified from (*) MEDLINE, PsycINFO, Embase, CENTRAL, ACM, IEEE Xplore, PubMed, and Web of Science.

Study Characteristics

Combined, the included studies comprised a total of N=2930 participants. The sample sizes ranged from a minimum of n=18 to a maximum of n=1046 (mean 154.21, SD 235.54; median 72, IQR 83). The mean age across all included studies was 38.42 (SD 18.96) years. The average percentage of female participants was 59.64% (SD 22.99%). Of the k=19 included studies, k=8 (42%) studies explicitly targeted students and young adults (eg, recruitment at universities), k=3 (16%) studies recruited from the general public, k=5 (26%) studies used nontailored online and social media recruitment, k=2 (11%) studies targeted older adults, and k=1 (5%) study provided no further information on the recruitment strategy and targeted population. Please refer to Table 1 for further details on the descriptive characteristics of the studies.

A total of k=13 unique sensing frameworks were applied among the included studies. PurpleRobot was the most frequently used framework (k=3/19, 16%). Based on the sensed GPS data, the studies reported n=12 distinct activity and mobility features. The number of unique studies per activity and mobility feature ranged from k=1 to k=14, with combined sample sizes of n=69 to n=2287 per feature. A summary of the features can be found in Table 2 for between-person features and Table 3 for within-person features.

Table 1. Main characteristics of included studies.
Reference | Country | N | Depression scale | Age (y), mean (SD) | Software | Period | Target population | Severity
Boukhechba et al [56] | United States | 72 | DASS-21a | 19.8 (2.4) | Sensus | 2 weeks | Students and young adults | 3.5
Canzian and Musolesi [57] | United Kingdom | 28 | PHQ-8b | 31 (—c) | MoodTraces | Average 71 days | General public | —
Currey and Torous [37] | United States | 147 | PHQ-9d | — | mindLAMP | 4 weeks | Students and young adults | —
DeMasi and Recht [58] | United States | 33 | BDIe | — | — | 8 weeks | Students and young adults | 12.7
Di Matteo [59] | Canada | 71 | PHQ-8 | 30.6 (9.4) | Logger | 2 weeks | Nontailored online and social media recruitment | 9.1
Farhan et al [62] | United States | 79 | PHQ-9 | — | LifeRhythm | — | Students and young adults | —
Giannouli et al [60] | Germany | 69 | GDSf | 69.5 (4.9) | Ufall & custom app | 1 week | Older adults | 1.39
Lu et al [50] | United States | 103 | QIDSg | — | LifeRhythm & Fitbit | — | Students and young adults | —
MacLeod et al [49] | Canada | 121 | CES-DCh | 18 (2.76) | Custom app | 2 weeks | Students and young adults | 32.6
Stamatis et ali [61] | United States | 1046 | PHQ-8 | 40.9 (—) | LifeSense | 16 weeks | Nontailored online and social media recruitment | 9.14
Moshe et al [24] | Various European | 55 | DASS-21 | 42.8 (11.6) | AWARE | 30 days | Nontailored online and social media recruitment | 3.8
Nickels et al [51] | United States | 379 | PHQ-9 | — | — | 12 weeks | People who are or are not depressed | —
Saeb et al (2015a) [35] | United States | 28 | PHQ-9 | 28.9 (10.1) | PurpleRobot | 2 weeks | General public | 5.6
Saeb et al (2015b) [54] | — | 18 | PHQ-9 | — | PurpleRobot | 2 weeks | Nontailored online and social media recruitment | 5.8
Saeb et al (2016) [34] | United States | 48 | PHQ-9 | — | StudentLife | 10 weeks | Students and young adults | —
Saeb et al (2017) [55] | United States | 206 | PHQ-9 | 39.3 (10.3) | PurpleRobot | 6 weeks | General public | 9.7
Tung et al [52] | Canada | 54 | GDS | 72.6 | VALMA | 3 days | Older adults | —
Wang et al [36] | United States | 83 | PHQ-8 and PHQ-4j | 20.1 (—) | StudentLife | 9 weeks | Students and young adults | 6.1
Zhang et al [53] | Netherlands, Spain, United Kingdom | 290 | PHQ-8 | — | — | 2 weeks | — | —

aDASS-21: Depression Anxiety and Stress Scale.

bPHQ-8: Patient Health Questionnaire-8.

cNot applicable.

dPHQ-9: Patient Health Questionnaire-9.

eBDI: Beck’s Depression Inventory.

fGDS: Geriatric Depression Scale.

gQIDS: Quick Inventory of Depression Symptomatology.

hCES-DC: Center for Epidemiological Studies–Depression.

iStamatis et al [61] provided additional data on depression and GPS mobility metrics obtained from the LifeSense project for this meta-analysis. For further details on the LifeSense project, please refer to [61,63,64].

jPHQ-4: Patient Health Questionnaire-4.

Table 2. Between-person features.
GPS feature | Definition | Studies | Frequency | Total N | Value, mean (SD)
Homestay | Percentage of time spent home | Boukhechba et al [56], Currey and Torous [37], Di Matteo [59], Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], Stamatis et ala [61], Moshe et al [24], Nickels et al [51], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Saeb et al (2017) [55] | 14 | 2216 | 158.29 (266.4)
Location variance | Variance of latitude and longitude values | Currey and Torous [37], DeMasi and Recht [58], Di Matteo [59], Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], Stamatis et al [61], Moshe et al [24], Nickels et al [51], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Zhang et al [53] | 14 | 2287 | 163.36 (276.1)
Entropy | Distribution of time spent at different location clusters | Currey and Torous [37], Di Matteo [59], Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], Stamatis et al [61], Moshe et al [24], Nickels et al [51], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Zhang et al [53] | 13 | 2254 | 173.38 (284.71)
N clusters | Number of unique location clusters | Currey and Torous [37], Di Matteo [59], Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], Stamatis et al [61], Nickels et al [51], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Wang et al [36], Zhang et al [53] | 13 | 2282 | 175.54 (283.85)
Norm entropy | Entropy normalized by the number of location clusters | Di Matteo [59], Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], MacLeod et al [49], Stamatis et al [61], Moshe et al [24], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Zhang et al [53] | 12 | 1849 | 154.08 (290.46)
Distance | Total distance between coordinates | Di Matteo [59], Lu et al (Android) [50], Lu et al (iOS) [50], MacLeod et al [49], Stamatis et al [61], Moshe et al [24], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34], Tung et al [52], Zhang et al [53] | 11 | 1824 | 165.82 (301.64)
Speed moving | Speed at GPS data point collection | Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], Stamatis et al [61], Saeb et al (2016) [34] | 6 | 1266 | 211 (409.36)
Time moving | Time spent in moving states in percentage | Farhan et al (Android) [62], Farhan et al (iOS) [62], Lu et al (Android) [50], Lu et al (iOS) [50], MacLeod et al [49], Zhang et al [53], Di Matteo [59], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34] | 10 | 748 | 74.8 (81.54)
Circadian movement | Amount of energy in frequency periods (eg, 30 min) based on least-squares spectral analysis | Di Matteo [59], Stamatis et al [61], Saeb et al (2015a) [35], Saeb et al (2015b) [54], Saeb et al (2016) [34] | 5 | 1201 | 240.2 (450.9)
Life space area | Convex hull of GPS coordinates | Giannouli et al [60] | 1 | 69 | 69 (—b)
Maximum action range | Longest (straight-line) distance away from home | Giannouli et al [60] | 1 | 69 | 69 (—)

aStamatis et al [61] provided additional data on depression and GPS mobility metrics obtained from the LifeSense project for this meta-analysis. For further details on the LifeSense project, please refer to [61,63,64].

bNot applicable.

Table 3. Within-person features.
GPS feature | Definition | Studies | Frequency | Total N | Value, mean (SD)
Distance | Total distance between coordinates | Canzian and Musolesi [57], Saeb et al (2016) [34], Zhang et al [53] | 3 | 356 | 118.67 (148.46)
Entropy | Distribution of time spent at different location clusters | Saeb et al (2016) [34], Zhang et al [53] | 2 | 328 | 164 (178.19)
Location variance | Variance of latitude and longitude values | Saeb et al (2016) [34], Zhang et al [53] | 2 | 328 | 164 (178.19)
N clusters | Number of unique location clusters | Saeb et al (2016) [34], Zhang et al [53] | 2 | 328 | 164 (178.19)
Norm entropy | Entropy normalized by the number of location clusters | Saeb et al (2016) [34], Zhang et al [53] | 2 | 328 | 164 (178.19)
Circadian movement | Amount of energy in frequency periods (eg, 30 min) based on least-squares spectral analysis | Saeb et al (2016) [34] | 1 | 38 | 38 (—a)
Homestay | Percentage of time spent home | Saeb et al (2016) [34] | 1 | 38 | 38 (—)
Speed moving | Speed at GPS data point collection | Saeb et al (2016) [34] | 1 | 38 | 38 (—)
Time moving | Time spent in moving states in percentage | Saeb et al (2016) [34], Zhang et al [53] | 2 | 328 | 164 (178.19)

aNot applicable.

Meta-Analysis: Between-Person Correlations

Distance was the mobility and activity feature most strongly associated with depressive symptoms, with a meta-analytically pooled between-person correlation of r=–0.25 (95% CI –0.29 to –0.21), followed by normalized entropy (r=–0.17, 95% CI –0.29 to –0.04), location variance (r=–0.17, 95% CI –0.26 to –0.06), entropy (r=–0.13, 95% CI –0.23 to –0.04), number of clusters (r=–0.11, 95% CI –0.18 to –0.03), and homestay (r=0.10, 95% CI 0.00 to 0.19). In contrast, the features circadian movement, transition time, speed moving, and time moving indicated no significant correlations with depression. Table 4 summarizes all of the pooled between-person correlations. Feature-specific forest plots can be found in Multimedia Appendix 5.

Table 4. Pooled between-person correlations.
Feature | N | Correlation | 95% CI | Prediction interval | I2
Circadian movement | 1201 | –0.36 | –0.71 to 0.12 | –0.91 to 0.64 | 86%
Distance | 1824 | –0.25 | –0.29 to –0.21 | –0.30 to –0.20 | 0%
Norm entropy | 1849 | –0.17 | –0.29 to –0.04 | –0.44 to 0.13 | 69%
Location variance | 2287 | –0.17 | –0.26 to –0.06 | –0.36 to 0.04 | 58%
Entropy | 2254 | –0.13 | –0.23 to –0.04 | –0.35 to 0.10 | 57%
N cluster | 2282 | –0.11 | –0.18 to –0.03 | –0.25 to 0.04 | 36%
Homestay | 2216 | 0.10 | 0.00 to 0.19 | –0.09 to 0.27 | 50%
Speed moving | 1266 | –0.01 | –0.07 to 0.06 | –0.09 to 0.07 | 0%
Time moving | 748 | –0.05 | –0.21 to 0.11 | –0.45 to 0.36 | 68%

Meta-Analysis: Within-Person Correlations

The 3 identified studies reporting on within-person correlations (Table 3) differed widely in their analysis methodology. Saeb et al [34] correlated GPS mobility and activity features with the change in PHQ-9 scores, Zhang et al [53] used autoregressive models to estimate the correlations over time, and Canzian and Musolesi [57] applied time series analysis to investigate the correlations within each participant across multiple days. Due to this heterogeneity in how the reported correlations were derived, we did not perform a meta-analysis for within-person correlations.

Assessment of Research Standards and Small Study Effects

All included studies (k=19) followed an observational study design. Assessment against the international reporting standards for observational studies revealed that not a single study referenced the international reporting guidelines (STROBE). Comparing the k=18 included published studies against the STROBE checklist showed an overall agreement across all items of 73.84% (Stamatis et al [61] were excluded from this comparison as the provided data originated from additional analyses not covered in that study). Only k=9 (50%) of the studies reported how potential sources of bias were addressed. Regarding missing data, k=6 (33%) reported how missing data were handled, and k=3 (17%) reported rates of missingness for the variables of interest. A total of k=7 (39%) of the studies listed reasons for the exclusion of participants at each stage of the study. For all STROBE ratings, please refer to Multimedia Appendix 4.

In addition, neither preregistrations nor study protocols were found for the included studies. A priori power analyses for sample size planning and post hoc power analyses were also not reported in any of the included studies. Power analyses of the included studies indicate that k=16 (84%) studies were sufficiently powered (80% power) to detect a correlation of r=.50, k=7 (37%) studies for r=.30, k=4 (21%) studies for r=.20, and k=1 (5%) study for r=.10.

For all mobility and activity features except time moving, the Egger test showed significant asymmetry in the funnel plot (Table 5). Funnel plots for features with fewer than k=10 studies additionally indicated asymmetry. Please refer to Multimedia Appendix 6 for the funnel plots of all between-person features.

Table 5. Egger test for funnel plot asymmetry.
Feature | Intercept | 95% CI | P value
Homestay | 1.55 | 0.56 to 2.54 | <.001
Location variance | –1.44 | –2.58 to –0.31 | .03
Entropy | –2.11 | –2.87 to –1.34 | <.001
N clusters | –1.29 | –2.22 to –0.35 | .02
Norm entropy | –2.30 | –3.71 to –1.33 | <.001
Distance | 1.01 | 0.46 to 1.56 | .01
Time moving | –0.42 | –2.74 to 2.66 | .98

Discussion

Principal Findings

This systematic review with meta-analysis provides pooled correlation estimates for the between-person correlations of GPS-based smart sensing mobility and activity features and depression. We identified robust small to medium between-person correlations for multiple mobility and activity features (ie, distance, normalized entropy, location variance, entropy, number of clusters, and homestay). In contrast to the findings on between-person correlations, the literature on within-person correlations did not allow for meaningful meta-analyses. Furthermore, this study clearly highlights a lack of quality in the literature. While all studies followed an observational study design, not a single study referenced the international reporting guidelines for observational studies (STROBE [38]). Although the reporting in studies addressed STROBE items in most cases (14/19, 74%), deficits were found in addressing sources of bias and missing data. Moreover, we found strong evidence for small study effects potentially indicating publication bias. Most studies were underpowered to detect correlations of the magnitude indicated by the meta-analysis.

Overall, the present analysis highlights the potential of GPS features to infer depression, which is in line with previously published descriptive reviews in the field [26,27,30,32,33]. For instance, with a total sample size of N=1824 and homogeneous findings across studies (I2=0%), distance (r=–0.25) might be a promising objective and unobtrusively collectible GPS feature for expert systems for (assisted) diagnoses, just-in-time interventions, and personalization of depression treatment in the future [20,65]. Analogously, normalized entropy, location variance, entropy, number of clusters, and homestay were significant markers (r=0.10-0.17). However, it is important to note that all studies followed an exploratory design. Both the correlations and the clinical feasibility of such smart sensing augmented expert systems need to be proven in confirmatory trials before clinical application. The identified pooled correlations and their CIs offer a strong foundation for a priori power analyses to guide confirmatory studies in their study and sample planning [66,67].
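For illustration, such an a priori power analysis based on the pooled estimates reported here could look like the following R sketch; using the pwr package is an assumption (any power-analysis tool for correlations would serve the same purpose), and the exact required sample sizes follow from the chosen effect size, power, and alpha.

```r
# A priori sample size planning for a confirmatory correlation study
library(pwr)

# Planning around the pooled distance estimate (|r| = 0.25), 80% power, two-sided alpha = 5%
pwr.r.test(r = 0.25, power = 0.80, sig.level = 0.05, alternative = "two.sided")

# Planning around the smaller homestay estimate (r = 0.10) requires a much larger sample
pwr.r.test(r = 0.10, power = 0.80, sig.level = 0.05, alternative = "two.sided")
```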

However, while some of the investigated correlations are statistically significant and the pooled estimates can serve as a basis for future study planning from a statistical point of view, this does not resolve the question of what constitutes a clinically relevant correlation. The process involved in obtaining GPS features (eg, the collection of GPS coordinates) as well as the features themselves (eg, how much time was spent at home or at work) are highly sensitive. Hence, a discussion taking clinical significance as well as ethical and privacy issues into account is of utmost importance [20,21,65,68-70]. Furthermore, it needs to be discussed to what extent GPS features can be used as a standalone metric in the diagnostic process or rather as an add-on to existing procedures and measurements (eg, patient-reported outcomes, ecological momentary assessments, and medical record data) [20,24,71]. This discussion would strongly benefit from future studies illuminating these questions from patients’ and health care providers’ (eg, psychotherapists) perspectives.

Another question in the field is whether GPS features can serve not only to determine depression from a between-person perspective but also to inform researchers and clinicians about changes within persons (eg, is an increase in time spent at home a reliable indicator of rising depression severity?). This systematic review showed that the evidence is too sparse and the applied analyses too heterogeneous to run a meta-analysis on the studies conducted so far. Hence, this study cannot provide answers regarding the robustness of within-person correlations across studies. Notably, however, the included study by Zhang et al [53] on the longitudinal relationship of GPS mobility and activity features and depression found higher within-person than between-person correlations, underlining the potential benefit of within-person features in clinical applications and the importance of further studies on within-person correlations.

In line with previous reviews on the methods used in smart sensing studies [26,27,31], this study critically highlights gaps in the adherence to international reporting guidelines, potential publication bias, and high heterogeneity in assessment and analysis methodology. Guidelines like STROBE [38] for observational studies and other study design–specific guidelines offer a strong starting point to increase the study and especially the reporting quality in the field. However, research on smart sensing and objective sensor data collection comes with its own challenges, such as (1) high heterogeneity in hardware (eg, used devices) and GPS sampling (eg, many different manufacturers of sensors and variety in the precision of sensing and data quality), (2) a plethora of data preprocessing decisions (eg, sampling rate, outlier detection, or feature selection and definition), (3) missing data handling in large fine-grained datasets, and (4) analysis methodology (eg, how to deal with dependencies resulting from multiple measurements from the same individual [24]) [31]. Preregistrations, open-access scripts for data preparation and analyses, and extensions to existing international guidelines (eg, standards for feature calculation, reporting guidelines, and missing data handling) are highly needed to move toward a standardized and reliable research field.

The current lack of standardization and the heterogeneity between studies in assessment frameworks, samples (eg, age range), sensor sampling rates, devices, feature calculation, and methods of handling missing data or accounting for dependencies in the data structure should also be considered when interpreting these findings. Due to the already low number of studies, we were unable to conduct meaningful sensitivity analyses on more homogeneous groups of studies (eg, based on data quality and devices) or meta-regression models to control for other variables (eg, for systematic mobility and activity differences across age groups). As the number of studies and their quality are likely to increase over time, analyses on homogeneous groups and meta-regression analyses might become feasible in the future. Nevertheless, we want to point out that the heterogeneity for some features (eg, distance) was very low, underlining the robustness of the pooled correlation estimates for these features. Moreover, the weighted pooling in the meta-analysis gave stronger weight to larger studies with more precise estimations of correlations, counteracting the influence of small studies to some extent [41].

Besides analyses of GPS mobility and activity features in more homogeneous study groups, other GPS features should be the focus of future work. For instance, GPS sensor data can be used to derive environmental features such as green space, blue space, air pollution, or even regional data like social deprivation scores (eg, zip code based). Initial studies using smartphone-based GPS collection to infer depression from such environmental GPS features are promising, and it needs to be investigated to what extent such features show meta-analytically robust correlations [72-74]. Analogously, meta-analytical evidence for other sensors and features is lacking (eg, smartphone screen usage, app usage, and communication and sociability features) and of high importance to inform researchers and practitioners on which objective markers might be suited for depression diagnosis, early warning, just-in-time interventions, and other clinical applications [21,30-32,65,69,75]. Furthermore, extensions to other prevalent mental disorders (eg, anxiety) and conditions of leading disease burden (eg, pain) are highly needed to investigate the potential of unobtrusive and objective smart sensing data in health care.

Conclusions

This systematic review and meta-analysis provides evidence for small to medium between-person correlations between GPS mobility and activity features collected through smartphones and depression. In the future, GPS mobility and activity features such as distance may become an important augmentation in the assessment of depression and in clinical applications (eg, decision support systems). However, replications in confirmatory studies and improvements in the study quality and standards in the field are needed to draw robust conclusions. In addition, more research on within-person correlations is necessary to determine the potential of GPS features not only in between-person applications but also from a longitudinal perspective. To fully exploit the potential of smart sensing, further research is needed on the associations of depression and other mental disorders with GPS mobility and activity features, environmental GPS features (eg, green space), and other sensor modalities (eg, smartphone usage features), as well as on how smart sensing features can be integrated into existing information systems and complex prediction models alongside patient-reported outcomes, clinician ratings, ecological momentary assessments, and medical record data.

Acknowledgments

We would like to thank all the student assistants involved in the project. Generative artificial intelligence tools (eg, ChatGPT) were not used in any portion of the manuscript writing. This study was self-funded by the authors. YT is supported by the initial phase of the German Center for Mental Health (Deutsches Zentrum für Psychische Gesundheit [DZPG]; grant 01EE2303A).

Data Availability

The datasets generated and analyzed during this study, as well as the analysis code, are available in the Open Science Framework project (Multimedia Appendix 4).

Authors' Contributions

YT contributed to conceptualization; handled methodology, visualization, formal analysis, and project administration; and wrote the original draft. YT, PP, and JK performed data curation. HB handled funding acquisition. YT, PP, and JK performed the investigation. YT, PP, JK, and HB managed resources. YT and JK contributed to software. YT and HB performed supervision. YT and JK managed validation. YT, PP, JK, and HB contributed to writing-review and editing.

Conflicts of Interest

None declared.

Multimedia Appendix 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

DOCX File , 21 KB

Multimedia Appendix 2

Search terms.

DOCX File , 14 KB

Multimedia Appendix 3

Analysis software.

DOCX File , 14 KB

Multimedia Appendix 4

Dataset and code.

DOCX File , 12 KB

Multimedia Appendix 5

Forest plots.

DOCX File , 708 KB

Multimedia Appendix 6

Funnel plots.

DOCX File , 150 KB

  1. Depression and Other Common Mental Disorders: Global Health Estimates. Geneva, Switzerland. World Health Organization; 2017.
  2. Walker ER, McGee RE, Druss BG. Mortality in mental disorders and global disease burden implications: a systematic review and meta-analysis. JAMA Psychiatry. Apr 2015;72(4):334-341. [FREE Full text] [CrossRef] [Medline]
  3. GBD 2017 Disease and Injury Incidence and Prevalence Collaborators. Global, regional, and national incidence, prevalence, and years lived with disability for 354 diseases and injuries for 195 countries and territories, 1990-2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. Nov 10, 2018;392(10159):1789-1858. [FREE Full text] [CrossRef] [Medline]
  4. Greenberg PE, Fournier AA, Sisitsky T, Pike C, Kessler R. The economic burden of adults with major depressive disorder in the United States (2005 and 2010). J Clin Psychiatry. 2015;76(2):155-162. [FREE Full text] [CrossRef] [Medline]
  5. Greenberg PE, Fournier AA, Sisitsky T, Simes M, Berman R, Koenigsberg S, et al. The economic burden of adults with major depressive disorder in the United States (2010 and 2018). Pharmacoeconomics. 2021;39(6):653-665. [FREE Full text] [CrossRef] [Medline]
  6. Trautmann S, Rehm J, Wittchen H-U. The economic costs of mental disorders: do our societies react appropriately to the burden of mental disorders? EMBO Rep. 2016;17(9):1245-1249. [FREE Full text] [CrossRef] [Medline]
  7. Saarni SI, Suvisaari J, Sintonen H, Pirkola S, Koskinen S, Aromaa A, et al. Impact of psychiatric disorders on health-related quality of life: general population survey. Br J Psychiatry. 2007;190:326-332. [CrossRef] [Medline]
  8. Kramer T, Als L, Garralda ME. Challenges to primary care in diagnosing and managing depression in children and young people. BMJ. 2015;350:h2512. [FREE Full text] [CrossRef] [Medline]
  9. Wurcel V, Cicchetti A, Garrison L, Kip M, Koffijberg H, Kolbe A, et al. The value of diagnostic information in personalised healthcare: a comprehensive concept to facilitate bringing this technology into healthcare systems. Public Health Genomics. 2019;22(1-2):8-15. [FREE Full text] [CrossRef] [Medline]
  10. Kroenke K. Depression screening and management in primary care. Fam Pract. 2018;35(1):1-3. [CrossRef] [Medline]
  11. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC. American Psychiatric Association; 2013.
  12. Rush AJ, Trivedi MH, Ibrahim HM, Carmody TJ, Arnow B, Klein DN, et al. The 16-item quick inventory of depressive symptomatology (QIDS), clinician rating (QIDS-C), and self-report (QIDS-SR): a psychometric evaluation in patients with chronic major depression. Biol Psychiatry. 2003;54(5):573-583. [CrossRef] [Medline]
  13. Fried EI, Flake J, Robinaugh D. Revisiting the theoretical and methodological foundations of depression measurement. Nat Rev Psychol. 2022;1(6):358-368. [FREE Full text] [CrossRef] [Medline]
  14. Levis B, Sun Y, He C, Wu Y, Krishnan A, Bhandari P, Depression Screening Data (DEPRESSD) PHQ Collaboration, et al. Accuracy of the PHQ-2 alone and in combination with the PHQ-9 for screening to detect major depression: systematic review and meta-analysis. JAMA. 2020;323(22):2290-2300. [FREE Full text] [CrossRef] [Medline]
  15. Jacobucci R, Grimm KJ. Machine learning and psychological research: the unexplored effect of measurement. Perspect Psychol Sci. 2020;15(3):809-816. [CrossRef] [Medline]
  16. McNamara ME, Zisser M, Beevers C, Shumake J. Not just "big" data: importance of sample size, measurement error, and uninformative predictors for developing prognostic models for digital interventions. Behav Res Ther. 2022;153:104086. [CrossRef] [Medline]
  17. Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc. 2016;9:211-217. [FREE Full text] [CrossRef] [Medline]
  18. Onnela JP, Rauch S. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology. 2016;41(7):1691-1696. [FREE Full text] [CrossRef] [Medline]
  19. Garatva P, Terhorst Y, Messner EM, Karlen W, Pryss R, Baumeister H. Smart sensors for health research and improvement. In: Montag C, Baumeister H, editors. Digital Phenotyping and Mobile Sensing. Berlin, Germany. Springer; 2023:395-411.
  20. Terhorst Y, Knauer J, Baumeister H. Smart sensing enhanced diagnostic expert systems. In: Montag C, Baumeister H, editors. Digital Phenotyping and Mobile Sensing. Berlin, Germany. Springer; 2023:413-425.
  21. Mohr DC, Zhang M, Schueller S. Personal sensing: understanding mental health using ubiquitous sensors and machine learning. Annu Rev Clin Psychol. 2017;13:23-47. [FREE Full text] [CrossRef] [Medline]
  22. Abd-Alrazaq A, AlSaad R, Shuweihdi F, Ahmed A, Aziz S, Sheikh J. Systematic review and meta-analysis of performance of wearable artificial intelligence in detecting and predicting depression. NPJ Digit Med. 2023;6(1):84. [FREE Full text] [CrossRef] [Medline]
  23. Opoku Asare K, Terhorst Y, Vega J, Peltonen E, Lagerspetz E, Ferreira D. Predicting depression from smartphone behavioral markers using machine learning methods, hyperparameter optimization, and feature importance analysis: exploratory study. JMIR Mhealth Uhealth. 2021;9(7):e26540. [FREE Full text] [CrossRef] [Medline]
  24. Moshe I, Terhorst Y, Opoku Asare K, Sander L, Ferreira D, Baumeister H, et al. Predicting symptoms of depression and anxiety using smartphone and wearable data. Front Psychiatry. 2021;12:625247. [FREE Full text] [CrossRef] [Medline]
  25. Terhorst Y, Messner EM, Opoku Asare K, Montag C, Kannen C, Baumeister H. Which smartphone-based sensing features matter in depression prediction? Results from an observation study. Department of Clinical Psychology and Psychotherapy, Ulm University. 2024. [FREE Full text]
  26. Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform. 2018;77:120-132. [FREE Full text] [CrossRef] [Medline]
  27. Rohani DA, Faurholt-Jepsen M, Kessing LV, Bardram JE. Correlations between objective behavioral features collected from mobile and wearable devices and depressive mood symptoms in patients with affective disorders: systematic review. JMIR Mhealth Uhealth. 2018;6(8):e165. [FREE Full text] [CrossRef] [Medline]
  28. Benoit J, Onyeaka H, Keshavan M, Torous J. Systematic review of digital phenotyping and machine learning in psychosis spectrum illnesses. Harv Rev Psychiatry. 2020;28(5):296-304. [CrossRef] [Medline]
  29. Dlima SD, Shevade S, Menezes S, Ganju A. Digital phenotyping in health using machine learning approaches: scoping review. JMIR Bioinform Biotechnol. 2022;3(1):e39618. [FREE Full text] [CrossRef] [Medline]
  30. Nouman M, Khoo SY, Mahmud MAP, Kouzani AZ. Recent advances in contactless sensing technologies for mental health monitoring. IEEE Internet Things J. 2022;9(1):274-297. [FREE Full text] [CrossRef]
  31. de Angel V, Lewis S, White K, Oetzmann C, Leightley D, Oprea E, et al. Digital health tools for the passive monitoring of depression: a systematic review of methods. NPJ Digit Med. 2022;5(1):3. [FREE Full text] [CrossRef] [Medline]
  32. Zarate D, Stavropoulos V, Ball M, de Sena Collier G, Jacobson N. Correction: exploring the digital footprint of depression: a PRISMA systematic literature review of the empirical evidence. BMC Psychiatry. 2022;22(1):530. [FREE Full text] [CrossRef] [Medline]
  33. Peterson B, Gonzalez D, Perez-Haddock Y, Frias J, Tourgeman I. PO110 / #755 the utility of passive and portable sensor data for monitoring the symptomatology of depression and anxiety. Neuromodulation: Technology at the Neural Interface. 2022;25(7):S247. [FREE Full text] [CrossRef]
  34. Saeb S, Lattie EG, Schueller SM, Kording KP, Mohr DC. The relationship between mobile phone location sensor data and depressive symptom severity. PeerJ. 2016;4:e2537. [FREE Full text] [CrossRef] [Medline]
  35. Saeb S, Zhang M, Karr C, Schueller S, Corden M, Kording K, et al. Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: an exploratory study. J Med Internet Res. 2015;17(7):e175. [FREE Full text] [CrossRef] [Medline]
  36. Wang R, Wang W, daSilva A, Huckins JF, Kelley WM, Heatherton TF, et al. Tracking depression dynamics in college students using mobile phone and wearable sensing. 2018. Presented at: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; 2018 March 26:1-26; New York, NY, United States. URL: https://doi.org/10.1145/3191775 [CrossRef]
  37. Currey D, Torous J. Digital phenotyping correlations in larger mental health samples: analysis and replication. BJPsych Open. Jun 03, 2022;8(4):e106. [FREE Full text] [CrossRef] [Medline]
  38. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, et al. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147(8):573-577. [FREE Full text] [CrossRef] [Medline]
  39. Maxwell SE, Lau MY, Howard GS. Is psychology suffering from a replication crisis? What does "failure to replicate" really mean? Am Psychol. 2015;70(6):487-498. [CrossRef] [Medline]
  40. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71. [FREE Full text] [CrossRef] [Medline]
  41. Harrer M, Cuijpers P, Furukawa T, Ebert D. Doing Meta-Analysis With R: A Hands-On Guide. 1st ed. Boca Raton, FL and London. Chapman & Hall/CRC Press; 2021.
  42. Cochrane handbook for systematic reviews of interventions. Cochrane Training. URL: https://training.cochrane.org/handbook/current [accessed 2023-07-16]
  43. Borenstein M, Higgins JPT, Hedges LV, Rothstein HR. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Synth Methods. 2017;8(1):5-18. [CrossRef] [Medline]
  44. Open Science Collaboration. PSYCHOLOGY. Estimating the reproducibility of psychological science. Science. 2015;349(6251):aac4716. [CrossRef] [Medline]
  45. Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315(7109):629-634. [FREE Full text] [CrossRef] [Medline]
  46. Mathur MB, VanderWeele T. Sensitivity analysis for publication bias in meta-analyses. J R Stat Soc Ser C Appl Stat. 2020;69(5):1091-1119. [FREE Full text] [CrossRef] [Medline]
  47. Sterne JAC, Sutton AJ, Ioannidis JPA, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343:d4002. [CrossRef] [Medline]
  48. Balduzzi S, Rücker G, Schwarzer G. How to perform a meta-analysis with R: a practical tutorial. Evid Based Ment Health. 2019;22(4):153-160. [FREE Full text] [CrossRef] [Medline]
  49. MacLeod L, Suruliraj B, Gall D, Bessenyei K, Hamm S, Romkey I, et al. A mobile sensing app to monitor youth mental health: observational pilot study. JMIR Mhealth Uhealth. 2021;9(10):e20638. [FREE Full text] [CrossRef] [Medline]
  50. Lu J, Shang C, Yue C, Morillo R, Ware S, Kamath J, et al. Joint modeling of heterogeneous sensing data for depression assessment via multi-task learning. 2018. Presented at: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies; March 26, 2018:1-21; New York, NY. URL: https://doi.org/10.1145/3191753 [CrossRef]
  51. Nickels S, Edwards MD, Poole SF, Winter D, Gronsbell J, Rozenkrants B, et al. Toward a mobile platform for real-world digital measurement of depression: user-centered design, data quality, and behavioral and clinical modeling. JMIR Ment Health. 2021;8(8):e27589. [FREE Full text] [CrossRef] [Medline]
  52. Tung JY, Rose R, Gammada E, Lam I, Roy E, Black S, et al. Measuring life space in older adults with mild-to-moderate alzheimer's disease using mobile phone GPS. Gerontology. 2014;60(2):154-162. [CrossRef] [Medline]
  53. Zhang Y, Folarin AA, Sun S, Cummins N, Vairavan S, Bendayan R, et al. Longitudinal relationships between depressive symptom severity and phone-measured mobility: dynamic structural equation modeling study. JMIR Ment Health. 2022;9(3):e34898. [FREE Full text] [CrossRef] [Medline]
  54. Saeb S, Zhang M, Kwasny M, Karr C, Kording K, Mohr D. The relationship between clinical, momentary, and sensor-based assessment of depression. 2015. Presented at: 2015 9th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth); May 20-23, 2015:103-111; Istanbul, Turkey. URL: https://europepmc.org/abstract/MED/26640739 [CrossRef]
  55. Saeb S, Lattie EG, Kording KP, Mohr DC. Mobile phone detection of semantic location and its relationship to depression and anxiety. JMIR Mhealth Uhealth. 2017;5(8):e112. [FREE Full text] [CrossRef] [Medline]
  56. Boukhechba M, Daros AR, Fua K, Chow PI, Teachman BA, Barnes LE. DemonicSalmon: monitoring mental health and social interactions of college students using smartphones. Smart Health. 2018;9-10:192-203. [FREE Full text] [CrossRef]
  57. Canzian L, Musolesi M. Trajectories of depression: unobtrusive monitoring of depressive states by means of smartphone mobility traces analysis. 2015. Presented at: Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing; 2015 September 07:1293-1304; Osaka, Japan. URL: https://doi.org/10.1145/2750858.2805845
  58. DeMasi O, Recht B. A step towards quantifying when an algorithm can and cannot predict an individual's wellbeing. 2017. Presented at: Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers; 2017 September 11:763-771; Hawaii, Maui. URL: https://doi.org/10.1145/3123024.3125609
  59. Di Matteo D. Inference of anxiety and depression from smartphone-collected data. Diss Abstr Int Sect B Sci Eng. 2021:1-184. [FREE Full text]
  60. Giannouli E, Fillekes MP, Mellone S, Weibel R, Bock O, Zijlstra W. Predictors of real-life mobility in community-dwelling older adults: an exploration based on a comprehensive framework for analyzing mobility. Eur Rev Aging Phys Act. 2019;16:19. [FREE Full text] [CrossRef] [Medline]
  61. Stamatis CA, Liu T, Meyerhoff J, Meng Y, Cho YM, Karr CJ, et al. Specific associations of passively sensed smartphone data with future symptoms of avoidance, fear, and physiological distress in social anxiety. Internet Interv. 2023;34:100683. [FREE Full text] [CrossRef] [Medline]
  62. Farhan AA, Yue C, Morillo R, Ware S, Lu J, Bi J, et al. Behavior vs. introspection: refining prediction of clinical depression via smartphone sensing data. 2016. Presented at: 2016 IEEE Wireless Health (WH); October 25-27, 2016:1-8; Bethesda, MD. URL: https://nlab.engr.uconn.edu/papers/Farhan-behavior-2016.pdf [CrossRef]
  63. Meyerhoff J, Liu T, Kording KP, Ungar LH, Kaiser SM, Karr CJ, et al. Evaluation of changes in depression, anxiety, and social anxiety using smartphone sensor features: longitudinal cohort study. J Med Internet Res. 2021;23(9):e22844. [FREE Full text] [CrossRef] [Medline]
  64. Stamatis CA, Meyerhoff J, Liu T, Sherman G, Wang H, Liu T, et al. Prospective associations of text-message-based sentiment with symptoms of depression, generalized anxiety, and social anxiety. Depress Anxiety. 2022;39(12):794-804. [FREE Full text] [CrossRef] [Medline]
  65. Steele R, Hillsgrove T, Khoshavi N, Jaimes L. A survey of cyber-physical system implementations of real-time personalized interventions. J Ambient Intell Human Comput. 2021;13(5):2325-2342. [FREE Full text] [CrossRef]
  66. Schuster R, Kaiser T, Terhorst Y, Messner E, Strohmeier LM, Laireiter AR. Sample size, sample size planning, and the impact of study context: systematic review and recommendations by the example of psychological depression treatment. Psychol Med. 2021;51(6):902-908. [FREE Full text] [CrossRef] [Medline]
  67. Barnett I, Torous J, Reeder H, Baker J, Onnela J-P. Determining sample size and length of follow-up for smartphone-based digital phenotyping studies. J Am Med Inform Assoc. 2020;27(12):1844-1849. [FREE Full text] [CrossRef] [Medline]
  68. Montag C, Baumeister H, editors. Digital Phenotyping and Mobile Sensing. 2nd ed. Cham, Switzerland. Springer International Publishing; 2023.
  69. Terhorst Y, Weilbacher N, Suda C, Simon L, Messner EM, Sander LB, et al. Acceptance of smart sensing: a barrier to implementation-results from a randomized controlled trial. Front Digit Health. 2023;5:1075266. [FREE Full text] [CrossRef] [Medline]
  70. Nicholas J, Shilton K, Schueller SM, Gray EL, Kwasny MJ, Mohr DC. The role of data type and recipient in individuals' perspectives on sharing passively collected smartphone data for mental health: cross-sectional questionnaire study. JMIR Mhealth Uhealth. 2019;7(4):e12578. [FREE Full text] [CrossRef] [Medline]
  71. Terhorst Y, Sander LB, Ebert DD, Baumeister H. Optimizing the predictive power of depression screenings using machine learning. Digit Health. 2023;9:20552076231194939. [FREE Full text] [CrossRef] [Medline]
  72. Roberts H, Helbich M. Multiple environmental exposures along daily mobility paths and depressive symptoms: a smartphone-based tracking study. Environ Int. 2021;156:106635. [FREE Full text] [CrossRef] [Medline]
  73. Mennis J, Mason M, Ambrus A. Urban greenspace is associated with reduced psychological stress among adolescents: a geographic ecological momentary assessment (GEMA) analysis of activity space. Landsc Urban Plan. 2018;174:1-9. [FREE Full text] [CrossRef] [Medline]
  74. Liu Z, Chen X, Cui H, Ma Y, Gao N, Li X, et al. Green space exposure on depression and anxiety outcomes: a meta-analysis. Environ Res. 2023;231(Pt 3):116303. [CrossRef] [Medline]
  75. Rottstädt F, Becker E, Wilz G, Croy I, Baumeister H, Terhorst Y. Enhancing the acceptance of smart sensing in psychotherapy patients: findings from a randomized controlled trial. Front Digit Health. 2024;6:1335776. [FREE Full text] [CrossRef] [Medline]


AUC: area under the curve
PHQ-9: Patient Health Questionnaire-9
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
ROC: receiver operating characteristic
STROBE: Strengthening the Reporting of Observational studies in Epidemiology


Edited by T de Azevedo Cardoso; submitted 11.09.23; peer-reviewed by L Yi, K Aguiar; comments to author 18.06.24; revised version received 02.07.24; accepted 26.07.24; published 01.11.24.

Copyright

©Yannik Terhorst, Johannes Knauer, Paula Philippi, Harald Baumeister. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 01.11.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.