
Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/44853.
Study Features and Response Compliance in Ecological Momentary Assessment Research in Borderline Personality Disorder: Systematic Review and Meta-analysis

Review

1Escuela de Psicología, Pontificia Universidad Católica de Chile, Instituto Milenio para la Investigación en Depresión y Personalidad, Santiago, Chile

2Department of Child and Adolescent Psychiatry Research, University Psychiatric Clinics, University of Basel, Basel, Switzerland

3University Hospital Heidelberg, Center for Psychotherapy Research, Heidelberg, Germany

*these authors contributed equally

Corresponding Author:

Alex Behn, PhD

Escuela de Psicología

Pontificia Universidad Católica de Chile

Instituto Milenio para la Investigación en Depresión y Personalidad

Av. Vicuña Mackenna 4860

Santiago, 7820436

Chile

Phone: 56 942152484

Email: albehn@uc.cl


Background: Borderline personality disorder (BPD) is characterized by frequent and intense moment-to-moment changes in affect, behavior, identity, and interpersonal relationships, which typically result in significant and negative deterioration of the person’s overall functioning and well-being. Measuring and characterizing the rapidly changing patterns of instability in BPD dysfunction as they occur in a person’s daily life can be challenging. Ecological momentary assessment (EMA) is a method that can capture highly dynamic processes in psychopathology research and, thus, is well suited to study intense variability patterns across areas of dysfunction in BPD. EMA studies are characterized by frequent repeated assessments that are delivered to participants in real-life, real-time settings using handheld devices capable of registering responses to short self-report questions in daily life. Compliance in EMA research is defined as the proportion of prompts answered by the participant, considering all planned prompts sent. Low compliance with prompt schedules can compromise the relative advantages of using this method. Despite the growing EMA literature on BPD in recent years, findings regarding study design features that affect compliance with EMA protocols have not been compiled, aggregated, and estimated.

Objective: This systematic meta-analytic review aimed to investigate the relationship between study design features and participant compliance in EMA research of BPD.

Methods: A systematic review was conducted on November 12, 2021, following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) and MOOSE (Meta-analyses of Observational Studies in Epidemiology) guidelines to search for articles featuring EMA studies of BPD that reported compliance rates and included sufficient data to extract relevant design features. For studies with complete data, random-effect models were used to estimate the overall compliance rate and explore its association with design features.

Results: In total, 28 peer-reviewed EMA studies comprising 2052 participants were included in the study. Design features (sampling strategy, average prompting frequency, number of items, response window, sampling device, financial incentive, and dropout rate) showed a large variability across studies, and many studies did not report design features. The meta-analytic synthesis was restricted to 64% (18/28) of articles and revealed a pooled compliance rate of 79% across studies. We did not find any significant relationship between design features and compliance rates.

Conclusions: Our results show wide variability in the design and reporting of EMA studies assessing BPD. Compliance rates appear to be stable across varying setups, and it is likely that standard design features are not directly responsible for improving or diminishing compliance. We discuss possible nonspecific factors of study design that may have an impact on compliance. Given the promise of EMA research in BPD, we also discuss the importance of unifying standards for EMA reporting so that data stemming from this rich literature can be aggregated and interpreted jointly.

J Med Internet Res 2023;25:e44853

doi:10.2196/44853


Background

Borderline personality disorder (BPD) is a serious mental disorder affecting approximately 1% of the adult population, 12% of the adult outpatient population, and 22% of the adult inpatient population [1]. BPD is characterized by frequent and intense moment-to-moment changes in affect, behavior, identity, and interpersonal relationships, which typically result in a significant deterioration of the person’s overall functioning and well-being [2]. It is 1 of the 10 personality disorders listed in the latest edition of the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) [3]. Even though new international diagnostic classification systems, such as the International Classification of Diseases 11th Revision, removed specific types of personality disorder in favor of a general diagnosis of personality disorder, BPD was retained as a specific pattern [4].

According to established diagnostic criteria, BPD is marked by severe, unpredictable, and intense feelings; vulnerability to perceived rejection or alienation in the context of interpersonal hypersensitivity; behavioral dysregulation involving suicidality and often impulsiveness; and an impaired sense of self, usually contributing to identity disturbance, a poor sense of direction, poor self-esteem, and cognitive disorders [5]. All these areas of core BPD psychopathology are typically characterized by high instability over time [6,7]. Because of the high variability in BPD symptoms, it is desirable to study moment-to-moment variability to capture the true extent of the disorder. Ecological momentary assessment (EMA; sometimes referred to as experience sampling) [8] uses systematic and often very frequent self-report diaries to evaluate context, symptoms, stressors, and other factors as they occur in everyday life, outside laboratory or research settings.

EMA Studies

EMA studies are characterized by the use of repeated, often very frequent assessments that are delivered to participants in real-life settings and in real time using handheld devices capable of registering responses to short self-report questions in daily life. These repeated real-time measurements provide many methodological advantages over conventional assessment strategies, as highlighted in the study by Shiffman et al [9]. First and foremost, in the setting of highly and rapidly variable phenomena, recall biases encountered in conventional retrospective survey methods are diminished by the momentary evaluation of participants’ current experiences or behaviors. Second, measurements taken in the moment and in context in natural settings yield information that is more pertinent to the prevailing social or physical circumstances and, therefore, more ecologically valid. Third, intensive daily repeated measurements capture within-day, within-person changes in behavior and experience over time, enabling the study of the immediate causes and effects of behavior in real time [10].

EMA for the Study of BPD Symptoms

The EMA methodology is being increasingly used to study psychopathology processes in BPD, mainly because these are considered very unstable and variable over time, thus benefiting from frequent sampling schedules. Given the variability of BPD symptoms, momentary ratings that minimize retrospective bias and have strong ecological validity can be useful in modeling the dynamic core psychopathology features of the disorder [11]. This has led to important scientific and clinical discoveries by expanding our understanding of diagnosis, symptomatic variability, and disease mechanisms [12]. For example, the studies by Dixon-Gordon and Laws [13] and Sadikaj et al [7] used EMA to show that patients with BPD exhibit high emotional variability in their responses to interpersonal stressors. The study by Stiglmayr et al [14] found that dissociation, which varied over time, was correlated with distress in patients with BPD. Zeigler-Hill and Abraham [15] found that patients with BPD show increased self-esteem variability, especially in response to stressful interpersonal situations with peers. The study by Links et al [16] examined the link between emotional disturbances (specifically, the intensity of negative mood, mood amplitude, mood dyscontrol, and mood triggering) and suicide attempts. Therefore, EMA appears to be an appropriate tool for measuring the variability in BPD symptoms. Another vital contribution of EMA to BPD research is the possibility of studying the causes and effects of behavior in real time [17]. This could be useful for the therapeutic management of risk situations [18], as has been developed in the dialectical behavior therapy chain analysis technique [19].

Even though BPD is characterized by severely debilitating symptoms, participant compliance with EMA protocols appears to be relatively high, suggesting that symptom severity may not per se affect compliance [20]. It is possible that in the case of BPD, the impact of severe symptoms on compliance is counteracted by a beneficial effect of testing reactivity. Testing reactivity refers to the extent to which the behavior of interest is modified by momentary questions [21]. Few studies have examined the specific effects of testing reactivity on BPD symptom severity and intensity, but available models suggest that through the processes of active reflection and self-monitoring, social desirability, or feedback processes, the intensity of symptoms may decrease as a result of unusual attention on target symptoms [22]. In fact, active and frequent self-monitoring may be a generic active ingredient across a variety of evidence-based psychosocial interventions for BPD (eg, the use of diary cards to identify triggers and patterns of affective dysregulation in dialectical behavior therapy) [23].

The Importance of Ensuring Compliance in EMA Studies

EMA studies offer numerous advantages over traditional retrospective reporting in the study of highly variable phenomena over time; however, they typically place a high demand on participants, which can, in turn, lead to low response compliance. Low response compliance occurs when the ratio of answered prompts to the theoretical maximum number of prompts planned in the study is low, and it has a direct bearing on data quality [16]. If compliance is low, sparsely collected data are unlikely to be a valid measure of the intensity and variability of the phenomena of interest measured in daily life, eliminating all relative advantages of using experience sampling. Even though missingness is a frequent occurrence in EMA studies, when compliance falls critically low or when patterns of noncompliance occur systematically at specific measurement occasions (eg, a participant always misses prompts delivered when she is experiencing intense emotional distress), the validity of the study is compromised [14]. Valid and robust inferences about the moment-to-moment experiences reported by participants require that EMA protocols yield sufficient data. Therefore, EMA studies typically include different design features and use strategies for enhancing compliance with the study protocol. For instance, it is common for EMA researchers to offer some kind of financial compensation to participants, which may even be contingent on a certain percentage of compliance (eg, the study by Berenson et al [24]). Other strategies include briefing the participants about the intensive nature of the study and even arranging telephone calls when a participant engages in increased noncompliance [25].

Design Features of EMA Studies Affecting Compliance

Two recent meta-analyses examined design features that broadly affect response compliance in EMA research. The study by Wrzus and Neubauer [26] examined this issue in EMA research across research fields by sampling 477 articles including 677,536 participants. They found that most EMA studies involved 6 assessments per day and lasted 7 days, although the number of assessments was not related to response compliance. Across studies, a compliance rate of 79% was obtained for participants with BPD, and providing financial incentives significantly increased response compliance. Other design features had little or no effect on compliance. The authors also emphasized the high heterogeneity in study design and reporting and called for more standardized procedures for EMA planning and reporting. The study by Vachon et al [27] explored the issue of compliance with EMA protocols in studies that included different mental disorders. The authors sampled data from 8013 participants. Typically, EMA designs included an average of 6.9 assessments per day for an average of 11.2 days. The study by Vachon et al [27] found that compliance rates were significantly lower for studies that included a higher proportion of male participants and participants with psychotic disorders. Compliance seemed to increase when researchers used fixed sampling schemes (ie, prompts were delivered in preplanned intervals across the day), used higher financial incentives, and spaced successive assessment occasions more over time (ie, less intensive protocols). The study by Smyth et al [28] covered a different aspect and investigated the relationship between the intention to participate in EMA studies and aspects of design features (such as prompt frequency and study duration). They found that willingness to participate was higher in simpler and less intense protocols and suggested that future EMA studies explore how different combinations of design features may influence participation. Taken together, previous studies underline the vast heterogeneity in EMA planning and reporting and call for unified standards.

Aim of This Study

EMA appears to be particularly well suited for the study of basic BPD psychopathology. To capture all the benefits of using this methodology to advance the field and construct more robust and nuanced dynamic psychopathology models, data must be of the highest quality, and compliance with study protocols is a crucial component of data quality in EMA research. However, to date, there are no systematic reviews and meta-analyses examining compliance in EMA research of BPD psychopathology, including the role of standard design features in enhancing compliance. In this setting, this review had three specific goals: (1) to provide a narrative overview of the EMA methodologies used to investigate dynamic BPD psychopathology; (2) to obtain and report overall compliance rates and quantitatively examine the effect of regular study design features (ie, the number of daily prompts, measurement period of the study, number of items in each prompt, etc) and study procedures (ie, the use of financial incentives, the device type, etc) on compliance rates; and (3) to provide recommendations for future studies that will use mobile devices to collect real-time self-reported data from participants with BPD, so that the advantages of EMA techniques for investigating BPD can be fully realized.


Methods

This systematic review and meta-analysis were conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Multimedia Appendix 1) standards [29] as well as the MOOSE (Meta-Analyses of Observational Studies in Epidemiology) guidelines [30].

Literature Search

Bibliographic literature searches were conducted in 4 electronic databases (PsycINFO, PubMed, Scopus, and Web of Science) on November 12, 2021, without any time or language restrictions. Keywords based on Medical Subject Headings terms were used to identify peer-reviewed journal articles reporting on the use of EMA among the population with BPD. The search terms used in the individual databases are presented in Table S1 in Multimedia Appendix 2.

The search results for both the systematic review and meta-analysis were screened independently by 3 authors (AD, DH, and SS) using Covidence (Veritas Health Innovation) [31], a web-based review management tool for screening and data extraction. The screening process was carried out in 2 stages: title and abstract stage and full-text stage. Discrepancies were resolved through discussion according to the eligibility criteria previously defined.

Inclusion and Exclusion Criteria

Inclusion and Exclusion Criteria for the Systematic Review

Studies had to meet the inclusion criteria shown in Textbox 1 to be included in the systematic review.

Studies that involved participants without BPD or healthy participants in addition to participants with BPD were included only if separate analytic or descriptive results were presented for the BPD or BPD feature subgroup.

Studies that met the exclusion criteria shown in Textbox 2 were not included.

Inclusion criteria for the systematic review.
  • Should be an empirical study (not a review or comment)
  • Should use ecological momentary assessment strategies or experience sampling methods, including diary methods
  • Should use mobile or handheld technologies for ecological momentary assessment or experience sampling method data collection (eg, cell phones, PDAs, and smartphones)
  • Should include participants diagnosed with borderline personality disorder (BPD) or BPD features (ie, ≥1 of the 4 core areas of BPD: interpersonal problems, affect dysregulation, cognitive self-dysregulation, and behavioral dysregulation)
  • Should assess BPD or BPD features according to the Diagnostic and Statistical Manual of Mental Disorders or the International Classification of Diseases
Textbox 1. Inclusion criteria for the systematic review.
Exclusion criteria for the systematic review.
  • Studies did not use any electronic, wearable, or mobile technology
  • Studies used call-based (ie, phone interview) methods instead of prompts for data collection
  • Studies did not collect data in ecological or free-living natural settings (eg, studies collected data in artificial settings such as laboratories)
Textbox 2. Exclusion criteria for the systematic review.
Inclusion and Exclusion Criteria for the Meta-analysis

Studies included in the systematic review were entered into the meta-analytical part of this study if they fulfilled the following inclusion criterion: sufficient information was available that allowed the estimation of an average compliance rate (ie, the proportion of answered prompts) for the BPD or BPD feature subgroup.

Thus, studies that did not report compliance rate or in which compliance rate could not be extracted for the BPD or BPD feature subgroup were excluded from the meta-analytical part of the study.

Data Extraction

A total of 3 authors (AD, DH, and SS) independently extracted the study characteristics as well as EMA design features and procedures from each included study. The extracted study characteristics were location, BPD sample size, mean age of the BPD sample (in years), sex (percentage of female participants), and dropout rates of participants with BPD (in percentage). Furthermore, the extracted EMA design features and procedures were study length (in days), daily prompting frequency, extra assessment (ie, if there was a daily extra measurement in addition to the prompts), number of items (ie, items to be answered in each prompt), average time to answer a prompt (in minutes), assessment window in minutes (ie, how long each prompt is available to be answered before it is considered missed), device type (ie, smartphone or others), sampling scheme (ie, time-based or self-initiated prompts), prompt scheme (ie, fixed or random prompts), and the incentive given (ie, fixed or incremental incentive). Finally, if possible, data on the compliance rate were extracted. If the compliance rate was not explicitly reported, the number of total prompts sent and the number of answered prompts were extracted from each study, for the BPD sample and (if available) the non-BPD sample. Discrepancies in author coding assignments were discussed and resolved until a consensus was reached. If needed, attempts were made to contact the authors of the studies with incomplete or insufficient data.
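To make the coding scheme concrete, the extracted variables can be arranged in a per-study coding sheet. The following R sketch is a hypothetical template of our own (the column names are illustrative and not the actual extraction form used in this review):

```r
# Hypothetical per-study coding sheet for the extracted study characteristics and
# EMA design features; one row per included study would be appended to it.
extraction_template <- data.frame(
  study             = character(),  # author and publication year
  location          = character(),
  n_bpd             = integer(),    # BPD sample size
  mean_age          = numeric(),    # years
  pct_female        = numeric(),
  dropout_pct       = numeric(),
  study_length_days = integer(),
  prompts_per_day   = integer(),
  extra_assessment  = logical(),    # daily extra measurement in addition to prompts
  n_items           = integer(),    # items per prompt
  answer_time_min   = numeric(),
  window_min        = numeric(),    # minutes before a prompt counts as missed
  device            = character(),  # smartphone or other
  sampling_scheme   = character(),  # time based, event based, or both
  incentive         = character(),  # fixed, incremental, both, or none
  prompts_sent      = integer(),
  prompts_answered  = integer()
)
```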

We were unable to obtain full data from 7 studies that did not report separate results for control and BPD groups [32-38], which were, therefore, excluded from our study.

Quality Assessment

The methodologic quality of the included studies was independently rated by 3 authors (AD, DH, and SS) using the Newcastle-Ottawa Scale [39].

This tool was specifically developed for the quality assessment of nonrandomized studies in systematic reviews and meta-analyses. Using this tool, each study is judged on 8 items, which are categorized into three broad domains: (1) the selection of the study groups (comprising the items “Representativeness of the Exposed Cohort,” “Selection of Non-Exposed Group,” “Ascertainment of Exposure,” and “Demonstration of Absence of Outcome at Start of Study”), (2) the comparability of the cohort groups (comprising the item “Comparability Based on the Design or Analysis”), and (3) the ascertainment of either the exposure or the outcome of interest for case-control or cohort studies, respectively (comprising the items “Assessment of Outcome,” “Follow-up Long Enough for Outcomes to Occur,” and “Adequacy of Follow-up of Cohorts”). To rate the comparability of the study cohort groups, the Newcastle-Ottawa Scale requires the authors to predetermine 1 or 2 key control variables. We included age as the first control variable and sex as the second control variable. If both predefined control variables were controlled for, the criterion “Comparability Based on the Design or Analysis” was given a score of 2. For the items “Follow-up Long Enough for Outcomes to Occur” and “Adequacy of Follow-up of Cohorts,” the follow-up interval was defined as the time between the first EMA measurement point (ie, baseline) and the last EMA measurement point (ie, follow-up).

Each study was rated in terms of 8 criteria (7 criteria were scored 0 or 1, and 1 criterion was scored 0, 1, or 2), resulting in a maximum score of 9 for each study. Discrepancies in author coding assignments were discussed and resolved until consensus was reached. The quality of a study was rated as high if the total score was 7 to 9, moderate if it was 4 to 6, and low if it was ≤3.
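As a concrete illustration of this scoring rule, the short R helper below (our own sketch, not code used in the original studies or in this review) turns the 8 item scores into a total score and the corresponding quality category:

```r
# Total Newcastle-Ottawa Scale score and quality category.
# `items`: the 8 item scores (7 items scored 0/1; comparability scored 0, 1, or 2).
nos_quality <- function(items) {
  stopifnot(length(items) == 8)
  total <- sum(items, na.rm = TRUE)  # maximum possible score is 9
  category <- if (total >= 7) "high" else if (total >= 4) "moderate" else "low"
  list(total = total, category = category)
}

# Example: 6 binary criteria met plus a comparability score of 2 gives a total of 8 ("high")
nos_quality(c(1, 1, 1, 0, 2, 1, 1, 1))
```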

Statistical Analyses

The “meta” package in R (version 4.0.2, R Foundation for Statistical Computing) was used for all analyses and plots [40]. We used a 2-sided significance level of P<.05 to indicate statistical significance.

First, compliance rates were calculated for the BPD sample in each study by dividing the number of answered prompts by the total number of prompts delivered over the course of the study. We chose a random-effects model for pooling the effect sizes rather than a fixed-effects model because significant variability between studies was assumed [41]. We assessed the between-study heterogeneity of the results by calculating the Q statistic, I2, and τ2 [42-44]. In general, I2 can be interpreted as follows: a proportion of 25% is assumed to indicate low heterogeneity; 50%, moderate heterogeneity; and 75%, high heterogeneity [43].
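In other words, writing a_j for the number of answered prompts and s_j for the number of delivered prompts in study j, and with Q computed over the k included studies using inverse-variance weights w_j, the per-study compliance rate and the heterogeneity measures take the standard forms (the τ2 shown here is the DerSimonian-Laird estimator; the meta package also offers alternatives):

\[
p_j = \frac{a_j}{s_j}, \qquad
I^2 = \max\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%, \qquad
\hat{\tau}^2_{\mathrm{DL}} = \max\left(0,\ \frac{Q - (k - 1)}{\sum_j w_j - \sum_j w_j^2 / \sum_j w_j}\right)
\]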

A total of 5 categorical design features and procedures of EMA studies (ie, diagnostic instrument, sampling scheme, device type, quality rating, and extra assessment) were tested for their effects on compliance rates in the BPD sample using univariate analyses with random-effects models. Analyses testing the effects of continuous design features and procedures on compliance rate (ie, study length, prompts per day, and total quality score) were omitted, as the included subgroups contained <10 studies. As is true for primary studies, which require an appropriately large ratio of participants to form meaningful subgroups, meta-analyses require an appropriately large number of studies. Therefore, the use of meta-regression is generally not recommended when the number of studies is small (ie, n<10) [43].
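A minimal sketch of this type of analysis with the meta package might look as follows. The data frame and its column names are hypothetical, and the argument names reflect recent versions of the package; this illustrates the general approach rather than reproducing the authors’ actual script:

```r
library(meta)  # R package for meta-analyses, including single proportions

# Hypothetical per-study prompt counts for the BPD subsamples (illustrative numbers)
dat <- data.frame(
  study    = c("Study A", "Study B", "Study C", "Study D"),
  answered = c(1520, 880, 2310, 640),   # prompts answered
  sent     = c(1900, 1200, 2800, 900),  # prompts delivered
  device   = c("Smartphone", "Other", "Smartphone", "Other")
)

# Random-effects pooling of logit-transformed compliance rates;
# Q, I^2, and tau^2 are reported in the model summary.
m <- metaprop(event = answered, n = sent, studlab = study, data = dat,
              sm = "PLOGIT", method.tau = "DL")
summary(m)
forest(m)  # forest plot of per-study and pooled compliance rates

# Univariate subgroup analysis for a categorical design feature (eg, device type);
# recent versions of meta use `subgroup` (older releases use `byvar`).
m_device <- metaprop(event = answered, n = sent, studlab = study, data = dat,
                     sm = "PLOGIT", method.tau = "DL", subgroup = device)
summary(m_device)
```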


Results

Study Selection

In total, the systematic literature search revealed 995 potentially relevant articles. The screening and full-text assessment resulted in 28 peer-reviewed EMA studies [16,17,24,45-69] including a total sample of 2052 participants (N=1158 for the BPD group, 56.43%; N=894 for the control group, 43.57%; refer to Figure 1 for a flowchart of study inclusion). All studies examined BPD symptoms and selected participants based on either DSM criteria (26/28, 93%) or International Classification of Diseases criteria (2/28, 7%). For a complete list of the included studies and their characteristics, refer to Tables 1 and 2.

Figure 1. Flowchart of study inclusion. BPD: borderline personality disorder; EMA: ecological momentary assessment; ESM: experience sampling method.
Table 1. Characteristics of the included studies (n=28).
Study | Publication year | Location | Sample size | Age (years), mean (SD) | Sex (female; %) | Dropout (%) | Diagnostic criteria^a | Clinical status
Andrewes et al [45] | 2017 | Oceania | 107 BPD^b | 18.1 (2.7) | 83.2 | 1.9 | 2 in DSM^c | Outpatient
Berenson et al [24] | 2011 | United States | 45 BPD and 40 CG^d | 33.5 (10.2) | 76.5 | 8.9 | 5 in DSM | Outpatient
Briones-Buixassa et al [46] | 2021 | Europe | 22 BPD and 42 CG | 23.6 (5.1) | 90 | N/R^e | 2 in DSM | Outpatient
Chaudhury et al [47] | 2017 | United States | 50 BPD | 30.6 (11.0) | 86 | N/R | 5 in DSM | Outpatient
Coifman et al [48] | 2012 | United States | 65 BPD and 61 CG | 32.3 (11.6) | 82 | 0 | 5 in DSM | N/R
Ebner-Priemer et al [49] | 2006 | United States and Europe | 50 BPD and 50 CG | 31.3 (8.1) | 100 | N/R | 1 in DSM | Outpatient
Ellison et al [50] | 2020 | United States | 27 BPD and 15 CG | 32.5 (11.3) | 90.5 | 30 | 5 in ICD^f | Outpatient
Hawkins et al [51] | 2014 | United States | N/R | 41.0 | 68 | N/R | 5 in DSM | Outpatient
Houben et al [52] | 2016 | Europe | 30 BPD and 28 CG | 29.0 (1.6) | 87 | N/R | 1 in DSM | Inpatient
Kaurin et al [53] | 2020 | United States | 43 BPD and 164 CG | 30.5 (6.8) | 54 | N/R | 5 in DSM | Outpatient
Köhling et al [54] | 2016 | Europe | 20 BPD and 21 CG | 26.2 (6.5) | 100 | 10 | 1 in DSM | Inpatient
Lane et al [55] | 2016 | United States | 56 BPD and 60 CG | 26.4 (7.1) | 78.5 | N/R | 1 in DSM | Outpatient
Links et al [16] | 2007 | South America | 82 BPD | 33.5 (10.3) | 82.9 | N/R | 5 in DSM | Outpatient
Moukhtarian et al [56] | 2020 | Europe | 41 BPD and 57 CG | 35.4 (11.4) | 100 | 10 | 4 in DSM | Outpatient
Rizk et al [57] | 2019 | United States | 38 BPD | 28.6 (9.5) | 100 | N/R | 5 in DSM | Outpatient
Santangelo et al [12] | 2017 | Europe | 60 BPD and 60 CG | 27.2 (7.0) | 100 | 0 | 5 in DSM | Outpatient
Scala et al [58] | 2018 | United States | 36 BPD and 18 CG | 34.2 (12.4) | 92 | N/R | 5 in ICD | Outpatient
Selby et al [59] | 2021 | United States | 1974 BPD and 2726 CG | 19.1 (1.8) | 68.1 | N/R | 1 in DSM | Outpatient
Solhan et al [60] | 2009 | United States | 58 BPD and 42 CG | 32.3 | 93.1 | N/R | 1 in DSM | Outpatient
Southward et al [61] | 2020 | United States | 8 BPD | 21.6 | 3.05 | N/R | 5 in DSM | Outpatient
Stiglmayr et al [14] | 2008 | Europe | 51 BPD and 91 CG | 27.1 (6.7) | 100 | N/R | 5 in DSM | Inpatient
Trull et al [17] | 2008 | United States | 34 BPD and 26 CG | 33.3 (12.4) | 97.1 | N/R | 1 in DSM | Outpatient
Veilleux et al [62] | 2021 | United States | 49 BPD and 50 CG | 19.3 (2.1) | 72.3 | N/R | 5 in DSM | Outpatient
Weise et al [63] | 2020 | Europe | 43 BPD | 15.5 (1.2) | 95.4 | N/R | 1 in DSM | Outpatient
Wright et al [70] | 2016 | United States | 5 BPD | N/R | N/R | N/R | 5 in DSM | Outpatient
Wycoff et al [64] | 2020 | United States | 54 BPD | 26.2 (7.2) | 81.5 | N/R | 2 in DSM | Outpatient
Zaehringer et al [65] | 2019 | Europe | 26 BPD | 33.4 (11.1) | 100 | 15.4 | 1 in DSM | Outpatient
Zaki et al [66] | 2013 | United States | 38 BPD and 42 CG | 29.9 (10.6) | 84 | N/R | 5 in DSM | Outpatient

^a 1=affect dysregulation, 2=behavioral dysregulation, 3=interpersonal instability, and 4=cognitive or self-disturbance.

^b BPD: borderline personality disorder.

^c DSM: Diagnostic and Statistical Manual of Mental Disorders.

^d CG: control group.

^e N/R: not reported.

^f ICD: International Classification of Diseases.

Table 2. Ecological momentary assessment.
Study | Sampling scheme^a | Study length (days)^b | Extra assessment^c | Prompts per day (n)^d | Items (n)^e | Required answer time (minutes)^f | Assessment window (minutes)^g | Device type | Incentive
Andrewes et al [45] | Time based | 6 | No | 6 | 15 | 8 | 15 | Smartphone | Fixed
Berenson et al [24] | Time based | 21 | No | 5 | 8 | N/R^h | 10 | Others | N/R
Briones-Buixassa et al [46] | Time and event based | 15 | No | 3 | N/R | N/R | 15 | Smartphone | Fixed
Chaudhury et al [47] | Time based | 7 | No | 6 | N/R | N/R | N/R | Others | N/R
Coifman et al [48] | Time based | 21 | No | 5 | N/R | 7.5 | 10 | Others | Incremental
Ebner-Priemer et al [49] | Time based | 14 | No | N/R | N/R | N/R | N/R | Others | N/R
Ellison et al [50] | Time and event based | 21 | Yes | 6 | 6 | 5 | N/R | Smartphone | Incremental
Hawkins et al [51] | Time based | 14 | Yes | 5 | N/R | N/R | N/R | Others | Fixed
Houben et al [52] | Time based | 8 | No | 10 | N/R | N/R | N/R | Others | No incentive
Kaurin et al [53] | Event based | 21 | No | N/R | 26 | N/R | N/R | Others | Incremental
Köhling et al [54] | Time based | 7 | No | 5 | 18 | N/R | N/R | Smartphone | Fixed
Lane et al [55] | Time and event based | 21 | Yes | 6 | N/R | N/R | N/R | Others | Incremental
Links et al [16] | Time based | 21 | No | 6 | N/R | N/R | N/R | Others | Fixed
Moukhtarian et al [56] | Time based | 5 | No | 8 | 7 | 2 | 16 | Others | Fixed
Rizk et al [57] | Time based | 7 | No | 6 | 9 | N/R | N/R | Others | N/R
Santangelo et al [12] | Time based | 4 | No | 12 | 6 | N/R | N/R | Others | Incremental
Scala et al [58] | Time based | 21 | No | 6 | N/R | N/R | N/R | Others | N/R
Selby et al [59] | Time and event based | 14 | No | 5 | 32 | N/R | N/R | Smartphone | Incremental and fixed
Solhan et al [60] | Time based | 28 | No | 6 | 21 | N/R | 15 | Others | Incremental
Southward et al [61] | Time based | 2 | No | 12 | N/R | N/R | N/R | Smartphone | N/R
Stiglmayr et al [14] | Time based | 2 | No | 12 | 32 | N/R | N/R | Others | N/R
Trull et al [17] | Time based | 28 | No | 6 | 37 | N/R | 10 | Others | Incremental
Veilleux et al [62] | Time and event based | 7 | Yes | 7 | N/R | 2 | 10 | Smartphone | Fixed
Weise et al [63] | Time based | 4 | N/R | N/R | N/R | N/R | N/R | Smartphone | N/R
Wright et al [70] | Event based | 21 | N/R | N/R | 87 | 2 | N/R | Smartphone | N/R
Wycoff et al [64] | Time and event based | N/R | Yes | N/R | N/R | N/R | N/R | Others | Incremental
Zaehringer et al [65] | Time based | 4 | No | 12 | 17 | N/R | N/R | Smartphone | Fixed
Zaki et al [66] | Time based | 21 | No | 5 | N/R | N/R | N/R | Others | Incremental

^a Time based refers to the format in which the prompts are sent to the participants according to a fixed or random scheme within a window of time, for example, every 2 hours during waking time. Event based refers to when participants are asked to self-initiate a prompt in a given situation, for example, when they have a suicidal thought.

^b Total number of days of the study.

^c A daily extra measurement in addition to the prompts.

^d Number of prompts sent each day of the study.

^e Items to be answered in each prompt.

^f Estimate of minutes required to complete each prompt.

^g Number of minutes for which the prompt was available before it was considered missed.

^h N/R: not reported.

Quality of the Included Studies

The total quality scores of the eligible studies ranged from 5 (8/28, 29%) to 8 (8/28, 29%) out of 9 (refer to Tables 3 and 4). A total of 64% (18/28) of studies were of moderate quality, 36% (10/28) of studies were of high quality, and none of the included studies were rated as having low quality. All studies met the quality criterion “Ascertainment of Exposure” of the selection dimension and the criteria “Assessment of Outcome,” “Follow-up Long Enough for Outcomes to Occur,” and “Adequacy of Follow-up of Cohorts” of the outcome dimension, but the quality criterion “Demonstration of Absence of Outcome at Baseline” was met by none of the included studies.

Table 3. Quality assessment of the included studies—representativeness of the exposed cohort, selection of nonexposed group, ascertainment of exposure, and demonstration of absence of outcome at baseline.
Study | Representativeness of the exposed cohort | Selection of nonexposed group | Ascertainment of exposure | Demonstration of absence of outcome at baseline
Andrewes et al [45] | 1 | N/A^a | 1 | 0
Berenson et al [24] | 1 | 1 | 0 | 1
Briones-Buixassa et al [46] | 1 | 1 | 0 | 0
Chaudhury et al [47] | 1 | N/A | 1 | 0
Coifman et al [48] | 1 | 1 | 1 | 0
Ebner-Priemer et al [49] | 1 | 1 | 0 | 0
Ellison et al [50] | 1 | 1 | 1 | 0
Hawkins et al [51] | 1 | 1 | 0 | 0
Houben et al [52] | 1 | 1 | 1 | 0
Kaurin et al [53] | 1 | 1 | 1 | 0
Köhling et al [54] | 1 | 1 | 1 | 0
Lane et al [55] | 1 | 1 | 0 | 0
Links et al [16] | 1 | N/A | 1 | 0
Moukhtarian et al [56] | 1 | 0 | 1 | 0
Rizk et al [57] | 1 | N/A | 1 | 0
Santangelo et al [12] | 0 | 0 | 1 | 0
Scala et al [58] | 1 | 1 | 1 | 0
Selby et al [59] | 1 | 1 | 1 | 0
Solhan et al [60] | 1 | 1 | 1 | 0
Southward et al [61] | 1 | N/A | 1 | 0
Stiglmayr et al [14] | 1 | 1 | 1 | 0
Trull et al [17] | 1 | 1 | 1 | 0
Veilleux et al [62] | 1 | 1 | 1 | 0
Weise et al [63] | 1 | N/A | 1 | 0
Wright et al [70] | 1 | N/A | 1 | 0
Wycoff et al [64] | 1 | N/A | 1 | 0
Zaehringer et al [65] | 0 | N/A | 1 | 0
Zaki et al [66] | 1 | 1 | 1 | 0

^a N/A: not applicable to the respective study owing to the lack of a control group.

Table 4. Quality assessment of the included studies—comparability based on the design or analysis, assessment of outcome, follow-up long enough for outcomes to occur, and adequacy of follow-up of cohorts.
Study | Comparability based on the design or analysis | Assessment of outcome | Follow-up long enough for outcomes to occur | Adequacy of follow-up of cohorts | Total quality score
Andrewes et al [45] | 0 | 1 | 1 | 1 | 5
Berenson et al [24] | 1 | 1 | 1 | 1 | 7
Briones-Buixassa et al [46] | 0 | 1 | 1 | 1 | 5
Chaudhury et al [47] | 1 | 1 | 1 | 1 | 6
Coifman et al [48] | 2 | 1 | 1 | 1 | 8
Ebner-Priemer et al [49] | 0 | 1 | 1 | 1 | 5
Ellison et al [50] | 2 | 1 | 1 | 1 | 8
Hawkins et al [51] | 0 | 1 | 1 | 1 | 5
Houben et al [52] | 2 | 1 | 1 | 1 | 8
Kaurin et al [53] | 0 | 1 | 1 | 1 | 6
Köhling et al [54] | 2 | 1 | 1 | 1 | 8
Lane et al [55] | 0 | 1 | 1 | 1 | 5
Links et al [16] | 1 | 1 | 1 | 1 | 6
Moukhtarian et al [56] | 1 | 1 | 1 | 1 | 6
Rizk et al [57] | 1 | 1 | 1 | 1 | 6
Santangelo et al [12] | 1 | 1 | 1 | 1 | 5
Scala et al [58] | 2 | 1 | 1 | 1 | 8
Selby et al [59] | 0 | 1 | 1 | 1 | 6
Solhan et al [60] | 2 | 1 | 1 | 1 | 8
Southward et al [61] | 0 | 1 | 1 | 1 | 5
Stiglmayr et al [14] | 1 | 1 | 1 | 1 | 7
Trull et al [17] | 2 | 1 | 1 | 1 | 8
Veilleux et al [62] | 0 | 1 | 1 | 1 | 6
Weise et al [63] | 1 | 1 | 1 | 1 | 6
Wright et al [70] | 1 | 1 | 1 | 1 | 6
Wycoff et al [64] | 0 | 1 | 1 | 1 | 5
Zaehringer et al [65] | 2 | 1 | 1 | 1 | 6
Zaki et al [66] | 2 | 1 | 1 | 1 | 8

Narrative Review

Overview

A total of 28 articles that met the eligibility criteria were included in the narrative review. All selected studies used EMA to investigate certain aspects of BPD psychopathology. Table 5 shows the descriptive statistics of the study variables for all the studies.

Table 5. Descriptive statistics of the included studies (n=28).
Characteristics | Values
Age (years; n=27), mean (SD) | 28.58 (3.74)
Study length (days; n=28), mean (SD) | 13.32 (8.61)
Prompts per day (n=24), mean (SD) | 6.917 (2.64)
Number of items (n=14), mean (SD) | 15.88 (10.37)
Required answer time (minutes; n=6), mean (SD) | 4.41 (2.83)
Assessment window (minutes; n=10), mean (SD) | 22.10 (20.12)
Dropout of participants with BPD^a (n=8), mean (SD) | 8.91 (10.15)
Quality score (n=28), mean (SD) | 6.36 (1.19)
Sex (female; n=27 studies), % | 86.1
Location (n=28), n (%)
  Europe | 8 (29)
  Oceania | 1 (4)
  South America | 1 (4)
  United States | 17 (61)
  United States and Europe | 1 (4)
Sampling scheme (n=28), n (%)
  Time based | 20 (71)
  Event based | 2 (7)
  Both | 6 (21)
Instrument (n=28), n (%)
  DSM^b | 26 (93)
  ICD^c | 2 (7)
Inpatient (n=28), n (%)
  Yes | 3 (11)
  No | 25 (89)
Extra assessment (n=26), n (%)
  Yes | 5 (19)
  No | 21 (81)
Device type (n=28), n (%)
  Smartphone | 11 (39)
  Others | 17 (61)
Incentive (n=19), n (%)
  Fixed | 8 (42)
  Incremental | 9 (47)
  Both | 1 (5)
  No incentive | 1 (5)
Quality rating (n=28), n (%)
  Low | 0 (0)
  Moderate | 18 (64)
  High | 10 (36)

^a BPD: borderline personality disorder.

^b DSM: Diagnostic and Statistical Manual of Mental Disorders.

^c ICD: International Classification of Diseases.

Participant Characteristics

The average number of participants across the studies included in the systematic review was 42.88 (range 5-107). Of the 28 included studies, 27 (96%) reported the sex of the participants. The average proportion of female participants was 86.1%, whereas 25% (7/28) of studies recruited only female participants. Excluding these studies, the average proportion of female participants was 62.4% across the studies. The mean age of participants was 28.58 (SD 3.74) years. Overall, 11% (3/28) of studies were conducted among inpatients, and 89% (25/28) of studies recruited participants from community or nonclinical settings. In total, 54% (15/28) of studies investigated mixed BPD features, whereas 32% (9/28) of studies examined only affective dysregulation, 11% (3/28) of studies examined only behavioral dysregulation features, and only 4% (1/28) of studies examined cognitive or self-disturbance features.

Study Characteristics
Study Length (Measurement Period)

The length of the EMA studies ranged from 2 to 28 days, with an average of 13.32 (SD 8.61) days. A total of 43% (12/28) of studies conducted EMA for <10 days, including studies that delivered prompts for 2 days (2/28, 7%), 4 days (3/28, 11%), 5 days (1/28, 4%), 6 days (1/28, 4%), 7 days (4/28, 14%), and 8 days (1/28, 4%). A total of 54% (15/28) of studies conducted EMA for >10 days, with 11% (3/28) of studies conducting EMA for 14 days, 4% (1/28) of studies for 15 days, 32% (9/28) of studies for 21 days, and 7% (2/28) of studies for 28 days. Only 4% (1/28) of studies did not report these data. In 18% (5/28) of studies, an extra EMA assessment was delivered each day to the participants in addition to prompts.

Sampling Strategy

A total of 71% (20/28) of studies used only time-based sampling protocols, 21% (6/28) of studies used a combination of both time-based and event-based sampling protocols, and only 7% (2/28) of studies used only event-based sampling protocols.

Sampling Frequency (Number of Daily Prompts)

Of the 28 studies included in the systematic review, 24 (86%) studies reported the prompting frequency. The average prompting frequency was 6.92 (SD 2.6; range 3-12) times per day. Most studies (14/28, 50%) reported prompting participants <10 times a day, with 4% (1/28) of studies reporting prompting participants 10 times a day and 14% (4/28) of studies reporting prompting participants 12 times a day.

Number of Items

The average number of items per prompt was 15.88 (SD 10.37; range 6-87) across studies. A total of 50% (14/28) of studies reported the actual number of items, with 18% (5/28) of studies assessing 6 to 9 items, 11% (3/28) of studies assessing 15 to 18 items, 7% (2/28) of studies assessing 21 to 26 items, 11% (3/28) of studies assessing 32 to 37 items, and 4% (1/28) of studies assessing 87 items.

Required Answer Time

Only 21% (6/28) of studies reported the required answer time. Across these studies, the average time to answer a prompt was 4.41 (SD 2.83; range 2-8) minutes.

Response Window

A total of 36% (10/28) of studies indicated the response or assessment window, with an average assessment window of 22.10 (SD 20.12; range 10-16) minutes per prompt.

Sampling Devices (Equipment Used)

All the included studies (28/28, 100%) reported their sampling device. A total of 39% (11/28) of studies used a smartphone as the EMA device type, whereas the other studies (17/28, 61%) used different devices, such as a PDA (5/28, 18%), Palm Tungsten E (Palm Inc; 3/28, 11%), handheld electronic organizer (3/28, 11%), Apple iPod (Apple Inc; 1/28, 4%), Palm Pilot computer (Palm Inc; 1/28, 4%), Motorola Razr (Motorola Inc) with Android (Google LLC; 1/28, 4%), Palm Zire 31 (Palm Inc; 2/28, 7%), and handheld Palm Zire 21 (Palm Inc; 1/28, 4%).

Financial Incentives for Participants

A total of 68% (19/28) of studies reported their incentive scheme. In 32% (9/28) of studies, the financial incentive was incremental; in 29% (8/28) of studies, it was fixed; 4% (1/28) of studies reported a mixed incentive scheme (fixed and incremental); and only 4% (1/28) of studies reported not using a financial incentive.

Dropout Rate for the BPD Participants

Only 25% (7/28) of studies reported the dropout rate of the participants with BPD. Of these, 7% (2/28) of studies reported no dropouts. The remaining 18% (5/28) of studies reported a dropout rate of 1.9% to 30%. The average dropout rate was 8.91% (SD 10.15%).

Meta-analysis

Overview

For the meta-analytical synthesis of compliance rates and predictors, we could only use a subset of 18 (64%) studies [12,16,17,45-48,50,52,54,56,58,60-62,64-66] drawn from the initial sample of 28 studies. This subset included all studies that used EMA to examine BPD psychopathology and also provided sufficient information that allowed the estimation of an average compliance rate for BPD or BPD features. Thus, 36% (10/28) of studies that did not provide information about compliance had to be excluded from the meta-analysis.

Compliance Rate

The pooled compliance rate among participants with BPD was 79% (95% CI 73%-84%; Figure 2). Almost complete between-study heterogeneity was observed (I2=99.7%; P<.001).

Figure 2. The pooled rate for compliance among participants with borderline personality disorder.
Potential Predictors of Compliance

In our meta-analysis, we were able to model the predictive value of 5 specific design features and procedures of EMA studies in the subgroup analysis for categorical variables (ie, diagnostic criteria for selection of participants with BPD, sampling scheme, device type, extra assessment, and total quality score) on pooled rates of compliance. None of the predictors were found to be significant in the subgroup analysis with weighted effect sizes (Table 6).

Table 6. Subgroup analyses of compliance rates for categorical moderator variables.

Subgroup | Studies, n (%) | Compliance rate (%) | 95% CI | I2 (%)
Diagnostic criteria (heterogeneity test: Q=0.39; P=.53)
  DSM^a | 16 (57) | 78.36 | 71.37-84.02 | 99.7
  ICD^b | 2 (7) | 83.8 | 64.34-93.68 | 99.4
Sampling scheme (heterogeneity test: Q=0.64; P=.42)
  Time based | 14 (50) | 77.73 | 70.13-83.84 | 99.8
  Time and event based | 4 (14) | 83.09 | 70.12-91.15 | 99.5
Device type (heterogeneity test: Q=0.00; P=.96)
  Smartphone | 7 (25) | 78.81 | 70.70-85.67 | 99.6
  Other | 11 (39) | 79.16 | 67.72-86.84 | 99.7
Quality rating (heterogeneity test: Q=2.89; P=.09)
  Low | N/A^c | N/A | N/A | N/A
  Moderate | 10 (36) | 74.45 | 65.16-81.95 | 99.6
  High | 8 (29) | 83.84 | 75.96-89.49 | 99.7
Extra assessment (heterogeneity test: Q=1.32; P=.25)
  Yes | 3 (11) | 85.5 | 71.83-93.17 | 99.4
  No | 15 (54) | 77.49 | 70.27-83.37 | 99.7

^a DSM: Diagnostic and Statistical Manual of Mental Disorders.

^b ICD: International Classification of Diseases.

^c N/A: not applicable.

Predictors With Missing Data

For 4 relevant variables (ie, number of items, required answer time, response window, and dropout rate), 43% (12/28) of studies did not report sufficient information to extract relevant data (Tables 1 and 2). Only 29% (8/28) of studies reported the actual number of items, which ranged from 6 to 37. Only 18% (5/28) of studies reported the required answer time, which ranged between 2 and 8 minutes, and only 25% (7/28) of studies indicated the response window, which ranged from 10 to 16 minutes. Finally, only 25% (7/28) of studies reported the dropout rate, which ranged between 0% and 30% across studies.


Discussion

Principal Findings

To our knowledge, this is the first systematic review, narrative review, and meta-analysis describing EMA study design features and procedures as they are used in current BPD research and estimating the effects of EMA study features and procedures on the compliance rates of participants diagnosed with BPD. The aim of the narrative review was to describe the characteristics of EMA protocols (study features and procedures) with patients with BPD using a systematic review approach. High heterogeneity was found with respect to study features and procedures across the different studies. Our results regarding the heterogeneity of features and procedures in BPD research using EMA are in line with the general trends in broad EMA research identified previously by Wrzus and Neubauer [26] and Vachon et al [27]. Considering the above, we can point out that similar pitfalls and challenges related to such heterogeneity apply to EMA studies in BPD. These challenges are primarily related to the problem of integrating and aggregating data stemming from studies that use very different design features and procedures. In short, similar to the broad EMA use in health research, a more homogeneous set of features and procedures could allow for more direct comparisons and benefit the overall advancement of the field, further highlighting the benefits of using EMA for BPD psychopathology research.

Considering the need for more standardized EMA designs and the crucial importance of compliance rates to obtain robust data sets in EMA research of BPD, the aim of the meta-analysis was to estimate the effects of specific design features and procedures on participant compliance rates. Across the studies included in the meta-analysis, the pooled compliance rate among participants with BPD was estimated to be 79%, which is very similar to the 80% compliance rate recommended by Stone and Shiffman [8]. This suggests that, on average, EMA research is feasible for participants with BPD. Nevertheless, reported compliance rates varied considerably across the included studies, from 52% [45] to 95% [54]. This high variability suggests that some design features or procedures may have a bearing on compliance rates and thus justifies a specific exploration of such potential effects. However, contrary to our predictions, we could not identify features or procedures of EMA designs that have an effect on compliance. It is important to note that only a small set of studies was included in the analysis, and only a limited number of design features and procedures could be statistically tested. On the basis of an analysis of the available design features and procedures, study characteristics, and outcomes, it is possible that compliance is robust and stable across varying EMA prompt calibrations and designs with participants with BPD. Moreover, our analysis suggests that the compliance of participants with BPD tolerates a considerable variety of design features and procedures of EMA studies, including prompt intensities, the measurement period of the study, the number of items in each prompt, the estimated time for the completion of each prompt, and the time window from the prompt signal to answering the prompt. No effect was found for other relevant variables such as the nature of the incentive for participants and the technology used (device type). However, further investigation is warranted once more studies are available. In such studies, it would be useful if authors added information regarding procedures and design features, including the methods for enhancing participation and compliance, such as phone calls, intensive monitoring, and variable incentives. It would also be essential for future EMA research of BPD to have more standardized criteria for the reporting of technical aspects, design features, and procedures. Such standardized criteria would allow for a more focused and comprehensive testing of variables that may have a bearing on compliance. Thus, EMA designs can be strengthened and yield a full range of benefits to our field of study.

Future Directions for EMA Studies in BPD: Can We Agree on the Reporting of Basic Design Features and Procedures of EMA Studies?

The third aim of this study was to provide recommendations for future studies that adopt EMA for research in BPD psychopathology, specifically regarding the choice of design features and procedures that can enhance compliance [67]. Our study found that EMA studies use vastly varying setups. These include divergent within-day prompt frequencies, measurement periods, compensation schedules, numbers of items, required answering times, assessment windows, device types, and extra assessment plans. EMA is extremely useful for understanding complex dynamics in BPD psychopathology; however, the variety of designs and setups currently precludes the aggregation of results across studies and makes it difficult to learn which study setups best foster compliance. Thus, it is rather difficult to draw stable conclusions regarding EMA findings. Shiffman and Stone [9] have established some general guidelines for designing and reporting EMA studies, but they did not provide recommendations for the actual reporting of the features and procedures of EMA designs in the studies or in supplemental materials. Some reporting guidelines have been widely adopted in other types of studies. For example, STROBE (Strengthening the Reporting of Observational Studies in Epidemiology) [68] is a well-known checklist for observational studies. It consists of 22 items related to each section of the article (ie, title, abstract, introduction, methods, results, and discussion) with the aim of improving the quality of the reports. Following the same principle, a recent systematic review of methods and procedures used in EMA of nutrition and physical activity research in youths [69] developed a comprehensive checklist of specific items to be reported in EMA studies: the Checklist for Reporting EMA Studies. More recently, Trull et al [71] made an effort to fill the gap in standardization of how data and procedures are reported in EMA studies related to psychopathology and published a review of recommended reporting guidelines and current practices for ambulatory assessment research in psychopathology. The authors reviewed publications from the last 7 years (2012-2018) in 3 major mental health journals (Journal of Abnormal Psychology, Psychological Medicine, and Clinical Psychological Science) and integrated the information on how the various studies reported the methods used in EMA with some previously existing guidelines for EMA study reports, making an effort to provide recommendations for future EMA studies in mental health. Despite the growing number of EMA studies in BPD, there is still a lack of such specific guidelines. Santangelo et al [67] have already made an effort to synthesize the EMA literature on BPD symptomatology. They discussed the different characteristics of EMA studies, providing an overview of EMA findings in BPD based on DSM-IV criteria. The authors also discussed different challenges of EMA and provided recommendations to address them. However, they did not specifically focus on the technical aspects of EMA as this study did. It is likely that to produce specific recommendations, as suggested by Santangelo et al [67], different setups need to be compared with each other in terms of their relative benefits for compliance, including additional variables and procedures used to enhance compliance.
Therefore, standard reporting of design features and procedures of EMA studies is required to fully understand how to optimize participation and obtain robust data sets that can then be combined and compared across EMA studies that examine BPD psychopathology.

Limitations

Although this study is innovative and unique in its contribution to EMA research on BPD psychopathology, the results must be interpreted with certain limitations in mind. First, despite the fact that there are numerous published studies that use EMA to investigate BPD (we found 79 studies in our literature search), only approximately one-third of all published studies (28 of the 79 studies identified in our literature search, 35%) that use EMA for BPD psychopathology actually provide specific information that allows us to extract relevant design features and procedures. Thus, our narrative review was based on a limited number of studies and a limited number of design features and procedures.

Second, available studies were even fewer when we attempted to quantitatively analyze the predictors of compliance rates. Only about a quarter of all studies identified (18 of the 79 studies identified in the search, 23%) provided enough information to calculate compliance rates (eg, they provided the theoretical number of prompts to be answered and the actual average number of answered prompts among participants in the study). Trull et al [71] have already warned that systematic evaluation and aggregation of the findings of EMA studies by meta-analysis critically depends on the clear and explicit reporting of study characteristics. Among the studies included in this review, most studies did not report several design features and procedures of EMA, including the number of items, required answer time, assessment window, and dropout. Therefore, we were unable to perform quantitative analyses for these variables using the included studies to examine their influence on compliance.

Third, we found high heterogeneity in compliance rates, which was not explained by selected predictors (ie, study design features and procedures) or subgroup analyses (ie, age groups and sex). Furthermore, we were not able to examine the effect of patient-level data in our meta-regression analysis (ie, percentage of females and mean age of sample) because the results of a meta-regression based on the study means of patient-level variables are prone to bias (ie, ecological fallacy) [72].

Fourth, additional variables that were not examined in our study may have an influence on compliance. For example, it is possible that monitoring compliance and calling participants to improve participation may foster compliance. An important point to be noted is that there are only a few studies that we could include because other studies lacked the design variables examined in this systematic review and meta-analysis. In addition, within the included studies, a considerable amount of key information to be analyzed as moderators of compliance was missing (eg, the number of items and required answer time).

Finally, regarding compliance, there may be a systematic publication bias. Studies with very low compliance likely do not yield valid results, so publication is reasonably withheld. This means that there may be a restriction in the reported range of compliance rates. Therefore, we can say that the conclusions of this study apply to EMA setups that have been relatively successful in securing participation across studies, which, by definition, raises the average compliance rates. Indeed, much could be learned from studies that did not yield acceptable compliance rates, but this literature is typically not available in published form.

Conclusions

This systematic review and meta-analysis described the design features and procedures used in EMA studies investigating BPD and reported that overall compliance rates among the included studies appeared to be stable and acceptable across different EMA calibrations and study designs. In addition, our findings emphasize the importance of moving toward standardized reporting of EMA designs (eg, prompt frequency) and findings (eg, variability indices) to improve and aggregate our knowledge of basic BPD psychopathology research using EMA. As early indications are that compliance might be robust and stable across varying setups, we (1) propose that future EMA studies should consider other relevant design features and procedures, such as indices of variability, when designing EMA protocols to investigate BPD; (2) hypothesize that there may be nonspecific factors of EMA study design that may have an impact on compliance rates, which should be addressed in future EMA studies of patients with BPD; and finally, (3) suggest that standard reporting of the design features and procedures of EMA is thus required for future studies.

Acknowledgments

This study was funded by Foundation Botnar. The first author was funded by the National Agency for Research and Development/scholarship program/DOCTORADO BECAS CHILE/2019-21190831. This project was supported by the National Agency for Research and Development Millennium Science Initiative and Millennium Institute for Research on Depression and Personality-MIDAP ICS13_005.

The results of this study will be submitted for presentation in a panel at the 54th International Annual Meeting of the Society for Psychotherapy Research in Dublin from June 21 to 24, 2023.

Data Availability

Data will be made available upon reasonable request from the authors.

Conflicts of Interest

None declared.

Multimedia Appendix 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

PDF File (Adobe PDF File), 1332 KB

Multimedia Appendix 2

Search terms used in the individual databases.

DOCX File , 14 KB

  1. Ellison WD, Rosenstein LK, Morgan TA, Zimmerman M. Community and clinical epidemiology of borderline personality disorder. Psychiatr Clin North Am 2018 Dec;41(4):561-573. [CrossRef] [Medline]
  2. Bohus M, Stoffers-Winterling J, Sharp C, Krause-Utz A, Schmahl C, Lieb K. Borderline personality disorder. Lancet 2021 Oct 23;398(10310):1528-1540. [CrossRef] [Medline]
  3. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5). 5th edition. Washington, DC, USA: American Psychiatric Association; 2013.
  4. Bach B, Kramer U, Doering S, di Giacomo E, Hutsebaut J, Kaera A, et al. The ICD-11 classification of personality disorders: a European perspective on challenges and opportunities. Borderline Personal Disord Emot Dysregul 2022 Apr 01;9(1):12 [FREE Full text] [CrossRef] [Medline]
  5. Gunderson JG, Herpertz SC, Skodol AE, Torgersen S, Zanarini MC. Borderline personality disorder. Nat Rev Dis Primers 2018 May 24;4:18029. [CrossRef] [Medline]
  6. Dixon-Gordon KL, Peters JR, Fertuck EA, Yen S. Emotional processes in borderline personality disorder: an update for clinical practice. J Psychother Integr 2017;27(4):425-438 [FREE Full text] [CrossRef] [Medline]
  7. Sadikaj G, Russell JJ, Moskowitz DS, Paris J. Affect dysregulation in individuals with borderline personality disorder: persistence and interpersonal triggers. J Pers Assess 2010 Nov;92(6):490-500. [CrossRef] [Medline]
  8. Stone AA, Shiffman S. Capturing momentary, self-report data: a proposal for reporting guidelines. Ann Behav Med 2002;24(3):236-243. [CrossRef] [Medline]
  9. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol 2008;4:1-32. [CrossRef] [Medline]
  10. Myin-Germeys I, Kasanova Z, Vaessen T, Vachon H, Kirtley O, Viechtbauer W, et al. Experience sampling methodology in mental health research: new insights and technical developments. World Psychiatry 2018 Jun;17(2):123-132 [FREE Full text] [CrossRef] [Medline]
  11. Trull TJ, Ebner-Priemer U. Ambulatory assessment. Annu Rev Clin Psychol 2013;9:151-176 [FREE Full text] [CrossRef] [Medline]
  12. Santangelo PS, Reinhard I, Koudela-Hamila S, Bohus M, Holtmann J, Eid M, et al. The temporal interplay of self-esteem instability and affective instability in borderline personality disorder patients' everyday lives. J Abnorm Psychol 2017 Nov;126(8):1057-1065. [CrossRef] [Medline]
  13. Dixon-Gordon KL, Laws H. Emotional variability and inertia in daily life: links to borderline personality and depressive symptoms. J Pers Disord 2021 Mar;35(Suppl A):162-171. [CrossRef] [Medline]
  14. Stiglmayr CE, Ebner-Priemer UW, Bretz J, Behm R, Mohse M, Lammers CH, et al. Dissociative symptoms are positively related to stress in borderline personality disorder. Acta Psychiatr Scand 2008 Feb;117(2):139-147. [CrossRef] [Medline]
  15. Zeigler–Hill V, Abraham J. Borderline personality features: instability of self–esteem and affect. J Soc Clin Psychol 2006 Jun;25(6):668-687 [FREE Full text] [CrossRef]
  16. Links PS, Eynan R, Heisel MJ, Barr A, Korzekwa M, McMain S, et al. Affective instability and suicidal ideation and behavior in patients with borderline personality disorder. J Pers Disord 2007 Feb;21(1):72-86. [CrossRef] [Medline]
  17. Trull TJ, Solhan MB, Tragesser SL, Jahng S, Wood PK, Piasecki TM, et al. Affective instability: measuring a core feature of borderline personality disorder with ecological momentary assessment. J Abnorm Psychol 2008 Aug;117(3):647-661. [CrossRef] [Medline]
  18. Maas J, Hietbrink L, Rinck M, Keijsers GP. Changing automatic behavior through self-monitoring: does overt change also imply implicit change? J Behav Ther Exp Psychiatry 2013 Sep;44(3):279-284. [CrossRef] [Medline]
  19. Borges LM, Nazem S, Matarazzo BB, Barnes SM, Wortzel HS. Therapeutic risk management: chain analysis of suicidal ideation and behavior. J Psychiatr Pract 2019 Jan;25(1):46-53. [CrossRef] [Medline]
  20. Ebner-Priemer UW, Trull TJ. Ecological momentary assessment of mood disorders and mood dysregulation. Psychol Assess 2009 Dec;21(4):463-475. [CrossRef] [Medline]
  21. Barta WD, Tennen H, Litt MD. Measurement reactivity in diary research. In: Mehl MR, Conner TS, editors. Handbook of Research Methods for Studying Daily Life. New York, NY, USA: Guilford Press; 2012:108-123.
  22. Napa Scollon C, Prieto CK, Diener E. Experience sampling: promises and pitfalls, strength and weaknesses. In: Diener E, editor. Assessing Well-Being: The Collected Works of Ed Diener. Dordrecht, The Netherlands: Springer; 2009:157-180.
  23. Tsanas A, Saunders KE, Bilderbeck AC, Palmius N, Osipov M, Clifford GD, et al. Daily longitudinal self-monitoring of mood variability in bipolar disorder and borderline personality disorder. J Affect Disord 2016 Nov 15;205:225-233 [FREE Full text] [CrossRef] [Medline]
  24. Berenson KR, Downey G, Rafaeli E, Coifman KG, Paquin NL. The rejection-rage contingency in borderline personality disorder. J Abnorm Psychol 2011 Aug;120(3):681-690 [FREE Full text] [CrossRef] [Medline]
  25. Rintala A, Wampers M, Myin-Germeys I, Viechtbauer W. Response compliance and predictors thereof in studies using the experience sampling method. Psychol Assess 2019 Feb;31(2):226-235. [CrossRef] [Medline]
  26. Wrzus C, Neubauer AB. Ecological momentary assessment: a meta-analysis on designs, samples, and compliance across research fields. Assessment (forthcoming) 2022 Jan 11:10731911211067538 [FREE Full text] [CrossRef] [Medline]
  27. Vachon H, Viechtbauer W, Rintala A, Myin-Germeys I. Compliance and retention with the experience sampling method over the continuum of severe mental disorders: meta-analysis and recommendations. J Med Internet Res 2019 Dec 06;21(12):e14475 [FREE Full text] [CrossRef] [Medline]
  28. Smyth JM, Jones DR, Wen CK, Materia FT, Schneider S, Stone A. Influence of ecological momentary assessment study design features on reported willingness to participate and perceptions of potential research studies: an experimental study. BMJ Open 2021 Jul 30;11(7):e049154 [FREE Full text] [CrossRef] [Medline]
  29. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015 Jan 01;4(1):1 [FREE Full text] [CrossRef] [Medline]
  30. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000 Apr 19;283(15):2008-2012. [CrossRef] [Medline]
  31. Covidence: Better Systematic Review Management. Covidence.   URL: https://www.covidence.org/home [accessed 2023-02-15]
  32. Ebner-Priemer UW, Houben M, Santangelo P, Kleindienst N, Tuerlinckx F, Oravecz Z, et al. Unraveling affective dysregulation in borderline personality disorder: a theoretical model and empirical evidence. J Abnorm Psychol 2015 Feb;124(1):186-198. [CrossRef] [Medline]
  33. Fleming MN, Wycoff AM, Hepp J, Griffin SA, Helle AC, Freeman LK, et al. A daily-life study of interpersonal stressors and alcohol use in individuals with borderline personality disorder and community controls. Drug Alcohol Depend 2021 Nov 01;228:109021 [FREE Full text] [CrossRef] [Medline]
  34. Koenig J, Klier J, Parzer P, Santangelo P, Resch F, Ebner-Priemer U, et al. High-frequency ecological momentary assessment of emotional and interpersonal states preceding and following self-injury in female adolescents. Eur Child Adolesc Psychiatry 2021 Aug;30(8):1299-1308 [FREE Full text] [CrossRef] [Medline]
  35. Mackesy-Amiti ME, Donenberg G. Negative affect and emotion dysregulation among people who inject drugs: an ecological momentary assessment study. Psychol Addict Behav 2020 Sep;34(6):650-659 [FREE Full text] [CrossRef] [Medline]
  36. Law MK, Fleeson W, Arnold EM, Furr RM. Using negative emotions to trace the experience of borderline personality pathology: interconnected relationships revealed in an experience sampling study. J Pers Disord 2016 Feb;30(1):52-70 [FREE Full text] [CrossRef] [Medline]
  37. Scott LN, Wright AG, Beeney JE, Lazarus SA, Pilkonis PA, Stepp SD. Borderline personality disorder symptoms and aggression: a within-person process model. J Abnorm Psychol 2017 May;126(4):429-440 [FREE Full text] [CrossRef] [Medline]
  38. Selby EA, Ribeiro JD, Joiner Jr TE. What dreams may come: emotional cascades and nightmares in borderline personality disorder. Dreaming 2013 Jun;23(2):126-144 [FREE Full text] [CrossRef]
  39. Wells GA, Shea B, O'Connell D, Peterson J, Welch V, Losos M, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of non-randomized studies in meta-analysis. The Ottawa Hospital Research Institute. 2000.   URL: https://www.ohri.ca/programs/clinical_epidemiology/oxford.asp [accessed 2023-02-15]
  40. R Core Team. R: a language and environment for statistical computing. R Foundation for Statistical Computing. 2022.   URL: https://www.r-project.org/ [accessed 2023-02-15]
  41. Cuijpers P, Huibers M, Ebert DD, Koole SL, Andersson G. How much psychotherapy is needed to treat depression? A metaregression analysis. J Affect Disord 2013 Jul;149(1-3):1-13. [CrossRef] [Medline]
  42. Higgins JP, Thompson SG, Deeks JJ, Altman DG. Measuring inconsistency in meta-analyses. BMJ 2003 Sep 06;327(7414):557-560 [FREE Full text] [CrossRef] [Medline]
  43. Borenstein M, Higgins JP, Hedges LV, Rothstein HR. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res Synth Methods 2017 Mar;8(1):5-18. [CrossRef] [Medline]
  44. Björkenstam E, Björkenstam C, Holm H, Gerdin B, Ekselius L. Excess cause-specific mortality in in-patient-treated individuals with personality disorder: 25-year nationwide population-based study. Br J Psychiatry 2015 Oct;207(4):339-345. [CrossRef] [Medline]
  45. Andrewes HE, Hulbert C, Cotton SM, Betts J, Chanen AM. Ecological momentary assessment of nonsuicidal self-injury in youth with borderline personality disorder. Personal Disord 2017 Oct;8(4):357-365. [CrossRef] [Medline]
  46. Briones-Buixassa L, Alí Í, Schmidt C, Nicolaou S, Pascual JC, Soler J, et al. Predicting non-suicidal self-injury in young adults with and without borderline personality disorder: a multilevel approach combining ecological momentary assessment and self-report measures. Psychiatr Q 2021 Sep;92(3):1035-1054. [CrossRef] [Medline]
  47. Chaudhury SR, Galfalvy H, Biggs E, Choo TH, Mann JJ, Stanley B. Affect in response to stressors and coping strategies: an ecological momentary assessment study of borderline personality disorder. Borderline Personal Disord Emot Dysregul 2017 May 21;4:8 [FREE Full text] [CrossRef] [Medline]
  48. Coifman KG, Berenson KR, Rafaeli E, Downey G. From negative to positive and back again: polarized affective and relational experience in borderline personality disorder. J Abnorm Psychol 2012 Aug;121(3):668-679. [CrossRef] [Medline]
  49. Ebner-Priemer U, Kuo J, Welch S, Thielgen T, Witte S, Bohus M, et al. A valence-dependent group-specific recall bias of retrospective self-reports: a study of borderline personality disorder in everyday life. J Nerv Ment Dis 2006 Oct;194(10):774-779. [CrossRef] [Medline]
  50. Ellison WD, Levy KN, Newman MG, Pincus AL, Wilson SJ, Molenaar PC. Dynamics among borderline personality and anxiety features in psychotherapy outpatients: an exploration of nomothetic and idiographic patterns. Personal Disord 2020 Mar;11(2):131-140 [FREE Full text] [CrossRef] [Medline]
  51. Hawkins AA, Furr RM, Arnold EM, Law MK, Mneimne M, Fleeson W. The structure of borderline personality disorder symptoms: a multi-method, multi-sample examination. Personal Disord 2014 Oct;5(4):380-389 [FREE Full text] [CrossRef] [Medline]
  52. Houben M, Vansteelandt K, Claes L, Sienaert P, Berens A, Sleuwaegen E, et al. Emotional switching in borderline personality disorder: a daily life study. Personal Disord 2016 Jan;7(1):50-60. [CrossRef] [Medline]
  53. Kaurin A, Beeney JE, Stepp SD, Scott LN, Woods WC, Pilkonis PA, et al. Attachment and borderline personality disorder: differential effects on situational socio-affective processes. Affect Sci 2020 Sep;1(3):117-127 [FREE Full text] [CrossRef] [Medline]
  54. Köhling J, Moessner M, Ehrenthal JC, Bauer S, Cierpka M, Kämmerer A, et al. Affective instability and reactivity in depressed patients with and without borderline pathology. J Pers Disord 2016 Dec;30(6):776-795. [CrossRef] [Medline]
  55. Lane SP, Carpenter RW, Sher KJ, Trull TJ. Alcohol craving and consumption in borderline personality disorder: when, where, and with whom. Clin Psychol Sci 2016 Sep;4(5):775-792 [FREE Full text] [CrossRef] [Medline]
  56. Moukhtarian TR, Reinhard I, Morillas-Romero A, Ryckaert C, Mowlem F, Bozhilova N, et al. Wandering minds in attention-deficit/hyperactivity disorder and borderline personality disorder. Eur Neuropsychopharmacol 2020 Sep;38:98-109. [CrossRef] [Medline]
  57. Rizk MM, Choo TH, Galfalvy H, Biggs E, Brodsky BS, Oquendo MA, et al. Variability in suicidal ideation is associated with affective instability in suicide attempters with borderline personality disorder. Psychiatry 2019;82(2):173-178 [FREE Full text] [CrossRef] [Medline]
  58. Scala JW, Levy KN, Johnson BN, Kivity Y, Ellison WD, Pincus AL, et al. The role of negative affect and self-concept clarity in predicting self-injurious urges in borderline personality disorder using ecological momentary assessment. J Pers Disord 2018 Jan;32(Suppl):36-57. [CrossRef] [Medline]
  59. Selby EA, Kondratyuk S, Lindqvist J, Fehling K, Kranzler A. Temporal Bayesian Network modeling approach to evaluating the emotional cascade model of borderline personality disorder. Personal Disord 2021 Jan;12(1):39-50. [CrossRef] [Medline]
  60. Solhan MB, Trull TJ, Jahng S, Wood PK. Clinical assessment of affective instability: comparing EMA indices, questionnaire reports, and retrospective recall. Psychol Assess 2009 Sep;21(3):425-436 [FREE Full text] [CrossRef] [Medline]
  61. Southward MW, Semcho SA, Stumpp NE, MacLean DL, Sauer-Zavala S. A day in the life of borderline personality disorder: a preliminary analysis of within-day emotion generation and regulation. J Psychopathol Behav Assess 2020 Dec;42(4):702-713 [FREE Full text] [CrossRef] [Medline]
  62. Veilleux JC, Warner EA, Baker DE, Chamberlain KD. Beliefs about emotion shift dynamically alongside momentary affect. J Pers Disord 2021 Mar;35(Suppl A):83-113. [CrossRef] [Medline]
  63. Weise S, Parzer P, Zimmermann R, Fürer L, Resch F, Kaess M, et al. Emotion dysregulation and resting-state autonomic function in adolescent borderline personality disorder-a multimodal assessment approach. Personal Disord 2020 Jan;11(1):46-53. [CrossRef] [Medline]
  64. Wycoff AM, Carpenter RW, Hepp J, Lane SP, Trull TJ. Drinking motives moderate daily-life associations between affect and alcohol use in individuals with borderline personality disorder. Psychol Addict Behav 2020 Nov;34(7):745-755 [FREE Full text] [CrossRef] [Medline]
  65. Zaehringer J, Ende G, Santangelo P, Kleindienst N, Ruf M, Bertsch K, et al. Improved emotion regulation after neurofeedback: a single-arm trial in patients with borderline personality disorder. Neuroimage Clin 2019;24:102032 [FREE Full text] [CrossRef] [Medline]
  66. Zaki LF, Coifman KG, Rafaeli E, Berenson KR, Downey G. Emotion differentiation as a protective factor against nonsuicidal self-injury in borderline personality disorder. Behav Ther 2013 Sep;44(3):529-540. [CrossRef] [Medline]
  67. Santangelo P, Bohus M, Ebner-Priemer UW. Ecological momentary assessment in borderline personality disorder: a review of recent findings and methodological challenges. J Pers Disord 2014 Aug;28(4):555-576. [CrossRef] [Medline]
  68. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP, STROBE Initiative. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med 2007 Oct;45(4):247-251. [CrossRef] [Medline]
  69. Liao Y, Skelton K, Dunton G, Bruening M. A systematic review of methods and procedures used in ecological momentary assessments of diet and physical activity research in youth: an adapted STROBE Checklist for Reporting EMA Studies (CREMAS). J Med Internet Res 2016 Jun 21;18(6):e151 [FREE Full text] [CrossRef] [Medline]
  70. Wright AGC, Hallquist MN, Stepp SD, Scott LN, Beeney JE, Lazarus SA, et al. Modeling heterogeneity in momentary interpersonal and affective dynamic processes in borderline personality disorder. Assessment 2016 Aug;23(4):484-495 [FREE Full text] [CrossRef] [Medline]
  71. Trull TJ, Ebner-Priemer UW. Ambulatory assessment in psychopathology research: a review of recommended reporting guidelines and current practices. J Abnorm Psychol 2020 Jan;129(1):56-63. [CrossRef] [Medline]
  72. Thompson SG, Higgins JP. How should meta-regression analyses be undertaken and interpreted? Stat Med 2002 Jun 15;21(11):1559-1573. [CrossRef] [Medline]


BPD: borderline personality disorder
DSM: Diagnostic and Statistical Manual of Mental Disorders
EMA: ecological momentary assessment
MOOSE: Meta-Analyses of Observational Studies in Epidemiology
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
STROBE: Strengthening the Reporting of Observational Studies in Epidemiology


Edited by T Leung; submitted 06.12.22; peer-reviewed by J Betts, S Mangelsdorf; comments to author 28.12.22; revised version received 17.01.23; accepted 31.01.23; published 15.03.23

Copyright

©Antonella Davanzo, Delfine d'Huart, Süheyla Seker, Markus Moessner, Ronan Zimmermann, Klaus Schmeck, Alex Behn. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 15.03.2023.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.