
Published in Vol 28 (2026)

This is a member publication of University of Bristol (Jisc)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/79291.
A Health-Related Digital Ecological Momentary Assessment in Children (Aged 5-11 Years): Systematic Review


School of Engineering Mathematics and Technology, University of Bristol, 1 Cathedral Square, Bristol, United Kingdom

Corresponding Author:

Sydney Charitos, MSc


Background: Digital ecological momentary assessment (EMA) collects data on experiences as they occur in daily life, capturing dynamic, context-sensitive experiences often missed by retrospective reporting. While EMA shows promise for pediatric health research, preadolescents have distinct socioemotional and cognitive characteristics likely to affect engagement. Existing reviews have not focused on the acceptability and feasibility of EMA protocols for this age group.

Objective: This review aimed to examine digital EMA protocols used with children aged 5‐11 years across health domains, focusing on protocol characteristics, acceptability, and feasibility. We address 3 research questions (RQs)—RQ1: What are the characteristics of these protocols? RQ2: What is the feasibility and acceptability of these protocols? RQ3: What are the characteristics of high and low response rate protocols?

Methods: We searched 10 databases (CINAHL, Embase, ACM Digital Library, IEEE Xplore, Cochrane Library, PsycINFO, Web of Science, PubMed, Scopus, and MEDLINE) for peer-reviewed studies published up to October 2025. Eligible studies used EMA with children aged 5‐11 years to collect health data via digital devices. Two researchers independently screened studies (SC and LT); one (SC) conducted quality assessment and data extraction. Findings were narratively synthesized.

Results: We identified 17 protocols across 37 studies. Most targeted nonclinical populations, used handheld devices, spanned 3‐28 days, and applied interval-contingent prompting (RQ1). Response rates were available or calculable for 15 of 17 protocols, ranging from 48% to 92% (RQ2). Six protocols reported response rates of ≥80%. However, key data required for pooling (eg, raw counts for planned vs completed prompts) were missing or selectively reported. This contributed to 13 of 17 protocols being rated at critical risk of bias (ROBINS-I, v2). As a result, the strength of evidence was limited by poor reporting and high risk of bias. Facilitators included uncomplicated, engaging technology, reminders, and caregiver involvement. Barriers included device burden, restricted device access, difficulty with accurate reporting, stigma, limited device awareness, and insufficient caregiver support. High-response protocols (≥80%) often involved older children or clinical groups, ≥3-week duration, fixed schedules (≥20 items per prompt, 3 or 4 times per day), timing customization, and incentives (RQ3).

Conclusions: This review provides the first systematic synthesis on preadolescents, offering insight into EMA protocol design beyond prior work treating children as a single group. By examining 17 EMA protocols, the review identifies gaps in developmental appropriateness and reporting quality, highlighting where the evidence may differ from adolescent and adult EMA research. The results suggest that digital EMA for preadolescents requires greater focus on child-centered design to increase acceptability and adherence, alongside improved reporting standards, so protocols can be meaningfully compared. With these advances, EMA could be more effectively integrated into pediatric health monitoring, tailored to the needs of different age groups.

Trial Registration: PROSPERO CRD42022373812; https://www.crd.york.ac.uk/PROSPERO/view/CRD42022373812

J Med Internet Res 2026;28:e79291

doi:10.2196/79291


Background

Ecological momentary assessment (EMA) is a research method used to collect data on individuals’ behaviors, experiences, and physiological states as they occur in real time and in everyday settings [1]. EMA has proven particularly valuable in health research because it captures experiences that are often dynamic, personal, and shaped by context [2-4]. For example, symptoms such as pain, fatigue, or mood often fluctuate over short periods and may be influenced by daily routines, social interactions, or environmental triggers [5-7]. In contrast, traditional methods, such as retrospective reporting, may struggle to measure these experiences accurately due to recall bias and the influence of recency effects and emotional salience [1]. In recent years, the increasing availability of smartphones and wearable devices has supported the integration of EMA into individuals’ daily routines [8-10]. A typical EMA protocol uses these devices to deliver prompts: brief surveys sent to participants at scheduled times. Each prompt includes 1 or more questions, referred to as the item count, which can vary depending on the protocol’s aims and design [1].

EMA may be a particularly useful method for capturing health-related experiences in preadolescent children (aged 5‐11 years). Children at this age demonstrate metacognitive ability, reflected in their capacity to consider their own emotions and experiences, at a level similar to that of adults [11]. In contrast, traditional retrospective self-report approaches can be especially challenging for this age group [12]. At this developmental stage, children’s long-term memory is still maturing, which can limit how accurately they recall past events [13]. They can also find it difficult to verbally express their health experiences, especially in unfamiliar clinical settings or when speaking with health care professionals they do not know personally [14]. EMA may help overcome these barriers by enabling in-the-moment reporting rather than requiring reflection on past experiences.

However, a central challenge in implementing an EMA protocol is achieving both acceptability, that is, how suitable, engaging, or satisfying the method is perceived to be by participants, and feasibility, that is, how practical and realistic it is to implement [15]. These dimensions are essential for ensuring sustained participation and the collection of high-quality data. As a result, researchers have explored a variety of methodological adaptations aimed at improving protocol acceptability and feasibility. For example, micro-EMA is a protocol design that uses ultrabrief prompts (within a few seconds) to reduce participant burden [16]. Researchers have also incorporated context-aware delivery methods that adjust prompts based on user behavior or timing to minimize disruption [17,18]. Such adaptations reflect the ongoing evolution of EMA as researchers refine approaches to better support acceptability and feasibility across a range of participants and contexts.

EMA has consistently demonstrated acceptability and feasibility in adult populations, with average response rates (the proportion of scheduled prompts completed by participants) frequently exceeding 80% [19], a standard of high response rate in EMA studies [20]. Although certain protocol features, such as item count and prompt frequency, can influence response rates, adults generally maintain strong response levels (above 70%) despite these variations [19]. A factorial experiment designed to isolate the individual effects of specific protocol characteristics found that common variations did not significantly affect response rates [21]. Instead, individual differences and the interactions between protocol elements were identified as having a more substantial influence on response rates. These findings indicate that EMA protocols for adult populations can be flexibly adapted to specific research requirements without substantially compromising response rates.

Compared with adults, existing evidence on response rates in EMA protocols for preadolescent children remains limited and inconclusive. A 2023 meta-analysis suggested that age may not be a strong predictor of EMA response rate; however, this analysis included only 6 protocols involving children younger than 12 years (1.2% of the total sample), limiting confidence in the finding for this age group [22]. Furthermore, many reviews aggregate data across preadolescent (aged 5-11 years) and adolescent (12-18 years) age groups, making it difficult to isolate trends specific to younger children [23-25]. For example, Wen et al [23] examined mobile EMA with “children and adolescents,” but only one of their included studies overlaps with the present review, illustrating how younger children are rarely the focus of EMA research. Their review focused on mobile EMA in health science databases and largely adolescent samples, whereas the present review searched a wider range of interdisciplinary sources, including Human-Computer Interaction (HCI) venues, and targeted protocols specifically designed for preadolescent children (aged 5-11 years). Within this aggregation, there is some evidence that children and adolescents may exhibit lower response rates than adults; a systematic review reported an average compliance rate of 78.3% among pediatric samples, below the levels typically observed in adult EMA studies [23]. Despite this tendency to group age bands or assume similar patterns across age groups, systematic reviews nonetheless emphasize the importance of adapting EMA protocols to better meet the developmental and contextual needs of younger children [23,24,26]. Recommendations include restricting prompts to occur outside school hours and limiting internet access to support caregiver oversight [24].
While such recommendations are noted within existing systematic reviews, they are not typically the primary focus, underscoring the need for more targeted research on how EMA protocols can be effectively adapted for younger children.

Aims

This systematic review aimed to investigate how EMA protocols are being used with children aged 5-11 years, across health domains and population characteristics, focusing on their acceptability and feasibility for children. It builds on the established literature on children’s adherence to EMA protocols by investigating the acceptability and feasibility of different protocol characteristics and by examining how current protocols attempt to improve adherence. We investigated the following research questions (RQs): (1) What are the characteristics of health-related EMA protocols being used with children aged 5-11 years? (2) What is the feasibility and acceptability of these EMA protocols with children aged 5-11 years? (3) What characteristics of EMA protocols (RQ1) are related to high and low response rates (RQ2) when using EMA with children aged 5-11 years?


Methods

This review was registered in the PROSPERO database and follows the guidance of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [27]. Completed PRISMA and PRISMA-S checklists are provided.

Search Strategy

The literature search was first conducted on March 13, 2023, and updated on October 28, 2024, and October 15, 2025, in the following databases: CINAHL, ACM Digital Library, PsycINFO, Embase, MEDLINE, Cochrane Library, IEEE Xplore, PubMed, Scopus, and Web of Science. Each database was searched via its native platform (eg, MEDLINE via Ovid, PsycINFO via Ovid, and Scopus via Elsevier), with full platform-to-database mappings and exact search strings provided in the PRISMA-S (PRISMA Search) checklist. These databases were chosen to span multiple disciplines, including medical and HCI research. The search included key terms for (1) children aged 5-11 years, such as “school child” or “minor,” and (2) EMA, such as “Experience Sampling” or “Ambulatory Assessment” (Multimedia Appendix 1 [28,29] provides the full list of search terms used). No limits or restrictions were placed on searches other than date restrictions for the 2024 and 2025 search updates. The search strategy was not formally peer reviewed but was discussed with the wider research team. No study registries, conference proceedings, organizational websites, or gray literature sources were searched, as the review focused on peer-reviewed empirical studies.

Inclusion and Exclusion Criteria

Studies were included if they fulfilled the following criteria: (1) peer-reviewed empirical studies with any study design, (2) published in English, (3) the research used EMA to collect health-related information (physical, mental, or social) on a child (aged 5-11 years), reported either by the child themselves or by proxy by a parent or caregiver, and (4) a digital technology (eg, smartphone) was used to collect EMA data. For the purpose of this review, EMA is defined as the assessment of a phenomenon in natural settings at least twice a day [30]. Studies were excluded if (1) the study included children younger than 5 years or older than 11 years and did not report adaptations specific to the preadolescent age range (5-11 years); protocols designed for a broader group of children (eg, aged 5-18 years) without age-specific considerations were excluded, as the focus of this review was on protocols intentionally tailored for preadolescents; or (2) the paper was a review of previously reported studies, a protocol, or a nonempirical or nonscientific publication (eg, a book chapter), or the full text was not available.

Screening Procedures

The search results were imported into Zotero (Corporation for Digital Scholarship) and duplicates were removed. In stage 1, each title and abstract was independently screened by 2 authors (SC and LT) for relevance. If an inclusion decision could not be made based on the title and abstract, the paper was included for full-text review. Papers deemed relevant at the title- and abstract-screening stage moved to stage 2, where they were independently double-screened against the inclusion and exclusion criteria using the data management platform Rayyan (Rayyan Systems Inc), with reasons for exclusion recorded. Disagreements at both stages were discussed and resolved in meetings by the reviewers (SC and LT), with persistent disagreements resolved through discussion with the wider research team (AB and JB). If the full text was not available through institutional subscriptions, interlibrary loan, or author contact (2 email attempts), the study was excluded. Where the full text was available but did not contain the information needed, 2 attempts were made to contact authors by email; if the information was not provided, the study was excluded. In addition, reference lists of included studies were manually screened to identify additional eligible papers.

Data Extraction and Synthesis

Data were extracted by the primary author (SC), following the protocol in Textbox 1, which aligns with the Checklist for Reporting EMA Studies (CREMAS) [25]. Relative to our PROSPERO registration, methods were adapted during analysis: RQ3 was added to examine protocol features linked to response rates, while planned technology-based taxonomies were not applied due to insufficient reporting. A quantitative meta-analysis was not feasible, as key data required for pooling (eg, raw counts for planned vs completed prompts and corresponding variance estimates) were missing or selectively reported across protocols. This missingness contributed to 13 of 17 protocols being judged at critical risk of bias during quality assessment (see Multimedia Appendix 2 [31-60] for full assessments and Figure 1 for a summary). As a result, a narrative synthesis was undertaken [61]. Pooled response rate estimates were calculated for the subset of protocols not judged at critical risk of bias using a random-effects model, with mean response rates, 95% CIs, and a prediction interval reported.
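The random-effects pooling described above can be sketched in a few lines. The following is an illustrative DerSimonian-Laird implementation with hypothetical response rates and prompt counts, not the analysis code used in the review; function and variable names are ours.

```python
import math

def pool_random_effects(rates, ns):
    """DerSimonian-Laird random-effects pooling of response rates (proportions).

    rates: per-protocol response rates (completed / planned prompts)
    ns: planned prompt counts per protocol (used for each rate's variance)
    Returns (pooled_rate, 95% CI, 95% prediction interval).
    """
    # Within-protocol variance of a proportion: p(1 - p) / n
    v = [p * (1 - p) / n for p, n in zip(rates, ns)]
    w = [1 / vi for vi in v]  # fixed-effect (inverse-variance) weights
    fixed = sum(wi * p for wi, p in zip(w, rates)) / sum(w)
    # Cochran's Q and the DL estimate of between-protocol variance tau^2
    q = sum(wi * (p - fixed) ** 2 for wi, p in zip(w, rates))
    df = len(rates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights and pooled estimate
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * p for wi, p in zip(w_re, rates)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # 95% prediction interval for a new protocol (z approximation)
    pi_se = math.sqrt(tau2 + se ** 2)
    pi = (pooled - 1.96 * pi_se, pooled + 1.96 * pi_se)
    return pooled, ci, pi

# Hypothetical response rates and planned prompt counts for 4 protocols
rates = [0.92, 0.85, 0.71, 0.80]
planned = [840, 510, 168, 56]
pooled, ci, pi = pool_random_effects(rates, planned)
```

By construction, the prediction interval is at least as wide as the CI, reflecting the extra between-protocol heterogeneity (tau^2) a new protocol would face.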

The search identified 31 eligible papers describing 37 studies. As several papers reported on the same underlying methodology, and some studies included distinct subgroups with methodological differences, these were consolidated into 17 unique EMA protocols, which formed the unit of analysis for this review. Where protocols spanned multiple papers, these were grouped under project or funding names, with “main” or “sub” labels applied where necessary. Protocols were named based on the explicit project name when provided or if this was not available, the funding body.

Textbox 1. Data extraction categories and details extracted.

General protocol aim and participants (study-level information included in Multimedia Appendix 3 [31-60]).

  • Publication details: authors, year of publication, country of publication, and publication location
  • Sample size
  • Demographics: participant age, health condition if specified
  • Protocol aim

Research question (RQ) 1: What are the characteristics of the ecological momentary assessment (EMA) protocol?

  • EMA purpose (eg, intervention or observational tool)
  • Training (included in the Checklist for Reporting EMA Studies [CREMAS]): child and parent
  • Technology (included in the CREMAS): reporter, device, software, response method, any additional devices (eg, accelerometer)
  • Monitoring period (included in the CREMAS)
  • Prompting design (included in the CREMAS): strategy (eg, interval, event-contingent), number of questions
  • Prompting frequency (included in the CREMAS)
  • Design features (included in the CREMAS): piloting (item and protocol level), reward for participation, customization, reminder systems, support networks, and protocol flexibility.

RQ 2: Is it feasible and acceptable to use digital devices to implement an EMA methodology with children between the ages of 5 and 11 years for health purposes?

  • Feasibility details: dropout, attrition, response latency, and response rate (planned vs completed prompts).
  • Acceptability details: qualitative evidence relating to acceptability and feasibility (eg, exit interviews)
  • Any information captured about acceptability and feasibility but not captured through formal qualitative or quantitative data collection, that is, authors’ interpretations in the results and discussion.

RQ 3: What characteristics of EMA protocols, RQ1, are related to high and low adherence, RQ2, when using EMA with children between the ages of 5 and 11 years?

  • Additional synthesis of data extracted from RQ1 and RQ2, see “Data Extraction and Synthesis” section for details.
Figure 1. Summary of risk of bias for the 17 identified ecological momentary assessment protocols, assessed using the Risk Of Bias In Nonrandomized Studies of Interventions, version 2 (ROBINS-I v2) and visualized with the robvis tool [33]. The figure displays domain-level and overall ratings, showing that most protocols (13 out of 17) were judged to be at critical risk of bias [31-60,62].

For RQ1, we extracted the frequency of each protocol feature and synthesized findings through descriptive statistics. To ensure consistency, ranges were reported when no overall metric was provided across subanalyses. Within the CREMAS design features category, 5 subcategories were identified to support consistent reporting: rewards for participation, piloting, customization, reminder systems, and support networks.

For RQ2, we extracted adherence-related metrics (including response rate, attrition, and latency) and qualitative data on acceptability and feasibility. If overall response rates were not reported, they were calculated from raw data (planned vs actual responses). Full calculations can be found in Multimedia Appendix 4 [31-60]. The planned metaethnography was not feasible, as only 2 of 17 protocols included formal qualitative research on acceptability and feasibility. Vilaysack et al [31] provided a summary of feedback; Norman et al [32] applied manifest content analysis to identify barriers (demanding, challenging in irregular situations) and facilitators (uncomplicated, engaging). Instead, we applied a thematic synthesis approach [63], using the framework of barriers and facilitators by Norman et al as a deductive starting point and expanding it inductively to incorporate additional author observations and participant quotes.
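Where response rates had to be derived from raw data, the calculation is straightforward arithmetic. The sketch below illustrates it with hypothetical participant numbers and prompt counts (not figures from any included protocol).

```python
def response_rate(planned_prompts, completed_prompts):
    """Overall response rate: completed prompts as a share of planned prompts."""
    if planned_prompts <= 0:
        raise ValueError("planned_prompts must be positive")
    return completed_prompts / planned_prompts

# Hypothetical protocol: 20 children, 7 days, 3 prompts per day, 403 completed
planned = 20 * 7 * 3  # 420 planned prompts
rate = response_rate(planned, 403)
print(f"{rate:.0%}")  # prints 96%
```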

For RQ3, which examined how protocol characteristics (RQ1) relate to response rate patterns (RQ2), no additional data were extracted. Instead, protocols were categorized as having a high or low response rate using the 80% threshold proposed by Stone and Shiffman [20]. Researchers then identified protocol characteristics (extracted for RQ1) found in >50% of protocols within each group.
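As a sketch of this categorization step, the following applies the 80% threshold and the >50% prevalence rule to hypothetical protocols with made-up feature labels (not the actual extracted characteristics).

```python
from collections import Counter

HIGH = 0.80  # high-response threshold (Stone and Shiffman)

# Hypothetical protocols: response rate plus extracted design features
protocols = [
    {"rate": 0.92, "features": {"fixed_schedule", "incentives", "reminders"}},
    {"rate": 0.85, "features": {"fixed_schedule", "incentives"}},
    {"rate": 0.71, "features": {"random_schedule", "reminders"}},
    {"rate": 0.48, "features": {"random_schedule"}},
]

def common_features(group):
    """Features present in more than half of the group's protocols."""
    counts = Counter(f for p in group for f in p["features"])
    return {f for f, n in counts.items() if n > len(group) / 2}

high_group = [p for p in protocols if p["rate"] >= HIGH]
low_group = [p for p in protocols if p["rate"] < HIGH]
```

Here `common_features(high_group)` would flag fixed schedules and incentives as characteristics shared by most high-response protocols in this toy example.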

Quality Assessment

One author (SC) conducted the quality assessment, discussing any borderline assessments with the wider team. We used 2 complementary tools: the CREMAS [25] and the Risk Of Bias In Nonrandomized Studies of Interventions, version 2 (ROBINS-I v2) [64]. The CREMAS is a 16-item tool designed to enhance the transparency and rigor of EMA reporting. For the purposes of this review, each item was assessed using a binary coding scheme (“present” or “not present”), with ambiguous cases conservatively marked as “not present.” The ROBINS-I v2 tool was used to evaluate risk of bias across 7 domains. Full details of both assessments are provided in Multimedia Appendix 2, with a summary of ROBINS-I v2 ratings shown in Figure 1.


Results

The PRISMA flow diagram is presented in Figure 2. In this section, we synthesize the extracted data to address our RQs as outlined in Textbox 1. Protocols are grouped by overall ROBINS-I rating (critical, serious, or moderate) and sorted within each group by response rate to facilitate comparison across quality levels.

Figure 2. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram. EMA: ecological momentary assessment.

Research question 1: What are the characteristics of the EMA protocol?

For RQ1, Tables 1-3 summarize key characteristics of the included protocols. Table 1 summarizes protocol domains and participants; Table 2 outlines training and technology features; and Table 3 details prompt implementation.

Table 1. Overview of included ecological momentary assessment protocols, including study location, sample size, child age, condition, and primary aim (RQ1).
Protocol | Country | Sample size | Child’s age (years) | Condition | Aim
Serious
Rosen and Epstein (2010) [33] | United States | 2 | 8-11 | ADHD^a | To assess links between parental stress, feeding practices, and child-eating behaviors via EMA^b.
Appelhans et al (2025) [34] | United States | 60 | 5-10 | None | To examine whether parent-supported recreational activities can displace discretionary eating and electronic entertainment.
Moschko et al (2022) [35] | Germany | 70 | 9-11 | ADHD | To investigate daily self-regulation in children with ADHD and parent-child interactions.
Vilaysack et al (2016) [31] | Australia | 10 | 5-7 | None | To assess EMA feasibility with typically developing children.
Critical
Rosen et al (2013)—Main [36] | United States | 11 | 7-11 | ADHD | To examine parent EMA proxy reports of children’s emotional dysregulation.
Alacha et al (2024) [37] | United States | 47 | 9-10 | None | To investigate effects of positive affect variability on homework problems in children with ADHD.
Jacobs (2021) [38-40] | Germany | 84 | 8-11 | ADHD | To explore links between emotional regulation, sleep, and well-being using ambulatory methods.
Rosen et al (2013)—Sub [36] | United States | 5 | 5-7 | None | To explore how children can use EMA to self-report on their own emotional dysregulation.
Family Matters B1 (2020) [41-47] | United States | 128-150 | 9-11 | None | To examine diets of low-income, racially and ethnically diverse families.
SASCHA B2 (2020) [40,48-50,62] | Germany | 108 | 5-7 | None | To assess how transition to secondary school affects well-being and academic success.
Engelen, Bundy, Lau et al (2015) [51] | Australia | 20 | 9-11 | None | To examine links between children’s activity levels and contextual factors.
SASCHA B1 (2020) [48,50,62] | Germany | 90 | 8-11 | None | To investigate self-esteem, peer ties, and academic functioning before secondary school.
FLUX (2013) [38,52-57] | Germany | 82-110 | 9-11 | ADHD | To examine how sleep, activity, affect, peers, and worry relate to working memory and well-being.
Engelen, Bundy, Bauman et al (2015) [58] | Australia | 246 | 5-7 | None | To assess EMA feasibility for describing after-school activity patterns.
Norman et al (2020) [32] | Sweden | 20 | 5-7 | None | To examine feasibility and validity of photo-based EMA for children’s diets.
Rovane et al (2025) [59] | United States | 92 | 5-11 | ASD^c | To explore links between parental emotion regulation, stress, and child behavior in ASD.
Family Matters B2 (2025) [60] | United States | 436 | 5-7 | None | To examine how parental stress, mood, and coping relate to children’s physical activity and screen time in daily life.

^a ADHD: attention-deficit/hyperactivity disorder.

^b EMA: ecological momentary assessment.

^c ASD: autism spectrum disorder.

Table 2. Summary of protocol training and technology features across ecological momentary assessment protocols with children aged 5-11 years (research question 1).
Protocol | Reporter | Training (child, parent) | Technology | Response method | Reported question domain(s)
Serious
Rosen and Epstein (2010) [33] | Parent | N^a, Y^b | PDA^c | VAS^d (11: −5 to 5), VAS (10: 1 to 10), Categorical, and Likert (5) | Affect
Appelhans et al (2025) [34] | Parent | NR^e, Y | Smartphone | Multi-Choice, Single-Choice | Activity (type), diet (food intake)
Moschko et al (2022) [35] | Child, parent | Y, NR | Child: smartphone; parent: survey (online or paper) | Likert (6) | Self-regulation, social relationships
Vilaysack et al (2016) [31] | Child | Y, Y | Smartphone | VAS (NR), Multi-Choice, Categorical | Activity
Critical
Rosen et al (2013)—Main [36] | Parent | NR, Y | PDA | VAS (NR) | Affect
Alacha et al (2024) [37] | Child | NR, Y | Mobile phone | Likert (5) | Affect
Jacobs (2021) [38-40] | Child, parent | Y, NR | Smartphone | Likert (5), Working Memory Updating Task, and Time Picker | Sleep, affect, and working memory
Rosen et al (2013)—Sub [36] | Parent | NR, Y | PDA | VAS (NR) | Affect
Family Matters B1 (2020) [41-47] | Child | NR, Y | Tablet; optional: phone, accelerometer | Categorical, VAS (NR), Multi-Choice, and Likert (NR) | Food intake, stress, sleep, affect, and activity
SASCHA B2 (2020) [40,48-50,62] | Parent | Y, Y | Smartphone (app) | Likert (5), Working Memory Updating Task, and Time Picker | Sleep, affect, social relationships, working memory, self-esteem, achievement goals, self-regulation, and academic success
Engelen, Bundy, Lau et al (2015) [51] | Child | NR, NR | PDA, accelerometer | Multi-Choice, VAS (NR) | Activity
SASCHA B1 (2020) [48,50,62] | Child | Y, Y | Smartphone | Likert (5), Working Memory Updating Task, and Time Picker | Social relationships, affect, self-esteem, achievement goals, self-regulation, working memory, and academic success
FLUX (2013) [38,52-57] | Child, parent | Y, Y | Smartphone, accelerometer | Likert (5), Working Memory Updating Task, and Time Picker | Affect, working memory, and sleep
Engelen, Bundy, Bauman et al (2015) [58] | Parent | NR, Y | PDA | Multi-Choice, VAS (NR) | Activity
Norman et al (2020) [32] | Child | NR, Y | Mobile phone | Photo, Free Text | Food intake
Rovane et al (2025) [59] | Parent | NR, Y | Smartphone | Likert (9), Categorical, and Likert (5) | Stress, behavior
Family Matters B2 (2025) [60] | Parent | NR, Y | Smartphone | Single-Choice, VAS (NR), Multi-Choice, and Likert (NR) | Diet, stress, sleep, affect, and activity

^a N: training did not happen.

^b Y: training is reported.

^c PDA: personal digital assistant.

^d VAS: visual analog scale.

^e NR: not reported.

Table 3. Summary of prompt implementation details across ecological momentary assessment protocols with children aged 5-11 years (research question 1).
Protocol | Period | Strategy | Type^a | Items, n | Frequency, n | Prompt interval
Serious
Rosen and Epstein (2010) [33] | 28 days | Interval | Fixed | 29 | 3 | Before school, after school, and evening
Appelhans et al (2025) [34] | 17 days | Interval | Pseudorandom | 4-5 | 3 | Pseudorandom times during non–school hours and through entire day on weekends
Moschko et al (2022) [35] | 18 days; 3× over 13 months | Interval | Random | 7-8 | 3 children; 1 parent | Before school, afternoon, and evening
Vilaysack et al (2016) [31] | 7 days | Interval | Random | 7 | 8 | Random during waking hours (including school)
Critical
Rosen et al (2013)—Main [36] | 28 days | Interval | Fixed | 1 | 3 | Before school, after school, and evening
Alacha et al (2024) [37] | 7 days | Interval | Fixed | 20 | 3 | Morning, afternoon or after school, and evening
Jacobs (2021) [38-40] | 21 days | Interval | NR^b | 17-25 | 4 | Before school (later on weekends), afternoon, and evening
Rosen et al (2013)—Sub [36] | 28 days | Interval | Fixed | 1 child; 2 parents | 3 | Before school, after school, and evening
Family Matters B1 (2020) [41-47] | 8-10 days | Interval, event | Fixed, random | 10-30 | 5 + event | Even split across waking hours
SASCHA B2 (2020) [40,48-50,62] | 4 weeks | Interval | NR | 28-41 | 4 | Before school, morning (including school), afternoon, and evening
Engelen, Bundy, Lau et al (2015) [51] | 4 days; 2× 13-week gap | Interval | Random | 12 | 3 | Between afternoon and evening
SASCHA B1 (2020) [48,50,62] | 4 weeks | Interval | NR | 28-41 | 4 | Before school, morning (including school), afternoon, and evening
FLUX (2013) [38,52-57] | 4 weeks | Interval | Fixed | 21-26 | 4 | Morning (including school), midday (including school), afternoon, and evening
Engelen, Bundy, Bauman et al (2015) [58] | 4 days; 2× 13-week gap | Interval | Random | 12 | 3 | Between afternoon and evening
Norman et al (2020) [32] | 3 days | Interval, event | Fixed | 1 | 1 + event | Evening
Rovane et al (2025) [59] | 7 days | Interval | Random | 3 | 5 | Random during waking hours
Family Matters B2 (2025) [60] | 7 days + | Interval | Fixed, random | 10-34 | 4 | Randomly within 3-hour window, EoD^c available later in the day

^a Studies typically described prompting as “random” or “pseudorandom,” but few reported whether constraints (eg, minimum time between prompts) were applied. Our synthesis, therefore, reflects the terminology reported by authors, acknowledging that “random” may have been implemented differently across protocols.

^b NR: not reported.

^c EoD: end of day.

General Protocol Domains and Participants

Table 1 summarizes protocol domains and participants (RQ1). Across the 17 EMA protocols, most (n=13) focused solely on a single key stage (KS): 8 on KS2 (ages 8-11 years) [33,35,36,38-40,48-50,52-57,62] and 5 on KS1 (ages 5-7 years) [31,32,41-47,51,58]. Protocols sampled typically developing children [31,32,34,36,37,40-51,58,60,62] more frequently than children with specific conditions: attention-deficit/hyperactivity disorder (ADHD; n=5) [33,35-37] and autism spectrum disorder (n=1) [59]. Female representation varied (0%-56%) but was between 40% and 56% in 13 protocols [31,32,34,35,37-58,60,62] and was often lower in condition-related protocols, with 4 of 6 having ≤23% female participants [33,35,36,59].


Reporter Training

Most protocols (n=15/17) used a single reporter: usually parent-reported in KS1 (n=7) [32,34,37,41-47,51,58-60], whereas protocols involving KS2 were split between child self-reports and parent proxy reports (n=6 child; n=7 parent) [35,36,38-40,48-50,52-57,62]. Where reported (n=5), parent respondents were predominantly female (74%‐100%) [32,33,35,41-47,59]. Child training was reported in 6 protocols and always involved verbal instruction with hands-on practice, with session durations ranging from 45 minutes to 4.5 hours when reported (n=3) [31,38-40,52-57]. Parent training was reported in 14 protocols [31-34,36,37,40-50,58-60,62]. Of these, 9 provided verbal instruction [31,33,34,37,40-50,59,60,62], 6 provided written materials [32,38,40-50,52-58,62], and 6 offered hands-on practice [31,33,34,41-47,59,60]. When reported (n=3), parent training sessions ranged from 15 to 45 minutes [31,33,59]. Three protocols provided training to both children and parents when the child was the reporter [31,40,48-50,52,53,55-57,62], but only 1 protocol included practice-based training for parents in this context [31]. In protocols where both child and parent were reporters (n=2), only 1 member of the pair was reported to have received training [35].

Technology

Nearly all protocols (n=16/17) used dedicated handheld devices, most commonly smartphones (n=11) [31,32,34,35,37-40,48-50,52-57,59,60,62] or personal digital assistants (n=5) [33,36,41-47,51,58], with 1 using a tablet [41-47]. Three protocols paired EMA with accelerometers for passive tracking [38,41-47,51-57]. Twelve protocols provided devices, typically restricting functionality to the research app (n=9) [33,35,36,38-50,55-57,62], while 5 allowed personal device use [32,34,35,37,60]; this occurred only when parents were reporting.

Response Methods

The most popular response collection method was a Likert scale (n=10/17) [33,35,37-48,50,52-57,59,60,62], most commonly a 5-point scale (n=7) [33,37-40,48-50,52-57,59,62]. Nearly all (n=14) combined 2 or more response types, often pairing Likert scales with visual analog scales or categorical options.

EMA Purpose and Domain

All protocols used EMA as an observational tool and not an intervention. Twelve captured multiple domains, while 5 focused on a single domain [31,32,37,51,58]. Affect was most common (n=10), followed by activity (n=6) and sleep (n=5).

Monitoring Period

Monitoring periods across the 17 protocols spanned from 3 to 28 days for a single wave: ≤7 days (n=7) [31,32,37,51,58-60], 8‐14 days (n=1) [41-47], 15‐21 days (n=3) [35,38-40], and ≥22 days (n=6) [33,36,38,40,48-50,52-57,62]. Only the Family Matters protocols (n=2) [41-47] included flexibility beyond their stated monitoring periods—extending participation if families did not complete a “full day” of EMA (2 of 4 interval‐contingent prompts, 1 mealtime survey, and 1 end‐of‐day survey).

Prompt Design and Frequency

All protocols included interval-contingent prompting, typically 3-4 times per day (n=13/17) [33-40,48-58,60,62]. Prompt timing followed 3 main patterns: (1) 3 windows per day (before school, after school, and evening; n=5) [33,35-40], (2) 4 windows (adding a midmorning school prompt; n=4) [35,43-46,48-52], or (3) prompts spaced evenly or randomly across waking hours (n=5, one including schooltime prompts with child reporters) [31,34,41-47,59,60]. One protocol [32] added an end-of-day prompt mid-study after a poor response rate to daytime event-contingent prompts was observed. Six protocols varied item count by time of day (eg, shorter morning and longer end-of-day surveys) [35,38-50,52-57,62]. Response windows ranged from 3 minutes to 6 hours, with 2 protocols linking window length to the interval until the next prompt [40,48-50,62] and 2 adjusting it by time of day [41-47,60].
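As a concrete illustration of these prompting patterns, the sketch below generates one day's interval-contingent prompt times under a fixed-window schedule (pattern 1) and a random schedule across waking hours (pattern 3). The window boundaries, waking hours, and minimum gap are hypothetical values chosen for illustration, not parameters taken from any reviewed protocol.

```python
import random
from datetime import datetime, timedelta

# Hypothetical prompt windows (pattern 1): before school, after school, evening.
FIXED_WINDOWS = [("07:00", "08:30"),
                 ("15:30", "17:00"),
                 ("18:30", "20:30")]

def _rand_time(start: str, end: str) -> datetime:
    """Pick a uniformly random time within [start, end] on a dummy date."""
    fmt = "%H:%M"
    t0, t1 = datetime.strptime(start, fmt), datetime.strptime(end, fmt)
    return t0 + timedelta(seconds=random.uniform(0, (t1 - t0).total_seconds()))

def fixed_window_schedule(windows=FIXED_WINDOWS):
    """One prompt per predefined window (pattern 1)."""
    return sorted(_rand_time(s, e) for s, e in windows)

def random_schedule(n_prompts=4, wake="07:00", sleep="20:30", gap_minutes=90):
    """n prompts spread randomly across waking hours (pattern 3),
    redrawn until consecutive prompts are at least gap_minutes apart."""
    while True:
        times = sorted(_rand_time(wake, sleep) for _ in range(n_prompts))
        if all((b - a) >= timedelta(minutes=gap_minutes)
               for a, b in zip(times, times[1:])):
            return times

day = fixed_window_schedule()
print([t.strftime("%H:%M") for t in day])  # times vary run to run
```

The minimum-gap constraint in `random_schedule` mirrors a common EMA design goal of avoiding clustered prompts, although the reviewed protocols did not report their exact spacing rules.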

Design Features

Design features relevant to reporting bias and participant burden were categorized into 5 domains—rewards, piloting, customization, reminders, and support. These domains are summarized, with detailed protocol-level information provided in Multimedia Appendix 5 [31-60,62].

Rewards for Participation

Of the 17 protocols, 13 offered participant reimbursement. Fixed payments, meaning amounts provided regardless of response rate, were reported in 5 protocols [35,37-40,48-50,59,62]. Six protocols tied payment to response rate, either offering an extra US $6 to US $12 for meeting thresholds of 60%‐90% (n=3) [38-40,48-50,59,62] or stating that the total payment was dependent on response rate (eg, up to US $100; n=5) [34,38,41-48,50,52-57,60,62]. The maximum payment per prompt ranged from US $0.48 [37] to US $2.68 [60], with an average of US $0.95 per prompt. Six protocols supplemented cash with nonmonetary incentives (eg, an iPad [41-47], activity center tickets [32], and a data summary [59]). Two protocols explicitly offered no reimbursement [36].
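The per-prompt figures above can be derived by normalizing each protocol's maximum payment by its number of planned prompts (monitoring days × prompts per day). A minimal sketch, using hypothetical payment and prompt figures rather than the reviewed protocols' actual amounts:

```python
# Hypothetical protocols for illustration only; these payment amounts and
# prompt counts are NOT taken from the reviewed studies.
protocols = {
    "A": {"max_payment_usd": 50.0, "days": 14, "prompts_per_day": 4},
    "B": {"max_payment_usd": 20.0, "days": 7,  "prompts_per_day": 3},
}

def payment_per_prompt(p):
    """Normalize the maximum payment by the number of planned prompts."""
    planned_prompts = p["days"] * p["prompts_per_day"]
    return p["max_payment_usd"] / planned_prompts

rates = {name: round(payment_per_prompt(p), 2) for name, p in protocols.items()}
print(rates)  # {'A': 0.89, 'B': 0.95}
```

This normalization allows reimbursement intensity to be compared across protocols with very different monitoring periods and prompt frequencies.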

Piloting

Only 2 of the 17 protocols reported fully piloting their EMA protocols [31,60]. In terms of item piloting, 3 protocols reported piloting individual items with children from the target age group [38-40,52-57]. Three protocols adapted existing non-EMA measures for some, but not all, EMA items, with limited detail on sources or testing [32,35,40-50,54,55,61,62].

Customization to Protocol

Of the 17 protocols, 11 allowed some customization to support adherence, which always involved tailoring prompt timing to participants’ daily routines at the start of the study, with 1 protocol providing the option to update this midway through the study period [33]. Only 2 protocols offered multiple customization options including delivery method (text or email) and EMA question language [41-47], or letting parents choose between paper and digital surveys [35].

Reminder Systems

Four protocols explicitly referenced reminder systems [31,32,41-47,60]. To prompt event-contingent entries, 1 protocol used scheduled text reminders [32] and another embedded the event-contingent survey at the start of the interval-contingent prompt (n=1) [41-47]. One protocol triggered follow-up contact after 2 days of missed data [41-47]. Another adjusted prompt volume to minimize disruption—silent in the morning, vibrating during school hours, and audible in the evening [48,50,62]. Additional strategies included follow-up reminders after the main prompt [31,60], sticker tracking sheets [41-47], and regular caregiver check-ins (every 2-3 days) [31].

Support Networks

Eight protocols reported support systems involving 3 main groups. Research staff often assisted (eg, via a hotline, in-person meetings, or scheduled calls; n=5) [33,34,41-48,50,60,62]. Teaching staff or schools contributed to 4 protocols, either passively, by allowing the protocol to take place [38-40,48-50,62], or actively, by monitoring survey completion and response times [38,52-57]. Parents were formally integrated in an explicit support role in only 1 protocol [52,55-57], where they were asked to track their child’s EMA completion.

Research question 2: Is it feasible and acceptable to use digital devices to implement an EMA methodology with children between the ages of 5 and 11 years for health purposes?


In total, 11 of the 17 included protocols did not report the overall response rate across all question domains [32-35,43-53,55,62,63]. As a result, the values in Table 4 were calculated from available data, where possible; further details on these calculations can be found in Multimedia Appendix 1.

Table 4. Summary of feasibility and acceptability outcomes across ecological momentary assessment protocols with children aged 5-11 years (research question 2).
Protocol | Dropout: reason | Exclusion from analysis: criteria | Technical issues | Average response latency in minutes (range)a | Child RRb | Parent RR
Serious risk of bias
Rosen and Epstein (2010) [33] | 0%: NRc | 0%: NR | NR | NR | N/Ad | 91%e
Appelhans et al (2025) [34] | NR | NR | NR | NR | N/A | 70%
Moschko et al (2022) [35] | 30%f (between waves): NR | 4%: no parent data | 3% | 5 | 62%e | 56%e
Vilaysack et al (2016) [31] | NR | NR | NR | 1.75 (0.5-3) | 48%e | N/A
Critical risk of bias
Rosen et al (2013)—Main [36] | NR | 13%f: NR | 13% | NR | N/A | 87%
Alacha et al (2024) [37] | NR | 36%f: <5 successive ratings | NR | NR | N/A | 86%
Jacobs (2021) [38-40] | NR | NR | NR | 6.5 (3-10) | 85%e | N/A
Rosen et al (2013)—Sub [36] | NR | NR | NR | NR | 84% | NR
Family Matters B1 (2020) [41-47] | NR | NR; incomplete survey set. 4%-6% (accelerometer: <4 days and <4 hours per day) | NR | 3.5 (2-5) | N/A | 80%
SASCHA B2 (2020) [40,48-50,62] | NR | NR | NR | NR | 78%e | N/A
Engelen, Bundy, Lau et al (2015)e [51] | 10% (accelerometer): 1 misplaced, 1 NR | NR | 5%, up to 20% (accelerometer) | NR | N/A | 75%e
SASCHA B1 (2020) [48,50,62] | NR | NR | NR | NR | 71%e | N/A
FLUX (2013) [38,52-57] | 4%: NR. 17%f (accelerometer): 13 NR, 5 lost, 1 stolen | 0%-1%: no paired data. 9% (accelerometer): <4 days wear | 2% (accelerometer) | 12.5 (10-15) | 66%e | N/A
Engelen, Bundy, Bauman et al (2015)f [58] | 2%: includes 1 school absence | 13%f: “nonvalid data” | NR | NR | N/A | 51%
Norman et al (2020) [32] | NR | 10%: late daily surveys | NR | NR | N/A | 49%e
Rovane et al (2024) [59] | NR | NR | NR | NR | N/A | NR
Family Matters B2 (2025) [60] | NR | 31%f: did not “complete” study days | NR | NR | N/A | NR

aValues are the average time in minutes, and values within parentheses are the range, if reported.

bRR: response rate.

cNR: not reported.

dNot applicable.

eCalculated from provided study information (Multimedia Appendix 1).

fProtocols with >11% dropout were considered significant [22].

Quantitative Data

Exclusion from analysis was the most commonly reported source of attrition (n=8), with criteria including incomplete daily surveys [41-47], nonvalid data exclusions [51], and fewer than 5 successive ratings [37]. Dropout rates ranged from 2% to 30% (n=4) [35,38-40,51-58]. Technical issues contributed to attrition in 4 protocols [35,36,38,51-57].

Fifteen protocols reported (n=6) or enabled calculation of (n=9) a response rate, defined as the percentage of planned prompts completed. Six met the high-adherence threshold (≥80%) [33,36-47] and 9 fell below it (48%‐78%). Pooled estimates could be calculated for only 4 protocols using a random-effects model (mean 64.6%, 95% CI 53.6%‐75.5%), reflecting incomplete and inconsistent reporting (Multimedia Appendix 4). Only Moschko et al [35] collected both child and parent data (62% child and 56% parent), limiting direct comparisons between self-report and proxy report. Five protocols reported response latency, the time between prompt delivery and participant response, averaging 6 minutes [31,35,38-47,52-57].
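The random-effects pooling of response-rate proportions can be sketched with a standard DerSimonian-Laird estimator, as below. The (completed, planned) prompt counts are hypothetical placeholders, not the counts from the 4 pooled protocols.

```python
# Minimal DerSimonian-Laird random-effects pooling of response-rate
# proportions. Each tuple is (completed prompts, planned prompts);
# these counts are illustrative, not from the reviewed studies.
studies = [(420, 600), (310, 480), (260, 400), (150, 300)]

def dl_pool(studies):
    # Per-study proportions and binomial sampling variances
    p = [comp / n for comp, n in studies]
    v = [pi * (1 - pi) / n for pi, (_, n) in zip(p, studies)]
    # Fixed-effect (inverse-variance) weights and Cochran's Q
    w = [1 / vi for vi in v]
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))
    # DerSimonian-Laird between-study variance (tau^2)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)
    # Random-effects weights, pooled mean, and normal-approximation 95% CI
    w_re = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

mean_rr, (lo, hi) = dl_pool(studies)
print(f"pooled response rate {mean_rr:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```

Note that this sketch requires exactly the raw counts (completed vs planned prompts) whose absence prevented pooling across most of the reviewed protocols.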

Qualitative Data

Only 2 of the 17 protocols [31,32] incorporated formal qualitative interviews, one using manifest analysis of parent interviews [32] and the other summarizing joint child-parent interviews [31], so we supplemented these findings with extracted observations and author reflections from the “Results” and “Discussion” sections of the remaining 13 protocols. Using the work of Norman et al [32] as a foundation, we identified 4 facilitators and 6 barriers (Tables 5 and 6).

Table 5. Facilitators to acceptability and feasibility reported across protocols (research question 2).
Theme | Details
Uncomplicated (n=3) [31-33] | EMAa tools were easy to use or included simplifying features. Familiar tasks (eg, taking photographs) required little explanation, prompts included clear training and visuals, and participation was described as not a major inconvenience.
Engaging (n=2) [31,32] | Participants found EMA tasks enjoyable and reported positive attitudes, with reference to educational benefits for children.
Caregiver support (n=1) [31] | Parents helped children interpret ambiguous prompts, families supported participation by listening for alerts when devices were left in shared spaces and reminding children to respond, and school staff enabled access and ensured audible alerts during school hours.
Reminders (n=3) [31,32,41] | Ongoing researcher contact supported engagement. Norman et al [32] added an end-of-day survey mid-study to address low event-contingent response rates, which was reflected on positively.

aEMA: ecological momentary assessment.

Table 6. Barriers to acceptability and feasibility reported across protocols (research question 2).
Theme | Details
Demanding (n=3) [32,61] | EMAa prompts were seen as burdensome due to high effort or intensity, which was linked to dropout; timing (eg, during busy mornings) also contributed to perceived burden.
Challenging to report accurately (n=6) [31,32,47,54,61] | Both child self-reporters and proxy reporters struggled with precise reporting. Children exhibited extreme response bias on Likert or VASb scales and often could not map prompts onto real-world activities, sometimes needing caregiver help, while proxies found it difficult to categorize nonconforming events (eg, buffet-style meals) and potentially skewed responses due to social desirability or guessing when not observing the child.
Device awareness (n=3) [31,32,38,52-57] | Devices were frequently forgotten, left uncharged, or misplaced. Norman et al [32] found that proxy-reporting parents often overlooked event-contingent prompts, compromising data completeness.
Device access (n=3) [31,36] | Children sometimes lost physical access to devices due to extracurricular commitments, restrictive school policies, examinations, or institutional rules; caregiver control (eg, muting or storing devices) further limited engagement.
Stigma (n=1) [31] | Using EMA devices in school led to peer questioning and discomfort. In one instance, stigma resulted in a participant discontinuing EMA participation during school hours.
Lack of caregiver support (n=2) [31,36] | Participation decreased when caregivers lacked time to assist children or when teaching staff required children to mute or silence devices during lessons.

aEMA: ecological momentary assessment.

bVAS: visual analog scale.

We introduce a third category, mitigators, based on participant and author suggestions for addressing barriers. Only Vilaysack et al [31] reported participant feedback, which included improving notification sounds and providing a carrying case for the EMA collection device. The authors agreed with these suggestions, recommending louder, longer alerts and training children to adjust volume settings or automating them to suit school environments [31]. A carrying bag was also recommended for future use [31].

Three protocols recommended involving additional reporters, including the child (n=1) [37], primary caregivers (n=1) [38,52-57], secondary caregivers (n=1) [41-47], and teachers (n=1) [37]. Alternative strategies to improve data collection, particularly in reference to children’s difficulties expressing their experiences, included combining EMA with qualitative methods (n=3) [37,41-47,60]; using sensor data to add contextual information (n=3) [35,38,52-57,60]; and extending the monitoring period to account for inconsistencies in responses (n=3) [41-47,51,58]. Authors also emphasized the need for training children and parents to express experiences effectively, particularly in mapping experiences to scales (n=1) [31], and in acquiring subject knowledge (n=1) [41-47]. Two protocols recommended greater customization, including adapting data collection for participants whose first language differed from that of the researchers or protocol [41-47] and offering optional custom reminders [58].

Research question 3: What characteristics of the EMA protocol (RQ1) are related to high and low response rate (RQ2) when using EMA with children between the ages of 5 and 11 years?

Of the protocols with a reported or calculated response rate (n=15/17), characteristics of high and low response rate protocols are summarized in Table 7. However, as the majority of protocols were judged at critical risk of bias, these comparisons should be interpreted with caution. High response rate protocols (n=6) [33,36-47] predominantly recruited older KS2 children with ADHD and relied on parent report. In contrast, low response rate protocols (n=9) [31,32,35,38,40,48-58,62] were evenly split between KS1 and KS2, typically included children without identified health conditions, and more commonly used child self-report. Both groups used verbal instruction, but hands-on practice and written instruction were more common in low response rate protocols. Both groups also used handheld devices, although smartphones specifically were more common in low response rate protocols [33-35,61,63,64]. Monitoring periods also differed: longer durations (≥3 weeks) were more common in high response rate protocols and shorter durations (<3 weeks) in low response rate protocols.

Table 7. Comparison of high and low response rate ecological momentary assessment protocols with children aged 5-11 years, showing majority design features within each group.
Characteristic | High response rate | Low response rate

Demographics
  • High: KS1a children (n=1/6), KS2b children (n=5/6); Low: KS1 children (n=5/9), KS2 children (n=5/9)
  • High: ADHDc (n=4/6); Low: no condition (n=8/9)
  • High: children reporting (n=2/6), parents reporting (n=5/6); Low: children reporting (n=5/9), parents reporting (n=5/9)

Training
  • High: verbal instruction (n=4/6), written instruction (n=1/6), practice for reporter (n=3/6)
  • Low: verbal instruction (n=6/9), written instruction (n=4/9), practice for reporter (n=6/9)

Technology
  • High: smartphone (n=2/6); Low: smartphone (n=7/9)

Monitoring period
  • High: ≥3 weeks (n=4/6), <3 weeks (n=2/6)
  • Low: ≥3 weeks (n=3/9), <3 weeks (n=6/9)

Prompt frequency
  • High: 3 prompts per day (n=4/6); Low: 3-4 prompts per day (n=6/9)

Prompting design
  • High: interval-contingent (n=6/6), random (n=1/6), fixed (n=5/6)
  • Low: interval-contingent (n=9/9), random (n=5/9), fixed (n=2/9)
  • High: prompts before school, after school, and evening (n=5/6); Low: prompts before school, after school, and evening (n=2/9)
  • High: ≤12 items per prompt (n=2/6), ≥13 items per prompt (n=4/6); Low: ≤12 items per prompt (n=6/9), ≥13 items per prompt (n=3/9)
  • High: use of scales for responses (n=6/6); Low: use of scales for responses (n=7/9)
  • High: multiple response types in 1 prompt (n=3/6); Low: multiple response types in 1 prompt (n=7/9)
  • High: ≤1 hour to respond to prompt (n=1/6); Low: ≤1 hour to respond to prompt (n=6/9)

Design features
  • High: monetary reward (n=4/6); Low: monetary reward (n=5/9)
  • High: prompt timing customization (n=6/6); Low: prompt timing customization (n=3/6)

aKS1: key stage 1.

bKS2: key stage 2.

cADHD: attention-deficit/hyperactivity disorder.

All protocols used interval-contingent sampling with 3-4 prompts each day, and high response rate protocols predominantly used fixed schedules, typically avoiding school hours (n=4/6) by prompting before school, after school, and in the evening. In contrast, low response rate protocols more often used random schedules. Based on the maximum number of items used in a single prompt within each protocol, high-response protocols typically included 20 or more items (n=4/6), while low-response protocols more often included 12 or fewer items (n=5/8). Both groups used scale-based data collection methods, but low-response protocols more frequently combined multiple response formats within a single prompt and imposed shorter response windows of 1 hour or less. In the high response rate group, most protocols offered monetary incentives and all allowed participants to customize prompt timing; by contrast, in the low-response group, incentives were less consistently used and prompt timing customization was uncommon.


Principal Findings

This review aimed to answer 3 RQs by identifying the characteristics of EMA protocols used with children aged 5-11 years (RQ1), the feasibility and acceptability of these protocols (RQ2), and the protocol characteristics linked to high and low response rates (RQ3). Most protocols used handheld devices, interval-contingent prompting, and Likert scales, with training that was usually verbal and practice-based, to collect self- or parent-reported data across monitoring periods of 3-28 days (RQ1). Feasibility and acceptability remain difficult to assess because of heterogeneous reporting and because 13 of 17 protocols were rated at critical risk of bias using the ROBINS-I tool. Key data required for pooling (eg, raw counts of planned vs completed prompts and corresponding variance estimates) were missing or selectively reported, preventing meaningful quantitative synthesis. Reported facilitators of acceptability and feasibility included the protocol being uncomplicated and engaging, the use of reminders, and caregiver support. Barriers included device access issues, reporter difficulty reporting accurately, reporting burden, stigma, lack of protocol awareness, and insufficient caregiver involvement. Suggested future mitigations included improvements to reminder systems, carrying aids, longer monitoring periods, increased customization, passive data collection, and involving additional reporters or methods (RQ2). High-response protocols (n=6, ≥80%) [20] more often involved older children (KS2) and those with specific health conditions (ADHD) and featured longer monitoring periods (≥3 weeks). These protocols typically used fixed, interval-contingent prompt schedules with more than 20 items per prompt, prompting 3 times a day, often before school, after school, and in the evening. The majority also allowed customization limited to prompt timing and provided monetary reimbursement for participation.
Given the high risk of bias across most protocols, these contrasts should be interpreted as tentative (RQ3).

Comparison With Prior Work

Evidence on the feasibility and acceptability of EMA protocols for preadolescent children remains limited, and how EMA protocol design can best support these outcomes is unclear [22-26]. Despite this, there is a growing recognition that protocols should be adapted for this age group [23,24,26]. To our knowledge, this is the first review to systematically examine the feasibility and acceptability of EMA protocols for preadolescent children. While challenges such as technical issues and participant burden are common across age groups [1,19,65], this review identified additional factors that appear especially important for preadolescents and their caregivers acting as proxy reporters, including difficulties responding to prompts accurately, inconsistent access to reporting devices, and device-related stigma. To explore the impact of protocol characteristics on acceptability and feasibility, we apply the Technology Acceptance Model, which identifies perceived ease of use and perceived usefulness as key influences on technology adoption [66]. Using this framework helps distinguish between protocol features that may have supported acceptability, those that produced unexpected patterns, and those that created barriers to technology acceptability.

Several features of high response rate protocols appeared to align with the Technology Acceptance Model’s concepts of perceived ease of use and usefulness, potentially making participation more acceptable. Predictable prompting schedules (eg, before school, after school, and evening), opportunities to customize timing, and the avoidance of in-school prompts may have helped reduce disruption (eg, limiting negative teacher interference) and made participation feel easier to manage within family routines [31,67]. The use of technologies familiar to caregivers, uncomplicated interfaces, encouragement of caregiver support in responding to prompts, and simple visuals may have supported participation by reducing cognitive and knowledge barriers, thereby improving perceived ease of use [68]. Perceived usefulness may have been reinforced through external motivators such as monetary incentives, through the involvement of caregivers, including school staff, who facilitated tasks [69-72], and through the framing of EMA tasks as enjoyable or educational [73].

Other features associated with high response rates, however, diverged from common EMA design practices and therefore require more cautious interpretation. Adult EMA literature typically favors shorter monitoring periods (≤2 weeks) [23,25] and brief prompts to reduce participant burden [16]. In contrast, in this review, longer periods (≥3 weeks) and extensive prompts (≥20 items per prompt) were present in the majority of high response rate protocols. One explanation may be that longer durations help embed EMA into daily routines, with this stability offsetting potential fatigue [1,22]. However, these more demanding protocols also tended to provide stronger external supports, such as closer researcher involvement or monetary incentives, suggesting that response rates may have reflected a combination of routine and additional resources rather than duration or prompt length alone. Training showed a similarly counterintuitive pattern: hands-on practice was more common in low response rate protocols, suggesting that practice may matter more for supporting valid and accurate reporting than for increasing response rates [74]. A further unexpected pattern was the higher response rates linked to children with ADHD. Although this might initially appear counterintuitive, children with long-term conditions may be more intrinsically motivated to engage, as they are more likely to recognize a personal need that technology could help address [75]. In contrast, children without a long-term health condition may not perceive the same relevance or potential benefit.

By contrast, several protocol features reduced perceived ease of use, reflecting both design complexity and developmental challenges. For example, nearly all protocols relied on response formats such as Likert or visual analog scales, despite evidence that alternative formats, such as item ranking and semantic differential scales, may be more developmentally appropriate for younger children [76-78]. The continued reliance on such scales highlights a broader adult-centered design approach that may overlook children’s cognitive abilities and real-world reporting capacities [79]. Furthermore, many 5‐7 year olds may struggle with reading, reinforcing the need for adult support in interpreting prompts [80,81]. Yet, reliance on caregivers may overlook the developmental trajectory of literacy: children who already have the skills may be denied opportunities to practice reporting independently, while those still acquiring literacy may be excluded altogether.

Beyond developmental challenges, logistical and practical barriers also reduced ease of use. More complex protocols, such as those combining multiple response formats or enforcing narrow response windows (<1 hour), may have placed additional cognitive and logistical demands on children and caregivers [82,83]. These challenges are especially important given the difficulty, documented in both this review and the wider literature, that children face with extreme response bias and with mapping their experiences onto structured input formats [80]. Furthermore, the lack of flexibility beyond initial prompt timing customization contrasts with adult EMA, where participants can sometimes delay prompts or choose alternative response formats to reduce burden [84,85]. For example, 1 protocol using photo-based meal tracking struggled to capture buffet-style eating, illustrating how rigid input formats can fail to reflect real-world variability [32]. While this is a known limitation for adults [1], it may be amplified in children, as caregivers may resist disrupting routines and children may see little benefit in adjusting their behavior to enable easier data capture [67,86]. Additionally, lack of device awareness contributed to issues such as forgetting to charge devices or to carry them, disrupting participation [23]. These challenges may be greater for preadolescents, who may not own devices suitable for EMA and are instead given separate ones, making routine integration more difficult [23,30]. Using wearables, which remain on the body, may help reduce this burden [16,87]; however, no protocol in this review used such an approach. Three protocols recommended incorporating sensor data to contextualize and support children’s self-reports, suggesting a potential role for wearables in future protocols [88].

Perceived usefulness was often less prioritized than ease of use. While many protocols included monetary incentives, the included protocols did not report how they established personal relevance for either children or caregivers. Emphasizing the value of participation may be particularly important when EMA is used solely as a measurement tool, as was the case in all included protocols, to avoid participants feeling that they contribute data without clear benefits or autonomy [8,89]. In addition, preadolescent children may be better motivated by immediate rewards [90], yet only 1 protocol in this review included a daily reward system [41-47]. Social stigma also shaped perceived usefulness. In 1 protocol, preadolescents reported attracting unwanted attention when using a smartphone at school [31]. This highlights how adult assumptions about appropriate technologies can overlook that many preadolescents lack regular access to smartphones [91,92], which may contribute to feelings of unfamiliarity or stigma. Few protocols in this review involved children directly in their development (eg, piloting), limiting opportunities to identify usability issues and integrate children’s perspectives [93,94].

Preadolescents are embedded in a shifting network of caregivers, teachers, peers, and wider family [95], yet many protocols included in this review treated them as isolated participants and did not explicitly involve caregivers in supporting roles. While caregivers were often given verbal or written instructions when the child was the reporter, protocols rarely included opportunities for them to practice using the device. School cooperation was also limited, restricting access to essential adult support. Furthermore, simultaneous tracking by children and caregivers was rare, although others recommended incorporating multiple perspectives in future work [37,38,41-47,52-57]. Such approaches may offer benefits including shared responsibility, enhanced motivation, and enriched data perspectives [96,97]. Preadolescents rely heavily on adults and their peers to interpret, manage, and respond to daily tasks [98,99], making limited involvement of these networks in protocol design both a missed opportunity and a barrier to engagement [31]. Protocols that acknowledge and actively involve this network, through co-design, piloting, and contextual adaptation, are more likely to avoid common pitfalls and create experiences that are not only feasible but also genuinely valuable to children and those who support them [100-102], although this must be balanced against the risk that proxy reporting shifts the focus away from children’s voices. In KS1 protocols, for example, parents often acted as reporters, sometimes entering responses in dialogue with their child, but in other cases responding based only on their own observations. The latter approach arguably reflects EMA of parents about their children, rather than EMA of children themselves, raising questions of validity and comparability with adult EMA where self-report is standard.
These issues highlight the importance and complexity of situating EMA within children’s lives, emphasizing the need for EMA protocols that fit within children’s everyday social and caregiving contexts, ensuring that both children and their caregivers can meaningfully engage with and benefit from these tools [70,103].

Strengths and Limitations

Digital health is an interdisciplinary field, and a key strength of this review is the interdisciplinary approach taken by searching a large number of databases (n=10), spanning both health science databases and human-computer interaction (HCI) venues. The review aimed to investigate EMA protocols used with children (aged 5‐11 years) to understand developmentally sensitive design implications. To investigate this, we looked across the broad spectrum of child health behaviors and conditions, which means our review is not limited to any specific health domain, making the findings relevant to a broader scope of researchers working with pediatric EMA protocols. However, due to heterogeneity and lack of reporting, neither a formal meta-analysis of quantitative data nor a metaethnography of qualitative data was feasible, and we instead used a narrative review approach [61] to identify patterns that may inform future hypothesis generation. We also excluded studies that included our target age group but did not report specific protocol adaptations for younger children (eg, studies covering ages 5‐18 years without age-specific design considerations). The use of narrative synthesis, while appropriate given the diversity of study design, has been critiqued for limited transparency [104]. Additionally, the scarcity of detailed qualitative and quantitative data restricted the depth of our analysis. This limitation highlights a need for future research exploring child and caregiver views to improve EMA protocols. A further limitation is that most protocols were conducted in Western, high-income contexts, with little reporting of socioeconomic background and inconsistent reporting of ethnicity. Where provided, samples were often predominantly White, with only a few protocols including more diverse populations. Future research should prioritize more diverse samples and clearer demographic reporting.

Conclusions

This review provides the first systematic evidence base focused exclusively on digital EMA with preadolescent children. It examined 17 digital EMA protocols involving children aged 5‐11 years, highlighting gaps in developmental appropriateness (eg, absence of child-focused piloting) and inconsistencies in reporting quality that limit both interpretability and comparability across studies.

In contrast to existing reviews, which primarily emphasize adherence or feasibility in adults or treat children of different ages as a single group, these findings emphasize the importance of preadolescent acceptance and developmental considerations when EMA is used with this specific age group. Barriers included device access issues; difficulty reporting accurately (eg, response options not matching how children wanted to express themselves); reporting burden; stigma; lack of protocol awareness; and insufficient caregiver involvement. Facilitators included uncomplicated, engaging technology; reminders; and caregiver involvement. Several counterintuitive patterns also emerged: protocols with longer durations and more items per prompt were linked to high response rates, further highlighting the importance of considering preadolescents distinctly.

The review contributes to the field by consolidating the evidence base and identifying protocol characteristics, while also highlighting the need for improved and more consistent reporting of feasibility, acceptability, and response metrics. Without such improvements, meaningful comparison across protocols and cumulative knowledge building remains limited.

From a practical perspective, the findings suggest that future digital EMA research should focus on perceived ease of use (eg, predictable prompting schedules, simplified response formats, and flexibility to fit into daily routines) and perceived usefulness (eg, immediate rewards, personally relevant activities, clear explanations of purpose, and addressing stigma) as part of a more child-centered design approach. Given children’s dependence on caregivers and teachers, involving these adults is likely to support perceived ease of use (eg, assistance with prompts, device availability, and charging) and perceived usefulness (eg, clarifying relevance, reinforcing engagement, and managing social dynamics). With greater developmental alignment and improved reporting standards, digital EMA could be more effectively integrated into pediatric health monitoring in ways that are sensitive to the needs of different age groups.

Acknowledgments

The authors declare the use of generative artificial intelligence in the research and writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human supervision: proofreading and editing. The GAI tools used were Grammarly and ChatGPT-4.0. Responsibility for the final manuscript lies entirely with the authors. GAI tools are not listed as authors and do not bear responsibility for the final outcomes. A declaration was submitted by SC.

Funding

This work was supported by the EPSRC Digital Health and Care Centre for Doctoral Training (UKRI grant EP/S023704/1).

Data Availability

The data extracted and analyzed during this review are available in the supplementary files of this manuscript (Checklists 1 and 2 and Multimedia Appendices 1-4), with the full data extraction tables shown in Multimedia Appendix 5.

Authors' Contributions

SC conceptualized this work in collaboration with the coauthors. SC carried out the search strategy and conducted the article screening with LT. SC performed the quality assessment. SC wrote the initial manuscript. LT, AB, and JB reviewed, refined, and approved the final manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Full search strategy.

DOCX File, 18 KB

Multimedia Appendix 2

Quality assessment including CREMAS (Checklist for Reporting EMA Studies) and ROBINS-I v2 (Risk Of Bias In Nonrandomized Studies of Interventions, version 2).

XLSX File, 25 KB

Multimedia Appendix 3

Study-level overviews.

DOCX File, 151 KB

Multimedia Appendix 4

Response rate calculations.

XLSX File, 18 KB

Multimedia Appendix 5

Data extraction tables (full).

XLSX File, 29 KB

Checklist 1

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

DOCX File, 214 KB

Checklist 2

PRISMA-S (An Extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews) checklist.

DOCX File, 12 KB

  1. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32. [CrossRef] [Medline]
  2. Hartson KR, Huntington-Moskos L, Sears CG, et al. Use of electronic ecological momentary assessment methodologies in physical activity, sedentary behavior, and sleep research in young adults: systematic review. J Med Internet Res. Jun 29, 2023;25:e46783. [CrossRef] [Medline]
  3. Overton M, Ward S, Swain N, et al. Are ecological momentary assessments of pain valid and reliable? A systematic review and meta-analysis. Clin J Pain. Jan 1, 2023;39(1):29-40. [CrossRef] [Medline]
  4. Gratch I, Choo TH, Galfalvy H, et al. Detecting suicidal thoughts: the power of ecological momentary assessment. Depress Anxiety. Jan 2021;38(1):8-16. [CrossRef] [Medline]
  5. Chan G, Alslaity A, Wilson R, Orji R. Exploring variance in users’ moods across times, seasons, and activities: a longitudinal analysis. Presented at: Adjunct Publication of the 24th International Conference on Human-Computer Interaction with Mobile Devices and Services; Sep 28, 2022:1-7; Vancouver BC Canada. [CrossRef]
  6. Sherriff B, Clark C, Killingback C, Newell D. Impact of contextual factors on patient outcomes following conservative low back pain treatment: systematic review. Chiropr Man Therap. Apr 21, 2022;30(1):20. [CrossRef] [Medline]
  7. Gao L, Sun Y, Pan L, et al. Current status and influencing factors of fatigue in patients with rheumatoid arthritis: a cross-sectional study in China. Int J Nurs Pract. Jun 2022;28(3):e12996. [CrossRef] [Medline]
  8. Dao KP, De Cocker K, Tong HL, Kocaballi AB, Chow C, Laranjo L. Smartphone-delivered ecological momentary interventions based on ecological momentary assessments to promote health behaviors: systematic review and adapted checklist for reporting ecological momentary assessment and intervention studies. JMIR Mhealth Uhealth. Nov 19, 2021;9(11):e22890. [CrossRef] [Medline]
  9. Huckins JF, daSilva AW, Wang W, et al. Mental health and behavior of college students during the early phases of the COVID-19 pandemic: longitudinal smartphone and ecological momentary assessment study. J Med Internet Res. Jun 17, 2020;22(6):e20185. [CrossRef] [Medline]
  10. Doherty K, Balaskas A, Doherty G. The design of ecological momentary assessment technologies. Interact Comput. May 17, 2020;32(3):257-278. [CrossRef]
  11. Salles A, Ais J, Semelman M, Sigman M, Calero CI. The metacognitive abilities of children and adults. Cogn Dev. Oct 2016;40:101-110. [CrossRef]
  12. Thunnissen MR, Aan Het Rot M, van den Hoofdakker BJ, Nauta MH. Youth psychopathology in daily life: systematically reviewed characteristics and potentials of ecological momentary assessment applications. Child Psychiatry Hum Dev. Dec 2022;53(6):1129-1147. [CrossRef] [Medline]
  13. Forsberg A, Guitard D, Adams EJ, Pattanakul D, Cowan N. Children’s long-term retention is directly constrained by their working memory capacity limitations. Dev Sci. Mar 2022;25(2):e13164. [CrossRef] [Medline]
  14. Wiljén A, Chaplin JE, Crine V, et al. The development of an mHealth tool for children with long-term illness to enable person-centered communication: user-centered design approach. JMIR Pediatr Parent. Mar 8, 2022;5(1):e30364. [CrossRef] [Medline]
  15. McCallum C, Rooksby J, Gray CM. Evaluating the impact of physical activity apps and wearables: interdisciplinary review. JMIR Mhealth Uhealth. Mar 23, 2018;6(3):e58. [CrossRef] [Medline]
  16. Intille S, Haynes C, Maniar D, Ponnada A, Manjourides J. μEMA: microinteraction-based ecological momentary assessment (EMA) using a smartwatch. Proc ACM Int Conf Ubiquitous Comput. Sep 2016;2016:1124-1128. [CrossRef] [Medline]
  17. Aminikhanghahi S, Schmitter-Edgecombe M, Cook DJ. Context-aware delivery of ecological momentary assessment. IEEE J Biomed Health Inform. Apr 2020;24(4):1206-1214. [CrossRef] [Medline]
  18. Kumar D, Haag D, Blechert J, Niebauer J, Smeddinck JD. Feature selection for physical activity prediction using ecological momentary assessments to personalize intervention timing: longitudinal observational study. JMIR Mhealth Uhealth. 2025;13:e57255. [CrossRef]
  19. Williams MT, Lewthwaite H, Fraysse F, Gajewska A, Ignatavicius J, Ferrar K. Compliance with mobile ecological momentary assessment of self-reported health-related behaviors and psychological constructs in adults: systematic review and meta-analysis. J Med Internet Res. Mar 3, 2021;23(3):e17023. [CrossRef] [Medline]
  20. Stone AA, Shiffman S. Capturing momentary, self-report data: a proposal for reporting guidelines. Ann Behav Med. 2002;24(3):236-243. [CrossRef] [Medline]
  21. Businelle MS, Hébert ET, Shi D, et al. Investigating best practices for ecological momentary assessment: nationwide factorial experiment. J Med Internet Res. Aug 12, 2024;26:e50275. [CrossRef] [Medline]
  22. Wrzus C, Neubauer AB. Ecological momentary assessment: a meta-analysis on designs, samples, and compliance across research fields. Assessment. Apr 2023;30(3):825-846. [CrossRef] [Medline]
  23. Wen CKF, Schneider S, Stone AA, Spruijt-Metz D. Compliance with mobile ecological momentary assessment protocols in children and adolescents: a systematic review and meta-analysis. J Med Internet Res. Apr 26, 2017;19(4):e132. [CrossRef] [Medline]
  24. Heron KE, Everhart RS, McHale SM, Smyth JM. Using mobile-technology-based ecological momentary assessment (EMA) methods with youth: a systematic review and recommendations. J Pediatr Psychol. Nov 1, 2017;42(10):1087-1107. [CrossRef] [Medline]
  25. Liao Y, Skelton K, Dunton G, Bruening M. A systematic review of methods and procedures used in ecological momentary assessments of diet and physical activity research in youth: an adapted STROBE checklist for reporting EMA studies (CREMAS). J Med Internet Res. Jun 21, 2016;18(6):e151. [CrossRef] [Medline]
  26. Mason TB, Do B, Wang S, Dunton GF. Ecological momentary assessment of eating and dietary intake behaviors in children and adolescents: a systematic review of the literature. Appetite. Jan 1, 2020;144:104465. [CrossRef] [Medline]
  27. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021:n71. [CrossRef]
  28. Yang YS, Ryu GW, Choi M. Methodological strategies for ecological momentary assessment to evaluate mood and stress in adult patients using mobile phones: systematic review. JMIR Mhealth Uhealth. Apr 1, 2019;7(4):e11215. [CrossRef] [Medline]
  29. Boluyt N, Tjosvold L, Lefebvre C, Klassen TP, Offringa M. Usefulness of systematic review search strategies in finding child health systematic reviews in MEDLINE. Arch Pediatr Adolesc Med. Feb 2008;162(2):111-116. [CrossRef] [Medline]
  30. de Vries LP, Baselmans BML, Bartels M. Smartphone-based ecological momentary assessment of well-being: a systematic review and recommendations for future studies. J Happiness Stud. 2021;22(5):2361-2408. [CrossRef] [Medline]
  31. Vilaysack B, Cordier R, Doma K, Chen YW. Capturing everyday experiences of typically developing children aged five to seven years: a feasibility study of experience sampling methodology. Aust Occup Ther J. Dec 2016;63(6):424-433. [CrossRef] [Medline]
  32. Norman Å, Kjellenberg K, Torres Aréchiga D, Löf M, Patterson E. “Everyone can take photos.” Feasibility and relative validity of phone photography-based assessment of children’s diets - a mixed methods study. Nutr J. May 27, 2020;19(1):50. [CrossRef] [Medline]
  33. Rosen PJ, Epstein JN. A pilot study of ecological momentary assessment of emotion dysregulation in children. Journal of ADHD and Related Disorders. 2010;1:39-52. URL: https://www.researchgate.net/publication/303157300_A_pilot_study_of_ecological_momentary_assessment_of_emotion_dysregulation_in_children [Accessed 2025-11-01]
  34. Appelhans BM, Bleil ME, Crane MM. Leveraging recreational activities to reduce obesity-related behaviors in children from lower-income households. Appetite. Oct 1, 2025;214:108171. [CrossRef] [Medline]
  35. Moschko T, Stadler G, Gawrilow C. Fluctuations in children’s self-regulation and parent-child interaction in everyday life: an ambulatory assessment study. J Soc Pers Relat. Jan 2023;40(1):254-276. [CrossRef]
  36. Rosen PJ, Epstein JN, Van Orden G. I know it when I quantify it: ecological momentary assessment and recurrence quantification analysis of emotion dysregulation in children with ADHD. Atten Defic Hyperact Disord. Sep 2013;5(3):283-294. [CrossRef] [Medline]
  37. Alacha HF, Rosen PJ, Bufferd SJ. Positive affect variability is associated with homework management difficulties in children with ADHD. J Child Fam Stud. Sep 2024;33(9):2933-2946. [CrossRef]
  38. Kramer AC, Neubauer AB, Leonhardt A, Brose A, Dirk J, Schmiedek F. Ambulatory assessment of rumination and worry: capturing perseverative cognitions in children’s daily life. Psychol Assess. Sep 2021;33(9):827-842. [CrossRef] [Medline]
  39. Neubauer AB, Kramer AC, Schmiedek F. Assessing domain-general need fulfillment in children and adults: introducing the General Need Satisfaction and Frustration Scale. Psychol Assess. Nov 2022;34(11):1022-1035. [CrossRef] [Medline]
  40. Neubauer AB, Kramer AC, Schmidt A, Könen T, Dirk J, Schmiedek F. Reciprocal relations of subjective sleep quality and affective well-being in late childhood. Dev Psychol. Aug 2021;57(8):1372-1386. [CrossRef] [Medline]
  41. Trofholz A, Tate A, Janowiec M, et al. Ecological momentary assessment of weight-related behaviors in the home environment of children from low-income and racially and ethnically diverse households: development and usability study. JMIR Res Protoc. Dec 1, 2021;10(12):e30525. [CrossRef] [Medline]
  42. Tate A, Telke S, Trofholz A, Miner M, Berge JM. Stressed Out! Examining family meal decisions in response to daily stressors via ecological momentary assessment in a racially/ethnically diverse population. Prev Med Rep. Dec 2020;20:101251. [CrossRef] [Medline]
  43. Berge JM, Fertig AR, Trofholz A, Neumark-Sztainer D, Rogers E, Loth K. Associations between parental stress, parent feeding practices, and child eating behaviors within the context of food insecurity. Prev Med Rep. Sep 2020;19:101146. [CrossRef] [Medline]
  44. Loth KA, Fertig A, Trofholz A, et al. Concordance of children’s intake of selected food groups as reported by parents via 24-h dietary recall and ecological momentary assessment. Public Health Nutr. Jan 2021;24(1):22-33. [CrossRef] [Medline]
  45. Loth KA, Tate AD, Trofholz A, et al. Ecological momentary assessment of the snacking environments of children from racially/ethnically diverse households. Appetite. Feb 1, 2020;145:104497. [CrossRef] [Medline]
  46. de Brito JN, Loth KA, Tate A, Berge JM. Associations between parent self-reported and accelerometer-measured physical activity and sedentary time in children: ecological momentary assessment study. JMIR Mhealth Uhealth. May 19, 2020;8(5):e15458. [CrossRef] [Medline]
  47. Wirthlin R, Linde JA, Trofholz A, Tate A, Loth K, Berge JM. Associations between parent and child physical activity and eating behaviours in a diverse sample: an ecological momentary assessment study. Public Health Nutr. Oct 2020;23(15):2728-2736. [CrossRef] [Medline]
  48. Blume F, Irmer A, Dirk J, Schmiedek F. Day-to-day variation in students’ academic success: the role of self-regulation, working memory, and achievement goals. Dev Sci. Nov 2022;25(6):e13301. [CrossRef] [Medline]
  49. Galeano-Keiner EM, Neubauer AB, Irmer A, Schmiedek F. Daily fluctuations in children’s working memory accuracy and precision: variability at multiple time scales and links to daily sleep behavior and fluid intelligence. Cogn Dev. Oct 2022;64:101260. [CrossRef]
  50. Schmidt A, Neubauer AB, Dirk J, Schmiedek F. The bright and the dark side of peer relationships: differential effects of relatedness satisfaction and frustration at school on affective well-being in children’s daily lives. Dev Psychol. Aug 2020;56(8):1532-1546. [CrossRef] [Medline]
  51. Engelen L, Bundy AC, Lau J, et al. Understanding patterns of young children’s physical activity after school--it’s all about context: a cross-sectional study. J Phys Act Health. Mar 2015;12(3):335-339. [CrossRef] [Medline]
  52. Könen T, Dirk J, Schmiedek F. Cognitive benefits of last night’s sleep: daily variations in children’s sleep behavior are related to working memory fluctuations. J Child Psychol Psychiatry. Feb 2015;56(2):171-182. [CrossRef] [Medline]
  53. Könen T, Dirk J, Leonhardt A, Schmiedek F. The interplay between sleep behavior and affect in elementary school children’s daily life. J Exp Child Psychol. Oct 2016;150:1-15. [CrossRef] [Medline]
  54. Kühnhausen J, Leonhardt A, Dirk J, Schmiedek F. Physical activity and affect in elementary school children’s daily lives. Front Psychol. 2013;4:456. [CrossRef] [Medline]
  55. Dirk J, Schmiedek F. Variability in children’s working memory is coupled with perceived disturbance: an ambulatory assessment study in the school and out-of-school context. Res Hum Dev. Jul 3, 2017;14(3):200-218. [CrossRef]
  56. Neubauer AB, Dirk J, Schmiedek F. Momentary working memory performance is coupled with different dimensions of affect for different children: a mixture model analysis of ambulatory assessment data. Dev Psychol. Apr 2019;55(4):754-766. [CrossRef] [Medline]
  57. Schmidt A, Dirk J, Schmiedek F. The importance of peer relatedness at school for affective well‐being in children: between‐ and within‐person associations. Soc Dev. Nov 2019;28(4):873-892. [CrossRef]
  58. Engelen L, Bundy AC, Bauman A, Naughton G, Wyver S, Baur L. Young children’s after-school activities - there’s more to it than screen time: a cross-sectional study of young primary school children. J Phys Act Health. Jan 2015;12(1):8-12. [CrossRef] [Medline]
  59. Rovane AK, Hock RM, Yang CH, Hills KJ. Parent facilitation of child emotion regulation in ASD: an ecological momentary assessment study. J Autism Dev Disord. Dec 2025;55(12):4197-4211. [CrossRef] [Medline]
  60. Kracht CL, Tate A, de Brito JN, Trofholz A, Berge JM. Association between parental stress, coping, mood, and subsequent child physical activity and screen-time: an ecological momentary assessment study. BMC Public Health. Feb 22, 2025;25(1):729. [CrossRef] [Medline]
  61. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews: A Product From the ESRC Methods Programme. 2006. [CrossRef]
  62. Schmidt A, Dirk J, Neubauer AB, Schmiedek F. Evaluating sociometer theory in children’s everyday lives: inclusion, but not exclusion by peers at school is related to within-day change in self-esteem. Eur J Pers. Sep 2021;35(5):736-753. [CrossRef]
  63. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. Jul 10, 2008;8:45. [CrossRef] [Medline]
  64. Sterne JA, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. Oct 12, 2016;355:i4919. [CrossRef] [Medline]
  65. Carpenter RW, Wycoff AM, Trull TJ. Ambulatory assessment. Assessment. Aug 2016;23(4):414-424. [CrossRef] [Medline]
  66. Marangunić N, Granić A. Technology acceptance model: a literature review from 1986 to 2013. Univ Access Inf Soc. Mar 2015;14(1):81-95. [CrossRef]
  67. Selman SB, Dilworth-Bart JE. Routines and child development: a systematic review. J Fam Theory Rev. Jun 2024;16(2):272-328. [CrossRef]
  68. Xiong C, D’Souza A, El-Khechen-Richandi G, et al. Perceptions of digital technology experiences and development among family caregivers and technology researchers: qualitative study. JMIR Form Res. Jan 28, 2022;6(1):e19967. [CrossRef] [Medline]
  69. Abdelazeem B, Abbas KS, Amin MA, et al. The effectiveness of incentives for research participation: a systematic review and meta-analysis of randomized controlled trials. PLoS One. 2022;17(4):e0267534. [CrossRef] [Medline]
  70. Pina LR, Sien SW, Ward T, Yip JC, Munson SA, Fogarty J, et al. From personal informatics to family informatics: understanding family practices around health monitoring. Presented at: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing; Feb 25, 2017:2300-2315; Portland, Oregon, USA. [CrossRef]
  71. Lowry C, Leonard-Kane R, Gibbs B, Muller LM, Peacock A, Jani A. Teachers: the forgotten health workforce. J R Soc Med. Apr 2022;115(4):133-137. [CrossRef] [Medline]
  72. Paquette ET, Palac H, Bair E, et al. The importance of engaging children in research decision-making: a preliminary mixed-methods study. Ethics Hum Res. May 2020;42(3):12-20. [CrossRef] [Medline]
  73. Schepers S, Dreessen K, Zaman B. Fun as a user gain in participatory design processes involving children. Presented at: Proceedings of the 17th ACM Conference on Interaction Design and Children; Jun 19, 2018:396-404; Trondheim, Norway. [CrossRef]
  74. Wen CKF, Junghaenel DU, Newman DB, et al. The effect of training on participant adherence with a reporting time frame for momentary subjective experiences in ecological momentary assessment: cognitive interview study. JMIR Form Res. May 26, 2021;5(5):e28007. [CrossRef] [Medline]
  75. Anastasi JK, Capili B, Norton M, McMahon DJ, Marder K. Recruitment and retention of clinical trial participants: understanding motivations of patients with chronic pain and other populations. Front Pain Res (Lausanne). 2023;4:1330937. [CrossRef] [Medline]
  76. Burton N, Burton M, Fisher C, Peña PG, Rhodes G, Ewing L. Beyond Likert ratings: improving the robustness of developmental research measurement using best-worst scaling. Behav Res Methods. Oct 2021;53(5):2273-2279. [CrossRef] [Medline]
  77. van Laerhoven H, van der Zaag-Loonen HJ, Derkx BHF. A comparison of Likert scale and visual analogue scales as response options in children’s questionnaires. Acta Paediatr. Jun 2004;93(6):830-835. [CrossRef] [Medline]
  78. Rocereto JF, Puzakova M, Anderson RE, Kwak H. The role of response formats on extreme response style: a case of Likert-type vs. semantic differential scales. In: Sarstedt M, Schwaiger M, Taylor CR, editors. Emerald Group Publishing Limited. 2011:53-71. [CrossRef]
  79. Mellor D, Moore KA. The use of Likert scales with children. J Pediatr Psychol. Apr 2014;39(3):369-379. [CrossRef] [Medline]
  80. Chambers CT, Johnston C. Developmental differences in children’s use of rating scales. J Pediatr Psychol. 2002;27(1):27-36. [CrossRef] [Medline]
  81. Manu M, Torppa M, Vasalampi K, Lerkkanen M, Poikkeus A, Niemi P. Reading development from kindergarten to age 18: the role of gender and parental education. Read Res Q. Oct 2023;58(4):505-538. [CrossRef]
  82. Barrouillet P, Camos V. As time goes by: temporal constraints in working memory. Curr Dir Psychol Sci. 2012;21:413-419. [CrossRef]
  83. Ahmed SF, Ellis A, Ward KP, Chaku N, Davis-Kean PE. Working memory development from early childhood to adolescence using two nationally representative samples. Dev Psychol. Oct 2022;58(10):1962-1973. [CrossRef] [Medline]
  84. Stone AA, Schneider S, Smyth JM. Evaluation of pressing issues in ecological momentary assessment. Annu Rev Clin Psychol. May 9, 2023;19(1):107-131. [CrossRef] [Medline]
  85. Prochnow T, Wang WL, Wang S, et al. Understanding longitudinal ecological momentary assessment completion: results from 12 months of burst sampling in the TIME Study. JMIR Mhealth Uhealth. Oct 22, 2025;13:e67117. [CrossRef] [Medline]
  86. Kuczynski L, Burke T, Song-Choi P. Mothers’ perspectives on resistance and defiance in middle childhood: promoting autonomy and social skill. Soc Sci (Basel). 2021;10(12):469. [CrossRef]
  87. Dey AK, Wac K, Ferreira D, Tassini K, Hong JH, Ramos J. Getting closer: an empirical investigation of the proximity of user to their smart phones. Presented at: Proceedings of the 13th International Conference on Ubiquitous Computing; Sep 17, 2011:163-172; Beijing, China. [CrossRef]
  88. Thompson L, Charitos S, Bird J, Marshall P, Brigden A. Exploring the use of smartwatches and activity trackers for health-related purposes for children aged 5 to 11 years: systematic review. J Med Internet Res. Jan 27, 2025;27:e62944. [CrossRef] [Medline]
  89. Druin A. The role of children in the design of new technology. Behav Inf Technol. Jan 2002;21(1):1-25. [CrossRef]
  90. Steinberg L, Graham S, O’Brien L, Woolard J, Cauffman E, Banich M. Age differences in future orientation and delay discounting. Child Dev. 2009;80(1):28-44. [CrossRef] [Medline]
  91. Richter A, Adkins V, Selkie E. Youth perspectives on the recommended age of mobile phone adoption: survey study. JMIR Pediatr Parent. Oct 31, 2022;5(4):e40704. [CrossRef] [Medline]
  92. Moreno MA, Kerr BR, Jenkins M, Lam E, Malik FS. Perspectives on smartphone ownership and use by early adolescents. J Adolesc Health. Apr 2019;64(4):437-442. [CrossRef] [Medline]
  93. Malmqvist J, Hellberg K, Möllås G, Rose R, Shevlin M. Conducting the pilot study: a neglected part of the research process? Methodological findings supporting the importance of piloting in qualitative research studies. Int J Qual Methods. Jan 1, 2019;18:1609406919878341. [CrossRef]
  94. Hassan ZA, Schattner P, Mazza D. Doing a pilot study: why is it essential? Malays Fam Physician. 2006;1(2-3):70-73. [Medline]
  95. Manalel JA, Antonucci TC. Beyond the nuclear family: children’s social networks and depressive symptomology. Child Dev. Jul 2020;91(4):1302-1316. [CrossRef] [Medline]
  96. Dzubur E, Huh J, Maher JP, Intille SS, Dunton GF. Response patterns and intra-dyadic factors related to compliance with ecological momentary assessment among mothers and children. Transl Behav Med. Mar 1, 2018;8(2):233-242. [CrossRef] [Medline]
  97. Campbell R, Goodman-Williams R, Feeney H, Fehler-Cabral G. Assessing triangulation across methodologies, methods, and stakeholder groups: the joys, woes, and politics of interpreting convergent and divergent data. Am J Eval. Mar 2020;41(1):125-144. [CrossRef]
  98. Simons-Morton B, Chen R. Peer and parent influences on school engagement among early adolescents. Youth Soc. Sep 1, 2009;41(1):3-25. [CrossRef] [Medline]
  99. Mascia ML, Langiu G, Bonfiglio NS, Penna MP, Cataudella S. Challenges of preadolescence in the school context: a systematic review of protective/risk factors and intervention programmes. Educ Sci. 2023;13(2):130. [CrossRef]
  100. Silva LM, Cibrian FL, Bonang C, et al. Co-designing situated displays for family co-regulation with ADHD children. Presented at: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems; May 11, 2024:1-19; Honolulu HI, USA. [CrossRef]
  101. Scheer ER, Werner NE, Coller RJ, et al. Designing for caregiving networks: a case study of primary caregivers of children with medical complexity. J Am Med Inform Assoc. Apr 19, 2024;31(5):1151-1162. [CrossRef] [Medline]
  102. Warren JL, Antle AN, Kitson A, Davoodi A. A codesign study exploring needs, strategies, and opportunities for digital health platforms to address pandemic-related impacts on children and families. Int J Child Comput Interact. Sep 2023;37:100596. [CrossRef]
  103. Coiera E, Yin K, Sharan RV, et al. Family informatics. J Am Med Inform Assoc. Jun 14, 2022;29(7):1310-1315. [CrossRef] [Medline]
  104. Campbell M, Katikireddi SV, Sowden A, McKenzie JE, Thomson H. Improving Conduct and Reporting of Narrative Synthesis of Quantitative Data (ICONS-Quant): protocol for a mixed methods study to develop a reporting guideline. BMJ Open. Feb 2018;8(2):e020064. [CrossRef]


ADHD: attention deficit/hyperactivity disorder
ASD: autism spectrum disorder
CREMAS: Checklist for Reporting EMA Studies
EMA: ecological momentary assessment
HCI: Human-Computer Interaction
KS1: key stage 1
KS2: key stage 2
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
ROBINS-I v2: Risk Of Bias In Nonrandomized Studies of Interventions, version 2
RQ: research question


Edited by Stefano Brini; submitted 19.Jun.2025; peer-reviewed by Johanna Hepp, Yiqiang Feng; final revised version received 15.Jan.2026; accepted 16.Jan.2026; published 14.Apr.2026.

Copyright

© Sydney Charitos, Lauren Thompson, Amberly Brigden, Jon Bird. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.Apr.2026.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.