Published in Vol 23, No 5 (2021): May

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/23499.
Maximizing Participant Engagement, Participation, and Retention in Cohort Studies Using Digital Methods: Rapid Review to Inform the Next Generation of Very Large Birth Cohorts


Review

1Murdoch Children’s Research Institute, Parkville, Australia

2Department of Paediatrics, The University of Melbourne, Parkville, Australia

3Faculty of Medicine, Nursing and Health Sciences, Monash University, Clayton, Australia

Corresponding Author:

Melissa Wake, MD

Murdoch Children’s Research Institute

50 Flemington Road

Parkville, VIC 3052

Australia

Phone: 61 3 9345 5937

Email: melissa.wake@mcri.edu.au


Background: Many current research needs can only be addressed using very large cohorts. In such studies, traditional one-on-one phone, face-to-face, or paper-based engagement may not be feasible. The only realistic mechanism for maintaining engagement and participation at this scale is via digital methods. Given the substantial investment being made into very large birth cohort studies, evidence for optimal methods of participant engagement, participation, and retention over sustained periods without in-person contact from researchers is paramount.

Objective: This study aims to provide an overview of systematic reviews and meta-analyses evaluating alternative strategies for maximizing participant engagement and retention rates in large-scale studies using digital methods.

Methods: We used a rapid review method by searching PubMed and Ovid MEDLINE databases from January 2012 to December 2019. Studies evaluating at least 1 e-engagement, participation, or retention strategy were eligible. Articles were screened for relevance based on preset inclusion and exclusion criteria. The methodological quality of the included reviews was assessed using the AMSTAR-2 (Assessing the Methodological Quality of Systematic Reviews 2) measurement tool, and a narrative synthesis of the data was conducted.

Results: The literature search yielded 19 eligible reviews. Overall, 63% (12/19) of these reviews reported on the effectiveness of e-engagement or participation promotion strategies. These evaluations were generally not conducted within very large observational digital cohorts. Most of the contributing reviews included multipurpose cohort studies (with both observational and interventional elements) conducted in clinical and research settings. Email or SMS text message reminders, SMS text messages or voice notifications, and incentives were the most commonly used design features to engage and retain participants. For parental outcomes, engagement-facilitation interventions influenced uptake and behavior change, including video feedback, goal setting, and intensive human facilitation and support. Participant-stated preferences for content included new knowledge, reminders, solutions, and suggestions about health issues presented in a clear, short, and personalized way. Perinatal and postpartum women valued self-monitoring and personalized feedback. Digital reminders and multiple SMS text messages were specific strategies that were found to increase adherence to medication and clinic attendance, respectively.

Conclusions: This review adds to the growing literature evaluating methods to optimize engagement and participation that may apply to large-scale studies using digital methods; it is promising that most e-engagement and participation promotion strategies appear to be effective. However, these reviews canvassed relatively few strategies, suggesting that few alternative strategies have been experimentally evaluated. The reviews also revealed a dearth of experimental evidence generated within very large observational digital cohort studies, which may reflect the small number of such studies worldwide. Thus, very large studies may need to proactively build in experimental opportunities to test engagement and retention approaches to enhance the success of their own and other large digital contact studies.

J Med Internet Res 2021;23(5):e23499

doi:10.2196/23499



Background

Adult cohort studies (such as the UK Biobank, recruiting 500,000 participants and costing approximately £250 million (US $349 million) to date [1]) have demonstrated the power of mega-cohorts to transform the speed, precision, and capacity for high-value new knowledge for health and health care delivery. Unfortunately, high-profile early life initiatives of similar size and ambition, such as the US National Children’s Study and UK Life Study, were withdrawn despite £0.8 billion (US $1.2 billion) and £38 million (US $59 million) funding, respectively [2,3], in large part because they stumbled at the first hurdle of engagement and uptake. Others, though successful in recruitment, have had substantial attrition over time [4]. Thus, a limited science of engagement and retention poses a critical hurdle to such studies in meeting their vision of advancing human health.

Engagement is defined as “the extent to and manner in which people actively use a resource and has been operationalized as a multistage process involving the point of engagement, a period of sustained engagement, disengagement, and reengagement” [5]. Many factors may influence the engagement process at different time points. In a research study, indicators of poor engagement may include low initial uptake from the first point of contact or reduced interaction over time, in some cases leading to complete disengagement or dropout. Engagement strategies have been developed to enable cohort studies—both observational and interventional—to meet their aims (eg, improving health behaviors and outcomes) by allowing regular, sustainable engagement with large numbers of participants via remote or digital-only studies [6].

e-Engagement incorporates the participation, recruitment, and retention of participants through digital platforms. Factors that may improve participant e-engagement include a platform's technical features, content, frequency of waves, and engagement-facilitation interventions (EFIs) [7]. User characteristics and digital platform features should also be considered. Ritterband et al [8] simplified this in their internet intervention model, hypothesizing that behavior change is influenced by the stepwise progression of environmental factors, support, and website characteristics affecting adherence, which in turn affects behavior change (ie, sustained participant engagement) through various mechanisms of change. Thus, maximizing e-engagement can improve the efficiency of research processes and reduce both administration costs [9] and the validity and power costs of significant and systematic nonuptake and attrition [10] in major studies.

Given the expense of longitudinal cohort studies, effective strategies that engage and retain cohort participants are critical to the integrity of research outcomes [11,12]. The retention of study participants is vital to ensure the power and internal validity of longitudinal research [13-15], whereas participant engagement is important for evaluating the efficacy and generalizability of the program under study. A review of randomized controlled trials [16] suggests that delays in participant recruitment or high dropout rates postrandomization may lead to uncertainty in treatment effectiveness and possibly confound results. For example, in the case of technology-based intervention studies, the technology may change over time if recruitment is prolonged, potentially leading to artifacts or differential effects on treatment outcomes. Proposed retention strategies involving (1) contact and scheduling methods, (2) visit characteristics, (3) study personnel, (4) nonfinancial incentives, (5) financial incentives, (6) reminders, (7) special tracking methods, (8) study description, (9) benefits of study, (10) reimbursement, (11) study identity, and (12) community involvement [17,18] may influence participant retention rates. However, there is limited experimental evidence and data for the in-depth exploration of retention strategies and their implementation.

With the recent growth in experimental research on the optimization of digital methods in longitudinal cohort research studies, there is a need for the literature to be collectively synthesized at a pace reflecting the rapid evolution of technology.

Objective

The objective of this review is to provide an overview of strategies that enhance engagement, participation, and retention rates in large-scale digital contact studies, comparing digital methods with alternative (digital and nondigital) methods. This work has been undertaken as part of the design of the forthcoming Generation Victoria (GenV) study [19]. GenV is a whole-of-state birth and parent cohort being planned in the state of Victoria, Australia. After initial face-to-face recruitment, the majority of contact with study participants will be via digital methods. The findings of this review are expected to inform GenV and other very large birth cohort studies in planning.


Methods

Protocol Registration

The protocol of this rapid review was registered with PROSPERO (International Prospective Register of Systematic Reviews; registration number CRD42020155430). We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement to report our systematic review [20].

Research Questions and Definitions

In the context of the administration of large-scale digital contact cohort studies, we investigated the following research questions:

  1. What technical design features aid engagement, participation, and retention?
  2. What EFIs aid engagement, participation, and retention?
  3. What feedback is valued by parents with young children?
  4. How effective are e-engagement, participation, and retention interventions?

We used the following definitions throughout:

  • Engagement: the proportion of participants who receive, open, and actively engage in a survey or an assessment wave. Incorporates the study being able to contact the participant, and the participant being motivated to start the activity.
  • EFI: the approach used to increase the acceptability of a web-based program.
  • Participation: the proportion of participants who completed a survey or an assessment. Incorporates the participant having time to complete the activities, understanding how to complete the activities, and being willing to provide information about themselves and their family.
  • Retention: the proportion of participants who participate across successive waves. Incorporates the study being able to contact the participant and the participant wanting to continue to participate.
  • Review: reporting on overall findings of an included systematic review.
  • Study: reporting on findings of an individual study reported within a systematic review.

Electronic Searches

Four authors (MW, JN, SAC, and YW) developed the search strategy and refined the searches with an experienced librarian. The search queries used to retrieve our systematic reviews and meta-analyses are presented in Multimedia Appendix 1. Literature searches were performed in PubMed and Ovid MEDLINE databases using both MeSH and free-text words. The results from each search engine were downloaded into an EndNote (Clarivate Analytics) reference library and saved in Covidence (Veritas Health Innovation Ltd). Duplicate studies across the combined groups were removed. We also consulted experts and manually searched for relevant studies.

Selection of Reviews

Two authors (selected from JN, YW, and LC) independently screened each paper's title and abstract for relevance. The full text of the remaining papers was independently screened by two authors (selected from JN, YW, SAC, and LC). Any disagreements were resolved by consensus. To ensure a standardized process, the authors pilot-tested title and abstract screening and full-text screening with a sample of papers. This information helped refine the inclusion and exclusion criteria.

Inclusion Criteria

Study Types

As this was a rapid review, we examined only systematic reviews and meta-analyses (not individual source studies) pertinent to large-scale cohort studies. We included studies with observational and/or interventional elements conducted in clinical and research settings. As digital technology is moving so rapidly, we limited our search to reviews published between January 2012 and December 2019, reasoning that these would include relevant older studies while being most technologically relevant to the needs of cohorts being planned in the 2020s.

Studies were eligible if they evaluated at least one of the following e-engagement, participation, or retention strategies (note that testing these strategies could occur in the context of a trial of therapeutic intervention):

  1. Alternative contact metrics: for example, frequency per month or year; time of the month, week, or day; duration of each contact; and reminder content and frequency.
  2. Reimbursement and gifts or penalties: for example, payments for survey completion, small gifts, or store discount codes.
  3. Feedback features: for example, presented as participant’s responses or performance at the point of completion; progress over time (with or without comparison with the population); a report sharable with care providers; and thank you certificates.
  4. Content features: for example, assessments relevant to the life course approach or development stage of participants and/or their child; the balance of positive and negatively framed questions; ease of understanding; cognitive burden of assessment or survey items; and interest.
  5. Technical and design features: for example, native or web app, can leave and return to assessment, the appearance of the interface, gamified interface, and visual progress tracker.
  6. Study design features: for example, messages personalized with participant names and study staff contactable to answer questions.
  7. Target participant characteristics: for example, demographic, motivation, or burden of disease.
  8. Communication modes: for example, visual, auditory, text, and real person or avatar.

Participants

As the respondents in large birth cohorts are usually parents for the first decade of life, our primary focus was adults aged <50 years. Where evidence existed, we considered parents of children aged between 0 and 5 years.

Comparators

Alternative standard delivery strategies such as mail, fax, and other digital interventions (DIs).

Outcome Measures

Participant engagement in, completion of, and retention in digital study (survey and assessment) waves throughout short and long periods.

Exclusion Criteria

For initial title and abstract screening, we excluded the following publications: those whose primary focus was adults aged ≥50 years or children as the primary respondents; publications not written in English; and publications with full text not accessible through the University of Melbourne library.

Additional criteria for the full-text review screening were not reporting our outcome metrics of engagement, participation, or retention; focusing on low-income countries; and focusing on rare or uncommon conditions such as HIV or cancer.

Data Extraction

A data extraction template was developed and piloted by the authors (JN, YW, SAC, and LC) on 3 reviews. The template captured general review information (author and search dates); characteristics of included studies (number of relevant studies, study designs, health topics, population age, and geographic area); e-engagement, participation, and retention promotion strategies; the methodological quality of the systematic reviews; and a summary of review results and conclusions.

Data were extracted independently by 2 of the authors (selected from JN, YW, SAC, and LC), and any discrepancies were resolved by consensus.

Data Synthesis

A meta-analysis was not performed because of the heterogeneity of intervention types, study designs, study populations, and outcome variables among the included studies. Instead, a narrative summary of the findings across studies was created based on study outcomes (ie, participant engagement, participation, completion, and retention) and strategies promoting these study outcomes.

Methodological Quality of Included Reviews

Two authors (JN and YW) independently assessed the methodological quality of the included reviews using AMSTAR-2 (Assessing the Methodological Quality of Systematic Reviews 2; a measurement tool to assess systematic reviews) [21]. The AMSTAR-2 appraisal tool comprises 16 domains: whether the research questions and inclusion criteria described the PICO (population, intervention, control group, and outcome) components; registration of a review protocol; study design rationale; the literature search strategy; study selection; data extraction; specific details of inclusion and exclusion criteria; adequate detail of the included studies; risk of bias (RoB) assessment of the included studies; funding sources; appropriate statistical methods; evaluation of the impact of individual studies' RoB; discussion of RoB in individual studies; a satisfactory explanation for any heterogeneity; adequate investigation of publication bias; and potential conflicts of interest. Each domain was rated yes, partial yes, or no: yes denoted that the standard was fully met; partial yes, that it was partially met; and no, that the standard was not met or insufficient information was reported. Discrepancies were resolved by consensus.


Results

Search and Screening Results

The search strategy yielded 1080 systematic reviews. An additional 13 reviews were identified through a manual search of publications’ reference lists. Following the removal of duplicates, 1071 publications were screened. The title and abstract screening excluded 907 reviews. A total of 164 articles underwent full-text review, of which 19 publications [22-40] met the inclusion criteria. Figure 1 summarizes the search and screening process presented in the PRISMA format.

Figure 1. Search and screening process.

Characteristics of Included Reviews

By simple summation, the 19 reviews contained 437 studies and more than 556,000 participants (4 studies did not report the number of participants). Some studies, and therefore participants, are included in more than one review. Given that we aimed to identify strategies that may influence the outcomes of interest rather than synthesize an overall estimate (as per meta-analytic techniques), we did not consider this overlap problematic.

The characteristics of the included reviews are summarized in Table 1. Most reviews contained studies of varying designs, spanning quantitative and qualitative analyses.

The 19 systematic reviews included studies conducted in both research (8 reviews) and clinical or health care (11 reviews) settings. Of these reviews, 4 examined young people and adults with mental disorders, anxiety, and depressive symptoms. Others examined engagement, participation, and/or retention in digital contact studies among perinatal and postpartum women, patients with chronic diseases, parents in neonatal intensive care units, human papillomavirus vaccine uptake in young children, vaccinations in adults, pregnant women, and uptake of preschool vaccinations.

Table 1. Systematic review characteristics.

Study | Objective | Studies in review | Participants | Population | Regions | Designs of included studies

Research setting

Alkhaldi, 2016 [23] | Evaluate the effectiveness of tech-based prompts (eg, SMS text messages or calls) for promoting engagement with digital interventions | 14 | 8774 | Adults participating in digital interventions for physical and/or mental health | Europe, United States, and Australia | RCTsa

Baumeister, 2014 [26] | Investigate the impacts of guidance (human support) on the effectiveness of web-based mental health interventions | 14 | Not stated | Adults with clinical or subthreshold mental disorders | Not stated | RCTs

Lattie, 2019 [32] | Evaluate factors associated with the effectiveness of web-based mental health interventions | 89 | 15,857 | Postsecondary (eg, university) students targeted by universal prevention or treatment intervention programs | Europe, United States, Canada, China, Mexico, Australia, and New Zealand | RCTs, nonrandomized studies, and qualitative

Lim, 2019 [33] | Explore postpartum women's and health professionals' perspectives of digital health interventions for lifestyle management in postpartum women | 9 | 484 | Postpartum women | United Kingdom, United States, Bangladesh, and Australia | Qualitative (focus groups or interviews), questionnaire, and observational

Thakkar, 2016 [37] | Investigate the effect of SMS text messaging on medication adherence in chronic disease | 16 | 2742 | Adults with chronic diseases, including HIV infection, cardiovascular disease, asthma, diabetes, and epilepsy | Europe, South America, United States, Asia, and Africa | RCTs

Tromp, 2015 [38] | Investigate motivations of children and their parents to participate in clinical drug research | 42 | Not stated | Children with health conditions (eg, cancer, respiratory diseases, or diabetes) or no health conditions and their parents or guardians | Not stated | Quantitative (questionnaires or registries) and qualitative (interviews, focus groups, or case study)

Valimaki, 2017 [39] | Summarize the content and effectiveness of web-based interventions for depression and anxiety | 27 | 7786 | Young people (aged 10-24 years) with symptoms and/or diagnosis of depression or anxiety | Europe, United States, Canada, China, Australia, and New Zealand | RCTs

Whitaker, 2017 [40] | Describe the extent and effectiveness of using Facebook to recruit participants for health research | 35 | Median 264 per study | People (aged ≥13 years) targeted for recruitment into health studies and interventions, most commonly smoking cessation, human papillomavirus vaccination, and healthier lifestyle interventions | Germany, United States, Canada, Japan, and Australia | Quantitative and qualitative

Clinical or health care setting

Adams, 2015 [22] | Provide evidence on the effectiveness, acceptability, economic costs, and consequences of parental financial incentives and quasi-mandatory schemes for increasing the uptake of preschool vaccinations | 11 | 334,476 | Parents of preschool children living in high-income countries; members of any relevant stakeholder group living in high-income countries | United Kingdom | RCTs and controlled pre-post and time-series analyses

Ames, 2019 [24] | Describe clients' experiences of receiving health information via their mobile phone | 35 | Not stated | Adolescent and adult clients of pregnancy, newborn, and child health, sexual health, and family planning health services receiving communication via their mobile devices | United Kingdom, United States, Canada, Southeast Asia, Australia, South America, and Africa | Qualitative studies

Atkinson, 2019 [25] | Evaluate the effectiveness of digital push interventions in improving vaccine uptake and series completion compared with nondigital interventions | 13 | 24,224 | Adults receiving vaccines themselves, including pregnant women, or parents of adolescents and children eligible for vaccination | United States, Lebanon, Zimbabwe, and Guatemala | RCTs

Belisario, 2015 [27] | Compare the quality of survey responses collected using mobile apps vs other methods | 14 | 2272 | Clinical patients completing health-related, self-administered questionnaires delivered via smartphone and tablet apps | Western Europe, United States, Canada, and Korea | RCTs, crossover, and paired repeated measures studies

Dol, 2017 [28] | Examine the effect of eHealth interventions used in neonatal intensive care units on parents and infants | 8 | Not stated | Parents in neonatal intensive care units | United States, Singapore, the Netherlands, South Korea, and Israel | RCTs, quasi-experimental, pre-post, observational, descriptive, and prospective studies

Dubad, 2017 [29] | Evaluate the efficacy and usability of mobile mood-monitoring apps in young people | 25 | 110,051 | Healthy participants and participants from clinical populations, including youth with a range of mental health, emotional, or behavioral problems | Western Europe, United States, and Australia | RCTs, secondary analyses, nonexperimental, and quasi-experimental studies

Garrido, 2019 [30] | Examine the effectiveness of digital mental health interventions for depression and anxiety in young people | 41 | 16,874 | Young people with depression and anxiety | Northern Europe, United States, South America, Asia, and Australia | RCTs, single cohort (including pre-post design), and case studies

Kang, 2017 [31] | Evaluate the impacts of digital interventions on human papillomavirus vaccination | 5 | 14,107 | Young adults (males and females) who had received their first human papillomavirus vaccine dose | United States | RCTs

Mertens, 2019 [34] | Evaluate the effects of technology-supported lifestyle interventions on gestational weight gain and postnatal weight loss | 9 | 2603 | Perinatal women during pregnancy or within the first postnatal year | United States, Australia, and Iran | RCTs

Parsons, 2017 [35] | Evaluate remotely delivered interventions for children with autism spectrum disorder living outside of urban areas | 9 | 197 | Families having a child with autism spectrum disorder, living outside of urban areas, and having limited access to services | United States, Canada, and Australia | Pre-post, multiple-baseline design, RCTs, and quasi-experimental studies

Robotham, 2016 [36] | Assess the impact of digital notifications to improve attendance in clinics | 21 | 16,076 | Patients attending health care services | Europe, United States, Asia, Africa, and Australia | RCTs

aRCT: randomized controlled trial.

Research Question 1: Technical Design Features That Aid Engagement, Participation, and Retention

We found 4 reviews [22,25,31,32] reporting on several technical design features to aid engagement, participation, and retention, including financial incentives (including gifts), digital pushes (SMS text message alerts), voice notifications, email or SMS text message reminders, and studies' technical feasibility and usability (ie, informed consent). Email or SMS text message reminders and SMS text message notifications were reported in 4 reviews [31,32,36,37] as the most commonly used technical design features to improve participation and completion rates. Two reviews [36,37] reported the use of email or SMS text message reminders and voice notifications to enhance study participation; Robotham et al [36] compared zero, one, and multiple SMS text message notifications and voice notifications.

Research Question 2: EFIs That Aid Engagement, Participation, and Retention

The following EFIs were reported by 5 reviews [24,26,28,30,35] as a means to aid uptake and participant engagement.

SMS Text Messages and Interactive Voice Response Messages

The review by Ames et al [24] examined perceptions and experiences of digital targeted client communication (ie, SMS text messages and interactive voice response messages) via mobile devices in the areas of reproductive, maternal, newborn, and adolescent health. The results suggested that many clients liked receiving messages from health services using mobile phones. Content preferences included new knowledge, reminders, solutions, and suggestions about health issues presented in a clear, short, and personalized way.

Intensive Guidance (Web-Based Interventions With Human Facilitation, Support, or Coaches)

According to the review by Baumeister et al [26], in treating mental health disorders, guidance, as a retention strategy, improved rates of completion (pooled completer rate: odds ratio 2.76, 95% CI 1.68-4.53; n=6) and the number of completed modules (pooled mean number of completed modules: standardized mean difference 0.52, 95% CI 0.37-0.67; n=7). Lim et al [33] reported on the use of lifestyle coaching as an EFI to aid DI uptake. DIs were perceived as positive, user-friendly, and acceptable. Engagement strategies employed in DIs were monitoring and feedback, goal setting, health professional input, and social support.

Videoconferencing and Video-Feedback Interventions

The review by Dol et al [28] reported the following EFIs to aid uptake across included studies: Baby CareLink (an educational and emotional support system for parents with children in the neonatal intensive care unit) [41], Skype, and FaceTime. In this review, no significant differences were found between parents who participated in an e-intervention and those who received standard care in terms of reported anxiety and/or stress, possibly because of the wide variation in study designs and types of eHealth technology across studies.

Garrido et al [30] summarized the use of web-based modules, learning materials, or activities; group chats or courses; online forums; web-based chat facilities with a mental health professional; games; and psychoeducational computer programs as EFIs to aid participant uptake. The pooled effect size on depression compared with a nonintervention control was small (Cohen d=0.33; 95% CI 0.11-0.55), whereas the pooled effect size of studies comparing an intervention group with an active control showed no significant differences (Cohen d=0.14; 95% CI −0.04 to 0.31). In addition, pooled effect sizes were higher when supervision was involved (for studies comparing digital mental health interventions with high human interaction vs no intervention: Cohen d=0.52, 95% CI 0.23-0.80; for studies comparing digital mental health interventions with high human interaction vs active controls with no supervision: Cohen d=0.49; 95% CI −0.11 to 1.01).
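For readers interpreting the pooled statistics quoted above and in Table 2, the key quantities follow standard meta-analytic conventions (these definitions are a general guide, not taken from the contributing reviews themselves):

```latex
% Cohen's d (standardized mean difference) between intervention (1) and control (2):
d = \frac{\bar{x}_1 - \bar{x}_2}{s_p},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

% Odds ratio for completion, with a/b completers/noncompleters in the
% intervention arm and c/d in the control arm:
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc}

% I^2 heterogeneity statistic, from Cochran's Q across k pooled studies:
I^2 = \max\!\left(0,\ \frac{Q - (k - 1)}{Q}\right) \times 100\%
```

By convention, d≈0.2 is considered a small effect and d≈0.5 a medium effect, and an I² above roughly 50% (as in some pooled results reported here) indicates substantial heterogeneity across the contributing studies.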

Web-Based Training Intervention in Behavioral Interventions and Video Training Materials

Parsons et al [35] reported that using video training materials compared with face-to-face training improved parent knowledge, parent intervention fidelity, social behavior, and communication skills of children with autism spectrum disorders.

Research Question 3: Feedback Valued by Participants With Younger Children

Feedback was valued by perinatal and postpartum women and parents in neonatal intensive care units, as reported by 4 reviews [24,28,33,34].

In the review by Mertens et al [34], participants valued visual and personalized feedback, information, and tools for physical activity and dietary intake tailored to their self-monitored data during pregnancy. This feedback reinforced successes and/or offered motivational support and recommendations to achieve their goals. Ames et al [24] reported that the opportunity to offer feedback about needs, preferences, and experiences during pregnancy helped develop or improve the study intervention. In the review by Dol et al [28], parents valued a video-feedback intervention that guided them to reflect on their own successful interactions through recordings of parent-infant interaction and feedback from a video interaction guidance professional. According to Lim et al [33], many of the characteristics of DIs that postpartum women valued included feedback and goal setting. Women valued setting realistic goals through video consultation with their dietitian and tracking daily weight, exercise, and blood glucose levels in a web-based intervention, consistent with known key strategies for behavior change.

Research Question 4: Effectiveness of Engagement, Participation, and Retention Promotion Strategies

Overall, 63% (12/19) of reviews reported the effectiveness of e-engagement or participation promotion strategies. Table 2 summarizes the effectiveness of the various e-engagement, participation, and retention strategies reported in the reviews. Most findings were reported as relative rather than absolute differences, where numerical syntheses were provided (refer to Multimedia Appendix 2 [22,23,25,27,29,31,32,34,36,37,39,40] for further details on the answered research questions and strategies).

Table 2. Interventions, main outcomes, and results of included systematic reviews.
StudyCondition or sampleIntervention vs controlOutcomeStudy statistics: number of studies, effect size (95% CI), heterogeneityResults
Alkhaldi, 2016 [23]Digital interventionsStudy design features: (1) Technology-based engagement strategies (email, phone call, and SMS text messages) to promote engagement with digital interventions vs no strategy (11 studies); postal mail strategy (1 study); fewer technology-based strategies than the intervention group (2 studies).Engagement—engagement with the digital intervention. Dichotomous outcomes: number of log-ins or visits, page views, sessions completed, digital interventions features used. Continuous outcomes: time spent on the digital intervention.(1)
  • Dichotomous outcomes (n=8): RRa=1.27 (1.01 to 1.60) favoring strategy group; I2=71%.b
  • Continuous outcomes (n=4): SMDc 0.19 (−0.11 to 0.48) favoring strategy group; I2=20%.
(1) Engagement in a digital intervention was higher with engagement strategy, compared with no strategy.
Baumeister, 2014 [26]Mental health disordersStudy design features:
(1) Guided interventions (with human support) vs nonguided interventions (self-guided)
(2) Guided interventions with a higher qualified e-coach vs guided interventions with a lesser qualified e-coach
(3) Intensive guidance (at least three email conversations per week) vs less-intensive guidance (one email contact per week).
Completion—number of completed modules, number of people completing the intervention.(1)
  • Number of completed modules (n=7): SMD 0.53 (0.37 to 0.67), higher in a guided group.
  • Number of completers (n=6): ORd 2.76 (1.68 to 4.53). Higher for a guided group; I2=42%.
(2)
  • Number of completed modules (n=4): SMD −0.15 (−0.36 to 0.05). No significant difference between groups; I2=0%.
  • Number of completers (n=4): OR 0.85 (0.54 to 1.35). No significant difference between groups; I2=0%.
(3)
  • Completer rate: OR 1.40 (0.41 to 4.71). Higher in intensive guidance group.
  • Mean completed modules: SMD 0.11 (−0.41 to 0.63). Higher in intensive guidance group.
(1) Completion was higher in guided interventions than self-guided interventions.
(2) No difference in completion by coach qualification level.
(3) Completion was higher with intensive guidance than less-intensive guidance.
Garrido, 2019 [30]Internet or web-based interventionsCommunication modes:
(1) Web-based module learning materials or activities, group chats or courses, online forums, and web-based chat facilities with a mental health professional vs face-to-face counseling
(2) Computer-based programs including games and psychoeducational computer programs vs waitlist control group.
Completion—proportion of commencing participants who completed the intervention.(1) Not stated.(1) and (2) Engagement and adherence rates were low, with participants completing less than half of the intervention components.
Parsons, 2017 [35]Internet or web-based parent training programs for autism spectrum disorderCommunication modes:
(1) Web-based training intervention in behavioral interventions vs written training materials
(2) Video training materials vs completing the same training face-to-face within families’ homes.
Completion—completion and adherence.(1) and (2) Not stated.(1) and (2) Interventions delivered via videos were more effective and accepted by parents than those delivered via written information.
Mertens, 2019 [34]TelehealthCommunication modes:
(1) Mobile apps, SMS text messages, and e-intervention vs standard care including brief information brochures with healthy eating and physical activity advice.
Participation—efficacy, feasibility, acceptability, use of e-intervention.(1) Not stated.(1) Email, app alerts, or SMS text message notifications are well accepted for health interventions.
Whitaker, 2017 [40]Internet or web-based interventionsCommunication modes:
(1) Recruitment via Facebook advertisements vs recruitment via traditional methods or national data.
Engagement—number of participants recruited, conversion rate.(1) Not stated.(1) Facebook can be successfully used to recruit young and hard-to-reach populations. Facebook-recruited samples were generally representative of the target demographic, but some reviews reported an overrepresentation of young White women.
Atkinson, 2019 [25]VaccinationsCommunication modes:
(1) Digital push notifications (eg, SMS text message alerts) vs nondigital interventions (eg, appointment card).
(2) Digital push notifications (eg, SMS text message alerts) vs nondigital pull interventions.
Participation—vaccination uptake (1 dose) or completion (all doses in series).(1)
  • 1 dose (n=9): OR 1.17 (1.10 to 1.23); I2=89%.
  • Completion of all doses (n=4): OR 1.53 (1.13 to 2.08); I2=82%.
(2)
  • 1 dose or completion of all doses (n=10): OR 1.22 (1.15 to 1.30); I2=79%.
(1) and (2) There were increased odds of participants being vaccinated or completing the vaccination series with digital alerts compared with nondigital interventions.
Dubad, 2018 [29]Delivery modeCommunication modes:
(1) Mobile mood-monitoring apps vs paper diary or in person.
Participation—completion rate of diary entries and mood assessments, engagement with the app.(1) Not stated.(1) Participation rates ranged between 30% and 99%.
Belisario, 2015 [27]Delivery modeCommunication modes:
(1) Smartphone app questionnaire vs paper questionnaire.
Completion—data completeness.(1) Not stated.(1) Higher data completeness in app than paper reported by individual studies.
Dol et al, 2017 [28]eHealth interventionCommunication modes:
(1) Videoconferencing (Skype or FaceTime), Baby CareLink (an internet-based application), video-feedback intervention, and internet-based telemedicine program vs standard care.
Completion—parents completed demographic and feasibility surveys postintervention.(1) Not stated.(1) Parents generally found eHealth interventions useful and acceptable for the care of their infant in the neonatal intensive care unit.
Valimaki, 2017 [39]Internet or web-based interventionsCommunication modes:
(1) Web-based interventions for depression and anxiety, computers, tablets, or mobile phones vs waitlist, other intervention method or program.
Completion—attrition, number of participants leaving the study early.(1)
  • Attrition of web-based interventions compared with a control group for short-term effectiveness (n=11): OR 1.31 (1.08 to 1.58).
  • Attrition in midterm (follow-up measurements after 3-5 months) effectiveness (n=3): OR 1.65 (1.09 to 2.49).
(1) Adolescents in the intervention group left the study early more often, both in short-term studies and midterm studies.
Robotham, 2016 [36]Patients attending various health care servicesContact metrics:
(1) One SMS text message notification vs no SMS text message notifications.
(2) 2+SMS text message notifications vs no SMS text message notification.
(3) SMS text message notifications vs voice notifications.
Participation—attendance, cancellation, and “no shows” at a health care service appointment.(1)
  • Attendance (n=13): RR 1.23 (1.10 to 1.38) in favor of the SMS text message group; I2=82%.
  • Cancellation (n=3): RR 1.37 (P=.34) with no difference between groups; I2=1%.
  • “No shows” (n=16): RR 0.75 (0.68 to 0.82); I2=21%.
(2)
  • Attendance (n=13): RR 1.49 (1.17 to 1.88) in favor of the 2+ notifications group; I2=66%; 19% risk difference between the 1 and 2+ notification groups.
  • “No shows” (n=15): RR 0.75 (0.57 to 0.99) with “no shows” lower in the 2+ notifications group; I2=35%; 0.3% risk difference between the 1 and 2+ notification groups.
(3)
  • Attendance (n=3): RR 0.90 (0.82 to 0.98) in favor of voice notifications; I2=1%.
  • “No shows” (n=4): RR 1.12 (0.90 to 1.38); I2=73%.
(1) Patients who received SMS text message notifications were 23% more likely to attend, equally likely to cancel, and less likely to miss a clinic appointment than those who received no notification.
(2) Participants who received 2+ SMS text message notifications were 19% more likely to attend than those who received 1 notification but were equally likely to miss a clinic appointment.
(3) Voice notifications may slightly increase clinic attendance compared with SMS text message notifications, but no difference was found for “no shows”.
Thakkar, 2016 [37]Various chronic conditionsContact metrics:
(1) SMS text message reminder vs no SMS text message reminder.
Participation—medication adherence.(1)
  • Adherence to medication schedule (n=not stated): OR 2.11 (1.52 to 2.93). Weighted mean effect size (n=not stated): Cohen d=0.41 (0.23 to 0.59).
(1) The odds of medication adherence more than doubled with SMS text message reminders, compared with no reminders. Assuming baseline medication adherence was 50%, this translates to an improvement to 67.8%, or an absolute increase of 17.8%.
Lattie, 2019 [32]Participation remindersContact metrics:
(1) Email reminders vs no email reminders.
Completion—number of sessions or assessments or prompts completed.(1)
  • Number of sessions completed (n=not reported): Reminder group mean 2.9, SD 2.5; no reminder group mean 3.6, SD 2.3; t=0.88.
(1) Email reminders were not associated with completing more sessions in a web-based intervention. There were also notable rates of participant attrition and early program discontinuation in many of the studies.
Kang, 2017 [31]RemindersContact metrics:
(1) 7 email or SMS text message reminders vs paper appointment card.
Completion—completion of three-dose human papilloma virus vaccine schedule.(1)
  • Number of people completing the vaccine schedule (n=86): email or SMS text message reminder group 34%, paper card reminder group 32%; P=.76.
(1) Completion rates of a vaccine schedule did not differ by reminder format (email or SMS text message, compared with paper card).
Adams, 2015 [22]IncentivesReimbursement and gifts or penalties:
(1) Cash lottery tickets for attendance vs usual care (no incentives).
(2) Loss of US $40 welfare benefits for not vaccinating vs usual care (no incentives).
(3) Loss of some welfare benefits for not vaccinating vs usual care (no incentives).
Engagement or uptake—uptake of preschool vaccinations; up to date with 0-2–year vaccinations; up to date with child vaccinations.(1)
  • At each follow-up time point, attendance for any reason and for vaccination was higher in incentives group.
(2)
  • No difference in up-to-date rates at 1- or 2-year follow-up.
(3)
  • Welfare deduction group had higher vaccination rates at 1, 2, 3, and 4 years. At age 2 years, the welfare deduction group had higher vaccine series completion.
(1) The incentives group had higher attendance.
(2) No difference between the groups.
(3) The welfare deduction group had higher vaccination rates.

aRR: relative risk.

bI2 statistic: percentage of variation due to heterogeneity between studies.

cSMD: standardized mean difference.

dOR: odds ratio.
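Several figures in Table 2 are derived quantities: Thakkar et al's 17.8% absolute increase comes from converting an odds ratio at an assumed 50% baseline adherence, and footnote b's I² statistic is conventionally computed from Cochran's Q (the Higgins-Thompson formula). A minimal sketch of both conversions, for illustration only (the function names are ours, and the Q value is hypothetical, chosen solely to reproduce a figure like the 71% in the Alkhaldi row):

```python
def or_to_risk(baseline_risk: float, odds_ratio: float) -> float:
    """Convert a baseline risk plus an odds ratio into the implied new risk."""
    odds = baseline_risk / (1 - baseline_risk)   # 50% baseline -> odds of 1.0
    new_odds = odds * odds_ratio                 # apply the reported OR
    return new_odds / (1 + new_odds)             # back to a probability

def i_squared(q: float, num_studies: int) -> float:
    """Higgins-Thompson I2 = 100 * (Q - df) / Q, floored at 0, with df = k - 1."""
    df = num_studies - 1
    return max(0.0, 100.0 * (q - df) / q)

# Thakkar et al: OR 2.11 at a 50% baseline -> 67.8% adherence (+17.8 points)
new_risk = or_to_risk(0.50, 2.11)
print(round(new_risk * 100, 1))                  # 67.8
print(round((new_risk - 0.50) * 100, 1))         # 17.8

# Hypothetical Cochran Q of 24.1 across 8 pooled studies -> I2 of roughly 71%
print(round(i_squared(24.1, 8)))
```

Note that the odds-to-risk conversion is baseline-dependent: the same OR of 2.11 implies a smaller absolute gain when baseline adherence is already high, which is why reporting only relative differences (as most included reviews did) can overstate practical impact.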

RoB for Included Systematic Reviews

The assessment of the 16 AMSTAR-2 items for each included review is shown in Multimedia Appendix 3 [22-40]; 11 systematic reviews were rated as critically low [22,23,25,26,30-36] and 8 [24,27-29,37-40] were rated as low quality. Reviews performed particularly poorly on items 7 and 10, as indicated in Table 3. All systematic reviews except 1 [29] reported potential sources of conflicts of interest, including any funding they received for conducting the review, but no review reported on the sources of funding for the studies included in the review. It is important to note that AMSTAR-2 does not evaluate the quality of the primary studies; its objective is to evaluate the methodological quality of the systematic reviews themselves (eg, literature searching and data pooling). Therefore, a systematic review that included primary studies with a high RoB could still be rated as high quality if the review itself was well conducted.

Table 3 details the overall confidence in the results of each included systematic review. Reviews performed poorly with respect to (1) reporting sources of funding for included studies (0/19, 0%), (2) adequately investigating publication bias (small study bias) and discussing its likely impact on the results of the review (4/19, 21%), and (3) providing a list of excluded studies and justifying their exclusions (6/19, 32%).

Table 3. Overall confidence assessment (Assessing the Methodological Quality of Systematic Reviews 2 tool) of the 19 included systematic reviews.
AMSTAR-2a itemsYes, n (%)Partial yes, n (%)No, n (%)No MAb, n (%)
1. Did the research questions and inclusion criteria for the review include the components of PICOc?13 (68)0 (0)6 (32)0 (0)
2. Did the report of the review contain an explicit statement that the review methods were established before the conduct of the review and did the report justify any significant deviations from the protocol?d7 (37)8 (42)4 (21)0 (0)
3. Did the review authors explain their selection of the study designs for inclusion in the review?12 (63)0 (0)7 (37)0 (0)
4. Did the review authors use a comprehensive literature search strategy?d3 (16)14 (74)2 (11)0 (0)
5. Did the review authors perform study selection in duplicate?13 (68)0 (0)6 (32)0 (0)
6. Did the review authors perform data extraction in duplicate?12 (63)0 (0)7 (37)0 (0)
7. Did the review authors provide a list of excluded studies and justify the exclusions?d6 (32)0 (0)13 (68)0 (0)
8. Did the review authors describe the included studies in adequate detail?13 (68)3 (16)3 (16)0 (0)
9. Did the review authors use a satisfactory technique for assessing the RoBe in individual studies that were included in the review?d9 (47)6 (32)4 (21)0 (0)
10. Did the review authors report on the sources of funding for the studies included in the review?0 (0)0 (0)19 (100)0 (0)
11. If meta-analysis was performed, did the review authors use appropriate methods for statistical combination of results?d7 (37)0 (0)4 (21)8 (42)
12. If meta-analysis was performed, did the review authors assess the potential impact of RoB in individual studies on the results of the meta-analysis or other evidence synthesis?6 (32)0 (0)5 (26)8 (42)
13. Did the review authors account for RoB in individual studies when interpreting/discussing the results of the review?d10 (53)0 (0)9 (47)0 (0)
14. Did the review authors provide a satisfactory explanation for and discussion of any heterogeneity observed in the results of the review?10 (53)0 (0)9 (47)0 (0)
15. If they performed quantitative synthesis, did the review authors carry out an adequate investigation of publication bias (small study bias) and discuss its likely impact on the results of the review?d4 (21)0 (0)8 (42)7 (37)
16. Did the review authors report any potential sources of conflict of interest, including any funding they received for conducting the review?18 (95)0 (0)1 (5)0 (0)

aAMSTAR-2: Assessing the Methodological Quality of Systematic Reviews 2.

bMA: meta-analysis.

cPICO: population, intervention, control group, outcome.

dItems considered as critical domains in the AMSTAR-2.

eRoB: risk of bias.

Out of the 19 reviews, 10 (53%) accounted for RoB in individual studies when interpreting and discussing the results of the review, and 9 (47%) used a satisfactory technique for assessing RoB in the individual studies included in the review. Of the reviews that used a satisfactory RoB assessment technique, most [23,26,27,29,35,36] suggested that e-engagement, participation, and retention promotion strategies were effective.


Principal Findings

This study reviewed the current state of research comparing alternative strategies to maximize participant engagement, participation, and retention in large digital studies (as held in narrative and systematic reviews). We explored EFIs and study design features that aid engagement, participation, and retention. For reviews that met the inclusion criteria, there was substantial heterogeneity across studies in terms of e-strategies.

Most reviews show that e-engagement and participation promotion strategies are effective, which is promising. However, these reviews canvassed relatively few experimentally tested strategies, suggesting that the myriad of alternative strategies that may have been tested have not yet been the subject of reviews. Many studies reported these outcomes as secondary to objectives such as adherence to therapy, rather than as primary goals in their own right. Of the 19 reviews, few contained very large digital studies that directly compared alternative strategies to examine impacts on engagement and retention; this may reflect the small number of such mega-studies worldwide. However, contributing reviews contained multipurpose (observational and interventional) cohort studies conducted in clinical and research settings. Motivation for study engagement, participation, and retention may differ somewhat between observational studies and clinical intervention studies, where the participant can potentially benefit directly from participation, although the successful strategies make sense and at face value seem likely to generalize to both settings. In the absence of more tailored evidence, engagement strategies successful in intervention studies may be the best evidence we have, though they should be applied cautiously.

In the context of technical study design features, evidence suggests that email or SMS text message reminders and voice notifications enhanced participant attendance at health care clinics. Although promising, these results should be interpreted with caution given the short duration of the e-interventions and the reliance on self-reported medication adherence measures. Future studies need to determine which features of text message interventions improve success, as well as appropriate patient populations, sustained effects, and influences on clinical outcomes.

Human facilitation or support was important in influencing the uptake, engagement, and outcomes of digital technologies [26]. As illustrated by the numbers of completed modules and by completer rates, the larger effect sizes found in guided interventions suggested increased intervention adherence.

Reviews examining the effectiveness of e-engagement, participation, and retention interventions in the context of a health care intervention (rather than a cohort study) suggested the following:

  1. Visual and personalized feedback seemed effective, for example, for recordings of parent-child interaction in the neonatal intensive care setting [42,43]. This reinforces successes and/or offers motivational support to achieve an individual’s goals and is consistent with known key strategies for behavior change.
  2. In e-intervention studies, goal setting has mostly been used as a behavior change strategy grounded in theories and frameworks [44-46], such as the Fishbein and Yzer Integrative Model of Behavioral Prediction and the Fogg Behavior Model for Persuasive Design [44], the Theory of Planned Behavior and Fun Theory [47], the Social Cognitive Theory [48,49], and the Coventry, Aberdeen, and London-Refined taxonomy of behavior change techniques [50].
  3. Using digital push interventions for vaccine uptake and series completion supported the idea that digital technologies could be a useful adjunct in improving vaccination rates. Reminder interventions for vaccinations have improved the completion of vaccination schedules.
  4. There was higher uptake when parental financial incentives or rewards were offered in quasi-mandatory schemes to increase the uptake of preschool vaccinations. Universal gifts were more acceptable than targeted parental financial incentives.
  5. Mental health apps were effective or partially effective in producing beneficial changes in psychological outcomes among young adults (ie, college students). This is consistent with past meta-analyses of digital mental health programs for similar populations [51,52].
  6. Intensive guidance (with a human coach) was more efficacious than unguided interventions and a beneficial design feature, particularly for mental health studies. It is considered an adherence-facilitating measure in large digital research studies.
  7. Electronic text notifications improved attendance and reduced nonattendance (no-shows) across health care settings. Sending multiple notifications improved attendance rates.

Overall, no specific e-intervention strategy was identified as superior. However, more interactive methods of delivery, such as videos and regular e-therapist contact for training, (1) improved adherence, (2) increased completion rates, and (3) improved fidelity. Further research is needed to understand the strategies that improve retention in longitudinal studies.

Limitations

We limited our search to systematic reviews published between 2012 and 2019. These reviews should give good reach into source studies from the preceding decade while encompassing the rapid evolution of technology and the explosion of digital methods in this period, and they are thus relevant to the new studies of the 2020s. However, we acknowledge that this is an arbitrary choice. Although all of the literature sourced reported on studies using partial or fully digital contact with participants, much was in the context of interventions and may not be wholly applicable to observational cohort studies. Nonetheless, the strategies found to be successful in interventional settings seem worthwhile to explore in cohort studies. Some of the included systematic reviews received low methodological quality ratings. We also note that although high engagement and retention are the best strategies for obtaining powerful, representative data sets, statistical techniques such as multiple imputation are vital adjuncts.

Conclusions

Although all studies want to maximize the recruitment and retention of study participants, the best methods to do this, particularly in digital settings, are understudied. This review adds to the small but growing literature on methods for optimizing engagement and participation in digital contact cohort studies. Evidence-based recruitment and retention methods are particularly important to the success of the next generation of very large birth cohorts, which are very expensive in total yet have low funding per participant and require high retention over decades despite participants having little or no in-person contact with the study team. Ideally, such studies will not only use existing evidence-based methods but will also build on experimental studies of alternative engagement and retention methods to strengthen the evidence base of the science of science.

Acknowledgments

This study was funded by GenV. GenV is supported by the Paul Ramsay Foundation, the Victorian Government, the Murdoch Children’s Research Institute, and the Royal Children’s Hospital Foundation (2019-1226). Research at the Murdoch Children’s Research Institute was supported by the Victorian Government’s operational infrastructure support program.

MW is supported by Australia’s National Health and Medical Research Council (NHMRC) Principal Research Fellowship 1160906. FKM is supported by NHMRC Career Development Fellowship 1111160.

Authors' Contributions

MW conceived the study. MW, JN, and SAC designed the study. JN, SAC, YW, and LC conducted the review. JN drafted the first version of this manuscript. All authors contributed to writing the manuscript and read and approved the final review.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Search query for systematic reviews and meta-analyses in OVID MEDLINE (search conducted on December 19, 2019).

PDF File (Adobe PDF File), 166 KB

Multimedia Appendix 2

Study-specific research questions and answers provided in reviews to achieve high participant engagement, participation, and retention.

PDF File (Adobe PDF File), 544 KB

Multimedia Appendix 3

Methodological quality of included systematic reviews.

PDF File (Adobe PDF File), 578 KB

  1. Fry A, Littlejohns TJ, Sudlow C, Doherty N, Adamska L, Sprosen T, et al. Comparison of sociodemographic and health-related characteristics of UK biobank participants with those of the general population. Am J Epidemiol 2017 Nov 01;186(9):1026-1034 [FREE Full text] [CrossRef] [Medline]
  2. McCarthy M. US cancels plan to study 100,000 children from "womb" to age 21. Br Med J 2014 Dec 19;349:g7775. [CrossRef] [Medline]
  3. Pearson H. Massive UK baby study cancelled. Nature 2015 Oct 29;526(7575):620-621. [CrossRef] [Medline]
  4. Magnus P, Irgens LM, Haug K, Nystad W, Skjaerven R, Stoltenberg C, MoBa Study Group. Cohort profile: the Norwegian Mother and Child Cohort Study (MoBa). Int J Epidemiol 2006 Oct;35(5):1146-1150. [CrossRef] [Medline]
  5. O'Brien HL, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inf Sci Technol 2008 Apr;59(6):938-955. [CrossRef]
  6. Simblett S, Greer B, Matcham F, Curtis H, Polhemus A, Ferrão J, et al. Barriers to and facilitators of engagement with remote measurement technology for managing health: systematic review and content analysis of findings. J Med Internet Res 2018 Jul 12;20(7):e10480 [FREE Full text] [CrossRef] [Medline]
  7. Batterham PJ, Calear AL, Sunderland M, Kay-Lambkin F, Farrer LM, Gulliver A. A brief intervention to increase uptake and adherence of an online program for depression and anxiety: Protocol for the Enhancing Engagement with Psychosocial Interventions (EEPI) Randomized Controlled Trial. Contemp Clin Trials 2019 Mar;78:107-115. [CrossRef] [Medline]
  8. Ritterband LM, Thorndike FP, Cox DJ, Kovatchev BP, Gonder-Frederick LA. A behavior change model for internet interventions. Ann Behav Med 2009 Aug;38(1):18-27 [FREE Full text] [CrossRef] [Medline]
  9. Teague S, Youssef GJ, Macdonald JA, Sciberras E, Shatte A, Fuller-Tyszkiewicz M, SEED Lifecourse Sciences Theme. Retention strategies in longitudinal cohort studies: a systematic review and meta-analysis. BMC Med Res Methodol 2018 Nov 26;18(1):151 [FREE Full text] [CrossRef] [Medline]
  10. Gustavson K, von Soest T, Karevold E, Røysamb E. Attrition and generalizability in longitudinal studies: findings from a 15-year population-based study and a Monte Carlo simulation study. BMC Public Health 2012 Oct 29;12:918 [FREE Full text] [CrossRef] [Medline]
  11. Szklo M. Population-based cohort studies. Epidemiol Rev 1998;20(1):81-90. [CrossRef] [Medline]
  12. Booker CL, Harding S, Benzeval M. A systematic review of the effect of retention methods in population-based cohort studies. BMC Public Health 2011 Apr 19;11:249 [FREE Full text] [CrossRef] [Medline]
  13. Fewtrell MS, Kennedy K, Singhal A, Martin RM, Ness A, Hadders-Algra M, et al. How much loss to follow-up is acceptable in long-term randomised trials and prospective studies? Arch Dis Child 2008 Jun;93(6):458-461. [CrossRef] [Medline]
  14. Siddiqi A, Sikorskii A, Given CW, Given B. Early participant attrition from clinical trials: role of trial design and logistics. Clin Trials 2008;5(4):328-335 [FREE Full text] [CrossRef] [Medline]
  15. Gupta A, Calfas KJ, Marshall SJ, Robinson TN, Rock CL, Huang JS, et al. Clinical trial management of participant recruitment, enrollment, engagement, and retention in the SMART study using a Marketing and Information Technology (MARKIT) model. Contemp Clin Trials 2015 May;42:185-195 [FREE Full text] [CrossRef] [Medline]
  16. Watson JM, Torgerson DJ. Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol 2006 Jul 19;6:34 [FREE Full text] [CrossRef] [Medline]
  17. Robinson KA, Dennison CR, Wayman DM, Pronovost PJ, Needham DM. Systematic review identifies number of strategies important for retaining study participants. J Clin Epidemiol 2007 Aug;60(8):757-765 [FREE Full text] [CrossRef] [Medline]
  18. Robinson KA, Dinglas VD, Sukrithan V, Yalamanchilli R, Mendez-Tellez PA, Dennison-Himmelfarb C, et al. Updated systematic review identifies substantial number of retention strategies: using more strategies retains more study participants. J Clin Epidemiol 2015 Dec;68(12):1481-1487 [FREE Full text] [CrossRef] [Medline]
  19. Wake M, Hu YJ, Warren H, Danchin M, Fahey M, Orsini F, et al. Integrating trials into a whole-population cohort of children and parents: statement of intent (trials) for the Generation Victoria (GenV) cohort. BMC Med Res Methodol 2020 Sep 24;20(1):238 [FREE Full text] [CrossRef] [Medline]
  20. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, PRISMA-P Group. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015 Jan 01;4:1 [FREE Full text] [CrossRef] [Medline]
  21. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Br Med J 2017 Sep 21;358:j4008 [FREE Full text] [CrossRef] [Medline]
  22. Adams J, Bateman B, Becker F, Cresswell T, Flynn D, McNaughton R, et al. Effectiveness and acceptability of parental financial incentives and quasi-mandatory schemes for increasing uptake of vaccinations in preschool children: systematic review, qualitative study and discrete choice experiment. Health Technol Assess 2015 Nov;19(94):1-176 [FREE Full text] [CrossRef] [Medline]
  23. Alkhaldi G, Hamilton FL, Lau R, Webster R, Michie S, Murray E. The effectiveness of prompts to promote engagement with digital interventions: a systematic review. J Med Internet Res 2016 Jan 08;18(1):e6 [FREE Full text] [CrossRef] [Medline]
  24. Ames HM, Glenton C, Lewin S, Tamrat T, Akama E, Leon N. Clients' perceptions and experiences of targeted digital communication accessible via mobile devices for reproductive, maternal, newborn, child, and adolescent health: a qualitative evidence synthesis. Cochrane Database Syst Rev 2019 Oct 14;10:CD013447 [FREE Full text] [CrossRef] [Medline]
  25. Atkinson KM, Wilson K, Murphy MS, El-Halabi S, Kahale LA, Laflamme LL, et al. Effectiveness of digital technologies at improving vaccine uptake and series completion - a systematic review and meta-analysis of randomized controlled trials. Vaccine 2019 May 21;37(23):3050-3060. [CrossRef] [Medline]
  26. Baumeister H, Reichler L, Munzinger M, Lin J. The impact of guidance on Internet-based mental health interventions — a systematic review. Internet Interv 2014 Oct;1(4):205-215. [CrossRef]
  27. Belisario JS, Jamsek J, Huckvale K, O'Donoghue J, Morrison CP, Car J. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev 2015 Jul 27(7):MR000042. [CrossRef] [Medline]
  28. Dol J, Delahunty-Pike A, Anwar Siani S, Campbell-Yeo M. eHealth interventions for parents in neonatal intensive care units: a systematic review. JBI Database System Rev Implement Rep 2017 Dec;15(12):2981-3005. [CrossRef] [Medline]
  29. Dubad M, Winsper C, Meyer C, Livanou M, Marwaha S. A systematic review of the psychometric properties, usability and clinical impacts of mobile mood-monitoring applications in young people. Psychol Med 2018 Jan;48(2):208-228. [CrossRef] [Medline]
  30. Garrido S, Millington C, Cheers D, Boydell K, Schubert E, Meade T, et al. What works and what doesn't work? A systematic review of digital mental health interventions for depression and anxiety in young people. Front Psychiatry 2019;10:759 [FREE Full text] [CrossRef] [Medline]
  31. Kang HS, De Gagne JC, Son YD, Chae S. Completeness of human papilloma virus vaccination: a systematic review. J Pediatr Nurs 2018;39:7-14. [CrossRef] [Medline]
  32. Lattie EG, Adkins EC, Winquist N, Stiles-Shields C, Wafford QE, Graham AK. Digital mental health interventions for depression, anxiety, and enhancement of psychological well-being among college students: systematic review. J Med Internet Res 2019 Jul 22;21(7):e12869 [FREE Full text] [CrossRef] [Medline]
  33. Lim NL, Shorey S. Effectiveness of technology-based educational interventions on the empowerment related outcomes of children and young adults with cancer: a quantitative systematic review. J Adv Nurs 2019 Oct;75(10):2072-2084. [CrossRef] [Medline]
  34. Mertens L, Braeken MA, Bogaerts A. Effect of lifestyle coaching including telemonitoring and telecoaching on gestational weight gain and postnatal weight loss: a systematic review. Telemed J E Health 2019 Oct;25(10):889-901. [CrossRef] [Medline]
  35. Parsons D, Cordier R, Vaz S, Lee HC. Parent-mediated intervention training delivered remotely for children with autism spectrum disorder living outside of urban areas: systematic review. J Med Internet Res 2017 Aug 14;19(8):e198 [FREE Full text] [CrossRef] [Medline]
  36. Robotham D, Satkunanathan S, Reynolds J, Stahl D, Wykes T. Using digital notifications to improve attendance in clinic: systematic review and meta-analysis. BMJ Open 2016 Oct 24;6(10):e012116 [FREE Full text] [CrossRef] [Medline]
  37. Thakkar J, Kurup R, Laba T, Santo K, Thiagalingam A, Rodgers A, et al. Mobile telephone text messaging for medication adherence in chronic disease: a meta-analysis. JAMA Intern Med 2016 Mar;176(3):340-349. [CrossRef] [Medline]
  38. Tromp K, Zwaan CM, van de Vathorst S. Motivations of children and their parents to participate in drug research: a systematic review. Eur J Pediatr 2016 May;175(5):599-612 [FREE Full text] [CrossRef] [Medline]
  39. Välimäki M, Anttila K, Anttila M, Lahti M. Web-based interventions supporting adolescents and young people with depressive symptoms: systematic review and meta-analysis. JMIR Mhealth Uhealth 2017 Dec 08;5(12):e180 [FREE Full text] [CrossRef] [Medline]
  40. Whitaker C, Stevelink S, Fear N. The use of Facebook in recruiting participants for health research purposes: a systematic review. J Med Internet Res 2017 Aug 28;19(8):e290 [FREE Full text] [CrossRef] [Medline]
  41. Safran C, Pompilio-Weitzner G, Emery KD, Hampers L. Collaborative approaches to e-Health: valuable for users and non-users. Stud Health Technol Inform 2005;116:879-884. [Medline]
  42. Brown MJ, Sinclair M, Liddle D, Hill AJ, Madden E, Stockdale J. A systematic review investigating healthy lifestyle interventions incorporating goal setting strategies for preventing excess gestational weight gain. PLoS One 2012;7(7):e39503 [FREE Full text] [CrossRef] [Medline]
  43. Gardner B, Wardle J, Poston L, Croker H. Changing diet and physical activity to reduce gestational weight gain: a meta-analysis. Obes Rev 2011 Jul;12(7):602-620. [CrossRef] [Medline]
  44. Fernandez ID, Groth SW, Reschke JE, Graham ML, Strawderman M, Olson CM. eMoms: electronically-mediated weight interventions for pregnant and postpartum women. Study design and baseline characteristics. Contemp Clin Trials 2015 Jul;43:63-74 [FREE Full text] [CrossRef] [Medline]
  45. Herring SJ, Cruice JF, Bennett GG, Davey A, Foster GD. Using technology to promote postpartum weight loss in urban, low-income mothers: a pilot randomized controlled trial. J Nutr Educ Behav 2014;46(6):610-615 [FREE Full text] [CrossRef] [Medline]
  46. Gilmore LA, Klempel MC, Martin CK, Myers CA, Burton JH, Sutton EF, et al. Personalized mobile health intervention for health and weight loss in postpartum women receiving women, infants, and children benefit: a randomized controlled pilot study. J Womens Health (Larchmt) 2017 Jul;26(7):719-727 [FREE Full text] [CrossRef] [Medline]
  47. Kernot J, Olds T, Lewis LK, Maher C. Effectiveness of a facebook-delivered physical activity intervention for post-partum women: a randomized controlled trial protocol. BMC Public Health 2013 May 29;13:518 [FREE Full text] [CrossRef] [Medline]
  48. Phelan S, Brannen A, Erickson K, Diamond M, Schaffner A, Muñoz-Christian K, et al. 'Fit Moms/Mamás Activas' internet-based weight control program with group support to reduce postpartum weight retention in low-income women: study protocol for a randomized controlled trial. Trials 2015 Mar 25;16:59 [FREE Full text] [CrossRef] [Medline]
  49. Pollak KI, Alexander SC, Bennett G, Lyna P, Coffman CJ, Bilheimer A, et al. Weight-related SMS texts promoting appropriate pregnancy weight gain: a pilot study. Patient Educ Couns 2014 Nov;97(2):256-260 [FREE Full text] [CrossRef] [Medline]
  50. Willcox J, Wilkinson S, Lappas M, Ball K, Crawford D, McCarthy E, et al. A mobile health intervention promoting healthy gestational weight gain for women entering pregnancy at a high body mass index: the txt4two pilot randomised controlled trial. BJOG 2017 Oct 20;124(11):1718-1728. [CrossRef] [Medline]
  51. Davies EB, Morriss R, Glazebrook C. Computer-delivered and web-based interventions to improve depression, anxiety, and psychological well-being of university students: a systematic review and meta-analysis. J Med Internet Res 2014 May 16;16(5):e130 [FREE Full text] [CrossRef] [Medline]
  52. Harrer M, Adam SH, Baumeister H, Cuijpers P, Karyotaki E, Auerbach RP, et al. Internet interventions for mental health in university students: a systematic review and meta-analysis. Int J Methods Psychiatr Res 2019 Jun;28(2):e1759 [FREE Full text] [CrossRef] [Medline]


Abbreviations

AMSTAR-2: Assessing the Methodological Quality of Systematic Reviews 2
DI: digital intervention
EFI: engagement-facilitation intervention
GenV: Generation Victoria
NHMRC: National Health and Medical Research Council
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RoB: risk of bias


Edited by G Eysenbach; submitted 14.08.20; peer-reviewed by S Teague, D McDonnell; comments to author 23.10.20; revised version received 16.03.21; accepted 07.04.21; published 14.05.21

Copyright

©Joanna Nkyekyer, Susan A Clifford, Fiona K Mensah, Yichao Wang, Lauren Chiu, Melissa Wake. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.05.2021.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.