Published in Vol 24, No 4 (2022): April

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/35120.
Challenges in Participant Engagement and Retention Using Mobile Health Apps: Literature Review


Review

Northwestern University Feinberg School of Medicine, Chicago, IL, United States

Corresponding Author:

Saki Amagai, BA

Northwestern University Feinberg School of Medicine

625 North Michigan Avenue, Suite 2700

Chicago, IL, 60613

United States

Phone: 1 312 503 1725

Email: saki.amagai@northwestern.edu


Background: Mobile health (mHealth) apps are revolutionizing the way clinicians and researchers monitor and manage the health of their participants. However, many studies using mHealth apps are hampered by substantial participant dropout or attrition, which may impact the representativeness of the sample and the effectiveness of the study. Therefore, it is imperative for researchers to understand what makes participants stay with mHealth apps or studies using mHealth apps.

Objective: This study aimed to review the current peer-reviewed research literature to identify the notable factors and strategies used in adult participant engagement and retention.

Methods: We conducted a systematic search of PubMed, MEDLINE, and PsycINFO databases for mHealth studies that evaluated and assessed issues or strategies to improve the engagement and retention of adults from 2015 to 2020. We followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Notable themes were identified and narratively compared among different studies. A binomial regression model was generated to examine the factors affecting retention.

Results: Of the 389 identified studies, 62 (15.9%) were included in this review. Overall, most studies were partially successful in maintaining participant engagement. Factors related to particular elements of the app (eg, feedback, appropriate reminders, and in-app support from peers or coaches) and research strategies (eg, compensation and niche samples) that promote retention were identified. Factors that obstructed retention were also identified (eg, lack of support features, technical difficulties, and usefulness of the app). The regression model results showed that a participant is more likely to drop out than to be retained.

Conclusions: Retaining participants is an omnipresent challenge in mHealth studies. The insights from this review can help inform future studies about the factors and strategies to improve participant retention.

J Med Internet Res 2022;24(4):e35120

doi:10.2196/35120


Background

Today, 85% of the US population owns a smartphone, and daily use averages 4.5 hours [1]. With the rise in smartphone ownership and use, smartphones have become one of the most accessible and cost-effective platforms in health care and research. Smartphones are also pervasive across age, race, and socioeconomic status, allowing researchers to inexpensively and easily reach a myriad of population-level samples. Specifically, the adoption of mobile health (mHealth) apps—mobile apps that help monitor and manage the health of participants through smartphones, tablets, and other wireless network devices—has been increasing in the research sphere. The mHealth market is expected to grow at a compound annual growth rate of 17.6% from 2021 to 2028 [2]. In addition, the recent COVID-19 pandemic has led to a rise in the downloads and use of various mHealth apps, highlighting the importance of technology-based remote monitoring and diagnosis for continued advancement in modern health care (eg, [3]).

The greatest advantage of mHealth apps is their convenience. Unlike traditional in-person study settings, mHealth apps can be easily accessed from anywhere at the participant’s convenience. Using apps for remote assessment allows participants to make fewer site visits, substantially reducing the burden of travel and the time needed to participate in laboratory studies. With lowered barriers, it becomes easier for participants to complete repeated testing and share real-time data drawn from their daily life experiences. Some mHealth research apps also allow participants to communicate directly with their providers via the app, which may enhance both the effectiveness of the app in its goals (eg, in disease management) and adherence in research studies. Given the ubiquity of smartphones among US adults, mHealth apps for research stand to better meet participants where they are.

For researchers, the convenience of mHealth apps allows them to reach out to large and diverse participant populations more inexpensively and efficiently than traditional in-person studies. Recently, several large-scale studies were able to recruit thousands of participants within a span of a few months using Apple’s ResearchKit framework (eg, [4-7]). Using these apps, researchers can monitor day-to-day fluctuations of a wide range of real-time data. For example, self-reported emotional outcomes can be assessed together with passive location data to then infer many other real-time variables, such as physical activity, weather, and air quality, that could potentially affect mood throughout the day.

Despite these overwhelming advantages, many mHealth studies experience high participant attrition rooted in the fundamental challenge of keeping participants engaged. For example, consistent with other large-scale mHealth studies, the notable Stanford-led MyHeart Counts study experienced substantial dropout; mean engagement with the app was only 4.1 days [8]. This problem is ubiquitous across app use in general; approximately 71% of app users are estimated to disengage within 90 days of starting a new activity [9].

It is imperative for mHealth studies to minimize participant dropout, as substantial attrition may reduce study power and threaten the representativeness of the sample. A potential benefit of mHealth research should be easier access to well-balanced samples that are representative in terms of race, ethnicity, gender, age, education status, and so on. However, because many studies lose participants systematically, differences between participants who do not complete the studies and those who do may introduce bias into the sample. Differential retention makes it difficult to conclude whether any observed effects were caused by the intervention itself, retention bias, or inherent differences between groups. Participant dropout also precludes longitudinal research.

In an effort to understand the various factors affecting participant retention, recent studies have evaluated recruitment and retention in several remotely conducted mHealth studies. In their cross-study evaluation of 100,000 participants, Pratap et al [10] analyzed individual-level study app use data from 8 studies that accumulated nearly 3.5 million remote health evaluations. Their study identified 4 factors that were significantly associated with increased participant retention: clinician referral, compensation, having the clinical condition of interest, and older age. However, that study focused only on large-scale observational studies built on the Sage Bionetworks or Apple ResearchKit platforms, with especially low barriers to entry and exit, which calls into question the appropriateness of applying these findings to smaller studies with varying levels of participation. To our knowledge, other published systematic reviews and meta-analyses on engagement and retention are narrowly focused on one subfield of mHealth research, such as depression or smoking, or are based on only a few studies, making it difficult to extrapolate their findings to mHealth apps outside the same subfield [11-13].

Retention strategies could be incorporated as app features to prevent participant dropout. For example, gamifying mHealth apps by incorporating badges, competitions, and rankings should make the experience more enjoyable and provide better incentives for participants. The addition of reminders, such as push notifications and SMS text messages, and enabling communication with clinicians are also expected to increase participant retention. However, the extent of their effectiveness in successfully engaging and retaining participants is not yet well defined.

This Study

One fundamental challenge for many mHealth app studies is the rapid and substantial participant dropout. This study aimed to better understand how mHealth studies conducted in the past 5 years have addressed the challenges of participant engagement and retention. We conducted a systematic review of the literature to identify notable factors and strategies used in participant engagement and retention. We hypothesize that participant attrition will be high overall and that there will be shared challenges across different studies that researchers should be cognizant of in future research.


Search Criteria and Eligibility

Our methodology was guided by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement [14]. We searched 3 main databases: PubMed, MEDLINE, and PsycINFO. This review aimed to evaluate the engagement and retention of adults in evaluation research on mHealth apps. Study inclusion criteria were peer-reviewed publications within the last 5 years (January 2015 to October 2020), conducted within the United States, with a minimum of 20 adult participants. Refer to Figure 1 for more details on the search strategy and exact search terms. Although mHealth takes many forms, we were exclusively interested in mobile-based apps rather than SMS text messaging, tablet, or web-based interventions. We included studies using a variety of research methods and designs: qualitative, quantitative, and mixed methods. To conduct a comprehensive analysis, we also included mHealth apps from various research areas, ranging from smoking cessation to cardiovascular health research. Articles written purely as study protocols or design pieces were excluded. As we were primarily interested in mHealth for intervention purposes, we excluded studies that used fitness app data exclusively (eg, Fitbit and digital pedometers), unless they were specifically geared toward a particular health population (eg, breast cancer survivors and patients with other chronic illnesses). We also excluded evaluations of mHealth apps that focused solely on participant education or in which the clinician was the focus of the intervention.

Figure 1. Study selection flowchart.

Data Extraction and Analysis

We initially extracted basic information from each study: title, year, author, target population, operating system, definition of engagement, sample size and type (clinical vs nonclinical), participant age, study duration, main findings, possible implications, and whether participants were compensated. Most of these data were analyzed in a quantitative manner and are described as descriptive statistics in the Results section (ie, app system, sample size, sample type, compensation, and participant age). These data were also used to develop a binomial regression model to determine the factors affecting retention. For the remaining variables, such as the definition of engagement, findings, and implications, we extracted whole sentences or paragraphs that mentioned these items. Following the narrative approach described by Mays et al [15], the first (SA) and second author (SP) analyzed the findings and implications of the initial sample extraction to determine potential themes around retention and engagement. At this point, codes were applied to individual considerations of retention and engagement (or lack thereof) within the articles. After several readings of all extracted findings and implications, the second author initially determined approximately 5 themes related to support and barriers to engagement. These themes were developed from sets of codes, and these sets of codes were considered a theme once they were identified in 2 unique articles. After discussion and agreement with the first author, the second author reread the full-text articles to continue to refine these themes and consolidate the findings. We reached saturation when we could no longer identify new themes during the analysis, a process Saunders et al [16] considered inductive thematic saturation. Descriptions of these themes are presented in the qualitative findings of the Results section. 
The definitions of engagement themes and success rates were also processed in a similar way, and they are described in the quantitative findings of the Results section.


Final Sample of Studies

After locating all studies that met our search criteria (N=389) and downloading the full text, the first and second authors briefly reviewed the abstracts and full text to determine whether the selected studies met our inclusion criteria and confirmed that all studies that should have been excluded were, in fact, excluded. In a random sample of 100 articles, the authors agreed on 91% (91/100) of these decisions. After reaching agreement on the remaining 9% (9/100), the first and second authors divided the remaining articles for a more detailed review. The final sample comprised 62 articles. Refer to Figure 1 for the study selection flowchart and Multimedia Appendix 1 [4-8,17-73] for the characteristics of the studies.

Descriptive Findings

The mean age across the users of mHealth apps among the 62 studies was 44.14 years (range 32-64.9 years), and the majority were of a clinical population (48/62, 77%). The sample size ranged from 20 (our predetermined minimum) to 101,108 participants, with most studies reporting a sample size of <100 (34/62, 55%). Most studies reported compensating participants (34/62, 55%). Most articles described apps that were available for both iPhone and Android users (29/62, 47%). Refer to Table 1 for more information about the descriptive statistics.

Table 1. Descriptive statistics of the 62 studies.

Characteristic: Values

Age of users (years), mean (range): 44.14^a (32-64.9)

Sample size^b, n (%)
  20-49: 17 (27)
  50-99: 17 (27)
  100-499: 12 (19)
  >500: 16 (25)

Platform, n (%)
  Android: 11 (17)
  iPhone: 11 (17)
  Both: 29 (46)
  Not reported: 11 (17)

Clinical vs nonclinical, n (%)
  Clinical: 48 (77)
  Nonclinical: 14 (22)

Compensation, n (%)
  Provided compensation: 34 (54)
  No compensation: 28 (45)

Success code, n (%)
  Not successful: 13 (21)
  Partially successful: 42 (67)
  Successful: 6 (9)
  Not able to calculate: 1 (1)

^a Adequate information to calculate the mean age was not provided for 13 of the 62 studies; these studies were excluded from the mean age calculation.

^b The sample size ranged from 20 to 101,108. The median was 90.5 (IQR 436).

Definitions

Engagement, Retention, Adherence, Compliance, Completion, etc

We identified 2 main themes regarding the use of definitions in the literature sample. Our initial finding was that there is no clear agreement on the definition of engagement, likely because the literature in this space varies widely in its questions, motivations, and perspectives. The second finding was that engagement was often captured by many different terms; in our sample, terms such as retention, adherence, compliance, and completion were sometimes used interchangeably. Despite this lack of clarity, we categorized our final sample into 3 distinct areas of engagement. Almost all studies (59/62, 95%) described or measured some form of engagement around opening or using a specific app. Depending on the interface of the app being studied, this open or use definition encompasses nearly any type of app interaction. In some cases, the number of app opens and the duration of time spent were collected via a backend system, whereas in others, the data that users logged within the app were part of this definition. The 3 articles that did not fit our open or use category relied on self-reported use of the app or measured the completion of intervention activities from the app.

Success

We asked about the extent to which each study was successful in maintaining participant engagement. Regardless of the term used for engagement or retention, we defined success as the percentage of participants from the initial sample with complete data after an intervention. We then defined a Success Code variable with 3 categories based on the mean and SD of these percentages. Percentages below the mean minus 1 SD were considered not successful, and percentages above the mean plus 1 SD were considered successful; everything in between was considered partially successful (42/62, 68%). Only 19% (12/62) were considered not successful, and 3% (2/62) could not be calculated because they relied on self-reported app use.
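The mean ± 1 SD thresholding described above can be sketched as follows; the function name and retention percentages are hypothetical illustrations, not data from the reviewed studies:

```python
from statistics import mean, stdev

def success_codes(retention_pcts):
    """Label each retention percentage relative to the sample mean +/- 1 SD."""
    m, s = mean(retention_pcts), stdev(retention_pcts)
    labels = []
    for p in retention_pcts:
        if p < m - s:
            labels.append("not successful")
        elif p > m + s:
            labels.append("successful")
        else:
            labels.append("partially successful")
    return labels

# Hypothetical retention percentages for 6 studies
print(success_codes([10, 45, 50, 55, 60, 95]))
```

With these inputs, only the first study falls below the mean minus 1 SD (not successful) and only the last rises above the mean plus 1 SD (successful); the middle studies are coded partially successful, mirroring the distribution reported above.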

Simultaneously, we developed a binomial regression model to examine the factors that could affect retention. The outcome of the model was the proportion of the final sample with complete data, and the model was weighted by the sample size of each study. Table 2 shows the odds ratio estimates and CIs from the binomial regression model. For any given participant, it is more likely that they will not be retained than that they will be retained. Furthermore, participants with the clinical condition of interest were 4 times more likely to stay in the study than those without it, and participants who were compensated were 10 times more likely to stay in the study than those who were not.

Table 2. Results from the binomial regression model.

Predictor: Odds ratio (95% CI)
  Intercept: 0.09^a (0.093-0.094)
  Clinical: 4.34^a (4.16-4.52)
  Compensation: 10.32^a (9.48-11.25)

^a P<.001.
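To make these estimates concrete, odds can be converted to probabilities with p = odds / (1 + odds). The sketch below applies the reported point estimates and is illustrative only; the fitted model was weighted by study sample size, which is not reproduced here:

```python
def odds_to_prob(odds):
    """Convert an odds value to a probability."""
    return odds / (1 + odds)

# Point estimates reported in Table 2
intercept_or = 0.09      # baseline odds of being retained
clinical_or = 4.34       # multiplier for having the clinical condition of interest
compensation_or = 10.32  # multiplier for receiving compensation

baseline = odds_to_prob(intercept_or)
clinical = odds_to_prob(intercept_or * clinical_or)
both = odds_to_prob(intercept_or * clinical_or * compensation_or)

print(f"baseline retention:      {baseline:.2f}")  # ~0.08
print(f"clinical condition:      {clinical:.2f}")  # ~0.28
print(f"clinical + compensation: {both:.2f}")      # ~0.80
```

Under these point estimates, baseline retention is roughly 8%, rising to roughly 28% for participants with the clinical condition of interest and roughly 80% when compensation is also provided.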

Qualitative Findings

Our qualitative findings represent the recurring themes around engagement listed in the findings, discussion, limitations, or conclusions sections of the articles. To be considered a stand-alone theme, the concept must have appeared in at least two independent studies in our sample.

Support Themes

We identified 3 major themes (ie, app affordances, successful recruitment, and low barriers to entry) that researchers suggested might have kept participants engaged with their mHealth apps. Even when a sampled article did not specifically use these supports, we noted where researchers recommended more work to address them in future research.

App Affordances

Affordances are “the quality or property of an object that defines its possible uses or makes clear how it can or should be used” [74]. In the technology space, this word is often used to describe the possibilities of specific actions that software or hardware allows. On the level of the app being studied (either compared with business-as-usual, another app, or something else altogether), there were several affordances that made research participants more engaged or more likely to stay engaged across the study span. One such factor was gamification. According to Fernandez et al [57], “Gamification or the use of game design elements (badges, leaderboards, rewards, and avatars) can help maintain user engagement.” Very few studies have actually implemented gamification, but this theme was often mentioned as a possibility for future research to evaluate. Approximately one-quarter of our sample mentioned gamification as a future tool for promoting or sustaining engagement in a given mHealth app.

Although it was sometimes an area of interest in its own right, most articles mentioned some level of app reminders, feedback, or notifications that promoted engagement. Indeed, Bidargaddi et al [22] tested the effect of timing on weekends versus weekdays and found that users were most likely to engage with the app within 24 hours if prompted midday on the weekend. It is clear that reminders and other feedback delivered through notifications were a supportive element, producing more engagement and less attrition.

Approximately half of the articles mentioned some form of social support provided by coaches or peers within the app. Apps that included a coaching element, either from paraprofessionals, other participants, or the research study team, reported that this social support was critical for maintaining engagement throughout. One specific study by Mao et al [64] reported that 90% of participants who downloaded the app completed 4 months of coaching. This finding was likely because of a combination of participant-selected professional coaches who provided accountability and the social nature of the coaching relationship. In addition to social support, apps featuring tailored and personalized content were more likely to support engagement and adherence to the study.

Successful Recruitment

A total of 2 subthemes were drawn from the discussion of recruitment as a support for engagement: recruiting highly motivated niche groups and providing some type of motivator in the form of either an incentive or compensation. mHealth apps that focused on a niche or highly motivated group of users tended to be more successful in engaging participants over the course of the study. For example, mHealth apps created to support smoking cessation for adult smokers were more likely to be successful when participants were already highly motivated to stop smoking (eg, [18,40,72]). Most studies also mentioned some form of compensation or other incentive that could keep study participants engaged for a longer period; more than half of the studies provided some type of compensation. Several articles noted that compensation must be balanced to be effective: too little incentive might leave participants less compelled to continue in the research, whereas too much could backfire by reducing their intrinsic motivation to continue. This balance remains important for researchers to consider moving forward.

Low Barriers to Entry

Related to both app affordances and recruitment strategies, another subtheme that emerged was apps with extremely low barriers to entry. This theme was best described by McConnell et al [7] in the MyHeart Counts Study. Their app was based on Apple’s ResearchKit and enabled nearly 50,000 participants to register and provide consent for research. By launching a free app on smartphones, the authors stated “...the bar for entry to this study was much lower than that for equivalent studies performed using in-person visits. This lowering has the demonstrated advantage that many people consented...” Several other large-scale studies developed using ResearchKit had the advantage of recruiting and enrolling several thousands of participants [75]. This initial engagement was noted as a benefit, but as we learned later, such a low barrier to entry also often meant a low barrier to exit.

Barrier Themes

Researchers also mentioned barriers that might diminish participant engagement. Here, we again noted barriers that were addressed in the discussion or limitations sections of the articles, even when they were not actively described in the measures or results. These themes were (1) the lack of support features; (2) low barriers to exit; (3) technical difficulties in using the app; and (4) somewhat counterintuitively, the usefulness of an app.

Lack of Support Features

Most barriers, either explicitly described or implied, were those that counteracted the support features. Articles routinely mentioned the lack of app affordances and recruitment success. Research involving apps without gamification, notifications of some sort, or support from peers or coaches was more likely to mention these as potential rationales for poor engagement and areas that could be improved in the future. A similar phenomenon was found in terms of recruitment strategies, where lack of compensation or having a niche group for the app were regularly noted as barriers to retention.

Low Barriers to Exit

In the same manner that large smartphone-based studies using the ResearchKit format provided a low barrier to entry, they also provided an equally low barrier to exit. For example, the MyHeart Counts Study further noted that when there is a low barrier to entry, there is a “notable disadvantage that those individuals are by definition less invested in the study and thus less likely to complete all portions” [7]. Almost all the apps available from ResearchKit in our sample represent the highest end of the sample size; however, none of the studies received even a partially successful code in our analysis.

Technical Difficulties

Articles that mentioned occasional glitches or bugs in the use of their apps were also likely to describe technical difficulties as a reason for lack of engagement. One study explicitly mentioned the use of the research support team to troubleshoot any technical difficulties for users [35], but most articles did not mention how they handled technology support requests. It is likely that some of the technical difficulties could have been on the app side, especially when the apps tested were in a pilot or beta form, but it is also possible that the participants had their own technical difficulties. None of the studies we evaluated performed any kind of pretest to measure participant comfort or familiarity with apps in general or apps similar to the one being studied. Generally, participants who were young adults or middle-aged were assumed to be good with technology overall. In addition, despite nearly a third of the articles mentioning usability and feasibility as a main investigation, only 5 studies mentioned participant results from the System Usability Scale [76], a standardized measure of usability frequently included in the human-computer interaction research space. Otherwise, usability and feasibility analyses were conducted on a study-by-study basis.

Usefulness of App

Although it may seem counterintuitive, apps that were extremely useful for participants were also some of the apps anecdotally deemed poor at engagement. For example, participants who successfully quit smoking while using a smoking cessation app generally had poor engagement in the long term. Indeed, if an app works, or achieves what it is meant to achieve, and does not offer some kind of regular check-in or maintenance program, it may be reasonable that participants taper the use of the app. In these cases, reduced engagement is a sign of success rather than failure and could actually be considered the goal of the app.


Principal Findings

This study synthesizes the literature on mHealth apps and their engagement and retention strategies. As mHealth apps continue to grow in popularity and research in this space follows that trend, researchers need to identify what makes participants stay engaged with an app or with studies using the app.

Our review found that most (48/62, 77%) studies were at least partially successful in maintaining participant engagement throughout. Many of these successes were because of the support features of the research or app and the lack of barriers to entry. We determined the categories of strategies that support or detract from engagement. We identified particular elements of the app (eg, feedback, appropriate reminders, and in-app support from peers or coaches) and strategies for research that promote retention (eg, compensation and niche samples) as well as those that do not support retention (eg, lack of support features, technical difficulties, and usefulness of app). Research on the massive population-level ResearchKit apps appeared in both cases, using both successful and unsuccessful engagement and retention techniques. Although low barriers to initial entry could allow thousands of participants to be recruited, the same features also functioned as low barriers to exit. Recruiting a large number of participants is certainly beneficial, but that benefit may be substantially reduced if retention is poor. Future research should consider how to better balance these needs and incorporate factors such as clinical status, referral from providers, and compensation into recruitment plans for population-level apps.

This study used a binomial regression model to assess whether having a clinical condition of interest or receiving compensation affects retention rates. The model revealed that (1) any given participant is more likely to drop out than to be retained; (2) participants who have the clinical condition targeted by the study are 4.34 times more likely to stay in the study than those who do not; and (3) participants receiving compensation are 10.32 times more likely to stay in the study than those who do not. These findings, in line with previous research [10], demonstrate that retaining participants is a true challenge for studies using mHealth apps. Unlike that study [10], we were unable to incorporate clinician referral and age into our model because of inconsistent reporting in the articles. Although we planned to include other factors of interest, such as participant gender, income level, years of education, and smartphone platform type, inconsistent reporting across studies made it challenging to compare these variables accurately. We also recognize that our definition of success relies on a normal distribution rather than some other indicator, which might be more appropriate for research on mobile apps that are still in their infancy. In summary, scientists and researchers must consider different strategies to incentivize and encourage participant retention.

Of course, there is a balance when it comes to successful recruitment strategies, specifically compensation and niche groups. Strong participant engagement or retention may not accurately demonstrate the effectiveness of an app if the participants are overly compensated. Likewise, recruiting a niche group that is highly motivated to use a particular app presents a selection bias and leads to a lack of generalizability of the evaluation findings. Researchers and industry alike would do well to consider this balance when implementing studies using mHealth apps.

Limitations

Although we offer new insights into mHealth apps and participant engagement, this study has some limitations. First, as a systematic review of peer-reviewed published articles, we cannot make claims about all studies on apps. Owing to the file drawer phenomenon, we do not capture studies with null results, even though they might have described other interesting supports and barriers to engagement. Therefore, we encourage readers to refrain from generalizing to research on all mHealth apps. Second, we initially extracted information about the diversity of each sample; however, not all articles were clear about the diversity and possible limitations of their own samples, so we were unable to describe these features in detail. Sample diversity remains a critical area for more scholarship, and future research should examine the demographic diversity reported in published articles on mHealth apps and provide guidance about it.

Implications

We recommend that future mHealth apps consider potential support and barriers to participant engagement. Although the promise of moving health experiences onto the devices that people are currently using is great, many of the same barriers to participant engagement still exist and should be considered before moving research onto smartphone administration exclusively.

Conclusions

Retaining participants is a ubiquitous challenge for studies using mHealth apps. Despite the continued success of mHealth apps in the research sphere, there are many barriers to participant retention and long-term engagement. The insights from this review will help inform future studies about the potential different strategies and factors to consider and improve mHealth app engagement and retention.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Study characteristics of the 62 articles.

DOCX File, 29 KB

  1. Mobile fact sheet. Pew Research Center. 2019.   URL: https://www.pewresearch.org/internet/fact-sheet/mobile/ [accessed 2022-04-07]
  2. Grand View Research. 2021.   URL: https://www.marketresearch.com/Grand-View-Research-v4060/mHealth-Size-Share-Trends-Component-14163316/ [accessed 2022-04-07]
  3. Almalki M, Giannicchi A. Health apps for combating COVID-19: descriptive review and taxonomy. JMIR Mhealth Uhealth 2021 Mar 02;9(3):e24322 [FREE Full text] [CrossRef] [Medline]
  4. Chan YF, Bot BM, Zweig M, Tignor N, Ma W, Suver C, et al. The asthma mobile health study, smartphone data collected using ResearchKit. Sci Data 2018 May 22;5:180096 [FREE Full text] [CrossRef] [Medline]
  5. Chan YF, Wang P, Rogers L, Tignor N, Zweig M, Hershman SG, et al. The Asthma Mobile Health Study, a large-scale clinical observational study using ResearchKit. Nat Biotechnol 2017 Apr;35(4):354-362 [FREE Full text] [CrossRef] [Medline]
  6. Doerr M, Maguire Truong A, Bot BM, Wilbanks J, Suver C, Mangravite LM. Formative evaluation of participant experience with mobile eConsent in the app-mediated Parkinson mPower study: a mixed methods study. JMIR Mhealth Uhealth 2017 Feb 16;5(2):e14 [FREE Full text] [CrossRef] [Medline]
  7. McConnell MV, Shcherbina A, Pavlovic A, Homburger JR, Goldfeder RL, Waggot D, et al. Feasibility of obtaining measures of lifestyle from a smartphone app: the MyHeart counts cardiovascular health study. JAMA Cardiol 2017 Jan 01;2(1):67-76. [CrossRef] [Medline]
  8. Hershman SG, Bot BM, Shcherbina A, Doerr M, Moayedi Y, Pavlovic A, et al. Physical activity, sleep and cardiovascular health data for 50,000 individuals from the MyHeart Counts Study. Sci Data 2019 Apr 11;6(1):24 [FREE Full text] [CrossRef] [Medline]
  9. Team Localytics. Mobile App Retention Rate: What’s a Good Retention Rate? Upland Software. 2021.   URL: https://uplandsoftware.com/localytics/resources/blog/mobile-apps-whats-a-good-retention-rate/ [accessed 2022-04-07]
  10. Pratap A, Neto EC, Snyder P, Stepnowsky C, Elhadad N, Grant D, et al. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. NPJ Digit Med 2020 Feb;3:21 [FREE Full text] [CrossRef] [Medline]
  11. Chu KH, Matheny SJ, Escobar-Viera CG, Wessel C, Notier AE, Davis EM. Smartphone health apps for tobacco cessation: a systematic review. Addict Behav 2021 Jan;112:106616 [FREE Full text] [CrossRef] [Medline]
  12. Druce KL, Dixon WG, McBeth J. Maximizing engagement in mobile health studies: lessons learned and future directions. Rheum Dis Clin North Am 2019 May;45(2):159-172 [FREE Full text] [CrossRef] [Medline]
  13. Torous J, Lipschitz J, Ng M, Firth J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord 2020 Feb 15;263:413-419. [CrossRef] [Medline]
  14. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009 Jul 21;6(7):e1000097 [FREE Full text] [CrossRef] [Medline]
  15. Mays N, Pope C, Popay J. Systematically reviewing qualitative and quantitative evidence to inform management and policy-making in the health field. J Health Serv Res Policy 2005 Jul;10 Suppl 1:6-20. [CrossRef] [Medline]
  16. Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant 2018;52(4):1893-1907 [FREE Full text] [CrossRef] [Medline]
  17. Purkayastha S, Addepally SA, Bucher S. Engagement and usability of a cognitive behavioral therapy mobile app compared with web-based cognitive behavioral therapy among college students: randomized heuristic trial. JMIR Hum Factors 2020 Feb 03;7(1):e14146 [FREE Full text] [CrossRef] [Medline]
  18. Marler JD, Fujii CA, Utley DS, Tesfamariam LJ, Galanko JA, Patrick H. Initial assessment of a comprehensive digital smoking cessation program that incorporates a mobile app, breath sensor, and coaching: cohort study. JMIR Mhealth Uhealth 2019 Feb 04;7(2):e12609 [FREE Full text] [CrossRef] [Medline]
  19. Patrick H, Fujii CA, Glaser DB, Utley DS, Marler JD. A comprehensive digital program for smoking cessation: assessing feasibility in a single-group cohort study. JMIR Mhealth Uhealth 2018 Dec 18;6(12):e11708 [FREE Full text] [CrossRef] [Medline]
  20. Huberty J, Green J, Glissmann C, Larkey L, Puzia M, Lee C. Efficacy of the mindfulness meditation mobile app "Calm" to reduce stress among college students: randomized controlled trial. JMIR Mhealth Uhealth 2019 Jun 25;7(6):e14273 [FREE Full text] [CrossRef] [Medline]
  21. Sittig S, Wang J, Iyengar S, Myneni S, Franklin A. Incorporating behavioral trigger messages into a mobile health app for chronic disease management: randomized clinical feasibility trial in diabetes. JMIR Mhealth Uhealth 2020 Mar 16;8(3):e15927 [FREE Full text] [CrossRef] [Medline]
  22. Bidargaddi N, Almirall D, Murphy S, Nahum-Shani I, Kovalcik M, Pituch T, et al. To prompt or not to prompt? A microrandomized trial of time-varying push notifications to increase proximal engagement with a mobile health app. JMIR Mhealth Uhealth 2018 Nov 29;6(11):e10123 [FREE Full text] [CrossRef] [Medline]
  23. Selter A, Tsangouri C, Ali SB, Freed D, Vatchinsky A, Kizer J, et al. An mHealth app for self-management of chronic lower back pain (Limbr): pilot study. JMIR Mhealth Uhealth 2018 Sep 17;6(9):e179 [FREE Full text] [CrossRef] [Medline]
  24. Dahne J, Lejuez CW, Diaz VA, Player MS, Kustanowitz J, Felton JW, et al. Pilot randomized trial of a self-help behavioral activation mobile app for utilization in primary care. Behav Ther 2019 Jul;50(4):817-827 [FREE Full text] [CrossRef] [Medline]
  25. Shah S, Kemp JM, Kvedar JC, Gracey LE. A feasibility study of the burden of disease of atopic dermatitis using a smartphone research application, myEczema. Int J Womens Dermatol 2020 Dec;6(5):424-428 [FREE Full text] [CrossRef] [Medline]
  26. Shcherbina A, Hershman SG, Lazzeroni L, King AC, O'Sullivan JW, Hekler E, et al. The effect of digital physical activity interventions on daily step count: a randomised controlled crossover substudy of the MyHeart Counts Cardiovascular Health Study. Lancet Digit Health 2019 Nov;1(7):e344-e352 [FREE Full text] [CrossRef] [Medline]
  27. Iacoviello BM, Steinerman JR, Klein DB, Silver TL, Berger AG, Luo SX, et al. Clickotine, a personalized smartphone app for smoking cessation: initial evaluation. JMIR Mhealth Uhealth 2017 Apr 25;5(4):e56 [FREE Full text] [CrossRef] [Medline]
  28. Arean PA, Hallgren KA, Jordan JT, Gazzaley A, Atkins DC, Heagerty PJ, et al. The use and effectiveness of mobile apps for depression: results from a fully remote clinical trial. J Med Internet Res 2016 Dec 20;18(12):e330 [FREE Full text] [CrossRef] [Medline]
  29. Bricker JB, Watson NL, Heffner JL, Sullivan B, Mull K, Kwon D, et al. A smartphone app designed to help cancer patients stop smoking: results from a pilot randomized trial on feasibility, acceptability, and effectiveness. JMIR Form Res 2020 Jan 17;4(1):e16652 [FREE Full text] [CrossRef] [Medline]
  30. Ben-Zeev D, Scherer EA, Gottlieb JD, Rotondi AJ, Brunette MF, Achtyes ED, et al. mHealth for schizophrenia: patient engagement with a mobile phone intervention following hospital discharge. JMIR Ment Health 2016 Jul 27;3(3):e34 [FREE Full text] [CrossRef] [Medline]
  31. Serrano KJ, Coa KI, Yu M, Wolff-Hughes DL, Atienza AA. Characterizing user engagement with health app data: a data mining approach. Transl Behav Med 2017 Jun;7(2):277-285 [FREE Full text] [CrossRef] [Medline]
  32. Zhang R, Nicholas J, Knapp AA, Graham AK, Gray E, Kwasny MJ, et al. Clinically meaningful use of mental health apps and its effects on depression: mixed methods study. J Med Internet Res 2019 Dec 20;21(12):e15644 [FREE Full text] [CrossRef] [Medline]
  33. Moberg C, Niles A, Beermann D. Guided self-help works: randomized waitlist controlled trial of Pacifica, a mobile app integrating cognitive behavioral therapy and mindfulness for stress, anxiety, and depression. J Med Internet Res 2019 Jun 08;21(6):e12556 [FREE Full text] [CrossRef] [Medline]
  34. Griauzde D, Kullgren JT, Liestenfeltz B, Ansari T, Johnson EH, Fedewa A, et al. A mobile phone-based program to promote healthy behaviors among adults with prediabetes who declined participation in free diabetes prevention programs: mixed-methods pilot randomized controlled trial. JMIR Mhealth Uhealth 2019 Jan 09;7(1):e11267 [FREE Full text] [CrossRef] [Medline]
  35. Ross EL, Jamison RN, Nicholls L, Perry BM, Nolen KD. Clinical integration of a smartphone app for patients with chronic pain: retrospective analysis of predictors of benefits and patient engagement between clinic visits. J Med Internet Res 2020 Apr 16;22(4):e16939 [FREE Full text] [CrossRef] [Medline]
  36. Scott AR, Alore EA, Naik AD, Berger DH, Suliburk JW. Mixed-methods analysis of factors impacting use of a postoperative mHealth app. JMIR Mhealth Uhealth 2017 Feb 08;5(2):e11 [FREE Full text] [CrossRef] [Medline]
  37. Gordon JS, Armin J, D Hingle M, Giacobbi Jr P, Cunningham JK, Johnson T, et al. Development and evaluation of the See Me Smoke-Free multi-behavioral mHealth app for women smokers. Transl Behav Med 2017 Jun;7(2):172-184 [FREE Full text] [CrossRef] [Medline]
  38. Berman MA, Guthrie NL, Edwards KL, Appelbaum KJ, Njike VY, Eisenberg DM, et al. Change in glycemic control with use of a digital therapeutic in adults with type 2 diabetes: cohort study. JMIR Diabetes 2018 Feb 14;3(1):e4 [FREE Full text] [CrossRef] [Medline]
  39. Michaelides A, Major J, Pienkosz Jr E, Wood M, Kim Y, Toro-Ramos T. Usefulness of a novel mobile diabetes prevention program delivery platform with human coaching: 65-week observational follow-up. JMIR Mhealth Uhealth 2018 May 03;6(5):e93 [FREE Full text] [CrossRef] [Medline]
  40. Bruno M, Wright M, Baker CL, Emir B, Carda E, Clausen M, et al. Mobile app usage patterns of patients prescribed a smoking cessation medicine: prospective observational study. JMIR Mhealth Uhealth 2018 Apr 17;6(4):e97 [FREE Full text] [CrossRef] [Medline]
  41. Petersen CL, Weeks WB, Norin O, Weinstein JN. Development and implementation of a person-centered, technology-enhanced care model for managing chronic conditions: cohort study. JMIR Mhealth Uhealth 2019 Mar 20;7(3):e11082 [FREE Full text] [CrossRef] [Medline]
  42. Bush J, Barlow DE, Echols J, Wilkerson J, Bellevin K. Impact of a mobile health application on user engagement and pregnancy outcomes among Wyoming Medicaid members. Telemed J E Health 2017 Nov;23(11):891-898 [FREE Full text] [CrossRef] [Medline]
  43. Ryan P, Brown RL, Csuka ME, Papanek P. Efficacy of osteoporosis prevention smartphone app. Nurs Res 2020;69(1):31-41 [FREE Full text] [CrossRef] [Medline]
  44. Zeng EY, Heffner JL, Copeland WK, Mull KE, Bricker JB. Get with the program: adherence to a smartphone app for smoking cessation. Addict Behav 2016 Dec;63:120-124 [FREE Full text] [CrossRef] [Medline]
  45. Sridharan V, Shoda Y, Heffner J, Bricker J. A pilot randomized controlled trial of a web-based growth mindset intervention to enhance the effectiveness of a smartphone app for smoking cessation. JMIR Mhealth Uhealth 2019 Jul 09;7(7):e14602 [FREE Full text] [CrossRef] [Medline]
  46. Hales S, Turner-McGrievy GM, Wilcox S, Davis RE, Fahim A, Huhns M, et al. Trading pounds for points: engagement and weight loss in a mobile health intervention. Digit Health 2017 Apr 24;3:2055207617702252 [FREE Full text] [CrossRef] [Medline]
  47. Mundi MS, Lorentz PA, Grothe K, Kellogg TA, Collazo-Clavell ML. Feasibility of smartphone-based education modules and ecological momentary assessment/intervention in pre-bariatric surgery patients. Obes Surg 2015 Oct;25(10):1875-1881. [CrossRef] [Medline]
  48. Betthauser LM, Stearns-Yoder KA, McGarity S, Smith V, Place S, Brenner LA. Mobile app for mental health monitoring and clinical outreach in veterans: mixed methods feasibility and acceptability study. J Med Internet Res 2020 Aug 11;22(8):e15506 [FREE Full text] [CrossRef] [Medline]
  49. Escobar-Viera C, Zhou Z, Morano JP, Lucero R, Lieb S, McIntosh S, et al. The Florida mobile health adherence project for people living with HIV (FL-mAPP): longitudinal assessment of feasibility, acceptability, and clinical outcomes. JMIR Mhealth Uhealth 2020 Jan 08;8(1):e14557 [FREE Full text] [CrossRef] [Medline]
  50. Patel ML, Hopkins CM, Bennett GG. Early weight loss in a standalone mHealth intervention predicting treatment success. Obes Sci Pract 2019 Jun;5(3):231-237 [FREE Full text] [CrossRef] [Medline]
  51. Forman EM, Goldstein SP, Zhang F, Evans BC, Manasse SM, Butryn ML, et al. OnTrack: development and feasibility of a smartphone app designed to predict and prevent dietary lapses. Transl Behav Med 2019 Mar 01;9(2):236-245 [FREE Full text] [CrossRef] [Medline]
  52. Du H, Venkatakrishnan A, Youngblood GM, Ram A, Pirolli P. A group-based mobile application to increase adherence in exercise and nutrition programs: a factorial design feasibility study. JMIR Mhealth Uhealth 2016 Jan 15;4(1):e4 [FREE Full text] [CrossRef] [Medline]
  53. Krzyzanowski MC, Kizakevich PN, Duren-Winfield V, Eckhoff R, Hampton J, Blackman Carr LT, et al. Rams Have Heart, a mobile app tracking activity and fruit and vegetable consumption to support the cardiovascular health of college students: development and usability study. JMIR Mhealth Uhealth 2020 Aug 05;8(8):e15156 [FREE Full text] [CrossRef] [Medline]
  54. Michaelides A, Raby C, Wood M, Farr K, Toro-Ramos T. Weight loss efficacy of a novel mobile Diabetes Prevention Program delivery platform with human coaching. BMJ Open Diabetes Res Care 2016 Sep 5;4(1):e000264 [FREE Full text] [CrossRef] [Medline]
  55. Kelechi TJ, Prentice MA, Mueller M, Madisetti M, Vertegel A. A lower leg physical activity intervention for individuals with chronic venous leg ulcers: randomized controlled trial. JMIR Mhealth Uhealth 2020 May 15;8(5):e15015 [FREE Full text] [CrossRef] [Medline]
  56. Stein N, Brooks K. A fully automated conversational artificial intelligence for weight loss: longitudinal observational study among overweight and obese adults. JMIR Diabetes 2017 Nov 01;2(2):e28 [FREE Full text] [CrossRef] [Medline]
  57. Fernandez MP, Bron GM, Kache PA, Larson SR, Maus A, Gustafson Jr D, et al. Usability and feasibility of a smartphone app to assess human behavioral factors associated with tick exposure (The Tick App): quantitative and qualitative study. JMIR Mhealth Uhealth 2019 Oct 24;7(10):e14769 [FREE Full text] [CrossRef] [Medline]
  58. Hébert ET, Ra CK, Alexander AC, Helt A, Moisiuc R, Kendzor DE, et al. A mobile just-in-time adaptive intervention for smoking cessation: pilot randomized controlled trial. J Med Internet Res 2020 Mar 09;22(3):e16907 [FREE Full text] [CrossRef] [Medline]
  59. Huberty J, Vranceanu AM, Carney C, Breus M, Gordon M, Puzia ME. Characteristics and usage patterns among 12,151 paid subscribers of the Calm meditation app: cross-sectional survey. JMIR Mhealth Uhealth 2019 Nov 03;7(11):e15648 [FREE Full text] [CrossRef] [Medline]
  60. Chow PI, Showalter SL, Gerber M, Kennedy EM, Brenin D, Mohr DC, et al. Use of mental health apps by patients with breast cancer in the United States: pilot pre-post study. JMIR Cancer 2020 Apr 15;6(1):e16476 [FREE Full text] [CrossRef] [Medline]
  61. Toro-Ramos T, Michaelides A, Anton M, Karim Z, Kang-Oh L, Argyrou C, et al. Mobile delivery of the diabetes prevention program in people with prediabetes: randomized controlled trial. JMIR Mhealth Uhealth 2020 Jul 08;8(7):e17842 [FREE Full text] [CrossRef] [Medline]
  62. Tran C, Dicker A, Leiby B, Gressen E, Williams N, Jim H. Utilizing digital health to collect electronic patient-reported outcomes in prostate cancer: single-arm pilot trial. J Med Internet Res 2020 Mar 25;22(3):e12689 [FREE Full text] [CrossRef] [Medline]
  63. Ifejika NL, Bhadane M, Cai CC, Noser EA, Grotta JC, Savitz SI. Use of a smartphone-based mobile app for weight management in obese minority stroke survivors: pilot randomized controlled trial with open blinded end point. JMIR Mhealth Uhealth 2020 Apr 22;8(4):e17816 [FREE Full text] [CrossRef] [Medline]
  64. Mao AY, Chen C, Magana C, Caballero Barajas K, Olayiwola JN. A mobile phone-based health coaching intervention for weight loss and blood pressure reduction in a national payer population: a retrospective study. JMIR Mhealth Uhealth 2017 Jun 08;5(6):e80 [FREE Full text] [CrossRef] [Medline]
  65. Rudin RS, Fanta CH, Qureshi N, Duffy E, Edelen MO, Dalal AK, et al. A clinically integrated mHealth app and practice model for collecting patient-reported outcomes between visits for asthma patients: implementation and feasibility. Appl Clin Inform 2019 Oct;10(5):783-793 [FREE Full text] [CrossRef] [Medline]
  66. Johnston DC, Mathews WD, Maus A, Gustafson DH. Using smartphones to improve treatment retention among impoverished substance-using Appalachian women: a naturalistic study. Subst Abuse 2019 Jul 8;13:1178221819861377 [FREE Full text] [CrossRef] [Medline]
  67. Pagoto S, Tulu B, Agu E, Waring ME, Oleski JL, Jake-Schoffman DE. Using the habit app for weight loss problem solving: development and feasibility study. JMIR Mhealth Uhealth 2018 Jun 20;6(6):e145 [FREE Full text] [CrossRef] [Medline]
  68. Everett E, Kane B, Yoo A, Dobs A, Mathioudakis N. A novel approach for fully automated, personalized health coaching for adults with prediabetes: pilot clinical trial. J Med Internet Res 2018 Feb 27;20(2):e72 [FREE Full text] [CrossRef] [Medline]
  69. Spring B, Pellegrini C, McFadden HG, Pfammatter AF, Stump TK, Siddique J, et al. Multicomponent mHealth intervention for large, sustained change in multiple diet and activity risk behaviors: the make better choices 2 randomized controlled trial. J Med Internet Res 2018 Jun 19;20(6):e10528 [FREE Full text] [CrossRef] [Medline]
  70. Burke LE, Zheng Y, Ma Q, Mancino J, Loar I, Music E, et al. The SMARTER pilot study: testing feasibility of real-time feedback for dietary self-monitoring. Prev Med Rep 2017 Mar 31;6:278-285 [FREE Full text] [CrossRef] [Medline]
  71. Bricker JB, Copeland W, Mull KE, Zeng EY, Watson NL, Akioka KJ, et al. Drug Alcohol Depend 2017 Jan 01;170:37-42 [FREE Full text] [CrossRef] [Medline]
  72. Hoeppner BB, Hoeppner SS, Carlon HA, Perez GK, Helmuth E, Kahler CW, et al. Leveraging positive psychology to support smoking cessation in nondaily smokers using a smartphone app: feasibility and acceptability study. JMIR Mhealth Uhealth 2019 Jul 03;7(7):e13436 [FREE Full text] [CrossRef] [Medline]
  73. Dillingham R, Ingersoll K, Flickinger TE, Waldman AL, Grabowski M, Laurence C, et al. PositiveLinks: a mobile health intervention for retention in HIV care and clinical outcomes with 12-month follow-up. AIDS Patient Care STDS 2018 Jun;32(6):241-250 [FREE Full text] [CrossRef] [Medline]
  74. Affordance. Merriam-Webster.   URL: https://www.merriam-webster.com/dictionary/affordance [accessed 2022-04-07]
  75. Husain I. ResearchKit app shows high levels of enrollment and engagement on iPhone. iMedical Apps. 2015 Jun 17.   URL: https://www.imedicalapps.com/2015/06/researchkit-engagement-iphone/ [accessed 2022-04-07]
  76. Brooke J. SUS: a 'quick and dirty' usability scale. In: Jordan PW, Thomas B, McClelland IL, Weerdmeester B, editors. Usability evaluation in industry. London, UK: CRC Press; 1996:189-204.


mHealth: mobile health
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses


Edited by A Mavragani; submitted 22.11.21; peer-reviewed by A Mao, R Marshall; comments to author 20.01.22; revised version received 16.03.22; accepted 17.03.22; published 26.04.22

Copyright

©Saki Amagai, Sarah Pila, Aaron J Kaat, Cindy J Nowinski, Richard C Gershon. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 26.04.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.