Original Paper
Abstract
Background: Meditation apps are increasingly popular, yet there is limited understanding of how much users actually engage with them. While meditation apps show promise for supporting mental health, engagement in real-world settings appears to be notably low. The patterns of app use and the factors that influence usage remain relatively unclear.
Objective: This study aims to examine the extent of meditation app use and the factors associated with user engagement.
Methods: We conducted a cross-sectional survey of 536 recent meditation app users across 5 English-speaking countries. Engagement data were collected via self-report and app-verified screenshots. Assessed factors included user characteristics (age, education, income, sex, country, personality, self-efficacy, readiness and expectations for change, self-compassion, and quality of life), mental health (distress, well-being, life satisfaction, anxiety, depression, support, and stress), and app-related elements (therapeutic alliance, appeal, functionality, aesthetics, information, quality, and perceived impact). The 4 outcome variables representing engagement were app-verified minutes, self-reported minutes, app-verified minutes per year (adjusted for app download date), and self-reported minutes per year (adjusted for app download date). Associations between app use and variables of interest were examined using correlations. Factors with significant associations were then included in multivariable regression models to identify those most strongly associated with engagement.
Results: Age (ρ=0.13-0.15, PFDR<.05, where FDR is false discovery rate), expectations for sleep (ρ=0.12-0.33, PFDR<.05), and expectations for thriving (ρ=0.12-0.18, PFDR<.05) were associated with all outcome measures except adjusted objective minutes. Readiness to change was associated with all outcome measures (ρ=0.24-0.33, PFDR<.05). Among app factors, appeal (ρ=0.18-0.23, PFDR<.05) and perceived impact (ρ=0.23-0.32, PFDR<.05) were associated with all outcome measures except adjusted self-report minutes, while perceived quality (ρ=0.28-0.51, PFDR<.05) was associated with all outcome measures. Robust linear regressions showed that greater readiness to change (β=0.005-0.026, P=.006-.02), higher education level (β=0.029-0.540, P<.001), and higher openness (β=0.004-0.010, P=.008-.03) were associated with increased engagement. Additionally, greater expectations for sleep (β=0.004-0.009, P=.02-.04), greater expectation match (β=0.023, P=.03), and higher perceived app quality (β=0.008-0.042, P=.001-.01) were uniquely associated with increased engagement.
Conclusions: Most individuals who download meditation apps engage minimally. Our findings suggest that users who are more educated, open to new experiences, and hold strong beliefs in the effectiveness of meditation apps are more likely to use them regularly. Longitudinal studies are needed to examine patterns of use and strengthen causal inferences.
doi:10.2196/71960
Introduction
Background
Around 1 billion people globally live with a mental health disorder [], creating a demand that exceeds available resources. As the rise of technology has coincided with increasing strain on mental health systems, digital mental health interventions have gained popularity due to their accessibility [,]. Fully automated versions of these interventions may reduce reliance on limited human resources. However, engagement remains a challenge, with fewer than 20% of users continuing beyond 7 days [].
Meditation apps are among the most common digital mental health tools [,]. Meditation encompasses a wide range of techniques across different traditions and religions and typically involves emotional and attentional regulation []. Mindfulness meditation, for example, emphasizes nonjudgmental awareness of the present moment []. Among regular meditators, most have used a meditation app []. While global rates of meditation remain unknown, 70 million people had downloaded Headspace by 2022 []. Thus, it is likely that a very large proportion of the population has tried meditation through an app. Given the high accessibility of meditation apps among those facing barriers to mental and physical health care, it is important to examine the practical limitations of such programs, with engagement being a key shortfall in app-based behavior change [,].
Most current information on meditation app use comes from clinical trials, which are not representative of real-world use. While randomized controlled trials of meditation apps show small- to medium-sized effects [], real-world estimates suggest exceedingly high discontinuation rates []. As behavioral interventions are only effective if used, this presents a major challenge for apps []. For digital offerings to be truly useful, a better understanding of the factors associated with sustained use is essential. This study examines engagement rates and identifies who is most likely to engage with meditation apps.
Mindfulness-Based Programs and Meditation Apps
While a limited number of meditation apps have shown some efficacy for mental health outcomes [], they should be considered separately from the established evidence base for mindfulness-based programs (MBPs) []. MBPs are arguably the most popular form of meditation training in clinical and academic settings, likely due to their strong evidence base [,]. By contrast, most popular meditation apps depart from the guided, intensive structure of MBPs []. Only 4% of popular apps provide evidence of their benefits []. Even where apps show potential efficacy, recent reviews highlight engagement as a major limitation to intervention effectiveness [,-].
The Digital Transition
Digital health interventions generally face engagement issues, which likely reduce their benefits []. In nonpharmacological interventions, adherence is linked to outcomes [], yet it tends to be worse in digital formats than in face-to-face interventions []. For behavior change apps (and apps more broadly), discontinuation occurs in 40%-60% of users []. In naturalistic settings, 21%-88% of users engage with an app at least once, but only 0.5%-28% sustain engagement (eg, completing all assigned modules or continuing use beyond 6 weeks []). Engagement decreases when digital interventions lack interactive human or human-like support [], posing challenges for fully automated meditation apps. Clarifying who engages meaningfully with meditation apps is therefore important, given the link between adherence and outcomes.
Attrition, Adherence, and Engagement
Engagement refers to the extent of intervention use, including the amount, frequency, duration, and depth of use []. Attrition and adherence are related terms that describe levels of (dis)engagement in research studies. Attrition refers to discontinuation or dropout from the intervention program or from research data provision during a study []. For MBPs, attrition is around 19% [], whereas app-based interventions show an average attrition of 42% in studies lasting 10 days to 12 weeks []. Real-world estimates for meditation apps indicate disengagement rates as high as 94% within the first 2 weeks [].
Adherence refers to the extent to which an individual follows a prescribed treatment or intervention []. As no clear guideline exists for the amount of practice required to achieve an effect in mindfulness or meditation [,], adherence can only be considered in relation to recommended practice amounts (see example in []). In meditation training, prescribed engagement time ranges from as little as 35 minutes [] to 3 hours per week in the widely used Mindfulness-Based Stress Reduction program []. By contrast, many apps recommend as little as 5 minutes per day or provide no clear guidance regarding minimum practice length, session duration, or overall time commitment needed to establish a practice []. Given the limited knowledge about dose-response relationships in meditation [,] and the tendency for most people to discontinue practice relatively early [], engagement serves as a useful proxy for understanding who practices, what type of practice they follow, and why.
Why Do People (Dis)Engage With Meditation and Apps?
Overview
Understanding engagement in meditation apps requires consideration of various behavior change and persuasive systems design frameworks. Behavior change frameworks—including habit formation theory, social cognitive theory, the theory of planned behavior, and the transtheoretical model of change—suggest that user characteristics such as expectations, motivation, readiness to change, consistency of use, and self-efficacy influence engagement with behavioral interventions [-]. The persuasive systems design framework highlights app design features that shape engagement and therapeutic alliance [].
Habit formation theory emphasizes reward and associative cues as central to establishing habits, with positive outcomes reinforcing continued engagement []. Context can shape how rewards are perceived, influencing whether a habit is formed []. The theory of planned behavior further posits that perspective and context guide behavior [,]. Broad factors such as sociodemographics, mental health, and personality also influence engagement []. Expectations are shaped by attitudes and norms, with positive expectations and attitudes predicting greater meditation app engagement []. The transtheoretical model outlines stages of change, with later stages—more closely aligned with commitment to action—linked to more sustained behavior change []. Readiness to change reflects an individual’s stage of change and is associated with successful maintenance of behavior change []. The Sussex Meditation Model identifies preintention, preparation, action, and maintenance stages as relevant to establishing a meditation practice [,]. Persuasive systems design, which examines how digital interventions can be structured to influence user behavior, highlights app features that enhance engagement, such as reminders and personalization []. Drawing on these frameworks, factors relevant to engagement in behavior change, meditation, or app use were categorized into user-related factors (sociodemographics, personal/user characteristics, and mental health factors) and app-related factors.
Sociodemographic Factors
Sociodemographic factors associated with disengagement from meditation include lower levels of education []; however, men, people with less education, and those with poorer health are less likely to begin meditating []. Meditators are also more likely to be wealthier than nonmeditators []. In online and app-based meditation, older age, positive expectations, and intrinsic motivation are associated with greater engagement [,].
Personal/User Factors
Personality factors have also been shown to influence engagement with meditation apps. Conscientiousness has been associated with meditation in general []. Openness predicts meditation practice outside formal program training, reflecting the “in-the-wild” context of app use [].
Behavior change factors may also influence engagement with meditation apps. Self-efficacy and readiness to change have been linked to successful habit formation []. A higher intention to practice is associated with greater engagement []. Intrinsic motivation moderates behavior change success across demographic groups [] and is crucial for making initial behavior change choices. Self-compassion and self-efficacy have also been found to be positively associated with engagement in behavior change [,].
Expectations for program efficacy can also influence behavior change. Positive experiences that meet expectations can facilitate ongoing engagement. Conversely, engagement may decline when a program or behavior does not deliver the anticipated positive outcomes []. Experiences of progress enhance engagement in both behavior change apps and meditation apps [,]. Positive expectations also predict higher engagement with digital meditation resources [].
Mental Health Factors
Health characteristics are also important for engagement. People may be motivated by physical or mental health issues, but these same issues can also act as barriers []. This paradox can be explained by the desire to address a problem that simultaneously hinders the ability to engage in practice. Additionally, limited perceived gains may lead to early discontinuation. Barriers to mental or physical health care, which can impact quality of life, may further motivate meditation app use to address unmet health needs [,]. Although meditation use among individuals with mental health problems is common, depression is associated with low adherence to behavior modification recommendations in clinical populations [,]. The very symptoms people seek to address—such as amotivation, distressing thoughts, and irritability—can also complicate their efforts. Meditation apps may be moderately effective for depression, anxiety, and stress [,,], potentially fostering an experience of progress. However, a minimal level of engagement is necessary to achieve efficacy [,]. Consequently, failure to achieve expected outcomes may lead to decreased engagement.
App Factors
The user’s relationship with the app is also relevant. Therapeutic alliance—the collaborative relationship between the user and the app—predicts engagement with mental health apps []. Ease of use, the ability to personalize settings, reminders, progress tracking, and positive perceptions of the app also predict higher engagement with mental health apps, though these factors have not been extensively examined in meditation apps [,,]. Usability (ie, the app’s functionality) was identified as a key factor related to engagement in a systematic review of mental health apps [].
This Study
Previous literature highlights factors that may be associated with meditation app use. In a cross-sectional survey capturing demographics, retrospective reports of app use, mental health factors, and perspectives on apps, we aimed to examine engagement rates and identify factors significantly associated with engagement.
This study focused on several preregistered questions:
- To what extent are user-related factors—including sociodemographic characteristics, spirituality, personality, self-efficacy, self-regulation, motivation, expectations, self-compassion, mental health care status, and psychological distress—associated with mindfulness app engagement?
- To what extent are user-app relationship factors—including therapeutic alliance, agreement on tasks and goals, and perceived app empathy and expertise—associated with mindfulness app engagement?
- To what extent are app-related factors—including appeal, functionality, aesthetics, information quality, quantity, and credibility, customization, accessibility, and usability—associated with mindfulness app engagement?
Additional questions included in the preregistration are not addressed in this paper.
Methods
Deviations From Preregistration
For clarification, we have changed the term “mindfulness apps” to “meditation apps” to capture a broader range of relevant practices. Mindfulness can, but does not necessarily, entail meditation and is variably represented as a capacity, skill, or technique. Meditation, by contrast, encompasses a broad array of spiritual and secular practices that use techniques such as focusing on an object, experience, image, or idea [].
Deviations from the preregistration were as follows: (1) we focused on 4 definitions of minutes as the primary outcome and omitted the second preregistered outcome variable, regular practice hours, for simplicity; regular practice hours were excluded because their calculation combined multiple variables and was therefore subject to estimation error, whereas the 4 variations of the outcome variable were retained to capture the complexity of user behavior; (2) we did not report odds ratios with 95% CIs, as continuous outcomes were used; (3) the final sample size was 536 rather than the target of 1000 because of a smaller-than-anticipated eligible pool, which decreased statistical power while still allowing adequate power to detect small effects; (4) the reduced pool also led us to extend recruitment to Australia, Canada, the United Kingdom, and New Zealand; and (5) motivation for use was not analyzed, as this information was captured in open-text responses and could not be used in this quantitative analysis.
Ethical Considerations
This study was conducted in accordance with ethical guidelines and was approved by the Office of Research Ethics and Integrity at the University of Melbourne (approval number 2025-23969-62994-8).
Participants provided informed consent to participate in the study via the Qualtrics survey (Qualtrics International Inc). Consent was obtained within the survey, which also included a downloadable copy of the plain language statement. The plain language statement is available in Section S1 in . Provision of informed consent included acknowledgment of the right to withdraw at any time without providing an explanation. Participants also consented to secondary analyses. Survey questions were coded so that participants could not proceed without providing consent. All included responses were double-checked to ensure consent had been given.
Participants were compensated Aus $0.30-0.50 (US $0.20-0.33) for completing the screening survey (mean duration 1 minute 49 seconds) and Aus $6-8 (US $3.96-5.29) for completing the follow-up survey (mean duration 22 minutes 57 seconds), averaging Aus $20.59 (US $13.60) per hour. Survey compensation varied slightly based on median completion time; compensation was occasionally increased to better approximate the proposed hourly rate if the median completion time indicated the study took longer than expected.
Privacy and Confidentiality
Where possible, identifying information was removed from the dataset. Any copies of datasets containing identifying information were stored securely in accordance with relevant privacy guidelines and transmitted only over encrypted connections (HTTPS, ie, HTTP over Transport Layer Security).
Study Design
Overview
This was a cross-sectional analysis of survey data collected from participants at a single time point.
Procedure
Participants were recruited via Prolific (Prolific Academic Ltd) to complete a survey hosted on Qualtrics. The survey was accessible to potential participants in the United States, the United Kingdom, Canada, Australia, and New Zealand between August 1 and October 6, 2023. Participants were invited to complete a prescreening survey, and eligible individuals were sent the full survey within 1-2 days. Surveys were completed online using a laptop or mobile device. Participants were asked to upload a screenshot of their app use statistics, which provided information such as minutes, days, sessions, streaks, and the original date of download, depending on the app.
App Selection
We collected engagement information for popular meditation apps listed on the iOS (Apple Inc) and Android (Google LLC/Alphabet Inc) app stores (see Section S2 in ). Participants using apps in which meditation—including mindfulness meditation—was the primary intended function were included, based on app descriptions, marketing, and in-app features.
Participants using any app could complete the prescreening survey. Two researchers (JA and JD) assessed whether the app (1) prominently promoted itself as a mindfulness meditation tool and (2) provided techniques to practice mindfulness or another form of meditation. Meditation or mindfulness could not be a secondary component. We did not evaluate app content in relation to any specific definition. We adopted this approach because meditation apps do not offer a single type of meditation, nor do mindfulness apps (eg, Headspace) necessarily adhere to the MBP definition of meditation. Apps were excluded if they focused exclusively on fitness/exercise, employee well-being, cognitive behavioral therapy, or other mental health interventions. All included apps were fully automated (ie, without human support).
Participants
Inclusion criteria required participants to have used an eligible meditation app within the past 180 days; be fluent in English; and reside in Australia, Canada, New Zealand, the United Kingdom, or the United States. Exclusion criteria included failure to provide evidence of app use or use of an app in which meditation was not the primary focus.
A total of 6137 prescreening surveys were completed. We excluded 5307 responses: 4343 (70.77%) were unable to demonstrate access to an app, 316 (5.15%) had not used the app in the past 180 days, 319 (5.20%) had downloaded an ineligible app, and 329 (5.36%) self-reported zero use. Of the remaining surveys, 800 (13.04%) met the inclusion criteria.
Of the 800 participants invited to the survey, 677 (84.6%) completed it. Among the 675 survey responses received, 18 (2.3%) were identified as likely bots or fraudulent responses based on fraudulent screenshots or failed reCAPTCHA (reverse Completely Automated Public Turing Test to Tell Computers and Humans Apart), and 13 (1.6%) exhibited suspiciously high average session lengths (>3× the IQR, 70.35 minutes). An additional 22 participants (2.8%) timed out before completing the survey, 25 (3.1%) failed attention checks, 59 (7.4%) failed screenshot checks, 86 (10.75%) responded twice, and 21 (2.6%) declined to complete the survey. These categories were not mutually exclusive. The resulting sample consisted of 563 (70.4%) participants who consented and completed the full survey. Finally, 27 (4.8%) multivariate outliers were excluded according to the preregistration, yielding a final sample of 536 participants.
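The session-length screen above excluded responses exceeding 3 times the IQR (70.35 minutes in this sample). As an illustrative sketch only (the exact computation and software used by the authors are not described; the multiplier, the reference distribution, and the toy data below are assumptions), the rule can be implemented as:

```python
import numpy as np

def flag_long_sessions(avg_session_minutes, multiplier=3.0):
    """Flag suspiciously long average session lengths.

    Assumed reading of the paper's rule: values greater than
    multiplier * IQR of the sample are flagged for exclusion.
    """
    arr = np.asarray(avg_session_minutes, dtype=float)
    q1, q3 = np.percentile(arr, [25, 75])
    threshold = multiplier * (q3 - q1)  # eg, 70.35 minutes in the study sample
    return arr > threshold, threshold

# Toy data: plausible 5-20 minute averages plus one implausible 400-minute average
flags, cutoff = flag_long_sessions([5, 8, 10, 12, 15, 20, 400])
```

Only the final value is flagged; the cutoff scales with the spread of the sample rather than using a fixed minute threshold.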
Measures
Engagement
To verify the reliability of self-reported information, we collected both subjective self-reports and objective, app-verified data (screenshots provided by participants), which included minutes, days, streaks (consecutive days of use), number of sessions, average session length, and duration of app ownership (). These metrics were collected using recent app duration, defined as the number of days between first and last app use. App-verified duration was recorded if a participant validated the download period via screenshot.
The primary engagement variable was minutes of app use. Four variations were analyzed: (1) objective unadjusted minutes, representing total app-verified minutes; (2) self-reported or “subjective” unadjusted minutes, representing total unverified self-reported minutes; (3) objective adjusted minutes, calculated as app-verified minutes adjusted for app-verified duration of use, expressed as minutes per year; and (4) self-reported or “subjective” adjusted minutes, calculated as self-reported minutes adjusted for self-reported duration of use, expressed as minutes per year. Adjusted variables accounted for the duration of access to the app (from the download date to the last use). As only a limited number of apps reported objective start dates, and app-verified duration correlated highly with self-reported duration, the adjusted variables were calculated using the time between the first and last reported use.
| Statistics | n | Mean (SD) | 5th percentile | 25th percentile | 50th percentile (median) | 75th percentile | 95th percentile |
|---|---|---|---|---|---|---|---|
| **Subjective** | | | | | | | |
| Total minutes | 483 | 3562.78 (8616.39) | 4.10 | 76.00 | 420.00 | 2474.00 | 21735.10 |
| Total sessions | 452 | 108.43 (197.00) | 0.58 | 9.40 | 37.33 | 130.30 | 407.25 |
| Duration (days) | 477 | 894.16 (854.20) | 16.80 | 158.00 | 621.00 | 1342.00 | 2689.80 |
| Minutes per session | 454 | 16.60 (29.74) | 3.00 | 5.81 | 10.51 | 18.09 | 35.41 |
| Estimated minutes per month^a | 477 | 148.77 (304.56) | 0.48 | 8.41 | 40.25 | 137.20 | 789.44 |
| Estimated sessions per month^a | 437 | 9.35 (16.61) | 0.13 | 0.93 | 3.29 | 11.05 | 34.79 |
| **Objective** | | | | | | | |
| Total minutes | 483 | 3358.69 (8607.36) | 12.00 | 96.50 | 465.00 | 2410.00 | 21735.10 |
| Total sessions | 151 | 61.89 (194.90) | 0.61 | 3.36 | 12.22 | 39.95 | 167.23 |
| Minutes per session | 151 | 49.21 (89.20) | 1.98 | 7.26 | 25.92 | 55.90 | 154.08 |
| Estimated minutes per month^a | 151 | 73.58 (223.73) | 0.61 | 2.39 | 10.61 | 35.09 | 313.31 |
| Estimated sessions per month^a | 148 | 9.44 (47.63) | 0.02 | 0.10 | 0.66 | 2.32 | 17.29 |

^a Estimated minutes per month and sessions per month were calculated as total engagement (minutes or sessions) divided by duration of app use in years, divided by 12.
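The adjusted engagement variables and the per-month footnote reduce to simple rate conversions. A minimal sketch, assuming a 365.25-day year and illustrative input values (the paper does not state the year length or function names used):

```python
def adjusted_minutes_per_year(total_minutes, duration_days):
    """Engagement adjusted for duration of access, expressed as minutes per year.
    Assumes a 365.25-day year; the paper does not specify the convention."""
    return total_minutes / (duration_days / 365.25)

def estimated_minutes_per_month(total_minutes, duration_days):
    """Footnote formula: total minutes / duration in years / 12."""
    years = duration_days / 365.25
    return total_minutes / years / 12

# Illustrative user: 420 total minutes over 621 days of app ownership
per_year = adjusted_minutes_per_year(420, 621)     # ~247 minutes per year
per_month = estimated_minutes_per_month(420, 621)  # ~20.6 minutes per month
```

The same division applies to sessions per month, substituting session counts for minutes.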
Self-Reported Measures
Sociodemographic Information and Meditation History
Sociodemographic information included household income, education level, religion, app name, and approximate start and stop dates of use. Most data were self-reported via the survey. Prolific provided additional information, including age, sex, language, student and employment status, country of birth, and current residence. Regular practice information included minutes per session, sessions per day, and days per week. Meditation history was assessed by asking participants to report their previous meditation experience in hours, ranging from 0-100 hours to 1000+ hours. See for regular practice information, and Sections S3-S5 in for sociodemographic statistics, meditation app frequencies, and a detailed survey flow.
Attention Checks
Three attention checks were included in the survey to assess participant engagement. Participants failing 2 or more attention checks were excluded. The attention checks were designed to mimic the scale items within which they appeared; for example, “In general, select dissatisfied to show that you are paying attention.”
The EuroQoL Health and Wellbeing Assessment—Short Form
The 9-item EuroQoL Health and Wellbeing (EQ-HWB-9) is a newly developed quality-of-life measure created by Brazier and colleagues []. It assesses quality of life with a focus on health and well-being. The scale consists of 9 items, each rated from 1 (no difficulty, none of the time, and no physical pain) to 5 (unable, most or all of the time, and very severe physical pain). In this study, the 9 items of the EQ-HWB-9 demonstrated very good internal consistency (Cronbach α=0.873), and McDonald hierarchical omega was relatively high (ω=0.732). The EQ-HWB-9 was used with permission from the EuroQol Group.
The Kessler Psychological Distress Scale
The Kessler Psychological Distress Scale (K10) assesses psychological distress over the past 30 days []. This 10-item questionnaire, measuring anxiety and depressive symptoms, uses a 5-point scale ranging from 1 (none of the time) to 5 (all of the time). In this study, the scale demonstrated excellent internal consistency (Cronbach α=0.930; McDonald ω=0.799).
The Warwick-Edinburgh Mental Wellbeing Scale
The Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS) was used to assess positive aspects of mental health []. This 7-item scale uses a 5-point response format, ranging from 1 (none of the time) to 5 (all of the time). In this study, the scale demonstrated high internal consistency (Cronbach α=0.879; McDonald ω=0.790).
The Satisfaction with Life Survey (Single Item)
The Satisfaction with Life Survey Single Item is an abbreviated version of the established Satisfaction with Life Survey (SWLS) []. The scale demonstrates reasonable criterion validity with the full SWLS (zero-order r=0.62-0.64). This single-item measure asks participants to rate their life satisfaction on a scale from 1 (extremely dissatisfied) to 7 (extremely satisfied).
7-Item Generalized Anxiety Disorder Scale
The 7-Item Generalized Anxiety Disorder Scale is used to screen for general anxiety symptoms []. Each item is rated from 0 (not at all) to 3 (nearly every day). In this study, the scale demonstrated high internal consistency (Cronbach α=0.899; McDonald ω=0.863).
Depression (8-Item Patient Health Questionnaire)
The 8-item Patient Health Questionnaire is used to assess depressive symptoms []. Each item is rated from 0 (not at all) to 3 (nearly every day). In this study, the scale demonstrated high internal consistency (Cronbach α=0.881; McDonald ω=0.781).
Self-Compassion Scale
The Self-Compassion Scale consists of 12 items assessing participants’ ability to be compassionate toward themselves []. Each item is rated from 1 (almost never) to 5 (almost always). In this study, the scale demonstrated high internal consistency (Cronbach α=0.881; McDonald ω=0.656).
Self-Efficacy (6-Item Generalized Self-Efficacy)
The 6-item Generalized Self-Efficacy assesses self-efficacy, or an individual’s perceived ability to achieve goals []. Each item is rated from 1 (not at all true) to 3 (exactly true). The scale demonstrated high internal consistency (Cronbach α=0.821; McDonald ω=0.788).
Readiness to Change 1-Item
The Readiness to Change 1-item assessment is a 10-point scale that measures an individual’s preparedness to enact a behavioral change []. The scale ranges from 0 (not prepared to change) to 10 (already changing). It has been validated to reflect actual readiness in clinical contexts [,,].
6-Item Digital Working Alliance Inventory
The 6-item Digital Working Alliance Inventory (DWAI-6) [] is a rating scale that assesses the therapeutic alliance between an individual and their health care provider, adapted for smartphone interventions (ie, referring to “the app” rather than “the therapist”). Items are rated on a 7-point scale from 1 (strongly disagree) to 7 (strongly agree), with subscales evaluating goal alliance (agreement on goals), task alliance (agreement on tasks), and bond (connection between app and user). The overall scale demonstrated good internal consistency (Cronbach α=0.850; McDonald ω=0.830). The Goal subscale showed high consistency (Cronbach α=0.766), the Bond subscale demonstrated acceptable consistency (Cronbach α=0.676), and the Task subscale showed poor consistency (Cronbach α=0.402).
Common Factors Domains (Modum Process Outcome Questionnaire)
We used a subset of items from the Common Therapeutic Relationship Factors Questionnaire (Modum Process Outcome Questionnaire) to assess the therapeutic relationship beyond the DWAI-6 []. The original questionnaire, which focuses on the clinician-patient relationship, was adapted to refer to the app-user relationship (eg, “I am able to be open and honest when interacting with the app”). Three items from the 12-item scale were included, each rated from 1 (strongly disagree) to 7 (strongly agree), with a “not applicable” option. The items demonstrated low internal consistency (Cronbach α=0.612; McDonald ω<0.001), likely due to being an unintended subset. Consequently, results will be reported for each item individually rather than as a total score.
The Big Five Inventory Short Form 2 (BFI-S-2)
The Big Five Inventory Short Form 2 (BFI-S-2) is a 30-item questionnaire assessing 5 personality domains: Extraversion, Agreeableness, Conscientiousness, Negative Emotionality/Neuroticism, and Open-Mindedness []. Each subscale consists of 6 items. Internal consistency, assessed using Cronbach α and McDonald ω, ranged from acceptable (Extraversion: α=0.766, ω=0.636; Agreeableness: α=0.744, ω=0.549; Open-Mindedness: α=0.785, ω=0.664) to high (Conscientiousness: α=0.812, ω=0.738; Negative Emotionality: α=0.884, ω=0.841).
User Mobile Application Rating Scale
The user Mobile Application Rating Scale (uMARS) is a 27-item instrument for assessing the quality of mobile apps []. The scale includes subscales evaluating engagement (referred to as “appeal” in this study for clarity: “Is the app fun/entertaining to use?”), functionality (“How accurately and quickly do the app features and components work?”), aesthetics (“How good does the app look?”), information (“Is the app content correct, well written, and relevant to the goal/topic of the app?”), perceived/subjective quality (“Would you pay for this app?”), and perceived impact (“This app has increased my knowledge/understanding of meditating”). Participants rate app elements on a 5-point scale ranging from 1 (poor) to 5 (excellent), with specific descriptions for each item. The overall scale demonstrated high internal consistency (Cronbach α=0.866) but only moderate reliability (McDonald ω=0.606). The additional subscale for perceived impact was also highly consistent (Cronbach α=0.863; McDonald ω=0.761). The Functionality subscale demonstrated high internal consistency and reliability (Cronbach α=0.803; McDonald ω=0.739), whereas the Engagement (Cronbach α=0.734; McDonald ω=0.628), Aesthetics (Cronbach α=0.761; McDonald ω=0.658), and Information (Cronbach α=0.762; McDonald ω=0.688) subscales showed acceptable consistency and reliability. The Subjective Quality subscale was consistent (Cronbach α=0.629; McDonald ω=0.667).
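The Cronbach α values reported for the scales above follow the standard variance-ratio formula: α = k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal Python sketch (illustrative data only, not the study's; the paper's computations were done elsewhere):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three perfectly redundant items: the scale is maximally consistent
scores = np.array([[1, 1, 1], [2, 2, 2], [4, 4, 4], [5, 5, 5]])
print(cronbach_alpha(scores))  # → 1.0
```

When items are uncorrelated, the total-score variance approaches the sum of item variances and α falls toward 0, which is why low-α subscales (eg, the DWAI-6 Task subscale) are interpreted cautiously.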
Analysis Plan
Sample Size Determination
Given the absence of robust prior data on effect size, and guided by prior estimates of effect sizes for meditation apps [], we based the power calculation on the smallest effect size we could reasonably detect. The target sample size was 1000 participants, providing 90% power to detect an effect of r=0.102, corresponding to a small effect. Because of recruitment challenges, the final sample at the time the main engagement variable was analyzed comprised 536 participants, approximately 54% of the target (n=1000). Although this reduction decreased statistical power (80% power to detect r=0.122), the study remained adequately powered to detect relatively small effects, albeit slightly larger than initially anticipated.
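The power figures above follow from the standard Fisher z approximation for tests of a correlation coefficient. A brief sketch (in Python rather than the R used for the study's analyses) reproduces them:

```python
from math import atanh, sqrt
from statistics import NormalDist

def power_for_r(r, n, alpha=0.05):
    """Approximate two-sided power to detect a correlation r at sample
    size n, via the Fisher z transform: atanh(r) is approximately normal
    with standard error 1/sqrt(n - 3)."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    noncentrality = atanh(r) * sqrt(n - 3)
    return z.cdf(noncentrality - z_crit)

print(round(power_for_r(0.102, 1000), 2))  # → 0.9 (target sample)
print(round(power_for_r(0.122, 536), 2))   # → 0.81 (final sample)
```

Both printed values match the text: 90% power for r=0.102 at n=1000, and roughly 80% power for r=0.122 at n=536.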
Planned Statistics
As outlined in the preregistration, we explored associations between user-related factors (Q1, H1), user-app relationship factors (Q2, H2), app-related factors (Q3, H3), mental health factors (Q5, H5), and predefined engagement variables. All engagement variables were heavily skewed and nonnormal (see and Sections S11-S14 in ); therefore, Spearman rho correlations were estimated. Variables significantly associated with any outcome variable were subsequently entered into regression models for each of the 4 outcomes. As transformations did not normalize the variables, untransformed variables were analyzed using robust regression with the “robustbase” package in R (R Foundation), employing an MM-type regression estimator with a bisquare redescending score function [,]. This estimator down-weights extreme cases, making coefficient estimates resistant to outliers and nonnormality. All analyses were conducted in RStudio (Posit).
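Spearman rho is the Pearson correlation applied to ranks, which is what makes it suitable for the heavily skewed engagement variables. A minimal stdlib-only illustration (Python, not the study's R code) of the rank-based computation, using average ranks for ties:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with tied values assigned the average of their ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend j to cover the whole block of tied values
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because only ranks enter the computation, any monotone relationship yields rho=1 even when the raw values are far from linear (e.g., minutes of use vs a skewed covariate).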

Regression Models
Robust linear regression was used to investigate which factors accounted for significant variance in engagement. For each of the 4 outcome variables—adjusted objective minutes, adjusted self-report minutes, objective minutes, and self-report minutes—a separate regression model was created using the respective measure as the outcome variable. No stepwise regression was implemented. Instead, independent variables were selected from user, mental health, and app factors that were significantly associated with at least one outcome measure in the correlation analyses, following correction for multiple comparisons.
Multiple Comparisons Correction
We explored correlations between the 4 engagement outcomes and related factors, applying a false discovery rate (FDR) correction to account for multiple comparisons [].
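The FDR correction referenced here is conventionally the Benjamini-Hochberg step-up procedure (the specific variant is not stated in this section). A compact sketch, assuming that procedure:

```python
def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return a boolean list
    marking which p values are significant at false discovery rate q.
    Rejects the k smallest p values, where k is the largest rank with
    p_(k) <= (k / m) * q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(fdr_bh([0.01, 0.02, 0.03, 0.5]))  # → [True, True, True, False]
```

The step-up property is worth noting: with p values [0.02, 0.04] both are rejected at q=0.05, whereas 0.04 alone would not be, because each smaller p value raises the rank-adjusted threshold for those above it.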
Data Cleaning
Duplicate and invalid responses were removed. The data demonstrated weak correlations, positive skew, and extreme values, indicating potentially high variability or inconsistency (see Sections S6-S10 in ). After adjusting for app duration, we implemented several data-cleaning procedures. Intraindividual response validity calculation, LongString identification [], and inconsistency of responses on the BFI-S-2 [] were each used to identify and exclude extreme cases; however, none of these approaches substantially changed the results (see Sections S10-S12 in ). As specified in the preregistration, multivariate outliers were removed, identified as cases with a Mahalanobis distance greater than the 95th percentile on the BFI-S-2 (see Section S13 in ).
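For intuition, the multivariate outlier screen computes each participant's Mahalanobis distance from the sample centroid (here, of the BFI-S-2 scores) and flags cases beyond the 95th percentile of those distances. A simplified two-variable sketch (Python, not the study's code; the study applied this across the BFI-S-2 subscales):

```python
import statistics as st

def mahalanobis_sq_2d(points):
    """Squared Mahalanobis distance of each 2-D point from the sample
    mean, using the inverse of the 2x2 sample covariance matrix."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = st.mean(xs), st.mean(ys)
    sxx, syy = st.variance(xs), st.variance(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(points) - 1)
    det = sxx * syy - sxy * sxy
    # closed-form inverse of the 2x2 covariance matrix
    ixx, iyy, ixy = syy / det, sxx / det, -sxy / det
    dists = []
    for x, y in points:
        dx, dy = x - mx, y - my
        dists.append(dx * dx * ixx + 2 * dx * dy * ixy + dy * dy * iyy)
    return dists

# Hypothetical data: four corner points and one central point.
d = mahalanobis_sq_2d([(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.5)])
print(d[4])  # the central point sits at the centroid → distance 0.0
```

In practice, cases whose distance exceeds the sample's 95th percentile would then be excluded, as specified in the preregistration.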
Results
Overall Engagement
Overall, most users completed only a few minutes of practice across a limited number of sessions. Despite generally low engagement, 134 of 536 (25%) users reported more than 11 sessions per month (approximately 1 session every 3 days), and the top 5% (27/536) reported around 35 sessions per month (more than 1 session per day). These patterns align with prior findings, including reports that a median of 90% of users drop off within the first week of real-world use and that participation falls by an average of 42% in meditation app randomized controlled trials spanning 1-2 months [,]. Notably, 402 (75%) participants reported more than 9 sessions in total, which contrasts with prior findings suggesting that most users disengage completely within a week of download. However, only the top 5% (25/536, 4.7%) engaged at levels consistent with clinically meaningful change [], while the top 25% (134/536) engaged at levels comparable to the dose delivered in mindfulness-based interventions [].
Participant Characteristics: Descriptives
Sociodemographic Features
Participants (N=536) ranged from 18 to 70 years of age (mean 36.56 years, SD 10.68 years) and were predominantly female (n=366, 68.3%). See for participant flow. Most resided in the United States (n=253, 47.2%) or the United Kingdom (n=226, 42.2%), with smaller proportions from Australia (n=27, 5.0%), Canada (n=21, 3.9%), and New Zealand (n=6, 1.1%). The majority identified as White (n=422, 78.7%) and were highly educated, with 387 (72.2%) holding at least a bachelor’s degree. Participants were also relatively wealthy, with more than one-quarter reporting a combined income of $100,000 or more (n=145, 27.1%); note that income brackets were not adjusted across countries. Nearly half of the participants reported no religious affiliation (n=264, 49.3%). The most frequently used apps were Headspace (n=191, 35.6%) and Calm (n=123, 22.9%), which together accounted for 58.6% of the sample (n=314). Full details are provided in Sections S3 and S4 in .

Meditation Experience
Most users (n=330, 61.6%) reported between 0 and 100 hours of overall meditation experience. Meditation experience varied across meditation apps (χ²₄=34.18, P<.001); users with 0-100 hours were most likely to use Headspace (124/377, 32.9%), followed by Calm (75/377, 19.9%) and Insight Timer (25/377, 6.6%; see ).
| Duration (hours) | Calm, n (%) | Headspace, n (%) | Insight Timer, n (%) | Total, n (%) |
| 0-100 | 75 (19.9) | 124 (32.9) | 25 (6.6) | 224 (59.4) |
| 101-1000 | 35 (9.3) | 46 (12.2) | 46 (12.2) | 127 (33.7) |
| 1001+ | 6 (1.6) | 11 (2.9) | 9 (2.4) | 26 (6.9) |
| Total | 116 (30.8) | 181 (48.0) | 80 (21.2) | 377 (100) |
Engagement
After excluding invalid responses and duplicates, engagement levels remained low, with a positive skew for both hours and sessions (see and ). Adjusting for app duration showed low engagement regardless of how long the app had been available to users (see and ). Estimated minutes per month and sessions per month were adjusted within each user to provide clearer engagement metrics. The median number of sessions per month was 3.29. With a median of 40.2 minutes per month, this equated to roughly three 12-minute sessions.
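As a toy illustration of this within-user adjustment (the user and the raw numbers are hypothetical, chosen to land near the reported medians):

```python
def monthly_rate(total, months_since_download):
    """Within-user adjustment: convert a cumulative total (minutes or
    sessions) into a per-month rate over the time the app was available."""
    return total / months_since_download

# Hypothetical user: 450 total minutes over 11.2 months since download.
print(round(monthly_rate(450, 11.2), 1))  # → 40.2 minutes per month

# The reported medians imply a typical session length of roughly
# 40.2 minutes / 3.29 sessions ≈ 12 minutes per session.
print(round(40.2 / 3.29, 1))  # → 12.2
```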



Engagement Statistics
Engagement variables were highly intercorrelated (r=0.495-0.999; see Section S14 in ). Objective adjusted minutes were the most reliable variable, but the sample was small and limited to 2 apps (n=156; Headspace and Waking Up). By contrast, subjective minutes with a subjective start date had a sample size 3 times larger (n=536). Given the strong association between objective and subjective start dates (r=0.783, P<.001), adjusted subjective minutes were calculated using subjective duration as the denominator to maximize sample size.
Categorical Demographic Factors and Engagement
Categorical demographic associations with engagement are reported in . Being female was associated with lower engagement on 1 outcome, while residing in the United Kingdom was associated with higher engagement across 3 of the 4 outcome variables.
| Variables | Objective minutes | Subjective minutes | Adjusted objective | Adjusted subjective |
| Sexb | –0.019 | 0.080 | –0.296a | –0.016 |
| Residence | | | | |
| Australia | –0.027 | –0.016 | –0.004 | –0.040 |
| United States | –0.071 | –0.069 | –0.065 | –0.068 |
| United Kingdom | 0.102a | 0.110a | 0.075 | 0.106b |
| Canada | –0.032 | –0.062 | –0.029 | –0.068 |
| New Zealand | –0.021 | –0.031 | –0.007 | 0.038 |
aP<.05 without multiple comparisons correction.
bP<.05 with multiple comparisons correction. For the biserial sex correlation, 1=female, 0=male.
Engagement by App
Three 1-way analyses of variance indicated a significant effect of app type on engagement for subjective minutes (F2,362=9.03, P=.002, η2=.05) and adjusted subjective minutes (F2,358=7.63, P=.001; see Sections S15 and S16 in ). No significant differences were found for objective minutes (F2,362=0.972, P=.38).
User Factors
We defined robust associations as those present across 3 or more of the 4 engagement outcomes. Among user factors, only 9 of 20 variables met this criterion (see ): age, openness (BFI-S-2), readiness to change, expectation match, expectations for sleep, expectations for anxiety, expectations for happiness, expectations for thriving, and expectations for performance enhancement. After FDR correction, only 4 of 20 remained: age, readiness to change, expectations for sleep, and expectations for thriving (see Section S17 in for CIs).

Mental Health Factors
Self-reported stress, depression, and psychological distress were negatively associated with app use. Specifically, distress was negatively associated with adjusted self-reported minutes, depression with unadjusted self-reported minutes, and current stress with both unadjusted self-reported minutes and objective minutes. However, no mental health factors remained significantly associated with any engagement outcome after correction for multiple comparisons (see ).

App Factors
DWAI-6 Total, uMARS Appeal, uMARS Perceived Quality, and Perceived Impact were associated with 3 of the 4 outcome variables. Of the app factors investigated, 7 were associated with at least one engagement outcome after FDR correction (see ): DWAI-6 Total, as well as Goal, Bond, and Task subscales, and uMARS Appeal, Perceived Quality, and Perceived Impact.

Outcome Regression Models
All models demonstrated a reasonable fit, explaining 12%-16% of the variance (see and Section S18 in ). Significant (P<.05) predictors in 1 or more regression models included education, readiness to change, expectations for sleep, expectation match, the Perceived Quality subscale of the uMARS, and the Openness subscale of the BFI-S-2.
| Predictors (models 1-4) | Adjusted objective minutesa, standardized β | Objective minutesb, standardized β | Adjusted self-report minutesc, standardized β | Subjective minutesd, standardized β |
| Intercept | –0.260e | –0.228e | –0.355e | –0.371e |
| User factors | | | | |
| Sex | 0.001 | <0.001 | –0.008 | 0.002 |
| Country of residence (United Kingdom) | <0.001 | 0.002 | 0.008 | 0.004 |
| Age | 0.003 | <–0.001 | 0.004 | –0.001 |
| Education | 0.033 | 0.153e | 0.532e | 0.237e |
| Big Five Inventory Short Form 2 openness | 0.010f | 0.003 | 0.005 | 0.006 |
| Readiness to change | <0.001 | 0.005f | 0.027g | 0.008f |
| Expectations (match) | <0.001 | –0.003 | 0.023f | –0.004 |
| Expectations for sleep | –0.002 | 0.004 | –0.008 | 0.009f |
| Expectations for stress | <0.001 | <–0.001 | 0.012 | <0.001 |
| Expectations for anxiety | 0.001 | –0.001 | –0.020 | –0.003 |
| Expectations for happiness | –0.005 | 0.002 | 0.009 | <0.001 |
| Expectations for thriving | 0.008 | 0.003 | –0.006 | 0.006 |
| Expectations for performance enhancement | –0.004 | –0.003 | –0.015 | –0.003 |
| App factors | | | | |
| DWAI-6h (Goal) | –0.009 | 0.004 | –0.020 | 0.008 |
| DWAI-6 (Bond) | –0.004 | –0.003 | –0.013 | –0.002 |
| DWAI-6 (Task) | 0.010 | 0.002 | 0.024 | 0.004 |
| uMARSi (Appeal) | –0.012 | <0.001 | –0.016 | 0.005 |
| uMARS (Perceived Quality) | 0.025g | 0.003g | 0.041e | 0.010f |
| uMARS (Perceived Impact) | –0.006 | <–0.001 | –0.003 | 0.002 |
| Adjusted R2 | 0.158 | 0.150 | 0.126 | 0.137 |
aApp-verified minutes of use per year adjusted for total duration of use in years.
bTotal app-verified minutes of use.
cSelf-report minutes of use per year adjusted for total duration of use in years.
dSelf-report total minutes of use.
eP<.001.
fP<.05.
gP<.01.
hDWAI-6: 6-item Digital Working Alliance Inventory.
iuMARS: user Mobile Application Rating Scale.
Discussion
Principal Findings
We examined factors associated with engagement in popular meditation apps among 536 participants. Consistent with prior findings, most participants engaged minimally. Although apps were available for an average of 894 days (about 2.5 years), participants reported an average of 108 sessions, while app-verified data from about one-third of participants indicated 62 sessions on average. Half of the sample engaged in 3 or fewer sessions per month. Notably, engagement did not increase with longer app availability, suggesting a pattern of persistently low overall engagement.
Few significant correlations between individual user factors and engagement were observed, most of which were small in magnitude (r=0.09-0.30), with a few reaching the moderate range (r=0.30-0.50). After correction for multiple comparisons, positive associations with engagement remained for male sex, older age, higher education level, readiness to change, and expectations of the app for stress reduction, sleep improvement, anxiety reduction, happiness, thriving, and performance enhancement. These results suggest that older, more educated users with greater readiness to change and higher expectations of the app are more likely to engage regularly. App factors—including perceived appeal, quality, and impact—were consistently associated with higher engagement, as were the 3 digital working alliance subscales (Goal, Task, and Bond). This indicates that both perceptions of the app and the perceived relationship with it may be important determinants of engagement.
User Factors Related to Use
Sociodemographics
Education was the variable most consistently associated with higher engagement. Meditation is more common among individuals who are White, middle-aged, wealthier, and better educated []. Lower levels of education have been linked to earlier disengagement or a failure to engage in meditation at all []. Higher education is also associated with greater engagement in mindfulness practices in US nationally representative surveys []. Lower levels of education have also been linked to poorer health outcomes [,] and are related to health literacy, which partially mediates health-promoting behaviors [,]. While lower education may contribute to poorer health outcomes and lower health literacy, other social factors may also reduce engagement. For example, individuals with lower education often have more fragmented leisure time, leaving less opportunity for regular, recurrent activities []. Male sex was associated with 1 engagement measure, which contrasts with prior research showing that females are generally more likely to engage in meditation practice, even after controlling for other demographic factors [,,]. Previous studies have also found that males may demonstrate greater persistence in meditation []. As this correlation was observed only for adjusted objective use (available for Headspace and Waking Up), it may reflect patterns specific to users of these apps rather than meditation app use more broadly []. Notably, sex did not emerge as a significant predictor in the regression models.
Personality
Of the personality factors, only openness was related to engagement. Openness reflects general curiosity and a willingness to explore novel perspectives of one’s subjective experience []. Individuals higher in openness are more likely to try meditation initially and persist despite encountering difficulties. Openness has also been associated with meditation practice outside of group meditation class settings []. In contrast to prior research, we found no associations between engagement and conscientiousness, extraversion, agreeableness, or neuroticism [,,]. While conscientiousness was not related to engagement in this study, it has previously been linked to positive attitudes toward practice []. Similarly, neuroticism showed no association with engagement here, although prior work has linked it to perceiving more barriers to practice [,].
Mental Health
None of the 8 mental health factors were significantly associated with engagement. Previous research has found meditation apps to be modestly effective for depression and anxiety [,], potentially serving as a form of self-managed treatment for individuals facing barriers to mental health care []. However, no such associations with mental health factors were observed in this study. In a previous study, motivation for mental health was negatively associated with app use []. Meditation can negatively impact mental health []. While these meditation-related adverse events do not always result in impairment, about half of meditators report experiencing an adverse effect, and 9.1% report functional impairment as a result []. Individuals who do not experience benefits or who encounter adverse effects may disengage shortly after download. Furthermore, meditating for mental health reasons has been negatively associated with the total amount of meditation practice completed over the long term [,]. Individuals with higher lifetime meditation practice often shift toward spiritual motivations as their practice progresses []. However, the retrospective design of our study limits causal inferences.
App Factors
uMARS
Five of the 6 uMARS subscales were associated with engagement. Previous research suggests that aesthetics and appeal relate to meditation app engagement [], although in our study, aesthetics were not robustly associated after FDR correction. The Perceived Quality subscale showed the strongest association (r=0.51), indicating that user perceptions may drive both usage and beliefs in the app’s effectiveness. Perceived impact was also robustly associated with engagement. Given the retrospective design, survivorship bias should be considered: users who continued using the apps likely enjoyed them, while those who did not may have stopped. It is also possible that users who experienced benefits from their chosen app developed increasingly positive app appraisals over time.
Digital Working Alliance
The DWAI-6 Goal, Bond, and Task subscales, as well as the overall score, were associated with engagement, consistent with prior findings []. All subscales correlated with adjusted objective minutes—the most reliable outcome measure, computed using app-verified minutes and download date—but this could only be calculated for apps that provide download dates (Headspace and Waking Up; n=151). Therapeutic alliance and engagement may promote each other []. While therapeutic alliance is considered important in digital mental health [], current measures are adaptations of traditional, human-centered alliance scales. Incorporating human-computer interaction perspectives may provide greater nuance, particularly for anthropomorphic scale items []. Despite this limitation, therapeutic alliance with apps remains relevant to engagement, as alignment between a user’s goals and perceived app support may encourage continued use.
One consideration for both the uMARS and DWAI-6 is that several subscales demonstrated relatively poor internal consistency. The reliability of the uMARS Engagement and Subjective Quality subscales, as well as the DWAI-6 Bond and Task subscales, ranged from acceptable to poor, which reduces confidence in the constructs being measured.
Expectations for Efficacy
Higher expectations of efficacy across 6 of the 7 domains assessed (sleep, stress, anxiety, happiness, thriving, and performance enhancement) were generally associated with higher engagement, with the exception of expectations for attention/focus. Only expectations for sleep were significant in the regression model. These findings align with our predictions. Experimental and prospective studies have shown that failing to meet expectations is more predictive of behavior than matched expectations [,]. Unmet or low expectations negatively influence engagement and perceived usefulness, whereas met or exceeded expectations positively affect behavior and perceptions []. Expectations are closely linked to app ratings, as features such as goal setting and feedback enhance beliefs in an app’s effectiveness []. These features can also foster positive experiences of progress, creating a feedback loop that promotes further engagement [,]. In the absence of human interaction, the relationship between a user and an app is shaped by the “user journey”—the path a user follows through the app’s design. Persuasive design can help establish and meet user expectations.
A general rating of whether expectations were met was weakly associated with engagement. This result aligns with literature suggesting that matched expectations have a positive influence on behavior and mismatched expectations have a negative effect []. It is worth noting that we asked, “To what extent did your experience match your initial expectations?” without specifying which expectations participants should consider. Consequently, this approach may have captured only an overall impression of expectation match.
Readiness to Change
Readiness to change showed robust, moderate associations and accounted for a significant proportion of variance in the regression model. The readiness-to-change ruler used in this study is actively employed in behavior change interventions and is based on the Transtheoretical Model of Change, which conceptualizes behavior change in stages [,,]. Readiness to change shows promise as one of the most predictive factors of actual behavior change, as it is conceptually closely linked to both motivation and behavior. These findings align with broader evidence connecting readiness ratings to actual behavior change, particularly in health-related contexts [,,]. This relationship could inform app design, allowing offerings to be tailored to users’ readiness levels. The same single-item measure used in our study could be implemented immediately after app download to tailor the length, complexity, and type of practice to users’ readiness levels. For example, users with lower readiness could be offered shorter, simpler meditations or psychoeducational content about meditation to reduce perceived barriers and enhance understanding of the practice.
Self-Efficacy and App Ratings in Building Habits
Contrary to our expectations, self-efficacy was not related to engagement. Previous research on habit formation suggests that self-efficacy may support the maintenance of a target behavior before a habit is established. There is limited evidence that self-efficacy promotes habit-building [,,] and increases with ongoing meditation practice []; however, results are mixed [,]. One likely reason self-efficacy did not predict engagement is that expectations, perceptions, and habit formation played larger roles. A person may believe they can achieve a goal, but if they are not committed or do not perceive long-term utility, they may lack motivation to engage. This may explain why readiness to change was associated with engagement, whereas self-efficacy was not.
Limitations
One key limitation of this study is that its retrospective design precludes causal inferences, although research on meditation app engagement is generally scarce. Additionally, we cannot confirm detailed usage patterns, such as extended gaps or cessation points; however, our estimates of sessions per month provide a rough indication of practice regularity. This study included cross-app comparisons, which few prior studies have conducted. Such comparisons are valuable, given that all therapeutic alliance subscales and half of the uMARS subscales were associated with engagement after correction for multiple comparisons. However, by not focusing on a specific app, the sample was disproportionately composed of users of the most popular apps.
Another limitation was that our most reliable outcome variable—objective minutes adjusted for verified app duration—was restricted to apps that displayed the month or year of joining. As a result, the sample for objective-adjusted minutes comprised only about one-third of the self-reported sample. Nevertheless, objective minutes were highly correlated with self-reported minutes, which may mitigate some concerns, although it is possible that individuals who can view their app-recorded minutes rely on these records when self-reporting.
Our data quality may have been influenced by self-selection, socioeconomic skew, and the compensation structure in our Prolific sample. Nevertheless, research indicates that among popular online survey platforms, Prolific consistently provides high-quality data across a wide range of measures []. Additionally, our data may have been skewed by the overrepresentation of the most popular meditation apps, limiting the generalizability of the findings to less popular apps or those with a narrower focus.
A final significant issue concerns what the outcome measures captured. While meditation was the central focus of the included apps, many also offer alternative exercises that contribute to the measured minutes, including—but not limited to—breathwork, sleep stories, and podcasts. This is an issue because sleep stories may continue running for hours after an individual falls asleep and be recorded as meditation. Future studies that can distinguish between different activities will provide more accurate statistics on meditation engagement.
Future Directions
A group-level comparison of engagers and disengagers could reveal cluster effects, where active users share similar characteristics. Baumel et al [] observed a drop-off trend in a large sample but noted that understanding precisely why people engage or disengage during this period would be of interest. While there is a high use rate among those who continue engaging beyond the first week, this represents only a small portion of users. By including all users over the past 180 days, we obtain a general picture of app use across the population; however, this approach results in a large sample of disengagers and only a small sample of active users, limiting our power to detect effect sizes within the subsample of engagers.
Longitudinal analysis could directly examine temporal and causal aspects of engagement, account for changes in contributing factors over time, and provide a clearer understanding of baseline predictors. For example, longitudinal data could track whether changes in mental health outcomes influence engagement. In this study, no associations with mental health outcomes survived correction for multiple comparisons; however, we relied on participants’ reports of mental health status following meditation app use. Apps have been shown to reduce stress, depression, and anxiety [], and such changes could positively or negatively reinforce app use. Moods, circumstances, and lifestyles can fluctuate widely over extended periods. User ratings of apps on scales such as the uMARS may better explain engagement when the rating and the use it describes occur close together in time. Longitudinal analysis also allows baseline measurement of variables, such as expectations, which can then be compared with actual experiences at follow-up. The low proportion of variance accounted for suggests that factors outside the model have a substantial impact on engagement.
Conclusions
This exploratory study examined a wide range of factors potentially relevant to engagement with popular meditation apps and confirmed a substantial early drop-off. Although the models accounted for only a small proportion of the overall variance, the findings emphasize the importance of user characteristics and app quality in sustaining engagement. Older, more educated users, as well as those with higher expectations of apps and greater readiness to change, were more likely to engage with the apps regularly.
Acknowledgments
We acknowledge the coding contributions of Alex Burger and Karen Trapani, as well as the administrative support of Cathleen Benevento. This study would not have been possible without the support of the Contemplative Studies Centre and the Melbourne School of Psychological Sciences community at the University of Melbourne. Funding was provided to establish the Contemplative Studies Centre via a philanthropic gift from the Three Springs Foundation, Pty, Ltd. No original content in the manuscript was generated by artificial intelligence. The authors occasionally used the free version of Grammarly (Grammarly, Inc) for content flow, such as identifying extraneous words in a sentence.
Data Availability
Data and analytic code are uploaded to the Open Science Framework registry [].
Authors' Contributions
Conceptualization: JA, JD, NTVD, JG
Data curation: JA, JD, PW, NTVD
Formal analysis: JA, JD, NTVD
Funding acquisition: NTVD
Methodology: JA, JD, FM, JG, NTVD
Resources: JA, JD, JG, NTVD
Visualization: JA, NTVD
Writing – original draft: JA, JG, NTVD
Writing – review & editing: JA, JG, SDA, NTVD
Conflicts of Interest
None declared.
Additional analysis.
DOCX File, 727 KB
References
- GBD 2019 Mental Disorders Collaborators. Global, regional, and national burden of 12 mental disorders in 204 countries and territories, 1990-2019: a systematic analysis for the Global Burden of Disease Study 2019. Lancet Psychiatry. Feb 2022;9(2):137-150. [FREE Full text] [CrossRef] [Medline]
- Lattie EG, Stiles-Shields C, Graham AK. An overview of and recommendations for more accessible digital mental health services. Nat Rev Psychol. Feb 2022;1(2):87-100. [FREE Full text] [CrossRef] [Medline]
- Schueller S, Hunter J, Figueroa C, Aguilera A. Use of digital mental health for marginalized and underserved populations. Curr Treat Options Psych. Jul 5, 2019;6(3):243-255. [FREE Full text] [CrossRef] [Medline]
- Baumel A, Muench F, Edan S, Kane JM. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res. Sep 25, 2019;21(9):e14567-e14255. [FREE Full text] [CrossRef] [Medline]
- Goldberg SB, Baldwin SA, Riordan KM, Torous J, Dahl CJ, Davidson RJ, et al. Alliance with an unguided smartphone app: validation of the digital working alliance inventory. Assessment. Sep 25, 2022;29(6):1331-1345. [FREE Full text] [CrossRef] [Medline]
- Lutz A, Slagter HA, Dunne JD, Davidson RJ. Attention regulation and monitoring in meditation. Trends Cogn Sci. Apr 16, 2008;12(4):163-169. [FREE Full text] [CrossRef] [Medline]
- Kabat-Zinn J. Mindfulness-based interventions in context: past, present, and future. Clinical Psychology: Science and Practice. 2003;10(2):144-156. [FREE Full text] [CrossRef]
- Lam S, Riordan K, Simonsson O, Davidson R, Goldberg S. Who sticks with meditation? Rates and predictors of persistence in a population-based sample in the USA. Mindfulness (N Y). Jan 06, 2023;14(1):66-78. [FREE Full text] [CrossRef] [Medline]
- Lee RL. Review of headspace: meditation and sleep. Fam Syst Health. Mar 06, 2023;41(1):114-116. [CrossRef] [Medline]
- Eysenbach G. The law of attrition. J Med Internet Res. Mar 31, 2005;7(1):e11-116. [FREE Full text] [CrossRef] [Medline]
- Olano HA, Kachan D, Tannenbaum SL, Mehta A, Annane D, Lee DJ. Engagement in mindfulness practices by U.S. adults: sociodemographic barriers. J Altern Complement Med. Feb 2015;21(2):100-102. [FREE Full text] [CrossRef] [Medline]
- Gál É, Ștefan S, Cristea IA. The efficacy of mindfulness meditation apps in enhancing users' well-being and mental health related outcomes: a meta-analysis of randomized controlled trials. J Affect Disord. Jan 15, 2021;279(1):131-142. [FREE Full text] [CrossRef] [Medline]
- Crane RS, Brewer J, Feldman C, Kabat-Zinn J, Santorelli S, Williams JMG, et al. What defines mindfulness-based programs? The warp and the weft. Psychol Med. Apr 2017;47(6):990-999. [CrossRef] [Medline]
- Goldberg SB, Tucker RP, Greene PA, Davidson RJ, Wampold BE, Kearney DJ, et al. Mindfulness-based interventions for psychiatric disorders: a systematic review and meta-analysis. Clin Psychol Rev. Feb 2018;59:52-60. [FREE Full text] [CrossRef] [Medline]
- Galante J, Friedrich C, Dalgleish T, Jones PB, White IR, et al; Collaboration of Mindfulness Trials (CoMinT). Individual participant data systematic review and meta-analysis of randomised controlled trials assessing adult mindfulness-based programmes for mental health promotion in non-clinical settings. Nat Ment Health. Jul 10, 2023;1(7):462-476. [FREE Full text] [CrossRef] [Medline]
- Bowles NI, Davies JN, Van Dam NT. Dose-response relationship of reported lifetime meditation practice with mental health and wellbeing: a cross-sectional study. Mindfulness (N Y). Feb 2022;13(10):2529-2546. [CrossRef] [Medline]
- Schultchen D, Terhorst Y, Holderied T, Stach M, Messner E-M, Baumeister H, et al. Stay present with your phone: a systematic review and standardized rating of mindfulness apps in European app stores. Int J Behav Med. Oct 10, 2021;28(5):552-560. [CrossRef] [Medline]
- Spijkerman MPJ, Pots WTM, Bohlmeijer ET. Effectiveness of online mindfulness-based interventions in improving mental health: a review and meta-analysis of randomised controlled trials. Clin Psychol Rev. Apr 2016;45(4):102-114. [FREE Full text] [CrossRef] [Medline]
- Sommers-Spijkerman M, Austin J, Bohlmeijer ET, Pots W. New evidence in the booming field of online mindfulness: an updated meta-analysis of randomized controlled trials. JMIR Ment Health. Jul 19, 2021;8(7):e28168. [FREE Full text] [CrossRef] [Medline]
- Jiang A, Rosario M, Stahl S, Gill JM, Rusch HL. The effect of virtual mindfulness-based interventions on sleep quality: a systematic review of randomized controlled trials. Curr Psychiatry Rep. Jul 23, 2021;23(9):62. [FREE Full text] [CrossRef] [Medline]
- DiMatteo MR, Giordani PJ, Lepper HS, Croghan TW. Patient adherence and medical treatment outcomes: a meta-analysis. Med Care. Sep 2002;40(9):794-811. [CrossRef] [Medline]
- Baumeister H, Reichler L, Munzinger M, Lin J. The impact of guidance on Internet-based mental health interventions — a systematic review. Internet Interventions. Oct 2014;1(4):205-215. [CrossRef]
- Fleming T, Bavin L, Lucassen M, Stasiak K, Hopkins S, Merry S. Beyond the trial: systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res. Jun 06, 2018;20(6):e199. [FREE Full text] [CrossRef] [Medline]
- Szinay D, Jones A, Chadborn T, Brown J, Naughton F. Influences on the uptake of and engagement with health and well-being smartphone apps: systematic review. J Med Internet Res. May 29, 2020;22(5):e17572. [FREE Full text] [CrossRef] [Medline]
- Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. Jun 29, 2017;7(2):254-267. [FREE Full text] [CrossRef] [Medline]
- Lam SU, Kirvin-Quamme A, Goldberg SB. Overall and differential attrition in mindfulness-based interventions: a meta-analysis. Mindfulness (N Y). Nov 2022;13(11):2676-2690. [FREE Full text] [CrossRef] [Medline]
- Strohmaier S. The relationship between doses of mindfulness-based programs and depression, anxiety, stress, and mindfulness: a dose-response meta-regression of randomised controlled trials. Mindfulness. Mar 02, 2020;11(6):1315-1335. [FREE Full text] [CrossRef] [Medline]
- Yik LL, Ling LM, Ai LM, Ting AB, Capelle DP, Zainuddin SI, et al. The effect of 5-minute mindfulness of peace on suffering and spiritual well-being among palliative care patients: a randomized controlled study. Am J Hosp Palliat Care. Sep 28, 2021;38(9):1083-1090. [CrossRef] [Medline]
- Bowles NI, Van Dam NT. Dose-response effects of reported meditation practice on mental-health and wellbeing: a prospective longitudinal study. Appl Psychol Health Well Being. Aug 20, 2025;17(4):e70063. [CrossRef] [Medline]
- Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Hoboken, NJ. Prentice-Hall; 1986.
- Ajzen I. The theory of planned behavior. Organizational Behavior and Human Decision Processes. Dec 1, 1991;50(2):179-211.
- DiClemente CC, Prochaska JO. Toward a comprehensive, transtheoretical model of change: stages of change and addictive behaviors. In: Treating Addictive Behaviors (2nd Edition). Berlin/Heidelberg, Germany. Springer; 1998:3-27.
- Gardner B, Lally P, Wardle J. Making health habitual: the psychology of 'habit-formation' and general practice. Br J Gen Pract. Dec 2012;62(605):664-666. [FREE Full text] [CrossRef] [Medline]
- Oinas-Kukkonen H, Harjumaa M. Persuasive systems design: key issues, process model, and system features. CAIS. 2009;24:485-500. [FREE Full text] [CrossRef]
- Lally P, Gardner B. Promoting habit formation. Health Psychology Review. May 2013;7(sup1):S137-S158. [CrossRef]
- Crandall A, Cheung A, Young A, Hooper AP. Theory-based predictors of mindfulness meditation mobile app usage: a survey and cohort study. JMIR Mhealth Uhealth. Mar 22, 2019;7(3):e10794. [FREE Full text] [CrossRef] [Medline]
- Hesse M. The Readiness Ruler as a measure of readiness to change poly-drug use in drug abusers. Harm Reduct J. Jan 25, 2006;3:3. [FREE Full text] [CrossRef] [Medline]
- Bowen M, Beam M. Mapping mindfulness: assessing the stages of meditation habit formation in the USA using the Sussex Mindfulness Meditation (SuMMed) Model. Mindfulness. Apr 07, 2025;16(5):1340-1351. [CrossRef]
- Miles E, Matcham F, Strauss C, Cavanagh K. Making mindfulness meditation a healthy habit. Mindfulness. Nov 28, 2023;14(12):2988-3005. [CrossRef]
- Lam SU, Xie Q, Goldberg SB. Situating meditation apps within the ecosystem of meditation practice: population-based survey study. JMIR Ment Health. Apr 28, 2023;10:e43565. [FREE Full text] [CrossRef] [Medline]
- Davies JN, Faschinger A, Galante J, Van Dam NT. Prevalence and 20-year trends in meditation, yoga, guided imagery and progressive relaxation use among US adults from 2002 to 2022. Sci Rep. Jul 01, 2024;14(1):14987. [FREE Full text] [CrossRef] [Medline]
- Jakob R, Harperink S, Rudolf AM, Fleisch E, Haug S, Mair JL, et al. Factors influencing adherence to mHealth apps for prevention or management of noncommunicable diseases: systematic review. J Med Internet Res. May 25, 2022;24(5):e35371. [FREE Full text] [CrossRef] [Medline]
- Osin EN, Turilina II. Mindfulness meditation experiences of novice practitioners in an online intervention: trajectories, predictors, and challenges. Appl Psychol Health Well Being. Feb 15, 2022;14(1):101-121. [CrossRef] [Medline]
- Kim S, Park JY, Chung K. The relationship between the big five personality traits and the theory of planned behavior in using mindfulness mobile apps: cross-sectional survey. J Med Internet Res. Nov 30, 2022;24(11):e39501. [FREE Full text] [CrossRef] [Medline]
- Canby NK, Eichel K, Peters SI, Rahrig H, Britton WB. Predictors of out-of-class mindfulness practice adherence during and after a mindfulness-based intervention. Psychosom Med. Oct 8, 2021;83(6):655-664. [FREE Full text] [CrossRef] [Medline]
- Stojanovic M, Fries S, Grund A. Self-efficacy in habit building: how general and habit-specific self-efficacy influence behavioral automatization and motivational interference. Front Psychol. Aug 2021;12:643753. [CrossRef] [Medline]
- Schiwal AT, Fauth EB, Wengreen H, Norton M. The gray matters app targeting health behaviors associated with Alzheimer's risk: improvements in intrinsic motivation and impact on diet quality and physical activity. J Nutr Health Aging. Dec 22, 2020;24(8):893-899. [CrossRef] [Medline]
- Phillips W, Hine D. Self-compassion, physical health, and health behaviour: a meta-analysis. Health Psychol Rev. Mar 22, 2021;15(1):113-139. [CrossRef] [Medline]
- Jones F, Harris P, Waller H, Coggins A. Adherence to an exercise prescription scheme: the role of expectations, self-efficacy, stage of change and psychological well-being. Br J Health Psychol. Sep 2005;10(Pt 3):359-378. [CrossRef] [Medline]
- Laurie J, Blandford A. Making time for mindfulness. Int J Med Inform. Dec 01, 2016;96(4):38-50. [CrossRef] [Medline]
- Banerjee A, Banerji R, Berry J. From proof of concept to scalable policies: challenges and solutions, with an application. Journal of Economic Perspectives. 2017;31(4):73-102.
- Prince M, Patel V, Saxena S, Maj M, Maselko J, Phillips MR, et al. No health without mental health. Lancet. Sep 08, 2007;370(9590):859-877. [CrossRef] [Medline]
- Baumel A, Kane JM. Examining predictors of real-world user engagement with self-guided eHealth interventions: analysis of mobile apps and websites using a novel dataset. J Med Internet Res. Dec 14, 2018;20(12):e11491. [FREE Full text] [CrossRef] [Medline]
- Alqahtani F, Al Khalifah G, Oyebode O, Orji R. Apps for mental health: an evaluation of behavior change strategies and recommendations for future development. Front Artif Intell. Dec 17, 2019;2:30. [FREE Full text] [CrossRef] [Medline]
- Borghouts J, Eikey E, Mark G, De Leon C, Schueller SM, Schneider M, et al. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J Med Internet Res. Mar 24, 2021;23(3):e24387. [CrossRef] [Medline]
- Van Dam NT, van Vugt MK, Vago DR, et al. Mind the hype: a critical evaluation and prescriptive agenda for research on mindfulness and meditation. Perspect Psychol Sci. Jan 2018;13(1):36-61. [CrossRef] [Medline]
- Brazier J, Peasgood T, Mukuria C, Marten O, Kreimeier S, Luo N, et al. The EQ-HWB: overview of the development of a measure of health and wellbeing and key results. Value Health. Apr 2022;25(4):482-491. [FREE Full text] [CrossRef] [Medline]
- Kessler RC, Andrews G, Colpe LJ, Hiripi E, Mroczek DK, Normand SLT, et al. Short screening scales to monitor population prevalences and trends in non-specific psychological distress. Psychol Med. Aug 2002;32(6):959-976. [FREE Full text] [CrossRef] [Medline]
- Tennant R, Hiller L, Fishwick R, Platt S, Joseph S, Weich S, et al. The Warwick-Edinburgh Mental Well-being Scale (WEMWBS): development and UK validation. Health Qual Life Outcomes. Nov 27, 2007;5:63. [FREE Full text] [CrossRef] [Medline]
- Cheung F, Lucas RE. Assessing the validity of single-item life satisfaction measures: results from three large samples. Qual Life Res. Dec 27, 2014;23(10):2809-2818. [FREE Full text] [CrossRef] [Medline]
- Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. May 22, 2006;166(10):1092-1097. [CrossRef] [Medline]
- Kroenke K, Strine TW, Spitzer RL, Williams JBW, Berry JT, Mokdad AH. The PHQ-8 as a measure of current depression in the general population. J Affect Disord. Apr 22, 2009;114(1-3):163-173. [CrossRef] [Medline]
- Raes F, Pommier E, Neff KD, Van Gucht D. Construction and factorial validation of a short form of the Self-Compassion Scale. Clin Psychol Psychother. Apr 08, 2011;18(3):250-255. [CrossRef] [Medline]
- Romppel M, Herrmann-Lingen C, Wachter R, Edelmann F, Düngen H-D, Pieske B, et al. A short form of the General Self-Efficacy Scale (GSE-6): development, psychometric properties and validity in an intercultural non-clinical sample and a sample of patients at risk for heart failure. Psychosoc Med. 2013;10:Doc01. [FREE Full text] [CrossRef] [Medline]
- Dixon JB, Laurie CP, Anderson ML, Hayden MJ, Dixon ME, O'Brien PE. Motivation, readiness to change, and weight loss following adjustable gastric band surgery. Obesity (Silver Spring). Apr 2009;17(4):698-705. [FREE Full text] [CrossRef] [Medline]
- Eshah NF. Readiness for behavior change in patients living with ischemic heart disease. J Nurs Res. Dec 2019;27(6):e57. [FREE Full text] [CrossRef] [Medline]
- Finsrud I, Nissen-Lie HA, Vrabel K, Høstmælingen A, Wampold BE, Ulvenes PG. It's the therapist and the treatment: The structure of common therapeutic relationship factors. Psychother Res. Feb 2022;32(2):139-150. [FREE Full text] [CrossRef] [Medline]
- Soto CJ, John OP. The next Big Five Inventory (BFI-2): developing and assessing a hierarchical model with 15 facets to enhance bandwidth, fidelity, and predictive power. J Pers Soc Psychol. Jul 02, 2017;113(1):117-143. [FREE Full text] [CrossRef] [Medline]
- Stoyanov SR, Hides L, Kavanagh DJ, Wilson H. Development and validation of the user version of the Mobile Application Rating Scale (uMARS). JMIR Mhealth Uhealth. Jun 10, 2016;4(2):e72. [FREE Full text] [CrossRef] [Medline]
- Yohai VJ. High breakdown-point and high efficiency robust estimates for regression. Ann Statist. Jun 1987;15(2):642-656. [CrossRef]
- Koller M, Stahel W. Sharpening Wald-type inference in robust regression for small samples. Computational Statistics & Data Analysis. Aug 1, 2011;55(8):2504-2515. [CrossRef]
- Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Methodological). Jan 01, 1995;57(1):289-300. [CrossRef]
- Yentes R, Wilhelm F. careless: procedures for computing indices of careless responding. R package. 2021. [FREE Full text]
- Kunst AE, Bos V, Lahelma E, Bartley M, Lissau I, Regidor E, et al. Trends in socioeconomic inequalities in self-assessed health in 10 European countries. Int J Epidemiol. Apr 2005;34(2):295-305. [FREE Full text] [CrossRef] [Medline]
- Mackenbach JP, Stirbu I, Roskam A-JR, Schaap MM, Menvielle G, Leinsalu M, et al. European Union Working Group on Socioeconomic Inequalities in Health. Socioeconomic inequalities in health in 22 European countries. N Engl J Med. Jun 05, 2008;358(23):2468-2481. [FREE Full text] [CrossRef] [Medline]
- Barber MN, Staples M, Osborne RH, Clerehan R, Elder C, Buchbinder R. Up to a quarter of the Australian population may have suboptimal health literacy depending upon the measurement tool: results from a population-based survey. Health Promot Int. Sep 05, 2009;24(3):252-261. [CrossRef] [Medline]
- van der Heide I, Wang J, Droomers M, Spreeuwenberg P, Rademakers J, Uiters E. The relationship between health, education, and health literacy: results from the Dutch Adult Literacy and Life Skills Survey. J Health Commun. Sep 13, 2013;18 Suppl 1(Suppl 1):172-184. [FREE Full text] [CrossRef] [Medline]
- Sevilla A, Gimenez-Nadal JI, Gershuny J. Leisure inequality in the United States: 1965-2003. Demography. Aug 2012;49(3):939-964. [FREE Full text] [CrossRef] [Medline]
- Winter N, Russell L, Ugalde A, White V, Livingston P. Engagement strategies to improve adherence and retention in web-based mindfulness programs: systematic review. J Med Internet Res. Jan 12, 2022;24(1):e30026. [FREE Full text] [CrossRef] [Medline]
- Burke A, Lam CN, Stussman B, Yang H. Prevalence and patterns of use of mantra, mindfulness and spiritual meditation among adults in the United States. BMC Complement Altern Med. Jun 15, 2017;17(1):316. [FREE Full text] [CrossRef] [Medline]
- Goldberg LR. The structure of phenotypic personality traits. Am Psychol. Jan 15, 1993;48(1):26-34. [CrossRef] [Medline]
- Khwaja M, Pieritz S, Faisal AA, Matic A. Personality and engagement with digital mental health interventions. In: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization. 2021. Presented at: UMAP Conference on User Modeling, Adaptation and Personalization; June 21-25, 2021:235-239; Utrecht, Netherlands. [CrossRef]
- Alqahtani F, Meier S, Orji R. Personality-based approach for tailoring persuasive mental health applications. User Model User-Adap Inter. Jul 16, 2021;32(3):253-295. [FREE Full text] [CrossRef]
- Whitford S, Warren K. Perceived barriers to meditation among college students. Building Healthy Academic Communities Journal. 2019;3(2):23-33. [CrossRef]
- Delmonte MM. Personality correlates of meditation practice frequency and dropout in an outpatient population. J Behav Med. Dec 29, 1988;11(6):593-597. [CrossRef] [Medline]
- Van Dam N, Targett J, Davies J, Burger A, Galante J. Incidence and predictors of meditation-related unusual experiences and adverse effects in a representative sample of meditators in the United States. Clin Psychol Sci. Jan 06, 2025;13(3):632-648. [CrossRef] [Medline]
- Jiwani Z, Lam SU, Richard JD, Goldberg SB. Motivation for meditation and its association with meditation practice in a national sample of internet users. Mindfulness (N Y). Oct 06, 2022;13(10):2641-2651. [FREE Full text] [CrossRef] [Medline]
- Clarke J, Proudfoot J, Whitton A, Birch M-R, Boyd M, Parker G, et al. Therapeutic alliance with a fully automated mobile phone and web-based intervention: secondary analysis of a randomized controlled trial. JMIR Ment Health. Feb 25, 2016;3(1):e10. [FREE Full text] [CrossRef] [Medline]
- Tremain H, McEnery C, Fletcher K, Murray G. The therapeutic alliance in digital mental health interventions for serious mental illnesses: narrative review. JMIR Ment Health. Aug 07, 2020;7(8):e17204. [FREE Full text] [CrossRef] [Medline]
- D'Alfonso S, Lederman R, Bucci S, Berry K. The digital therapeutic alliance and human-computer interaction. JMIR Ment Health. Dec 29, 2020;7(12):e21895. [FREE Full text] [CrossRef] [Medline]
- Armitage CJ, Norman P, Alganem S, Conner M. Expectations are more predictive of behavior than behavioral intentions: evidence from two prospective studies. Ann Behav Med. Apr 29, 2015;49(2):239-246. [FREE Full text] [CrossRef] [Medline]
- Bhattacherjee A. Understanding information systems continuance: an expectation-confirmation model. MIS Quarterly. Sep 2001;25(3):351-370. [CrossRef]
- Zimmerman GL, Olsen CG, Bosworth MF. A 'stages of change' approach to helping patients change behavior. Am Fam Physician. Mar 01, 2000;61(5):1409-1416. [FREE Full text] [Medline]
- Moyers TB, Martin T, Houck JM, Christopher PJ, Tonigan JS. From in-session behaviors to drinking outcomes: a causal chain for motivational interviewing. J Consult Clin Psychol. Dec 2009;77(6):1113-1124. [FREE Full text] [CrossRef] [Medline]
- Singh A. Self-efficacy and well-being among students: role of goal meditation. International Journal of Indian Psychology. 2019;7(2):405-414. [CrossRef] [Medline]
- Goldstein L, Nidich SI, Goodman R, Goodman D. The effect of transcendental meditation on self-efficacy, perceived stress, and quality of life in mothers in Uganda. Health Care Women Int. Jul 2018;39(7):734-754. [FREE Full text] [CrossRef] [Medline]
- Wells RE, Burch R, Paulsen RH, Wayne PM, Houle TT, Loder E. Meditation for migraines: a pilot randomized controlled trial. Headache. Oct 2014;54(9):1484-1495. [CrossRef] [Medline]
- Peer E, Rothschild D, Gordon A, Evernden Z, Damer E. Data quality of platforms and panels for online behavioral research. Behav Res Methods. Aug 2022;54(4):1643-1662. [FREE Full text] [CrossRef] [Medline]
- Adams J. User and app related factors associated with engagement with mindfulness apps and health and wellbeing: a cross-sectional survey of US mindfulness app users. Open Science Framework (OSF). Mar 21, 2025. URL: https://osf.io/jcv5n/overview [accessed 2025-11-07]
Abbreviations
| BFI-S-2: Big Five Inventory Short Form 2 |
| DWAI-6: 6-item Digital Working Alliance Inventory |
| EQ-HWB-9: 9-item EuroQoL Health and Wellbeing |
| FDR: false discovery rate |
| K10: Kessler Psychological Distress Scale |
| MBP: mindfulness-based program |
| reCAPTCHA: reverse Completely Automated Public Turing Test to Tell Computers and Humans Apart |
| SWLS: Satisfaction With Life Scale |
| uMARS: user Mobile Application Rating Scale |
| WEMWBS: Warwick-Edinburgh Mental Wellbeing Scale |
Edited by A Mavragani, T de Azevedo Cardoso; submitted 30.Jan.2025; peer-reviewed by SU Lam, G Cain; comments to author 03.Mar.2025; revised version received 29.Mar.2025; accepted 20.Aug.2025; published 02.Feb.2026.
Copyright©Julia Adams, Jonathan Davies, Prai Wattanatakulchat, Julieta Galante, Felicity Miller, Simon D'Alfonso, Nicholas T Van Dam. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 02.Feb.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

