
From Efficacy to Scale: Addressing Digital Health’s Original Sin

Author of this article:

Trevor van Mierlo, JMIR Correspondent

Key Takeaways

  • Dropout ≠ failure: Attrition is a defining feature of digital health, reflecting diverse user pathways rather than a universal flaw.
  • Efficacy isn’t enough: Efficacy remains essential, but prioritizing completion over engagement has stalled adoption and prevented scale.
  • Engagement must be central: Normalizing attrition reporting and adopting software-informed dissemination models are essential to move from efficacy to scale.

In the late 1990s, when I got my start in digital health, top of mind for researchers and investors was disruption. Internet penetration in North America was still limited. In 1998, only about 22% of US households had home access to the internet [1], and the central debate was not about the technology itself but about equitable access.

In labs and in boardrooms, a future was imagined where behavior change could be delivered at scale—inexpensive, global, and tailored to individual needs. This optimism was understandable and exciting; if people could log on, they could get better.

However, this vision contributed to digital health’s original sin. Rather than developing and evaluating interventions like software—iterative, user-focused, adaptable—researchers and investors jumped to conclusions, and we modeled trials on pharmaceuticals.

We set ourselves up for failure by defining success as whether users completed a full course of treatment. Attrition and nonuse were regarded as failures rather than as signals about how people realistically engage.

Three decades later, the consequence is clear: while efficacy has been demonstrated, digital health has yet to achieve scale comparable to other platforms that have become part of everyday life.

In 2005, Eysenbach published The Law of Attrition [2]. The central observation was that, unlike pharmaceutical trials, in which patients in the experimental arm receive a full dose, published digital interventions showed substantial drop-off. Eysenbach argued for a “science of attrition,” proposing systematic studies with tools such as survival analysis and attrition curves to understand where—and why—users disengage.
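To make this concrete, here is a minimal sketch of how an attrition curve might be computed from a usage log. It is an illustration only, not Eysenbach’s method: the data layout (one record per user holding the last program week in which that user was active) and all numbers are assumptions, and a fuller survival analysis would also handle censoring, for example users who stop because they have reached their goal.

```python
# Minimal sketch: an attrition (survival) curve from hypothetical usage data.
# Assumed input: for each user, the last program week in which they were active
# (weeks are 1-indexed; the program runs for `n_weeks`).

from collections import Counter

def attrition_curve(last_active_week, n_weeks):
    """Return, for each week, the fraction of users still engaged in that week."""
    n_users = len(last_active_week)
    drop_counts = Counter(last_active_week)       # users whose final week was w
    still_active, curve = n_users, []
    for week in range(1, n_weeks + 1):
        curve.append(still_active / n_users)      # engaged at the start of `week`
        still_active -= drop_counts.get(week, 0)  # they disengage after `week`
    return curve

# Hypothetical cohort of 10 users in an 8-week program
example = [1, 2, 2, 3, 5, 5, 6, 8, 8, 8]
print(attrition_curve(example, 8))
# -> [1.0, 0.9, 0.7, 0.6, 0.6, 0.4, 0.3, 0.3]
```

Plotting such curves for each trial arm, as Eysenbach suggested, shows where users disengage rather than simply how many finish.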

A year later, Christensen and Mackinnon [3] sharpened the debate by confirming that not all attrition is negative. Some users may disengage because they achieve their goals, others may drop out because the intervention does not work for them, and still others may drop out because behavior change is inherently difficult to achieve. Nearly a decade after that, researchers were still struggling with how to interpret engagement. In 2014 and 2015, I contributed to the conversation by showing that social network engagement followed power law distributions [4,5]; in 2016, I applied the Gini coefficient to pinpoint participation inequality [6].
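For readers unfamiliar with the metric, the sketch below shows one common way a Gini coefficient can be computed over per-user activity counts such as posts or logins: 0 indicates perfectly equal participation, and values approaching 1 indicate that a small group of superusers accounts for most activity. The function, toy data, and resulting value are illustrative assumptions, not figures from the cited studies [4-6].

```python
# Minimal sketch: Gini coefficient of participation, computed over per-user
# activity counts (eg, posts or logins).

def gini(counts):
    xs = sorted(counts)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula on ascending values: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical network: most users post rarely or never, a few post heavily
posts_per_user = [0, 0, 0, 1, 1, 1, 2, 3, 25, 120]
print(round(gini(posts_per_user), 2))  # -> 0.83, consistent with a power law tail
```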

But even today, attrition remains an unsettled issue. To understand why, it helps to look more closely at cognitive behavioral therapy (CBT).

CBT is one of the most studied—and most effective—treatments for mood disorders. Dozens of randomized trials and meta-analyses have shown that CBT works when delivered digitally [7-9]. In fact, some reviews conclude that digital CBT can be as effective as face-to-face therapy, provided people engage with it. This is precisely where our original sin becomes visible.

In the early 2000s, we maintained our utopian assumption that digital CBT would function like a pill: patients would sign up, complete all sessions, and report back. In the real world, CBT does not work this way, even when delivered face-to-face. It is not prescribed like an antibiotic with a fixed dose. Some patients improve after a few sessions, while others require more. Relapse is common. Progress depends on how consistently patients complete homework, and whether a therapist resonates with a specific patient is also key.

Major institutions reflect this variability. Harvard Health Publishing suggests that an effective course of CBT may take 12-20 weeks [10]. The United Kingdom’s National Health Service recommends 5-15 weeks [11], while the Mayo Clinic notes anywhere from 5 to 20 sessions may be needed [12].

There is no single prescription. Digital CBT makes this variability visible as usage data show precisely how people engage—or disengage.

In digital health, we often say “RCT” without distinguishing what kind of RCT we mean. This is a problem because there is an important difference between randomized clinical trials and randomized controlled trials.

In medicine, clinical trials are the gold standard. They are slow, expensive, and designed to test safety and efficacy. For pharmaceuticals, this makes perfect sense. Patients receive a standardized dose, and variables like blood pressure, cholesterol, or tumor size are measured against control groups.

But digital health is not a pill. Software is iterative, influenced by design, messaging, and behavioral economics. It’s not enough to show that a digital CBT program works in a specific population. The real challenge is how to make it work better, for more people, through different modalities, in an ever-shifting environment. This is where controlled trials come in. These experiments, common in software development, measure behavior: how often a user logs in, how they respond to cues, or if they prefer a specific design. If you used a large-scale platform today, you likely participated in a controlled trial [13].
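As a simplified illustration of what such a controlled trial measures, the sketch below compares a hypothetical behavioral metric (whether users return within 7 days) between two interface variants using a two-proportion z-test under a normal approximation. The variant labels, sample sizes, and counts are invented for illustration; real platforms layer on safeguards such as sequential testing and multiple-comparison corrections.

```python
# Minimal sketch: analyzing a behavioral A/B (controlled) trial with a
# two-proportion z-test. Hypothetical metric: did the user return within 7 days?

from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (difference in rates, z statistic, two-sided p value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p under the normal approximation
    return p_a - p_b, z, p_value

# Variant A: current onboarding; Variant B: redesigned onboarding (made-up numbers)
diff, z, p = two_proportion_ztest(success_a=1180, n_a=5000, success_b=1312, n_b=5000)
print(f"return-rate difference = {diff:+.3f}, z = {z:.2f}, p = {p:.4f}")
```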

Does this replace the need for clinical trials? Absolutely not. Clinical outcomes are essential. Without evidence of efficacy, no intervention can be trusted. However, after decades of trials showing that digital CBT works, the immediate need is to optimize engagement. We need to understand how design choices and behavioral strategies can help more users remain active.

Despite decades of research and dozens of trials, no digital behavioral intervention has reached adoption at the scale of mainstream digital platforms such as Facebook, YouTube, Netflix, or Amazon, which serve hundreds of millions of users.

By treating digital health like pharmaceuticals—whether selling our expertise to venture capitalists or granting agencies—we adopted the wrong sales language. This stigmatized attrition, making honest conversations and investment in engagement difficult.

We also misaligned research investment. Billions were spent on randomized clinical trials to prove efficacy. The original studies were important for establishing a baseline, but instead of exploring how users interacted and how those patterns could be optimized, the field—researchers and companies alike—was myopically focused on proving “my intervention works.”

Finally, pharmaceuticals have a clear path to scale: once efficacy is demonstrated, drugs are patented, prescribed, and distributed. Software does not work this way. Platforms are inexpensive to build but cannot be patented. They require marketing and, after launch, they are tweaked to scale across cultures and contexts.

By forcing digital health into a pharmaceutical mold, we created a business model that could not match the realities of software economics.

The way forward is to build on efficacy. Without clinical efficacy, digital health cannot be trusted, but efficacy alone will never be enough. If we want digital interventions to scale:

  • We must normalize attrition. Dropout data must not be buried in appendices; it maps how people engage.
  • We must rethink dissemination. On large-scale platforms, product-led, freemium, and culturally adaptive models have replaced enterprise sales.
  • With the right data, we can train artificial intelligence systems to engage users ethically and responsibly, turning efficacy into real-world impact at scale [14,15].

Two decades after The Law of Attrition, the lesson is clear: attrition is a system signal, not noise. By confronting our original sin, we can optimize engagement and perhaps finally realize our global vision.

Conflicts of Interest

The author is the Founder and a Strategic Advisor of the Evolution Health platform and acts as a consultant for both academic and commercial digital interventions. No specific product is promoted in this article, and the views expressed are those of the author.

  1. Falling through the net: defining the digital divide. US Department of Commerce, National Telecommunications and Information Administration. 1999. URL: https://www.ntia.gov/sites/default/files/data/fttn99/index.html [accessed 2025-10-16]
  2. Eysenbach G. The law of attrition. J Med Internet Res. Mar 31, 2005;7(1):e11. [CrossRef] [Medline]
  3. Christensen H, Mackinnon A. The law of attrition revisited. J Med Internet Res. Sep 29, 2006;8(3):e20. [CrossRef] [Medline]
  4. van Mierlo T. The 1% rule in four digital health social networks: an observational study. J Med Internet Res. Feb 4, 2014;16(2):e33. [CrossRef] [Medline]
  5. van Mierlo T, Hyatt D, Ching AT. Mapping power law distributions in digital health social networks: methods, interpretations, and practical implications. J Med Internet Res. Jun 25, 2015;17(6):e160. [CrossRef] [Medline]
  6. van Mierlo T, Hyatt D, Ching AT. Employing the Gini coefficient to measure participation inequality in treatment-focused Digital Health Social Networks. Netw Model Anal Health Inform Bioinform. 2016;5(1):32. [CrossRef] [Medline]
  7. Barak A, Hen L, Boniel-Nissim M, Shapira N. A comprehensive review and a meta-analysis of the effectiveness of internet-based psychotherapeutic interventions. J Technol Hum Serv. Jul 3, 2008;26(2-4):109-160. [CrossRef]
  8. Mamukashvili-Delau M, Koburger N, Dietrich S, Rummel-Kluge C. Long-term efficacy of internet-based cognitive behavioral therapy self-help programs for adults with depression: systematic review and meta-analysis of randomized controlled trials. JMIR Ment Health. Aug 22, 2023;10:e46925. [CrossRef] [Medline]
  9. Mohr DC, Kwasny MJ, Meyerhoff J, Graham AK, Lattie EG. The effect of depression and anxiety symptom severity on clinical outcomes and app use in digital mental health treatments: meta-regression of three trials. Behav Res Ther. Dec 2021;147:103972. [CrossRef] [Medline]
  10. Jeong Youn S, Marques L. Intensive CBT: how fast can I get better? Harvard Health Publishing. Oct 23, 2018. URL: https://www.health.harvard.edu/blog/intensive-cbt-how-fast-can-i-get-better-2018102315110 [accessed 2025-10-16]
  11. Overview - cognitive behavioural therapy (CBT). National Health Service. Nov 10, 2022. URL: https://www.nhs.uk/mental-health/talking-therapies-medicine-treatments/talking-therapies-and-counselling/cognitive-behavioural-therapy-cbt [accessed 2025-10-16]
  12. Cognitive behavioral therapy. Mayo Clinic. 2019. URL: https://www.mayoclinic.org/tests-procedures/cognitive-behavioral-therapy/about/pac-20384610 [accessed 2025-10-16]
  13. Xu YC, Chen N, Fernandez A, Sinno O, Bhasin A. From infrastructure to culture: A/B testing challenges in large scale social networks. Presented at: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; Aug 10-13, 2015; Sydney, NSW, Australia. [CrossRef]
  14. Ghosh SG, Bhanu T, Gao D, et al. Reproducible workflow for online AI in digital health. arXiv. Preprint posted online on Sep 16, 2025. [CrossRef]
  15. van Mierlo T, Fournier R, Kit Yeung S, Lahutina S. Developing a behavioral phenotyping layer for artificial intelligence-driven predictive analytics in a digital resiliency course: protocol for a randomized controlled trial. JMIR Res Protoc. Aug 6, 2025;14:e73773. [CrossRef] [Medline]


© JMIR Publications. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 27.Oct.2025.