Published on 05.07.2022 in Vol 24, No 7 (2022): July

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/35884.
StudyU: A Platform for Designing and Conducting Innovative Digital N-of-1 Trials

Viewpoint

1Digital Health Center, Hasso Plattner Institute for Digital Engineering, University of Potsdam, Potsdam, Germany

2Digital Engineering Faculty, University of Potsdam, Potsdam, Germany

3Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States

4Department of Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

5The Center for Advanced Design Studies, Palo Alto, CA, United States

Corresponding Author:

Stefan Konigorski, PhD

Digital Health Center

Hasso Plattner Institute for Digital Engineering

University of Potsdam

Prof.-Dr.-Helmert-Straße 2-3

Potsdam, 14482

Germany

Phone: 49 331 5509 4873

Email: stefan.konigorski@hpi.de


N-of-1 trials are the gold standard study design to evaluate individual treatment effects and derive personalized treatment strategies. Digital tools have the potential to initiate a new era of N-of-1 trials in terms of scale and scope, but fully functional platforms are not yet available. Here, we present the open-source StudyU platform, which includes the StudyU Designer and the StudyU app. The StudyU Designer is a collaborative web application that enables scientists to digitally specify, publish, and conduct N-of-1 trials. The StudyU app is a smartphone app with innovative user-centric elements through which participants partake in trials published through the StudyU Designer and assess the effects of different interventions on their health. The StudyU platform thereby allows clinicians and researchers worldwide to easily and safely design and conduct digital N-of-1 trials. We envision that StudyU can change the landscape of personalized treatments for both patients and healthy individuals, democratize and personalize evidence generation for self-optimization and medicine, and be integrated into clinical practice.

J Med Internet Res 2022;24(7):e35884

doi:10.2196/35884

Introduction

A widespread aim in current medical research is to derive personalized treatment strategies, which are at the heart of treating every single patient with the best possible therapy. One motivation underlying this aim is that many drugs are only effective in up to 50% of patients [1-3]. Hence, treatment guidelines based on population-level randomized controlled trials (RCTs), which derive the best average treatment, may result in ineffective treatment or side effects in up to 50% of patients. As another characteristic of traditional RCTs, study participants only provide data for population-level analyses and do not benefit from participating in the studies; population-level RCTs are not meant to provide insights for individual participants. The gold standard design for evaluating individual-level treatment effects is the prospective longitudinal N-of-1 trial [4], a multi-crossover RCT with a sample size of one [5,6]. That is, the study participant is administered the treatments of interest over time according to a predefined setup of treatment length, duration, treatment blocks, and washout phases. In the literature, the term N-of-1 trial is sometimes used synonymously with single-case experimental design (SCED), mainly in the United Kingdom, but it mostly describes a special case of SCED [7].
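To make this design concrete, the following minimal Python sketch generates a hypothetical day-by-day schedule for one participant, with two interventions, a fixed block length, washout days between blocks, and a randomized intervention order within each cycle. It is an illustration only; the intervention labels and all parameter values are arbitrary assumptions and do not correspond to any specific trial or to the StudyU code base.

```python
import random

def n_of_1_schedule(interventions=("A", "B"), n_cycles=4,
                    block_days=7, washout_days=2, seed=42):
    """Build a day-by-day plan: each cycle contains one block per
    intervention (order randomized within the cycle), with washout
    days between consecutive blocks."""
    rng = random.Random(seed)
    plan = []
    for cycle in range(n_cycles):
        order = list(interventions)
        rng.shuffle(order)  # within-cycle randomization of treatment order
        for block, treatment in enumerate(order):
            plan.extend([treatment] * block_days)
            is_last_block = (cycle == n_cycles - 1) and (block == len(order) - 1)
            if not is_last_block:
                plan.extend(["washout"] * washout_days)
    return plan

schedule = n_of_1_schedule()
print(len(schedule), "days, starting with:", schedule[:18])
```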

N-of-1 trials are well suited when there is large interindividual heterogeneity in treatment effects, as well as when subpopulations or individuals with comorbidities of interest have been excluded from population-level RCTs. Important considerations relate to blinding, randomization, the typically arising correlation of measurements over time, and carryover effects of interventions, which can be addressed either through the design of N-of-1 trials or in the subsequent statistical analysis. Series of N-of-1 trials can be aggregated to provide population-level estimates of treatment effects with efficiency similar to that of RCTs but require smaller numbers of participants [4,8,9]. Historically, there have been local implementations of N-of-1 trials in hospitals in the United States, Canada, and Australia [4,10]; series of articles on N-of-1 trials have been published in medical and epidemiological journals [11,12]; and networks on N-of-1 studies have been formed [13]. Advancements in digital technologies provide the potential to initiate a new era of N-of-1 trials in terms of scale and scope and have opened up new avenues to offer remote health care. In particular, performing N-of-1 trials digitally allows a seamless integration of trials into daily life, which can save time for participants and researchers since there is no need to visit a study center; this is especially important if daily measurements are collected. Extensive data can also be assessed passively to further reduce the burden on the participant if sensors are linked to a digital N-of-1 trial app. Finally, recruitment can be simplified as participants from all over the world can take part in a trial, which is even more important for rare diseases with few potential participants [14].

Nonetheless, N-of-1 trials have not been integrated into mainstream clinical research or clinical practice. One underlying reason might be that, despite recently published guidelines [15-17], there are still open questions regarding the ethical framework for applying N-of-1 trials in clinical care [18]. Typically, series of N-of-1 trials designed with a specific research aim require ethics approval, while single N-of-1 trials with a clinical aim for a single patient do not. Often, however, this distinction is not clear. For example, consider a potential N-of-1 trial where physicians aim to find out whether a particular drug is effective in off-label use for patients with chronic conditions such as chronic liver disease. The study setting involves patients treated in a specialized clinic department. In addition to treating the patients, the physicians in charge might be interested in knowing whether the results are generalizable. In this situation, a series of N-of-1 trials can be designed, comparing standard care to standard care plus off-label drug use over different crossover periods. This example shows how N-of-1 trials can be woven into clinical practice and, as such, how their innovative design might be of high interest to physicians, if they can be performed easily. The latter point is perhaps the most important reason why N-of-1 trials have not been adopted more broadly: no platform has been available that allows an easy and large-scale implementation of digital N-of-1 trials. As of now, conducting a digital N-of-1 trial generally necessitates the development of a new app.

Here, we present StudyU, an open-source, free, and easy-to-use platform that comprises a study designer app for researchers, which allows N-of-1 trials to be easily designed, customized, and implemented, and a study participant app through which participants can partake in these trials without having to set up user accounts. This enables novel interactions among researchers, trials, and study participants.


Related Work

A number of apps for conducting N-of-1 trials have been published. To attract physicians and researchers to design and conduct digital N-of-1 trials through a platform, the platform has to be as easily accessible as possible, should be able to implement interventions beyond mere symptom tracking, and, more generally, should allow studies to be designed flexibly for different interventions and outcomes. Finally, analyzing the results in the app and providing them back to the participant is an essential component to harness the intrinsic patient-empowering potential of N-of-1 trials.

Table 1 presents, to the best of our knowledge, an overview of the most relevant apps that can be used to perform individual-level studies, particularly N-of-1 trials. We note that it is not exhaustive, and some other commercial platforms exist with limited publicly available information on their functionality; for example, the N of 1 platform by Digital Infuzion [19], which allows observational tracking of study participants. Furthermore, some apps have been developed that focus on the cocreation of single N-of-1 trials by the study participants themselves [20,21]; these are not the focus here.

The Trialist app [22] provides results back to the participants and has been used in different N-of-1 trials but is currently not available for download and general use. The N1 app [23] had one study implemented for iOS users in the United States, which investigated the effects of caffeine and L-theanine on cognitive outcomes; it did not allow customization or further implementation of studies and is currently not available. Several apps provide functionalities for self-tracking and self-quantification but do not allow for an experimental evaluation of interventions (eg, mPower [24] and Parkinson mPower2 [24]). Of these apps, N1 and mPower are based on the Apple ResearchKit. OpenClinica [25] allows creating and conducting studies but focuses on electronic data capture and data management and neither reports results to participants nor allows a collaborative creation of studies. QuantifyMe [26] is a platform that allows users to choose from a limited and prespecified set of interventions and outcomes to design a study, without further customization possibilities. TummyTrials [27] and SleepCoacher [28] provide possibilities to choose from a set of specified interventions and investigate their effect on food triggers in irritable bowel syndrome and on sleep, respectively. Finally, PACO [29] and movisensXS [30,31] provide tools to design studies but are missing the main component of N-of-1 trials, in that the study app only gathers data; the results are neither analyzed in the app nor reported back to the study participant in the app. PACO has the further restriction that it is only available outside of the European Union and Switzerland. Furthermore, all of these platforms, except TummyTrials, require user accounts, which can create difficulties in terms of data privacy, especially if apps are planned to be used in different countries.

Table 1. Overview of existing apps and platforms that are suitable for gathering individual-level data. Some report the results of the conducted studies back to the user (column “Statistical evaluation of results”).

Name | App availability | Possible studies/diseases | Platforms | Statistical evaluation of results | Customizable | Able to perform N-of-1 trials | Requires a user account | Link to the software
Trialist | No | Multiple options (for chronic pain only) | iOS, Android, or web | Yes | Limited options | Yes | Yes | N/Aa
mPower | Only United States | 1 (linked to Parkinson) | iOS | No | No | No | Yes | [32]
Parkinson mPower2 | Yes | 1 (linked to Parkinson) | iOS | No | No | No | Yes | [33]
PACO | Outside of the European Union and Switzerland | Flexible creation | iOS, Android, and web | No | Yes | Yes | Yes | [34]
movisensXS | Yes | Flexible | Android | No | Yes | Yes | Yes | [35]
OpenClinica | Yes | 0 | Web | No | Yes | Yes | Yes | [36]
N1 | Only United States | 1 (linked to cognitive health) | iOS | Yes | No | Yes | Yes | N/A
QuantifyMe | Source code only | 4 | Android | Yes | Limited options | Yes | Yes | [37]
TummyTrials | Source code only | 4 (linked to irritable bowel syndrome) | iOS | Yes | Limited options | Yes | No | [38]
SleepCoacher | Yes | Multiple options (linked to sleep) | iOS and Android | Yes | Limited options | Yes | Yes | [39]
StudyU | Yes | Flexible creation | iOS, Android, and web | Yes | Yes | Yes | No | [40-42]

aN/A: not applicable.

Vision

With StudyU, our goal is to attract more study participants and researchers to participate in and conduct N-of-1 trials by reducing setup and implementation efforts. We envision that health scientists, medical researchers, and physicians worldwide can use it to collaboratively design and conduct N-of-1 trials. StudyU can therefore serve as a platform that contributes to open, transparent, and reproducible medical science by (1) making the designs of the different trials directly available to foster reproducibility and well-designed studies, and (2) making the anonymized data contributed by the study participants of the platform available for analysis to foster the generation of novel medical insights on health intervention effects at the individual and population levels. We envision enabling the democratization and personalization of evidence generation in medicine and personal self-optimization.

The StudyU Platform

The StudyU platform consists of 3 main parts, as illustrated in Figure 1 (see Supplementary Text 1 in Multimedia Appendix 1 for more details on the architecture):

  1. the StudyU Designer web application for researchers,
  2. the StudyU app for mobile devices, and
  3. the backend where the participant data, study definitions, etc, are safely stored.
Figure 1. Architecture of the StudyU platform. Multiple researchers can collaboratively design and create studies and publish them. Then, study participants can partake in published studies. Study definitions and participant study data are stored in the backend.

Designing and implementing a study with the StudyU Designer includes specifying the interventions (Multimedia Appendix 1, p19), eligibility criteria (Multimedia Appendix 1, p20), observations (Multimedia Appendix 1, p21) and their schedule, the computation of results and their display to the participant (see the App section), and consent (Multimedia Appendix 1, p22). The designed studies are then available to participants through the StudyU app. This user journey is described in more detail in Multimedia Appendix 1, Supplementary Text 2. The designer and the app are currently available in German and English, with Spanish, French, and Korean versions planned for the near future. In the following sections, the technical setup and the main parts of StudyU are described.

Technical Setup and Use

The StudyU frontend applications are written in Flutter [43], an open-source, cross-platform user interface framework by Google based on the Dart programming language. With this, a single code base can be compiled into performant applications for multiple platforms: mobile, web, and desktop. Parse [44], a platform that incorporates functionalities such as object storage, user authentication, and push notifications, is used as the backend. All components are organized and composed as Docker [45] containers for easy deployment. The source code for the StudyU platform is publicly and freely available on GitHub [40], and the StudyU app is available on Google Play and the Apple App Store [41,42]. For demonstration purposes, the backend is deployed on Back4App [46], and the frontend applications of the StudyU Designer and StudyU app are deployed on Google Cloud Run [47,48]. StudyU can also be deployed into any HIPAA (Health Insurance Portability and Accountability Act)–compliant and GDPR (General Data Protection Regulation)–compliant cloud system.

In the current implementation of StudyU, two choices can be made regarding how to use the platform, which provides flexibility to meet the needs of the researcher. First, StudyU can either be installed on one's own separate server (or cloud) instance, or it can be run and accessed on a central server operated by a third party. Second, studies can be designed and published individually or in collaboration with researchers from other institutions. For collaborative design, studies can be accessed, edited, and saved by multiple researchers from multiple institutions. The studies can be accessed by multiple researchers at the same time, with the restriction that only one researcher can save changes at a time.

Study Model

StudyU is based on a generic study representation, which is essential to dynamically support multiple studies. The representation encompasses study metadata and study details. The metadata of a study include basic information such as the title, a short description, and the researcher’s contact, including the name of the institutional review board (IRB) and protocol number. The study details contain all information that is needed to execute the study: eligibility questions and criteria, interventions, observations, specification of output and report data, schedule, and consent. All objects and relationships are serialized and stored in JavaScript Object Notation (JSON) format. The overall components of this study model are displayed in Figure 2 (see Multimedia Appendix 1, p23 and Supplementary Text 3 for more details).
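As an illustration of what such a serialized study representation could look like, the following Python sketch builds a heavily simplified, hypothetical study definition and serializes it with the standard json module. The field names and values are assumptions chosen for readability and do not reproduce the actual StudyU study model.

```python
import json

# Hypothetical, simplified study definition; the field names are
# illustrative and do not reproduce the actual StudyU schema.
study = {
    "metadata": {
        "title": "Willow bark tea vs warming pad for chronic low back pain",
        "description": "Compares two daily interventions on pain intensity.",
        "contact": "researcher@example.org",
        "irb": {"name": "Example IRB", "protocol_number": "2022-001"},
    },
    "details": {
        "eligibility": [
            {"question": "Do you have chronic low back pain?",
             "required_answer": True}
        ],
        "interventions": [
            {"id": "tea", "name": "Willow bark tea"},
            {"id": "pad", "name": "Warming pad"},
        ],
        "observations": [
            {"id": "pain", "type": "scale", "range": [0, 10],
             "schedule": "daily"}
        ],
        "schedule": {"n_cycles": 2, "block_days": 7, "washout_days": 2},
        "reports": [{"type": "linear_regression", "outcome": "pain"}],
        "consent": ["Participation is voluntary.",
                    "Data are stored under a random identifier only."],
    },
}

# Serialize to JSON, as study definitions are stored in the backend.
print(json.dumps(study, indent=2))
```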

The generic study model allows the design of many different N-of-1 trials in StudyU. This is illustrated in 2 example studies that are implemented in StudyU:

  1. Investigation and comparison of the effect of any 2 of the following daily interventions on the intensity of chronic low back pain: willow bark tea, arnica balm, and warming pad
  2. Investigation of the effect of any 2 of the following daily interventions on diffuse abdominal pain in irritable bowel syndrome: gluten-free diet, low-fiber diet, and fructose-free diet.
Figure 2. A simplified overview of the StudyU study model. The notation is based on the Unified Modeling Language class diagram notation, which defines properties of single classes in rectangles and associations between multiple classes as connections. The associations shown in this diagram with a filled diamond at one end mean that one class, for example, "Study," is composed of another class, in this case, "StudyDetails." Numbers shown at associations indicate how many instances of one class take part in this association, for example, n "Observation" objects can be associated with one "StudyDetails" object.

StudyU Designer

The StudyU Designer consists of 2 main components: the dashboard and the editor. The rationale behind this concept is to build a user-friendly tool for researchers, which provides a logical framework with all the necessary components to plan and conduct a study. Figure 3 shows the dashboard, which displays drafted studies and published studies. Once a study is published, it is available to users in the app and cannot be edited anymore in the designer. For published studies, researchers can download participant data in comma-separated values (CSV) format.

When adding a new study or editing a draft study, the editor guides the researcher through all study specifications as defined in the study model, such as interventions, observations, inclusion and exclusion criteria, consent, the format of the downloadable CSV file with study results, and the specification of reports shown to the user in the app. More editor examples and further details are shown in Multimedia Appendix 1, Supplementary Text 4. The sole responsibility for studies lies with the study designers, and in order to ensure the appropriateness and safety of studies published in StudyU for study participants, the terms of use of StudyU prohibit misuse of the platform and require that researchers have completed training on good clinical practice. Researchers have to include an IRB protocol number in the study metadata to assure participants of the adequacy of their study.

Figure 3. The dashboard of the StudyU Designer with drafted and published studies and an editor screen for observation definitions.

App

The app enables users to participate in all studies that were created and published in the designer. This has the major advantage that participants do not have to download multiple apps for different studies but can partake in different studies through the same app and keep an overview of all of them. After the welcome screen (Figure 4A), users can select which of the published studies they want to participate in. Before users enroll in a study, its metadata are displayed in the study overview screen (Figure 4B). Then, users are led through the onboarding process with a validation of their eligibility, intervention selection, and declaration of consent. Finally, users arrive at the overview of daily tasks (Figure 4C), which contains the study progress bar and the daily tasks (Figure 4D). This is also the default screen users see when opening the app after the initial onboarding.

A centerpiece of the StudyU app is the result visualization, which is illustrated in Figure 5. To ensure that viewing the results does not introduce biases, participants can only view them upon completion of a minimum study length specified by the researcher. For this purpose, a recommended study length is displayed to researchers in the designer, which should be calculated on the basis of a statistical sample size calculation. It should be noted that in the current demonstration of StudyU [48], the results are available from the first day for demonstration purposes. Progress bars visualize the participant's current status in the study and show how many more observations are needed; the effects of the interventions (if present) can be detected with the specified statistical power if the participants continue with the intervention at least until they reach the minimum study duration and report the measurements without missing data. In the study designer, different report types can be selected: (1) the visualization of a linear regression model that tests whether the intervention has an effect on the outcome or (2) the report and explanation of individual results to the participant in bar charts. The definition of report types is implemented in an extensible way. More details are provided in Multimedia Appendix 1, Supplementary Text 5.
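To illustrate the kind of analysis behind a linear regression report, the following Python sketch fits a simple model with an intervention indicator to simulated daily pain scores of a single participant. It is only a conceptual illustration using statsmodels on made-up data; the app implements its analyses in its own code base, and the exact model specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated daily pain ratings (0-10) for one participant over 28 days,
# alternating 7-day blocks of interventions A and B; intervention B is
# assumed to reduce pain by about 1 point (illustrative values only).
intervention = np.tile(np.repeat(["A", "B"], 7), 2)
pain = 6 - 1.0 * (intervention == "B") + rng.normal(0, 1, intervention.size)
data = pd.DataFrame({"day": np.arange(intervention.size),
                     "intervention": intervention,
                     "pain": pain})

# Linear regression of the outcome on an intervention indicator: the
# coefficient for B estimates the individual treatment effect of B vs A.
fit = smf.ols("pain ~ C(intervention)", data=data).fit()
print(fit.params)
print("p value (B vs A):", fit.pvalues["C(intervention)[T.B]"])
```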

Figure 4. Initial screens of the StudyU app including the welcome screen, study overview screen, and daily screens.

Figure 5. Examples of study reports. The power bar in the top-left panel indicates whether enough data were collected to observe an effect. Reports are displayed either as a linear regression report or as a report showing data aggregated by day, phase, or intervention.

Data Processing

The studies carried out on the StudyU platform adhere to applicable ethical principles and international regulations, in particular, the GDPR (European Union) 2016/679 [49]. When the participant opens the app and accepts the terms of service, a new anonymous user account is created with a random ID that is assigned to the participant. Thus, no user profile that could be used to identify the participant is needed, there is no log-in requirement for the app, and the participant does not need to create a password. The anonymous account is saved in the backend and on the device and is activated whenever the app is opened. If the participant completes a daily task, the results are stored inside the user study object and updated on the server. With this setup, there is no risk of data loss due to the study participant logging out of the app or forgetting a password. The only risk is losing the smartphone, in which case the anonymous link cannot be recovered. Participants can opt out of the study, which deletes the unfinished study as well as the local storage and reference to it. The participant can also choose to have their data deleted locally and on the server. These are important principles of good clinical practice, and we further explicitly require every researcher using the StudyU Designer to have completed training in good clinical practice.
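The following minimal Python sketch illustrates the general idea of this account-free enrollment; it is a simplified, assumption-based illustration (using a UUID as the random identifier) and not the actual Parse-based implementation.

```python
import uuid

def create_anonymous_participant():
    """Sketch of anonymous enrollment: a random identifier generated on
    the device is the only key under which study data are stored; no
    name, e-mail address, or password is collected."""
    return {"participant_id": str(uuid.uuid4()), "responses": []}

def record_daily_task(account, observation_id, value):
    """Append a daily measurement under the random identifier only."""
    account["responses"].append({"observation": observation_id,
                                 "value": value})
    return account

account = create_anonymous_participant()
record_daily_task(account, "pain", 4)
print(account["participant_id"], len(account["responses"]), "response(s)")
```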

The legal basis for processing the study data is the consent provided by the participant via the researcher-defined consent form. Researchers can link and analyze data in different ways in the backend using the random user IDs but cannot link them to specific participants. StudyU does not collect identifiable information by design, and we also discourage researchers from collecting participant-identifiable data in their designed studies. We anticipate that this setup without user accounts will satisfy the regulations and data security standards in most countries, allowing a broad use of StudyU.


Discussion

Here, we have presented the StudyU platform, which allows researchers to easily design, customize, and implement N-of-1 trials and allows individuals to participate in those trials without having to set up user accounts. Through the StudyU Designer, researchers can collaboratively design trials. StudyU is available open source for iOS, Android, and the web, is free to use, and provides anonymized data entry, which prevents tracing the data back to the participants. It allows the entire study process to be conducted digitally: designing the study, recruiting participants, including and excluding participants through the study app, automatically analyzing the individual data in the app and reporting the results back to the participant, and saving the data in the secure backend so that researchers can analyze them further and aggregate them across N-of-1 trials. As further innovative concepts, we provide electronic consent and the possibility for study participants to view their progress through the study on a progress bar. With these features, StudyU is currently the only available platform that offers flexibility in N-of-1 trial design together with the capacity to conduct trials completely digitally. All other existing apps have limitations in the platforms they support, the possibility and customizability to design individual trials, the freedom to use the app without having to set up a user account, and the automated in-app statistical analysis that provides results back to the participant.

As participants likely start the trial with high intrinsic motivation and high expectations of gaining insights into their health, it is critical that they are not disappointed and do not drop out of the trial. The progress bar keeps participants informed about when they have reached the targeted study length and when they can view a statistical evaluation of their results. We expect that it can also encourage them to continue the study for a longer time before viewing the results in order to estimate treatment effects more precisely, thereby extending the classical statistical power–based sample size calculation. With this, we envision that participants understand the value of long-term participation in the study and stay motivated for a longer time so that dropout rates can be decreased, thus adding elements of extrinsic motivation to intrinsic motivation [50]. There is some conflicting evidence in the literature [51,52] on whether progress bars have a positive effect on adherence, with some suggestions that only specific types of progress bars (ie, fast-to-slow presentation) are beneficial. For this reason, we propose a new approach to the progress bar in future iterations of StudyU, offering participants the chance to look at their results at any time if they want, with the caveat that the results are only statistically evaluated once in order to avoid biased results. We expect such a design to have positive effects on study adherence, similar to the endowed progress effect [53].
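As an illustration of the kind of power-based calculation that a recommended study length could build on, the following Python sketch uses statsmodels to compute how many measurement days per intervention are needed to detect an assumed standardized effect with a two-sample t test. The effect size, significance level, and target power are arbitrary assumptions, the calculation treats daily measurements as independent (ie, it ignores autocorrelation), and it is not the exact calculation performed by StudyU.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Assumed planning inputs (illustrative only).
effect_size = 0.8   # standardized difference between interventions (Cohen's d)
alpha = 0.05        # two-sided significance level
power = 0.80        # target statistical power

# Number of measurement days per intervention for a two-sample t test,
# treating daily measurements as independent observations.
n_per_intervention = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=alpha, power=power,
    alternative="two-sided")
print("approx.", math.ceil(n_per_intervention),
      "measurement days per intervention")
```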

Using the StudyU platform, N-of-1 trials can be designed not only to study the effect of many different health interventions and lifestyle factors on health outcomes in rare and chronic diseases, but also to evaluate the effectiveness of digital health apps. N-of-1 trials can evaluate the effect of health interventions truly in the real-world setting. Especially with the ongoing COVID-19 pandemic highlighting the importance of remote and digital medicine, evaluation and digital integration into the home environment are of high value. While fully digital trials in the home environment can pose challenges for N-of-1 trials owing to possible carryover or confounding effects that have to be considered in the automated analysis, these challenges can be addressed through the implementation of more advanced statistical and machine learning methods. In fact, recent years have seen an unprecedented development of deep learning methods for estimating the individual-level effects of health interventions from population-level studies and for predicting individual disease trajectories and individual treatment effects [54-58]. These methods are often based on nontestable assumptions, require large data sets, and are limited in the interpretability of individual treatment effects in complex causal graphs. Combining them with the design advantages of N-of-1 trials can help derive fully automated analyses of complex real-life trials.

Two important considerations in N-of-1 trials are randomization and blinding, which ensure that unbiased estimates of causal effects can be obtained. Randomization in a within-person manner (eg, the order of treatments A and B within each cycle) and in a between-person manner can be implemented in StudyU if desired, but it should be considered that a deterministic sequence might be able to counterbalance specific time-confounding effects for a given participant, while a randomized sequence achieves this only on average. Blinding can occur on 2 levels: blinding researchers to treatment allocation and blinding study participants to treatment allocation. Researchers can be blinded in StudyU by incorporating another person who controls the allocation in the design of the trial. Blinding study participants with respect to which intervention they are currently following is not possible in many digital N-of-1 trials, as, for example, drinking tea and using a warming patch are visibly different. Blinding would have to be achieved with the help of a researcher, physician, or third person and can be implemented in StudyU by naming the interventions anonymously as A and B and providing, for example, similar-looking pills for A and B. Such blinding can prevent biases that might arise from the participants during the trial. However, this has to be balanced against the aim of N-of-1 trials to benefit and empower the participant. More importantly, it should be remembered that a conclusion that intervention A works better than B for a given participant also holds in a nonblinded trial; we only do not know the extent to which this effect was due to the intervention itself or to accompanying beliefs. However, the participant might not care why the intervention worked but might rather care about the fact that it worked.
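The counterbalancing argument can be made concrete with a small numerical sketch (purely illustrative assumptions): if the outcome follows only a linear time trend, a counterbalanced ABBA block sequence equalizes the average measurement time of the two interventions, whereas an AABB sequence lets the trend masquerade as a treatment effect.

```python
import numpy as np

# Toy example: the outcome depends only on a linear time trend, so any
# apparent A-vs-B difference is pure time confounding.
days = np.arange(1, 29)          # 28 study days
trend = 0.1 * days               # steady improvement over time

def apparent_effect(sequence, block_days=7):
    """Mean outcome under B minus mean outcome under A for a given
    sequence of 7-day blocks."""
    labels = np.repeat(list(sequence), block_days)
    return trend[labels == "B"].mean() - trend[labels == "A"].mean()

print("AABB:", round(apparent_effect("AABB"), 2))  # nonzero: confounded
print("ABBA:", round(apparent_effect("ABBA"), 2))  # zero: counterbalanced
```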

We plan to include several extensions in StudyU in the future. First, for the study designer, we plan to add more features that encourage collaboration on study designs. Setting up a database of interested researchers, clinicians, and institutions can help find partners for designing and conducting studies. In the current version of StudyU, all studies are, by default, public to enhance full collaboration and allow for open-access study development. We are working on a more fine-grained collaboration platform, which allows the researcher to make both the creation and the conducting of studies fully public or private to a selected group of collaborators and a selected group of invited study participants. As a second new feature in StudyU, we will include the possibility to link sensor-based data to measure health outcomes and covariates and also allow the integration of other digital health apps in StudyU. Third, we will provide the possibility to design adaptive trials, for example, including elements from microrandomized trials and just-in-time adaptive interventions [59-61]. Fourth, we plan to implement a more elaborate progress bar visualizing the study progress of the participants. The current progress bar is based on the study duration and the number of past measurements, but a more exact measure would be the number of nonmissing measurements. This feature can be added by including an automated check for the validity and completeness of the recorded data and feeding it back into the progress bar. Fifth, we plan to integrate more complex statistical and machine learning methods in the study app so that complex individual-level treatment effects of potentially time-varying treatments and time-varying confounders can be included in the modeling and in the results reported to the individuals. Currently, only linear regression and t tests are implemented in StudyU. They provide simple models with easily interpretable results and have been shown to provide efficient and robust treatment effect estimates even when autocorrelation and time trends are present [62]. Nonetheless, implementing more complex statistical models such as Bayesian mixed models or G-estimation will allow a more fine-grained and powerful analysis. Finally, we are working on the development of user-centric N-of-1 trials designed by the study participants themselves and are excited to integrate these study designs as well as the study results into StudyU [20]. We envision that a stronger focus on the cocreation of trials with participants can be very important for increasing adherence to the trial, especially for long-term experiments, where maintaining adherence is not straightforward, as shown in other studies [21,63]. Furthermore, fully cocreated trials, in which the participants define what they want to evaluate, might have a higher chance of exerting an actual effect on health behavior change. It would be interesting to embed such trials into models of health behavior change, such as the one by Prochaska et al [64], and consider which elements map to each stage of precontemplation, contemplation, preparation, action, maintenance, and termination. Building on this, linking N-of-1 trials to electronic health records in the future has the potential to connect N-of-1 trials to clinical care and clinical workflows and can further enhance the integration of medical research and clinical practice.

Acknowledgments

This work has received funding from the European Union’s Horizon 2020 research and innovation program under the grant agreement 826117 Smart4Health, building a citizen-centered EU–electronic health record exchange for personalized health, and from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; project 491466077).

Authors' Contributions

AMZ, NS, MM, FP, FH, and DFR developed and implemented StudyU and StudyU Designer under supervision of SK, TS, SW, and EB. SK, SW, TS, and EB conceived the project. BO and JAE provided critical input regarding design aspects. MD, EG, and MZ provided critical input regarding technical and ethical points. SK, SW, and TS drafted the manuscript. All authors reviewed and commented on the drafts of the manuscript and approved the final version for publication.

Conflicts of Interest

None declared.

Multimedia Appendix 1

Supplementary text and figures.

PDF File (Adobe PDF File), 1984 KB

  1. Spear BB, Heath-Chiozzi M, Huff J. Clinical application of pharmacogenetics. Trends Mol Med 2001 May;7(5):201-204. [CrossRef] [Medline]
  2. Leucht S, Helfer B, Gartlehner G, Davis JM. How effective are common medications: a perspective based on meta-analyses of major drugs. BMC Med 2015 Oct 02;13:253 [FREE Full text] [CrossRef] [Medline]
  3. Eichler H, Abadie E, Breckenridge A, Flamion B, Gustafsson LL, Leufkens H, et al. Bridging the efficacy-effectiveness gap: a regulator's perspective on addressing variability of drug response. Nat Rev Drug Discov 2011 Jul 01;10(7):495-506. [CrossRef] [Medline]
  4. Lillie EO, Patay B, Diamant J, Issell B, Topol EJ, Schork NJ. The n-of-1 clinical trial: the ultimate strategy for individualizing medicine? Per Med 2011 Mar;8(2):161-173 [FREE Full text] [CrossRef] [Medline]
  5. Nikles J, Mitchell G. The Essential Guide to N-of-1 Trials in Health. Amsterdam: Springer; 2015.
  6. Wilkinson J, Arnold KF, Murray EJ, van Smeden M, Carr K, Sippy R, et al. Time to reality check the promises of machine learning-powered precision medicine. Lancet Digit Health 2020 Dec;2(12):e677-e680 [FREE Full text] [CrossRef] [Medline]
  7. Bentley KH, Kleiman EM, Elliott G, Huffman JC, Nock MK. Real-time monitoring technology in single-case experimental design research: Opportunities and challenges. Behav Res Ther 2019 Jun;117:87-96. [CrossRef] [Medline]
  8. Blackston JW, Chapple AG, McGree JM, McDonald S, Nikles J. Comparison of Aggregated N-of-1 Trials with Parallel and Crossover Randomized Controlled Trials Using Simulation Studies. Healthcare (Basel) 2019 Nov 06;7(4):137 [FREE Full text] [CrossRef] [Medline]
  9. Punja S, Schmid CH, Hartling L, Urichuk L, Nikles CJ, Vohra S. To meta-analyze or not to meta-analyze? A combined meta-analysis of N-of-1 trial data with RCT data on amphetamines and methylphenidate for pediatric ADHD. J Clin Epidemiol 2016 Aug;76:76-81. [CrossRef] [Medline]
  10. Mirza RD, Punja S, Vohra S, Guyatt G. The history and development of N-of-1 trials. J R Soc Med 2017 Aug;110(8):330-340 [FREE Full text] [CrossRef] [Medline]
  11. Kravitz RL, Schmid CH, Marois M, Wilsey B, Ward D, Hays RD, et al. Effect of Mobile Device-Supported Single-Patient Multi-crossover Trials on Treatment of Chronic Musculoskeletal Pain: A Randomized Clinical Trial. JAMA Intern Med 2018 Oct 01;178(10):1368-1377 [FREE Full text] [CrossRef] [Medline]
  12. Knottnerus JA, Tugwell P, Tricco AC. Individual patients are the primary source and the target of clinical research. J Clin Epidemiol 2016 Aug;76:1-3. [CrossRef] [Medline]
  13. International Collaborative Network for N-of-1 Clinical Trials and Single-Case Designs.   URL: https://www.nof1sced.org/ [accessed 2022-04-17]
  14. Müller AR, Brands MMMG, van de Ven PM, Roes KCB, Cornel MC, van Karnebeek CDM, et al. Systematic Review of N-of-1 Studies in Rare Genetic Neurodevelopmental Disorders: The Power of 1. Neurology 2021 Mar 16;96(11):529-540 [FREE Full text] [CrossRef] [Medline]
  15. Porcino AJ, Shamseer L, Chan A, Kravitz RL, Orkin A, Punja S, SPENT group. SPIRIT extension and elaboration for n-of-1 trials: SPENT 2019 checklist. BMJ 2020 Feb 27;368:m122. [CrossRef] [Medline]
  16. Vohra S, Shamseer L, Sampson M, Bukutu C, Schmid CH, Tate R, CENT Group. CONSORT extension for reporting N-of-1 trials (CENT) 2015 Statement. BMJ 2015 May 14;350:h1738 [FREE Full text] [CrossRef] [Medline]
  17. Tate RL, Perdices M, Rosenkoetter U, Wakim D, Godbee K, Togher L, et al. Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: the 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychol Rehabil 2013;23(5):619-638. [CrossRef] [Medline]
  18. Stunnenberg BC, Deinum J, Nijenhuis T, Huysmans F, van der Wilt GJ, van Engelen BGM, et al. N-of-1 Trials: Evidence-Based Clinical Care or Medical Research that Requires IRB Approval? A Practical Flowchart Based on an Ethical Framework. Healthcare (Basel) 2020 Feb 27;8(1):49 [FREE Full text] [CrossRef] [Medline]
  19. Product: N Of 1 Health Research. Digital Infuzion.   URL: https://www.digitalinfuzion.com/products-patents/n-of-1-health-research/ [accessed 2022-04-17]
  20. Zenner A, Böttinger E, Konigorski S. StudyMe: a new mobile app for user-centric N-of-1 trials. arXiv:2108.00320 Preprint posted online July 31, 2021 [FREE Full text]
  21. Daskalova N, Kyi E, Ouyang K, Borem A, Chen S, Park SH, et al. Self-E: Smartphone-Supported Guidance for Customizable Self-Experimentation. 2021 Presented at: CHI '21: CHI Conference on Human Factors in Computing Systems; May 8-13, 2021; Yokohama   URL: https://doi.org/10.1145/3411764.3445100 [CrossRef]
  22. Barr C, Marois M, Sim I, Schmid CH, Wilsey B, Ward D, et al. The PREEMPT study - evaluating smartphone-assisted n-of-1 trials in patients with chronic pain: study protocol for a randomized controlled trial. Trials 2015 Feb 27;16:67 [FREE Full text] [CrossRef] [Medline]
  23. Golden E, Johnson M, Jones M, Viglizzo R, Bobe J, Zimmerman N. Measuring the Effects of Caffeine and L-Theanine on Cognitive Performance: A Protocol for Self-Directed, Mobile N-of-1 Studies. Front Comput Sci 2020 Feb 13;2:4. [CrossRef]
  24. Bot BM, Suver C, Neto EC, Kellen M, Klein A, Bare C, et al. The mPower study, Parkinson disease mobile data collected using ResearchKit. Sci Data 2016 Mar 03;3:160011 [FREE Full text] [CrossRef] [Medline]
  25. Ngari MM, Waithira N, Chilengi R, Njuguna P, Lang T, Fegan G. Experience of using an open source clinical trials data management software system in Kenya. BMC Res Notes 2014 Nov 26;7:845 [FREE Full text] [CrossRef] [Medline]
  26. Taylor S, Sano A, Ferguson C, Mohan A, Picard RW. QuantifyMe: An Open-Source Automated Single-Case Experimental Design Platform. Sensors (Basel) 2018 Apr 05;18(4):1097 [FREE Full text] [CrossRef] [Medline]
  27. Karkar R, Schroeder J, Epstein DA, Pina LR, Scofield J, Fogarty J, et al. TummyTrials: A Feasibility Study of Using Self-Experimentation to Detect Individualized Food Triggers. Proc SIGCHI Conf Hum Factor Comput Syst 2017 May 02;2017:6850-6863 [FREE Full text] [CrossRef] [Medline]
  28. Daskalova N, Metaxa D, Tran A, Nugent NR, Boergers J, McGeary J, et al. SleepCoacher: A Personalized Automated Self-Experimentation System for Sleep Recommendations. 2016 Presented at: UIST '16: The 29th Annual ACM Symposium on User Interface Software and Technology; October 16-19, 2016; Tokyo. [CrossRef]
  29. Evans B. Paco-Applying Computational Methods to Scale Qualitative Methods. Ethnogr Prax Ind Conf Proc 2016 Nov 29;2016(1):348-368. [CrossRef]
  30. Giessing L, Oudejans RRD, Hutter V, Plessner H, Strahler J, Frenkel MO. Acute and Chronic Stress in Daily Police Service: A Three-Week N-of-1 Study. Psychoneuroendocrinology 2020 Dec;122:104865. [CrossRef] [Medline]
  31. Mühlbauer E, Bauer M, Ebner-Priemer U, Ritter P, Hill H, Beier F, et al. Effectiveness of smartphone-based ambulatory assessment (SBAA-BD) including a predicting system for upcoming episodes in the long-term treatment of patients with bipolar disorders: study protocol for a randomized controlled single-blind trial. BMC Psychiatry 2018 Oct 26;18(1):349 [FREE Full text] [CrossRef] [Medline]
  32. mPower. GitHub.   URL: https://github.com/Sage-Bionetworks/mPower [accessed 2022-04-17]
  33. Parkinson mPower 2. Apple App Store.   URL: https://apps.apple.com/us/app/parkinson-mpower-2/id1375781575 [accessed 2022-04-17]
  34. PACO: The Personal Analytics Companion.   URL: https://pacoapp.com [accessed 2022-04-17]
  35. movisensXS. movisens GmbH.   URL: https://www.movisens.com/en/products/movisensxs/ [accessed 2022-04-17]
  36. OpenClinica.   URL: https://www.openclinica.com [accessed 2022-04-17]
  37. QuantifyMe. GitHub.   URL: https://github.com/mitmedialab/AffectiveComputingQuantifyMeAndroid [accessed 2022-04-17]
  38. TummyTrials. GitHub.   URL: https://github.com/tractdb/tummytrials [accessed 2022-04-17]
  39. SleepCoacher.   URL: https://sleepcoacher.cs.brown.edu/ [accessed 2022-04-17]
  40. StudyU platform. GitHub.   URL: https://github.com/hpi-studyu [accessed 2022-04-17]
  41. StudyU Health app. Google Play.   URL: https://play.google.com/store/apps/details?id=health.studyu.app [accessed 2022-04-17]
  42. StudyU Health app. Apple App Store.   URL: https://apps.apple.com/us/app/studyu-health/id1571991198 [accessed 2022-04-17]
  43. Flutter.   URL: https://flutter.dev [accessed 2022-04-17]
  44. Parse.   URL: https://parseplatform.org [accessed 2022-04-17]
  45. Docker.   URL: https://www.docker.com [accessed 2022-04-17]
  46. Back4App.   URL: https://www.back4app.com [accessed 2022-04-17]
  47. StudyU designer.   URL: https://studyu-designer-v1.web.app [accessed 2022-04-17]
  48. StudyU app.   URL: https://studyu-app-v1.web.app/#/welcome [accessed 2022-04-17]
  49. The European Parliament and the Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance). Official Journal of the European Union 2016 [FREE Full text]
  50. Benabou R, Tirole J. Intrinsic and Extrinsic Motivation. Rev Econ Studies 2003 Jul;70(3):489-520 [FREE Full text] [CrossRef]
  51. Villar A, Callegaro M, Yang Y. Where Am I? A Meta-Analysis of Experiments on the Effects of Progress Indicators for Web Surveys. Soc Sci Comput Rev 2013 Aug 19;31(6):744-762 [FREE Full text] [CrossRef]
  52. Conrad FG, Couper MP, Tourangeau R, Peytchev A. The impact of progress indicators on task completion. Interact Comput 2010 Sep 01;22(5):417-427 [FREE Full text] [CrossRef] [Medline]
  53. Nunes J, Drèze X. The Endowed Progress Effect: How Artificial Advancement Increases Effort. J Consum Res 2006 Mar;32(4):504-512 [FREE Full text] [CrossRef]
  54. Bica I, Alaa AM, Lambert C, van der Schaar M. From Real-World Patient Data to Individualized Treatment Effects Using Machine Learning: Current and Future Methods to Address Underlying Challenges. Clin Pharmacol Ther 2021 Jan;109(1):87-100. [CrossRef] [Medline]
  55. Shalit U, Johansson F, Sontag D. Estimating individual treatment effect: generalization bounds and algorithms. 2017 Presented at: International Conference on Machine Learning. PMLR; 2017; Sydney, Australia p. 3076-3085   URL: https://proceedings.mlr.press/v70/shalit17a/shalit17a.pdf
  56. Lee C, Mastronarde N, van der Schaar M. Estimation of Individual Treatment Effect in Latent Confounder Models via Adversarial Learning. arXiv:1811.08943 2018 [FREE Full text]
  57. Alaa A, van der Schaar M. Bayesian Inference of Individualized Treatment Effects using Multi-task Gaussian Processes. arXiv Preprint posted online May 28, 2017 [FREE Full text]
  58. Boruvka A, Almirall D, Witkiewitz K, Murphy SA. Assessing Time-Varying Causal Effect Moderation in Mobile Health. J Am Stat Assoc 2018;113(523):1112-1121 [FREE Full text] [CrossRef] [Medline]
  59. Walton AE, Collins LM, Klasnja P, Nahum-Shani I, Rabbi M, Walton MA, et al. The Micro-Randomized Trial for Developing Digital Interventions: Experimental Design Considerations. arXiv Preprint posted online April 23, 2020 [FREE Full text]
  60. Qian T, Russell MA, Collins LM, Klasnja P, Lanza ST, Yoo H, et al. The micro-randomized trial for developing digital interventions: data analysis methods. arXiv Preprint posted online April 21, 2020 [FREE Full text]
  61. Nahum-Shani I, Smith SN, Spring BJ, Collins LM, Witkiewitz K, Tewari A, et al. Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Ann Behav Med 2018 May 18;52(6):446-462 [FREE Full text] [CrossRef] [Medline]
  62. Chen X, Chen P. A comparison of four methods for the analysis of N-of-1 trials. PLoS One 2014;9(2):e87752 [FREE Full text] [CrossRef] [Medline]
  63. Karkar R, Zia J, Vilardaga R, Mishra SR, Fogarty J, Munson SA, et al. A framework for self-experimentation in personalized health. J Am Med Inform Assoc 2016 May;23(3):440-448 [FREE Full text] [CrossRef] [Medline]
  64. Prochaska JO, Velicer WF. The transtheoretical model of health behavior change. Am J Health Promot 1997;12(1):38-48. [CrossRef] [Medline]


CSV: comma-separated values
GDPR: General Data Protection Regulation
HIPAA: Health Insurance Portability and Accountability Act
IRB: institutional review board
JSON: JavaScript Object Notation
RCT: randomized controlled trial
SCED: single-case experimental design


Edited by T Leung; submitted 21.12.21; peer-reviewed by J Nikles, S Mangelsdorf; comments to author 16.03.22; revised version received 17.04.22; accepted 18.04.22; published 05.07.22

Copyright

©Stefan Konigorski, Sarah Wernicke, Tamara Slosarek, Alexander M Zenner, Nils Strelow, Darius F Ruether, Florian Henschel, Manisha Manaswini, Fabian Pottbäcker, Jonathan A Edelman, Babajide Owoyele, Matteo Danieletto, Eddye Golden, Micol Zweig, Girish N Nadkarni, Erwin Böttinger. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 05.07.2022.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.