Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/54090.
Ambiguity in Statistical Analysis Methods and Nonconformity With Prespecified Commitment to Data Sharing in a Cluster Randomized Controlled Trial

Letter to the Editor

1Department of Epidemiology and Biostatistics, School of Public Health, Indiana University Bloomington, Bloomington, IN, United States

2Department of Biostatistics, University of Arkansas for Medical Sciences, Little Rock, AR, United States

3Arkansas Children's Research Institute, Little Rock, AR, United States

Corresponding Author:

David B Allison, PhD

Department of Epidemiology and Biostatistics

School of Public Health

Indiana University Bloomington

1025 E 7th St, PH 111

Bloomington, IN, 47405

United States

Phone: 1 8128551250

Email: allison@iu.edu



In a published cluster randomized controlled trial (cRCT) [1] on the effects of desks on physical behaviors, 66 individuals were randomized into 3 groups by their office space (ie, clusters): the seated desk control (n=21; 8 clusters), sit-to-stand desk (n=23; 9 clusters), or treadmill desk (n=22; 7 clusters) group.

In the article, there is ambiguity regarding whether clustering (potential nonindependence of observations within the same office space) and nesting (due to the hierarchical structure of the data; see definitions from Jamshidi-Naeini et al [2]) have been accounted for. Furthermore, it appears that the data underlying the published results are not available to other researchers, which is contrary to the journal’s policy indicating “a submission to JMIR journals implies that…all relevant raw data, will be freely available to any researcher wishing to use them for non-commercial purposes….”

The description of the methods indicates that random-intercept mixed linear models accounting for repeated measures and clusters were used. However, it is stated elsewhere that “[t]he cluster effect did not significantly (all P values >.05) account for the variability in any of the outcome variables….Therefore, aim 1 and aim 2 outcome observations…were analyzed at the participant level instead of cluster…” [1].
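For readers unfamiliar with how such a cluster effect is typically quantified, the sample intraclass correlation coefficient (ICC) can be obtained from a one-way ANOVA decomposition of between- and within-cluster variability. The following sketch is illustrative only (the function and the toy data are ours, not the trial authors’ code) and assumes equal cluster sizes for simplicity:

```python
# One-way ANOVA estimator of the intraclass correlation coefficient (ICC):
# ICC = (MSB - MSW) / (MSB + (m - 1) * MSW), for k clusters of equal size m.
# Illustrative sketch with hypothetical data; not the trial's analysis code.

def icc_anova(clusters):
    """clusters: list of lists, one inner list of outcomes per cluster (equal sizes)."""
    k = len(clusters)                         # number of clusters
    m = len(clusters[0])                      # common cluster size
    grand = sum(sum(c) for c in clusters) / (k * m)
    means = [sum(c) / m for c in clusters]
    # mean square between clusters (k - 1 df)
    msb = m * sum((mu - grand) ** 2 for mu in means) / (k - 1)
    # mean square within clusters (k * (m - 1) df)
    msw = sum((y - mu) ** 2
              for c, mu in zip(clusters, means) for y in c) / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

# Outcomes dominated by the cluster they belong to -> ICC near 1
print(icc_anova([[10, 10.1, 9.9], [20, 20.1, 19.9], [30, 29.9, 30.1]]))
# Outcomes unrelated to cluster membership -> ICC near (or below) 0
print(icc_anova([[1, 2, 3], [2, 1, 3], [3, 2, 1]]))
```

As the letter argues below, the magnitude or P value of such a sample estimate is not, by itself, grounds for dropping the cluster term from the model.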

The statement regarding accounting for the clustering effect is ambiguous. If the authors ignored clustering on the grounds that participant outcomes within the same cluster are unrelated, as suggested by intraclass correlation coefficient (ICC) values that one chooses to describe as “small” or by ICC values that are not statistically significant at some nominal α level (eg, .05), that reasoning is erroneous. A sample ICC or its associated P value is not an appropriate basis for deciding whether to account for clustering. Ignoring clustering, regardless of the sample ICC’s magnitude or associated P value, can misestimate variance components and inflate the type I error rate above the nominal significance level [3,4].
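The inflation can be made concrete with a small simulation. The sketch below (all parameters are hypothetical, not the trial’s: ICC = 0.3, two arms of 6 clusters with 10 participants each, and a normal approximation in place of a t-test) compares the empirical type I error of a naive individual-level test that ignores clustering with that of a simple cluster-means analysis, under a true null of no treatment effect:

```python
import random
from statistics import NormalDist, mean, variance

# Under the null, compare type I error of (a) a naive z-test on individual
# observations, ignoring clustering, vs (b) the same test on cluster means.
# All parameters are hypothetical; this is an illustration, not a reanalysis.

def simulate_arm(k, m, sd_b, sd_w, rng):
    """k clusters of m participants; one shared random effect per cluster."""
    clusters = []
    for _ in range(k):
        b = rng.gauss(0, sd_b)                       # shared cluster effect
        clusters.append([b + rng.gauss(0, sd_w) for _ in range(m)])
    return clusters

def ztest_reject(x, y, alpha=0.05):
    """Two-sample z-test treating all observations as independent."""
    se = (variance(x) / len(x) + variance(y) / len(y)) ** 0.5
    z = (mean(x) - mean(y)) / se
    return abs(z) > NormalDist().inv_cdf(1 - alpha / 2)

rng = random.Random(54090)
rho = 0.3                                            # assumed ICC
sd_b, sd_w = rho ** 0.5, (1 - rho) ** 0.5            # total variance = 1
n_sim, naive, clustered = 2000, 0, 0
for _ in range(n_sim):
    a = simulate_arm(6, 10, sd_b, sd_w, rng)
    b = simulate_arm(6, 10, sd_b, sd_w, rng)
    naive += ztest_reject([y for c in a for y in c], [y for c in b for y in c])
    clustered += ztest_reject([mean(c) for c in a], [mean(c) for c in b])
# The naive rate runs several times the nominal 0.05; the cluster-level
# rate stays near it (slightly above, since a z rather than t critical
# value is used with few clusters).
print(naive / n_sim, clustered / n_sim)
```

The point of the sketch is qualitative: even when a sample ICC might look modest or nonsignificant, the individual-level test rejects a true null far more often than the nominal α.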

In a cRCT with such unequal cluster sizes (ranging from 1 to 11 participants), there is no exact size-α test, and type I error inflation may occur. Therefore, to control the type I error rate, it is essential to apply appropriate weighting for the unequal cluster sizes. In addition, the nesting effect that arises from the hierarchical structure of the data in cRCTs was not considered in the statistical analyses; adjusting the df (eg, by the between-within method [4]) could have accounted for this nesting effect.
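One common back-of-envelope correction weights the analysis by a design effect computed from a cluster-size-weighted mean cluster size. In the sketch below, the cluster sizes in the example arm and the working ICC are hypothetical; only the between-within df calculation uses the trial’s actual design (8 + 9 + 7 = 24 clusters across 3 arms):

```python
# Design-effect correction for unequal cluster sizes, and the
# between-within df for the trial's design. The sizes and rho below are
# hypothetical illustrations, not values estimated from the trial's data.

def design_effect(sizes, rho):
    """1 + (m_bar - 1) * rho, with m_bar the size-weighted mean cluster size."""
    n = sum(sizes)
    m_bar = sum(m * m for m in sizes) / n
    return 1 + (m_bar - 1) * rho

sizes = [1, 3, 5, 8, 11]      # hypothetical unequal clusters in one arm
rho = 0.05                    # assumed working ICC
de = design_effect(sizes, rho)
n_eff = sum(sizes) / de       # effective sample size after correction
df_bw = (8 + 9 + 7) - 3       # between-within df: 24 clusters minus 3 arms
print(round(de, 3), round(n_eff, 1), df_bw)  # → 1.343 20.9 21
```

Even a small working ICC shrinks the effective sample size noticeably here, which is why weighting for unequal cluster sizes and adjusting the df both matter for valid inference.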

We requested the raw data to reproduce the analyses (see definition of “reproducing” in Reproducibility and Replicability in Science [5]) and potentially use alternative corrected methods to reanalyze the data. Data were not shared with us. The authors stated that this decision was made “to ensure the integrity of ongoing research being conducted using the same dataset.” Nevertheless, sharing data for the purpose of reproducing published results does not compromise the integrity of further analyses on the same data set. Withholding data, on the other hand, renders the study irreproducible and thus compromises the trustworthiness of the published results.

The concerns raised herein should be addressed to ensure the integrity, transparency, and reproducibility of the published findings.

Acknowledgments

The authors are supported in part by R25DK099080, R25HL124208, R25GM141507, and the Gordon and Betty Moore Foundation. The opinions expressed are those of the authors and do not necessarily represent those of the National Institutes of Health (NIH) or any other organization.

Conflicts of Interest

Collectively, the authors and their institutions have received payments for consultations, grants, contracts, in-kind donations, and contributions from multiple for-profit and not-for-profit entities interested in statistical design and analysis of experiments but not directly related to the research questions addressed in the paper in question.

  1. Arguello D, Cloutier G, Thorndike AN, Castaneda Sceppa C, Griffith J, John D. Impact of sit-to-stand and treadmill desks on patterns of daily waking physical behaviors among overweight and obese seated office workers: cluster randomized controlled trial. J Med Internet Res. May 16, 2023;25:e43018. [FREE Full text] [CrossRef] [Medline]
  2. Jamshidi-Naeini Y, Brown AW, Mehta T, Glueck DH, Golzarri-Arroyo L, Muller KE, et al. A practical decision tree to support editorial adjudication of submitted parallel cluster randomized controlled trials. Obesity (Silver Spring). Mar 2022;30(3):565-570. [FREE Full text] [CrossRef] [Medline]
  3. Brown AW, Li P, Brown MMB, Kaiser KA, Keith SW, Oakes JM, et al. Best (but oft-forgotten) practices: designing, analyzing, and reporting cluster randomized controlled trials. Am J Clin Nutr. Aug 2015;102(2):241-248. [FREE Full text] [CrossRef] [Medline]
  4. Golzarri-Arroyo L, Dickinson SL, Jamshidi-Naeini Y, Zoh RS, Brown AW, Owora AH, et al. Evaluation of the type I error rate when using parametric bootstrap analysis of a cluster randomized controlled trial with binary outcomes and a small number of clusters. Comput Methods Programs Biomed. Mar 2022;215:106654. [FREE Full text] [CrossRef] [Medline]
  5. National Academies of Sciences, Engineering, and Medicine. Reproducibility and Replicability in Science. Washington, DC. The National Academies Press; 2019.


cRCT: cluster randomized controlled trial
ICC: intraclass correlation coefficient


Edited by T Leung. This is a non–peer-reviewed article. Submitted 29.10.23; accepted 22.02.24; published 03.04.24.

Copyright

©Yasaman Jamshidi-Naeini, Lilian Golzarri-Arroyo, Deependra K Thapa, Andrew W Brown, Daniel E Kpormegbey, David B Allison. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.