Published on 25.07.2025 in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/79772.
Authors’ Reply: Foundation Models for Generative AI in Time-Series Forecasting

Authors of this article:

Rosemary He1,2; Jeffrey Chiang2,3

Letter to the Editor

1Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, United States

2Department of Computational Medicine, University of California, Los Angeles, Los Angeles, CA, United States

3Department of Neurosurgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States

Corresponding Author:

Jeffrey Chiang, PhD

Department of Neurosurgery

David Geffen School of Medicine

University of California, Los Angeles

300 Stein Plaza, Suite 560

Los Angeles, CA, 90095

United States

Phone: 1 310 825 5111

Email: njchiang@g.ucla.edu



We thank the authors for their thoughtful letter [1] regarding our article “Generative AI Models in Time-Varying Biomedical Data: Scoping Review” [2]. We appreciate their careful reading and constructive feedback, which provides us with an opportunity to clarify important aspects of our work and address areas where our presentation may have been unclear.

We acknowledge the authors’ concern regarding our definition of foundation models (FMs) and recognize that our terminology was imprecise in places. In our work, we intended FMs to refer to models trained on extremely large, typically unlabeled datasets, encompassing both models that are inherently capable of generative tasks and models that can be adapted for generative forecasting tasks (eg, clinical language models). This broader conceptualization reflects the evolving landscape of how these models are applied in biomedical time-series analysis, where the distinction between inherently generative models and models adapted for generative purposes is becoming increasingly nuanced. The authors nonetheless raise an important point about the distinction between masked language models and truly generative artificial intelligence (AI) models. We acknowledge that models such as GatorTron are masked language models rather than generative AI models in the strictest sense, and we appreciate the reference to GatorTronGPT as an example of a truly generative variant.

The authors identified an important error in our presentation of Figure 3: we incorrectly labeled certain models as “foundation models” when they should have been categorized as “generative models.” While TimeGPT and large-scale denoising diffusion probabilistic models are FMs, the remaining models are generative models that should be distinguished from FMs. We have corrected this label to avoid further confusion [3].

The field of generative AI applications to biomedical time-series data is rich and rapidly growing, and our review aims to offer researchers and practitioners multiple methodological options. We thank the authors again for their careful attention to our work and their constructive feedback, which ultimately strengthens the scientific discourse in this important area.

Conflicts of Interest

None declared.

  1. Beltramin D, Bousquet C. Foundation models for generative AI in time-series forecasting. J Med Internet Res. 2025;27:e76964. [CrossRef]
  2. He R, Sarwal V, Qiu X, Zhuang Y, Zhang L, Liu Y, et al. Generative AI models in time-varying biomedical data: scoping review. J Med Internet Res. Mar 10, 2025;27:e59792. [FREE Full text] [CrossRef] [Medline]
  3. He R, Sarwal V, Qiu X, Zhuang Y, Zhang L, Liu Y, et al. Figure correction: generative AI models in time-varying biomedical data: scoping review. J Med Internet Res. 2025;27:e79605. [CrossRef]


Abbreviations

AI: artificial intelligence
FM: foundation model


Edited by T Leung. This is a non–peer-reviewed article. Submitted 27.06.25; accepted 14.07.25; published 25.07.25.

Copyright

©Rosemary He, Jeffrey Chiang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.07.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.