Search Articles


Search Results (1 to 10 of 13 Results)



Transformer-Based Language Models for Group Randomized Trial Classification in Biomedical Literature: Model Development and Validation

Pretrained transformer language models, such as bidirectional encoder representations from transformers (BERT) [16-18], have outperformed earlier deep neural network models, including convolutional neural networks and recurrent neural networks. Examples of transformer-based models trained on biomedical data include BioBERT [19], BioLinkBERT [20], BlueBERT [21], and BioMedBERT [22], which are pretrained on biomedical literature and clinical text.
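As a hedged illustration of how such a classifier is typically set up (not the authors' code), the sketch below loads a BioBERT checkpoint from Hugging Face with a two-class head; the label scheme and the choice of checkpoint are assumptions for illustration:

```python
# Minimal sketch, not the article's model: scoring an abstract with a
# biomedical BERT checkpoint and an (untrained) two-class head.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "dmis-lab/biobert-base-cased-v1.1"  # a public BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2  # assumed labels: group randomized trial vs. not
)

abstract = "We conducted a group randomized trial across 12 clinics."
inputs = tokenizer(abstract, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)  # class probabilities (head not yet fine-tuned)
```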

Elaheh Aghaarabi, David Murray

JMIR Med Inform 2025;13:e63267

Exploring Biases of Large Language Models in the Field of Mental Health: Comparative Questionnaire Study of the Effect of Gender and Sexual Orientation in Anorexia Nervosa and Bulimia Nervosa Case Vignettes

The technique of modeling words in a large context has been referred to as transformer-based large language modeling [8]. This may not only facilitate the automatic analysis of large amounts of text data [9,10] but also allow the generation of meaningful text and the interactive use of this technology [5,10]. Thus, the application of LLMs may improve the efficiency and effectiveness of data processing in various fields, including health care [5].

Rebekka Schnepper, Noa Roemmel, Rainer Schaefert, Lena Lambrecht-Walzinger, Gunther Meinlschmidt

JMIR Ment Health 2025;12:e57986

Transformers for Neuroimage Segmentation: Scoping Review

In this review, we focused on the deep transformer–based techniques that have gained attention recently. The proposed models fall into transformer-based, CNN-with-transformer, and generative adversarial network–with-transformer techniques. Among these, methods based on TransBTS, TransUNet, Swin UNet, and U-Net with transformer are the most used models for neuroimage segmentation. Figure 3 illustrates these models in terms of architecture.
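The hybrid CNN-with-transformer pattern this review surveys can be sketched as follows; layer sizes are arbitrary assumptions, and this schematic does not reproduce any of the cited architectures:

```python
# Illustrative PyTorch sketch of the TransUNet-style hybrid encoder: a CNN
# extracts a local feature map, the spatial grid is flattened into tokens,
# and a transformer encoder adds global context. Positional encodings are
# omitted for brevity.
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    def __init__(self, in_ch=1, dim=256, heads=8, depth=4):
        super().__init__()
        self.cnn = nn.Sequential(              # coarse local features
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):                      # x: (B, C, H, W)
        f = self.cnn(x)                        # (B, dim, H/4, W/4)
        b, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, H*W/16, dim)
        tokens = self.transformer(tokens)      # global context across patches
        return tokens.transpose(1, 2).reshape(b, d, h, w)

feats = HybridEncoder()(torch.randn(2, 1, 64, 64))  # (2, 256, 16, 16)
```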

Maya Iratni, Amira Abdullah, Mariam Aldhaheri, Omar Elharrouss, Alaa Abd-alrazaq, Zahiriddin Rustamov, Nazar Zaki, Rafat Damseh

J Med Internet Res 2025;27:e57723

Multifaceted Natural Language Processing Task–Based Evaluation of Bidirectional Encoder Representations From Transformers Models for Bilingual (Korean and English) Clinical Notes: Algorithm Development and Validation

Zhang and Jankowski [25] proposed average-pooling transformer layers that handle token-, sentence-, and document-level embeddings for classifying International Classification of Diseases codes. Their model outperformed the BERT-base model by 11 points. For the reading comprehension task, BERT can be used to determine the answer span within a given text. Pampari et al [26] proposed the electronic medical record question answering (emrQA) dataset to determine the answer span for a question in a clinical context.
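A minimal sketch of BERT-style answer-span extraction, using a publicly available SQuAD-tuned checkpoint as a stand-in for the models evaluated in the article; the clinical example text is invented:

```python
# Sketch: a QA head scores every token as a possible span start or end,
# and the highest-scoring span is decoded as the answer.
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

name = "distilbert-base-cased-distilled-squad"  # illustrative stand-in
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

question = "What medication was prescribed?"
context = "The patient was prescribed metformin for type 2 diabetes."
inputs = tok(question, context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
start = out.start_logits.argmax()  # most likely span start
end = out.end_logits.argmax()      # most likely span end
print(tok.decode(inputs["input_ids"][0][start : end + 1]))  # "metformin"
```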

Kyungmo Kim, Seongkeun Park, Jeongwon Min, Sumin Park, Ju Yeon Kim, Jinsu Eun, Kyuha Jung, Yoobin Elyson Park, Esther Kim, Eun Young Lee, Joonhwan Lee, Jinwook Choi

JMIR Med Inform 2024;12:e52897

Enhancing Type 2 Diabetes Treatment Decisions With Interpretable Machine Learning Models for Predicting Hemoglobin A1c Changes: Machine Learning Model Development

Since its introduction in 2017, the transformer model has excelled in various time-series predictive tasks, solidifying its position as a core technology across multiple fields [27-32]. The transformer model incorporates an attention mechanism that simplifies the extraction of temporal relationships, setting it apart from other models [33-35].
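The attention computation that lets a transformer relate any two time steps directly can be written in a few lines; shapes here are illustrative and this is not the article's model:

```python
# Scaled dot-product self-attention over a clinical time series:
# softmax(Q K^T / sqrt(d)) V mixes information across all time steps.
import torch

def attention(q, k, v):
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5  # pairwise step-to-step affinity
    weights = scores.softmax(dim=-1)           # each step attends to all steps
    return weights @ v                         # context-mixed representations

T, d = 24, 32                 # assumed: 24 time steps (e.g., visits), width 32
x = torch.randn(1, T, d)      # one patient's encoded series
out = attention(x, x, x)      # self-attention output: (1, 24, 32)
```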

Hisashi Kurasawa, Kayo Waki, Tomohisa Seki, Akihiro Chiba, Akinori Fujino, Katsuyoshi Hayashi, Eri Nakahara, Tsuneyuki Haga, Takashi Noguchi, Kazuhiko Ohe

JMIR AI 2024;3:e56700

Wearable Data From Subjects Playing Super Mario, Taking University Exams, or Performing Physical Exercise Help Detect Acute Mood Disorder Episodes via Self-Supervised Learning: Prospective, Exploratory, Observational Study

We proposed a novel E4-tailored transformer (E4mer) architecture (Figure 2) [43] and showed that SSL is a viable paradigm: it outperformed both the fully supervised E4mer and classical machine learning (CML) models using handcrafted features in distinguishing acute MD episodes from clinical stability (euthymia in psychiatric parlance), a time-series (binary) classification task. We also investigated what makes SSL successful.
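A common self-supervised pretext task for time series is masked-segment reconstruction; the sketch below illustrates that pattern under assumed dimensions, without claiming it matches E4mer's actual objective:

```python
# Sketch of masked-reconstruction SSL: hide a fraction of time steps,
# encode the corrupted series, and penalize reconstruction error only
# on the hidden steps. All sizes and the 15% mask ratio are assumptions.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
proj = nn.Linear(64, 64)                 # reconstruction head

x = torch.randn(8, 120, 64)              # batch of embedded sensor windows
mask = torch.rand(8, 120) < 0.15         # hide 15% of time steps
x_in = x.clone()
x_in[mask] = 0.0                         # zero out the masked steps
recon = proj(encoder(x_in))
loss = ((recon - x)[mask] ** 2).mean()   # reconstruct only what was hidden
loss.backward()
```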

Filippo Corponi, Bryan M Li, Gerard Anmella, Clàudia Valenzuela-Pascual, Ariadna Mas, Isabella Pacchiarotti, Marc Valentí, Iria Grande, Antoni Benabarre, Marina Garriga, Eduard Vieta, Allan H Young, Stephen M Lawrie, Heather C Whalley, Diego Hidalgo-Mazzei, Antonio Vergari

JMIR Mhealth Uhealth 2024;12:e55094

Patient Phenotyping for Atopic Dermatitis With Transformers and Machine Learning: Algorithm Development and Validation Study

Although similar, this study differs in the following ways: (1) we survey a wide range of supervised ML algorithms as opposed to only using lasso-regularized logistic regression, (2) we use transformer embeddings of sentences to represent the information in each patient's records and aggregate these embeddings with multilayer perceptron (MLP) networks to create a patient vector representation for phenotyping, and (3) we perform an ablation study of processing methods to compare their impact on performance.
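The described pipeline (sentence embeddings combined into a patient vector, then classified) might look roughly like this; the encoder checkpoint is an assumption, and mean pooling stands in for the MLP-based aggregation the authors describe:

```python
# Sketch: embed each sentence in a patient's notes with a transformer,
# pool into one patient vector, and score it with a small MLP (untrained).
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative encoder
sentences = ["Severe pruritus noted.", "Eczema on flexural surfaces."]
emb = embedder.encode(sentences, convert_to_tensor=True)  # (n_sentences, 384)

patient_vec = emb.mean(dim=0)                        # aggregate to one vector
mlp = nn.Sequential(nn.Linear(384, 128), nn.ReLU(), nn.Linear(128, 2))
logits = mlp(patient_vec)                            # phenotype scores
```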

Andrew Wang, Rachel Fulton, Sy Hwang, David J Margolis, Danielle Mowery

JMIR Form Res 2024;8:e52200

Detecting Tweets Containing Cannabidiol-Related COVID-19 Misinformation Using Transformer Language Models and Warning Letters From Food and Drug Administration: Content Analysis and Identification

Deep learning [23,28] and transformer language models [22,23,25,29] are the most commonly used techniques, likely because of their efficiency and highly advanced ability to interpret and understand natural language, which is key to examining content on social media.

Jason Turner, Mehmed Kantardzic, Rachel Vickers-Smith, Andrew G Brown

JMIR Infodemiology 2023;3:e38390

Exploiting Intersentence Information for Better Question-Driven Abstractive Summarization: Algorithm Development and Validation

In this paper, a novel transformer-based question-driven abstractive summarization model, named Trans-Att, is proposed; it incorporates a two-step attention mechanism and an overall integration mechanism to summarize a document with respect to nonfactoid questions. Concretely, the two-step attention mechanism learns richer structural dependencies among sentences and the relevance between the question and the document.
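A toy rendering of a two-step attention flow of the kind described (step 1 relates sentences to each other; step 2 relates the question to the refined sentences); Trans-Att's actual design is more elaborate than this schematic:

```python
# Step 1: self-attention among sentence vectors captures intersentence
# structure. Step 2: the question attends over the refined sentences to
# produce a question-weighted document summary. Sizes are invented.
import torch

def attend(q, k, v):
    w = (q @ k.transpose(-2, -1) / k.size(-1) ** 0.5).softmax(dim=-1)
    return w @ v

sents = torch.randn(1, 12, 64)       # 12 sentence representations
question = torch.randn(1, 1, 64)     # question representation

refined = attend(sents, sents, sents)         # step 1: intersentence dependencies
summary = attend(question, refined, refined)  # step 2: question-document relevance
```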

Xin Wang, Jian Wang, Bo Xu, Hongfei Lin, Bo Zhang, Zhihao Yang

JMIR Med Inform 2022;10(8):e38052

Transformer- and Generative Adversarial Network–Based Inpatient Traditional Chinese Medicine Prescription Recommendation: Development Study

The prescription prediction subsystem, which is a transformer-based deep learning model, automatically generates a prescription based on input vector data. Together, the first 3 subsystems accomplish the task of extracting relevant information from an EHR file to form input variables for the prediction model. Similar representation learning operations were described in our previous paper [17].
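Schematically, generating a prescription token by token from encoded EHR input vectors could look like the following; the vocabulary size, dimensions, and decoding loop are invented for illustration and do not reproduce the paper's system:

```python
# Illustrative only: greedy token-by-token generation with a transformer
# conditioned on encoded EHR vectors (the paper's prediction subsystem is
# described only at this level of detail in the snippet above).
import torch
import torch.nn as nn

model = nn.Transformer(d_model=64, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, batch_first=True)
embed = nn.Embedding(500, 64)   # assumed 500 prescription tokens (herbs, doses)
to_vocab = nn.Linear(64, 500)   # maps decoder states to token scores

ehr = torch.randn(1, 20, 64)    # 20 encoded EHR input vectors
tokens = torch.zeros(1, 1, dtype=torch.long)  # assumed start token: id 0
for _ in range(8):              # generate 8 tokens greedily
    tgt_mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
    out = model(ehr, embed(tokens), tgt_mask=tgt_mask)
    next_tok = to_vocab(out[:, -1]).argmax(-1, keepdim=True)
    tokens = torch.cat([tokens, next_tok], dim=1)
print(tokens)                   # untrained, so the generated ids are arbitrary
```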

Hong Zhang, Jiajun Zhang, Wandong Ni, Youlin Jiang, Kunjing Liu, Daying Sun, Jing Li

JMIR Med Inform 2022;10(5):e35239