Search Articles

Search Results (1 to 10 of 13 Results)

Harnessing Moderate-Sized Language Models for Reliable Patient Data Deidentification in Emergency Department Records: Algorithm Development, Validation, and Implementation Study

Transformers, introduced by Vaswani et al [20] in 2017, provided a novel approach to handling sequential data using self-attention mechanisms, thereby obviating the need for recurrent layers and significantly augmenting training efficiency. This pivotal innovation paved the way for the advent of progressively sophisticated and expansive models.
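
As a minimal illustration of the self-attention mechanism described in this excerpt, the NumPy sketch below computes scaled dot-product attention for a toy sequence; all dimensions and inputs are invented for illustration and are not from the article.

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V                               # each token mixes information from all others

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5                      # toy sizes, chosen arbitrarily
X = rng.normal(size=(seq_len, d_model))
out = self_attention(X, *(rng.normal(size=(d_model, d_k)) for _ in range(3)))
print(out.shape)  # (5, 4): every position attends to the whole sequence at once, no recurrence

This is the sense in which self-attention removes the need for recurrent layers: all positions are processed in parallel rather than one step at a time.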

Océane Dorémus, Dylan Russon, Benjamin Contrand, Ariel Guerra-Adames, Marta Avalos-Fernandez, Cédric Gil-Jardiné, Emmanuel Lagarde

JMIR AI 2025;4:e57828

Transformer-Based Tool for Automated Fact-Checking of Online Health Information: Development Study

The process included the following three steps: (1) classification of web page content into 3 thematic categories (semiology, epidemiology, and management) by evaluating various transformer models, including Bidirectional Encoder Representations from Transformers (BERT), SciBERT, and BioBERT, as well as traditional models such as random forest (RF) and support vector machine (SVM); (2) automating the creation of PubMed queries by combining the “WellcomeBertMesh” and “KeyBERT” models; and (3) automatic extraction of top
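
As a sketch of the traditional baselines named in step 1 (TF-IDF features fed to an SVM and a random forest), the scikit-learn snippet below uses toy pages and labels invented for illustration, not the study's data.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy web-page snippets and the three thematic categories from the study;
# the texts themselves are invented for illustration.
pages = [
    "fever, cough and loss of smell are common signs",       # semiology
    "incidence rose to 120 cases per 100,000 this winter",   # epidemiology
    "first-line treatment is rest, fluids and paracetamol",  # management
    "rash and joint pain may accompany the infection",       # semiology
]
labels = ["semiology", "epidemiology", "management", "semiology"]

for clf in (LinearSVC(), RandomForestClassifier(n_estimators=100)):
    model = make_pipeline(TfidfVectorizer(), clf)  # bag-of-words features + classifier
    model.fit(pages, labels)
    print(type(clf).__name__, model.predict(["how should the disease be treated?"]))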

Azadeh Bayani, Alexandre Ayotte, Jean Noel Nikiema

JMIR Infodemiology 2025;5:e56831

Health Care Language Models and Their Fine-Tuning for Information Extraction: Scoping Review

In the context of medical text, transformers excel in interpreting and extracting medically relevant information by effectively handling context and meaning, even in complex and specialized language. Transformers can be trained as language models (LMs) on raw text in a self-supervised manner, enabling them to develop a statistical understanding of the text they were trained on [7]. However, the benefits of this approach are only fully realized when fine-tuning on a downstream task.
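
The pretrain-then-fine-tune pattern the excerpt describes can be sketched with the Hugging Face Transformers library, as below; the checkpoint name is a real public model, while the two-label task and tiny batch are assumptions made for illustration.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# The self-supervised pretrained weights are reused; only the new
# classification head on top starts from random initialization.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.train()

# Hypothetical downstream task (symptom mention: yes/no) with a toy batch.
batch = tokenizer(["chest pain radiating to the left arm", "no acute distress"],
                  padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss   # one supervised fine-tuning step
loss.backward()
optimizer.step()
print(float(loss))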

Miguel Nunes, Joao Bone, Joao C Ferreira, Luis B Elvas

JMIR Med Inform 2024;12:e60164

Neural Conversational Agent for Weight Loss Counseling: Protocol for an Implementation and Feasibility Study

Architectures of the first type, which include Bidirectional Encoder Representations from Transformers [59] and its variants [60-63], use only the encoder stack of the Transformer and are typically used in a transfer learning scenario to create dense representations of text for a particular downstream natural language processing task.
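
As a sketch of this encoder-only, transfer-learning pattern, the snippet below runs text through BERT's encoder stack and keeps one dense vector for a downstream task; the checkpoint is a real public model, and taking the [CLS] position is one common pooling choice, assumed here for illustration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")  # encoder stack only, no task head

inputs = tokenizer("Patient reports improved mood.", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768): one vector per token
sentence_vec = hidden[:, 0]                       # [CLS] position as the dense text representation
print(sentence_vec.shape)                         # torch.Size([1, 768]), ready for a downstream classifier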

Alexander Kotov, April Idalski Carcone, Elizabeth Towner

JMIR Res Protoc 2024;13:e60361

Patient Phenotyping for Atopic Dermatitis With Transformers and Machine Learning: Algorithm Development and Validation Study

Differences in the number of documents, sentences, and tokens between patients with atopic dermatitis (AD) and those without AD; mean number of tokens for sentences identified in each category (table captions from the article). BERT: Bidirectional Encoder Representations from Transformers.

Andrew Wang, Rachel Fulton, Sy Hwang, David J Margolis, Danielle Mowery

JMIR Form Res 2024;8:e52200

A Motivational Interviewing Chatbot With Generative Reflections for Increasing Readiness to Quit Smoking: Iterative Development Study

It used intent classifiers and transformers to understand and generate utterances, including MI reflections. In a pilot trial of 34 smokers, participants reported that the chatbot had a strong competency in MI but only scored 3 out of 5 on user satisfaction, leaving room for improvement. The goal of this study was to determine the impact of several versions of an MI-oriented chatbot, which uses generative reflections, on moving smokers toward the decision to quit smoking.

Andrew Brown, Ash Tanuj Kumar, Osnat Melamed, Imtihan Ahmed, Yu Hao Wang, Arnaud Deza, Marc Morcos, Leon Zhu, Marta Maslej, Nadia Minian, Vidya Sujaya, Jodi Wolff, Olivia Doggett, Mathew Iantorno, Matt Ratto, Peter Selby, Jonathan Rose

JMIR Ment Health 2023;10:e49132

Enabling Early Health Care Intervention by Detecting Depression in Users of Web-Based Forums using Language Models: Longitudinal Analysis and Evaluation

Table abbreviations: ALBERT: A Lite Bidirectional Encoder Representations from Transformers; BERT: Bidirectional Encoder Representations from Transformers; LM: language model; SVM: support vector machine; TF-IDF: term frequency–inverse document frequency; RSDD: Reddit Self-reported Depression Diagnosis. The results of the preemptive depression identification experiment are presented in Tables 5-8; each table shows a variation in the number of matched control users.

David Owen, Dimosthenis Antypas, Athanasios Hassoulas, Antonio F Pardiñas, Luis Espinosa-Anke, Jose Camacho Collados

JMIR AI 2023;2:e41205

Sentiment Analysis of Insomnia-Related Tweets via a Combination of Transformers Using Dempster-Shafer Theory: Pre– and Peri–COVID-19 Pandemic Retrospective Study

We designed a sentiment analysis pipeline based on pretrained transformer architectures. The outputs of the transformers were combined via Dempster-Shafer theory (DST; theory of belief functions) to achieve higher accuracy in the recognition of sentiments. The performance of this model was verified for accuracy by using a manually annotated data set.
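
As a minimal sketch of how Dempster's rule of combination can fuse two classifiers' beliefs over a {positive, negative} frame, the snippet below uses invented toy mass assignments, not the study's values.

from itertools import product

def combine(m1, m2):
    """Dempster's rule: m1, m2 map frozenset hypotheses to masses summing to 1."""
    fused, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass on contradictory hypotheses is discarded
    return {h: v / (1 - conflict) for h, v in fused.items()}  # renormalize the rest

POS, NEG = frozenset({"pos"}), frozenset({"neg"})
EITHER = POS | NEG                                # mass on the whole frame expresses uncertainty
model_a = {POS: 0.7, NEG: 0.1, EITHER: 0.2}       # toy beliefs from transformer A
model_b = {POS: 0.6, NEG: 0.3, EITHER: 0.1}       # toy beliefs from transformer B
print(combine(model_a, model_b))                  # combined belief concentrates on "pos"

Because conflicting mass is renormalized away, agreement between the two models sharpens the fused belief beyond either model alone, which is the intuition behind using DST to boost recognition accuracy.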

Arash Maghsoudi, Sara Nowakowski, Ritwick Agrawal, Amir Sharafkhaneh, Mark E Kunik, Aanand D Naik, Hua Xu, Javad Razjouyan

J Med Internet Res 2022;24(12):e41517

One Clinician Is All You Need–Cardiac Magnetic Resonance Imaging Measurement Extraction: Deep Learning Algorithm Development

Transformer-based neural networks like Bidirectional Encoder Representations from Transformers (BERT) [8,9] have achieved state-of-the-art results across a wide variety of natural language processing (NLP) tasks [10]. These models are pretrained on large amounts of text to learn general linguistic structure and produce contextualized representations of language.
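
The "contextualized representations" mentioned here can be illustrated by embedding the same word in two different sentences and comparing the vectors; the checkpoint is a real public model, and the sentences are invented for illustration.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_for(sentence, word):
    """Return the contextual embedding of `word` within `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    idx = inputs.input_ids[0].tolist().index(tokenizer.convert_tokens_to_ids(word))
    with torch.no_grad():
        return model(**inputs).last_hidden_state[0, idx]

v1 = vector_for("the cardiac output was measured", "output")
v2 = vector_for("the program printed its output", "output")
print(torch.cosine_similarity(v1, v2, dim=0))  # below 1: the same word gets a context-dependent vector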

Pulkit Singh, Julian Haimovich, Christopher Reeder, Shaan Khurshid, Emily S Lau, Jonathan W Cunningham, Anthony Philippakis, Christopher D Anderson, Jennifer E Ho, Steven A Lubitz, Puneet Batra

JMIR Med Inform 2022;10(9):e38178