Search Results (1 to 4 of 4 Results)

We observed how Black American caregivers expressed the need for multimodal interaction to equitably access what generative AI tools have to offer. In particular, they emphasized the importance of being able to “talk with the tool” and hear it respond (eg, have content read out loud), in the fashion of a conversational voice assistant.
JMIR Aging 2025;8:e60566

Multimodal Large Language Models in Health Care: Applications, Challenges, and Future Outlook
Finally, multimodality characterizes systems designed to generate outputs in >1 modality, such as systems capable of producing both textual and image-based content [40].
Several previous works have developed basic M-LLMs by aligning well-trained encoders from different modalities with the textual feature space of an LLM. This approach enables LLMs to process inputs other than text, as seen in various examples [41-44].
J Med Internet Res 2024;26:e59505
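The encoder-alignment approach described in this excerpt can be illustrated with a minimal sketch: a frozen image encoder produces patch features, a small trainable projection maps them into the LLM's token embedding space, and the projected image tokens are prepended to the text embeddings so the frozen LLM attends over both modalities. The module names and dimensions below are illustrative assumptions, not the architecture of any specific system cited in the article.

```python
import torch
import torch.nn as nn

class VisionToLLMAdapter(nn.Module):
    """Minimal sketch of encoder alignment: project frozen vision features
    into the textual embedding space of a frozen LLM (dimensions are assumed)."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Small trainable projection; the vision encoder and the LLM stay frozen.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim) from a pretrained image encoder
        return self.proj(vision_feats)  # (batch, num_patches, llm_dim)


def build_multimodal_input(image_tokens: torch.Tensor,
                           text_embeds: torch.Tensor) -> torch.Tensor:
    """Prepend projected image tokens to the text token embeddings so the
    LLM can process one fused sequence spanning both modalities."""
    return torch.cat([image_tokens, text_embeds], dim=1)


# Toy usage with random tensors standing in for real encoder/LLM outputs.
adapter = VisionToLLMAdapter()
vision_feats = torch.randn(2, 256, 1024)   # e.g., ViT patch features
text_embeds = torch.randn(2, 32, 4096)     # e.g., embedded prompt tokens
fused = build_multimodal_input(adapter(vision_feats), text_embeds)
print(fused.shape)  # torch.Size([2, 288, 4096])
```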

The Impact of Multimodal Large Language Models on Health Care’s Future
As medicine is a multimodal discipline, potential future versions of LLMs that can handle multimodality (interpreting and generating not only text but also images, videos, sound, and even comprehensive documents) would represent a significant evolution in the field of AI. Such advancements would enable more holistic patient assessments, drawing on diverse data sources for accurate diagnoses and treatment recommendations.
J Med Internet Res 2023;25:e52865

To use such diverse multimodal information as alternative evidence to facilitate accurate classification, we propose a data mining and machine learning (ML) framework as an alternative to commonly used hypothesis-driven parametric models. The goal of this study is to provide reliable data-driven support for clinicians, even those who do not have comprehensive experience in diagnosing the emerging disease COVID-19.
J Med Internet Res 2021;23(4):e23948
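The kind of data-driven, multimodal classification framework described in this excerpt can be sketched with standard tooling: heterogeneous inputs (for example, lab values, imaging-derived scores, and symptom indicators) are preprocessed per type and pooled into a nonparametric ensemble classifier. The feature names, synthetic data, and model choice below are illustrative assumptions, not the specific framework proposed in the cited study.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Illustrative multimodal features; column names are assumptions, not the study's variables.
numeric_cols = ["crp_level", "lymphocyte_count", "ct_opacity_score"]
categorical_cols = ["cough", "fever", "exposure_history"]

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "crp_level": rng.normal(20, 10, n),
    "lymphocyte_count": rng.normal(1.5, 0.5, n),
    "ct_opacity_score": rng.normal(5, 2, n),
    "cough": rng.choice(["yes", "no"], n),
    "fever": rng.choice(["yes", "no"], n),
    "exposure_history": rng.choice(["yes", "no", "unknown"], n),
})
y = rng.integers(0, 2, n)  # placeholder labels (1 = positive case)

# Preprocess each data type separately, then pool everything for the classifier.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
])

clf = Pipeline([
    ("prep", preprocess),
    ("model", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Cross-validated estimate of how well the pooled multimodal evidence separates classes.
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```

A nonparametric ensemble is used here only to stand in for the "data mining and ML" alternative to hypothesis-driven parametric models; the cited study should be consulted for the actual features and algorithms it evaluates.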