Search Results (1 to 10 of 12 Results)
Results by journal:
- 6 Journal of Medical Internet Research
- 2 JMIR Biomedical Engineering
- 1 JMIR Cardio
- 1 JMIR Pediatrics and Parenting
- 1 JMIR Research Protocols
- 1 JMIR mHealth and uHealth

This pilot study describes a novel diagnostic technology using audio recordings from a standard mobile phone. Prior publications have sought both invasive and noninvasive means of describing cardiac function, but very few have moved out of research phases to clinical or practical use [12-16].
JMIR Cardio 2024;8:e57111

Through the inclusion of audio tools for information dissemination, we aim to acknowledge potential stigma facilitators, such as low literacy rates, in affected groups [30]. By offering learning content in local languages (rather than French), the audio tools could improve campaign effectiveness, increasing self-efficacy through the ability both to participate in the intervention and to understand its content [24-26].
JMIR Res Protoc 2024;13:e52106

Voice data are often captured, transmitted, and stored in digital formats that may include compression, a common practice that reduces the size of audio files and makes them more manageable for storage and transmission [9]. It is necessary to consider the potential impact of audio data compression on vocal biomarker development, as compression can significantly alter the audio signal [10].
JMIR Biomed Eng 2024;9:e56246
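A minimal sketch of how one might probe the effect this excerpt warns about: compare a standard vocal-biomarker feature set (mean MFCCs) computed from an uncompressed clip against the same clip after a lossy round-trip. The file names are hypothetical placeholders, not from the cited paper.

```python
import numpy as np
import librosa

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Load audio and return mean MFCCs, a typical vocal-biomarker feature set."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical files: the original recording and the same clip after an
# MP3 encode/decode round-trip (eg, produced offline with ffmpeg).
orig = mfcc_features("voice_original.wav")
comp = mfcc_features("voice_mp3_roundtrip.wav")

# Per-coefficient absolute shift introduced by the codec
print(np.abs(orig - comp))
```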

This may result in subtle but detectable differences in how pauses appear in authentic versus cloned audio. Indeed, when humans were asked to distinguish audio deepfakes from authentic voices, one of the primary justifications for classifying audio as fake was unnatural pauses in the recordings [10].
JMIR Biomed Eng 2024;9:e56245
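A rough sketch of the pause idea in the excerpt, assuming pauses are taken to be gaps between energy-thresholded voiced intervals; this illustrates the concept, not the paper's actual method, and the file names are hypothetical.

```python
import librosa

def pause_stats(path, top_db=30, sr=16000):
    """Return number and mean length (s) of pauses, ie, the gaps between
    non-silent intervals found by an energy threshold."""
    y, _ = librosa.load(path, sr=sr)
    voiced = librosa.effects.split(y, top_db=top_db)  # (start, end) sample indices
    gaps = [(voiced[i + 1][0] - voiced[i][1]) / sr
            for i in range(len(voiced) - 1)]
    if not gaps:
        return 0, 0.0
    return len(gaps), sum(gaps) / len(gaps)

# Compare pause patterns of a genuine recording vs a suspected clone
print(pause_stats("genuine.wav"), pause_stats("suspect_clone.wav"))
```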

The entire night recording of each patient was divided into 30-second segments and converted to Mel spectrograms, visual representations of how the sound energy in each frequency bin changes over time. The Mel spectrograms were synchronized with manually annotated sleep apnea events from polysomnography (PSG). In this paper, we omitted central apnea events and regarded only obstructive sleep apnea (OSA) events as apnea.
J Med Internet Res 2023;25:e44818
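A minimal sketch of the preprocessing the excerpt describes: slice a full-night recording into 30-second windows and convert each window to a log-Mel spectrogram. The file name and parameter values (sampling rate, number of Mel bands) are illustrative assumptions, not the study's settings.

```python
import numpy as np
import librosa

SR = 16000       # assumed sampling rate
SEG_SEC = 30     # 30-second segments, as in the excerpt

y, _ = librosa.load("night_recording.wav", sr=SR, mono=True)  # hypothetical file
seg_len = SR * SEG_SEC

segments = []
for start in range(0, len(y) - seg_len + 1, seg_len):
    seg = y[start:start + seg_len]
    mel = librosa.feature.melspectrogram(y=seg, sr=SR, n_mels=64)
    segments.append(librosa.power_to_db(mel, ref=np.max))  # dB scale for visualization

print(len(segments), segments[0].shape)  # (n_mels, time_frames) per 30-s window
```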

Furthermore, audio characteristics vary among individuals (eg, a COVID-19–positive participant may produce a spectrogram similar to that of a COVID-19–negative participant). Most conventional audio-based COVID-19 detection systems do not account for this, as they have so far used only a single audio sample rather than sequences. This makes automatic detection difficult and may lead to incorrect predictions.
J Med Internet Res 2022;24(6):e37004
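A toy sketch of the sequence idea the excerpt raises: instead of classifying one audio sample in isolation, aggregate a sequence of per-sample embeddings with a recurrent layer before classifying. All dimensions are arbitrary assumptions, and this is not the authors' architecture.

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # binary: positive vs negative

    def forward(self, x):        # x: (batch, n_samples, feat_dim)
        _, h = self.rnn(x)       # h: (1, batch, hidden), final hidden state
        return self.head(h[-1])  # one logit per participant sequence

model = SequenceClassifier()
fake_batch = torch.randn(4, 5, 128)  # 4 participants, 5 audio samples each
print(model(fake_batch).shape)       # torch.Size([4, 1])
```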

In this work, we propose a machine learning–based approach to predict signs of autism directly from self-recorded, semistructured home audio clips capturing a child's natural behavior. We use random forests, convolutional neural networks (CNNs), and fine-tuned wav2vec 2.0 models to identify differences in speech between children with autism and neurotypical (NT) controls. One strength of our approach is that our models are trained on mobile device audio recordings of varying quality.
JMIR Pediatr Parent 2022;5(2):e35406
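A hedged sketch of one of the three approaches the excerpt names: fine-tuning wav2vec 2.0 for binary audio classification via the Hugging Face transformers library. The checkpoint, clip length, and labels are assumptions, not the authors' configuration.

```python
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

ckpt = "facebook/wav2vec2-base"  # assumed base checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
model = Wav2Vec2ForSequenceClassification.from_pretrained(ckpt, num_labels=2)

# One fake 3-second clip at 16 kHz; real input would be the home recordings
waveform = torch.randn(16000 * 3).numpy()
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

logits = model(**inputs).logits    # shape (1, 2): scores for the two classes
print(logits.softmax(dim=-1))
```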

To have a backup of the voice responses, we used an audio recorder (Philips DVT4010, Koninklijke Philips N.V.).
Based on Kocaballi et al [28], we tested commonly used unimodal and multimodal voice assistants (VAs). To operationalize the variables developer and modality, we employed the 3 most common VAs (ie, Amazon Alexa, Apple Siri, and Google Assistant) and aimed for the 2 most frequently used devices (ie, smart speaker and smartphone) for each VA [10].
J Med Internet Res 2021;23(12):e32161

Reference 47: Accelerating mobile audio sensing algorithms through on-chip GPU offloading
J Med Internet Res 2021;23(10):e25460

Moreover, noise in the ASR audio input may cause specific types of word errors in the output text, interfering with documentation when relevant medical information is extracted. Existing publicly and commercially available ASR models are optimized for everyday conversation and thus may perform poorly when applied to domain-specific clinical speech [30,31].
ASR consists of multiple components that convert input audio to output text.
JMIR Mhealth Uhealth 2021;9(10):e32301
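A minimal sketch of the generic ASR flow the excerpt outlines (audio in, text out) using an off-the-shelf model; this is not the system the authors evaluated, and the checkpoint and file name are assumptions.

```python
from transformers import pipeline

# General-purpose English ASR model; as the excerpt notes, models like this
# are tuned for everyday speech and may underperform on clinical dictation.
asr = pipeline("automatic-speech-recognition",
               model="facebook/wav2vec2-base-960h")

result = asr("dictation_sample.wav")  # hypothetical clinical audio clip
print(result["text"])
```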