Journal of Medical Internet Research
The leading peer-reviewed journal for digital medicine, health, and health care in the internet age.
Editors-in-Chief:
Gunther Eysenbach, MD, MPH, FACMI, Founding Editor and Publisher; Adjunct Professor, School of Health Information Science, University of Victoria, Canada
Rachele Hendricks-Sturrup, DHSc, MSc, MA, FACTS, Lead Editor; Research Director of Real-World Evidence, Duke-Margolis Institute for Health Policy, Washington, DC
Impact Factor: 6.0 | CiteScore: 11.7
Recent Articles

Generative artificial intelligence (GenAI) tools are increasingly used in scientific research to support literature searches, evidence synthesis, and manuscript preparation. While these systems promise substantial efficiency gains, concerns have emerged regarding their reliability, particularly their tendency to cite inaccurate, fabricated, or retracted literature. The unrecognized inclusion of retracted studies poses a serious risk to research integrity and evidence-based decision-making. Whether commonly used GenAI tools can reliably detect, exclude, or transparently communicate the retraction status of scientific publications remains unclear.

The integration of artificial intelligence–generated content (AIGC) tools into academic research offers transformative potential for enhancing productivity and innovation. However, within the highly regulated and ethically sensitive medical context, the use of AIGC is accompanied by significant challenges. Medical postgraduates, as the future vanguard of medical science, play a crucial role in the advancement of digital health, and their intention to use AIGC tools will significantly influence the use of these emerging technologies in medical research. Despite the growing popularity of AIGC tools, there remains a paucity of in-depth understanding of the factors driving or hindering medical postgraduates’ intention to use these tools in academic research. A clear comprehension of these influencing factors is essential to foster the responsible, effective, and sustainable integration of AIGC into medical research.

The integration of artificial intelligence (AI) into clinical medicine presents a persistent paradox: diagnostic models routinely demonstrate benchmark superiority over human experts, yet bedside adoption remains fragile, and clinician trust is low. Conventional forecasting approaches—projecting model performance along optimistic trend lines—are epistemologically insufficient because they cannot account for the nonlinear sociotechnical transitions that separate technical capability from institutional trust. This Viewpoint applies backcasting, a normative futures methodology with a 4-decade evidence base in energy policy and public governance, to the specific challenge of clinician adoption of AI diagnostics, with the aim of identifying the structural interventions required to achieve durable trust by 2040. Consistent with the tradition of single-expert normative foresight analysis, we applied backcasting as a structured reasoning framework using a STEEP (social, technological, economic, environmental, and political) analysis. Sources from PubMed, IEEE Xplore, Google Scholar, and policy repositories (the US Food and Drug Administration, World Health Organization, Organisation for Economic Co-operation and Development, and European Commission) published between 2010 and 2025 were reviewed; barriers and enablers were coded across STEEP dimensions to identify pivot points representing convergent, time-bound structural changes.
Working backward from a defined 2040 vision state—a health care ecosystem with risk-stratified clinician trust thresholds, semantic transparency of AI outputs, integrated AI governance, and futures literacy in medical education—we identified 3 temporal pivot points: (1) the 2030 standardization of dual-process AI architectures, in which large language models are verified in real time by locally deployed small language models, producing a calibrated confidence score; (2) the 2035 institutionalization of agentic AI orchestration governed by a formally designated chief AI officer; and (3) the 2040 integration of futures literacy and human-AI teaming competencies into standard medical curricula. The AI trust gap is an institutional design problem, not a technical inevitability. Backcasting reframes the central question from “when will AI be ready for medicine?” to “what must we build to make medicine ready for AI?” The 3 pivot points identified here—verifiable AI by 2030, agentic governance by 2035, and futures literacy by 2040—are structural commitments that clinicians, health system leaders, and policymakers can begin building today.

Shared decision-making allows patients and clinicians to jointly determine the most appropriate care option. To participate in this process and evaluate the available options, patients need comprehensive health information. Patients with diabetes must constantly monitor their health status and experience an array of health information needs during their ongoing health management. Online health information acquisition is common among patients with diabetes, and online information can shape the interaction between these patients and their health care providers.

Patients with breast cancer often experience health-related quality of life (HRQoL) impairments that remain difficult to predict on an individual level. Prediction models can aid in understanding individual survivorship trajectories. However, current prognostic models are based on fixed intervals, limiting their utility in clinical follow-up schedules.

Digital oral health builds on the broader framework of eHealth, leveraging digital technologies to improve patient care, increase access to dental services, and enhance oral health outcomes. However, health care organizations and institutions encounter challenges in implementing digital oral health interventions across various levels. Addressing these challenges requires a comprehensive understanding of the barriers and facilitators that influence its successful adoption.

In an era of widespread mobile phone usage, digital public health interventions offer a new, cost-effective way of improving public health. In the context of smoking cessation, studies indicate that mobile technologies have the potential to support individuals in quitting smoking. However, there is no systematic synthesis of how often these technologies are used by smokers and former smokers.

A recent systematic review and meta-analysis of randomized trials evaluating digital health interventions for family members of intensive care unit (ICU) patients found no significant improvements in anxiety, depression, posttraumatic stress, quality of life, or communication quality. Rather than concluding that digital approaches are inherently ineffective, we argue that these null findings reflect identifiable and remediable limitations in intervention design, outcome measurement, and trial methodology. In this commentary, we examine four structural barriers that currently constrain the evidence base and outline the conditions that next-generation trials must meet to adequately address the questions raised by this review.

The COVID-19 pandemic had an unprecedented impact on the delivery of health care, with digital interventions accelerating more than ever before. However, evidence of how hybrid care models, combining digital health interventions with in-person care, were implemented during the pandemic remains scattered. Understanding hybrid care models is imperative to build resilient health systems that can ensure access to care during crisis situations.

Newborn screening (NBS), a mandated public health intervention, allows the identification of babies with potentially life-threatening disorders and facilitates disease diagnosis and management before the onset of symptoms. While NBS saves lives, the process can be fraught with anxiety and unanswered questions from parents or guardians of newborns, especially as they wait for an appointment with a clinician.

Close follow-up of stable patients with axial spondyloarthritis (axSpA) presents a financial burden and inconvenience to patients. A remote monitoring patient-reported outcome measures (PROMs)–based model of care (PROMise) was designed to reduce the frequency of in-person consultations for stable patients with axSpA. However, little is known about the facilitators of and barriers to implementing such a model.