Journal of Medical Internet Research
The leading peer-reviewed journal for digital medicine and health & health care in the internet age.
Editor-in-Chief:
Gunther Eysenbach, MD, MPH, FACMI, Founding Editor and Publisher; Adjunct Professor, School of Health Information Science, University of Victoria, Canada
Impact Factor: 6.0 | CiteScore: 11.7
Recent Articles


In the context of digital health, just-in-time adaptive interventions (JITAIs) are nascent precision medicine systems that can extend personalized health care support to everyday life. A challenge in designing JITAIs is that personalized support often involves sophisticated decision-making algorithms. These decision-making algorithms can require numerous nontrivial design decisions that must be made between successive JITAI deployments (eg, hyperparameter selection for an artificial intelligence algorithm). Making design decisions between deployments—rather than during deployment—ensures intervention fidelity and enhances the ability to replicate results. Yet, each deployment can be costly, precluding the use of A/B testing for every design decision. How should design decisions be made strategically between JITAI deployments? This paper introduces “digital twins for just-in-time adaptive interventions (JITAI-Twins)” to address this question. JITAI-Twins are “digital twins of a subpopulation” (term used in the 2023 National Academies workshop proceedings on digital twins). JITAI-Twins are used to virtually simulate the potential outcomes of a JITAI’s design decisions for an upcoming deployment. Based on simulation results, design decisions are made for the deployed JITAI. To continually improve the JITAI, data collected during deployment are used to update the JITAI-Twin—and this bidirectional feedback between deployments and simulation environments continues. JITAI-Twins are thus “fit-for-purpose” (term used in the National Academies 2024 consensus report on digital twins) instantiations of the digital twin concept. In this paper, we elucidate the specifics and design process of JITAI-Twins, with examples of prior use in clinical settings. JITAI-Twins highlight continuity over the course of a JITAI’s optimization and continual improvement, emphasizing the need for bidirectional feedback between versions of a simulation environment and a JITAI’s deployments.
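The abstract above describes a loop in which candidate design decisions are evaluated virtually against a twin of the subpopulation, the best candidate is deployed, and deployment data flow back to update the twin. A minimal sketch of that bidirectional loop is given below; all function names, parameters, and the toy simulator are illustrative assumptions, not drawn from the paper.

```python
import random

random.seed(0)

# Toy "JITAI-Twin": a simulator of a subpopulation's response to an
# intervention, parameterized by data from prior deployments.

def simulate_outcome(twin_params, candidate):
    """Virtually deploy one candidate design decision (e.g., a
    hyperparameter value) and return a simulated mean outcome."""
    responsiveness = twin_params["responsiveness"]
    # Outcome is best when the candidate matches the subpopulation's
    # optimal setting; noise mimics simulation stochasticity.
    return -abs(candidate - responsiveness) + random.gauss(0, 0.05)

def choose_design(twin_params, candidates, n_runs=50):
    """Pick the candidate with the best average simulated outcome
    (the between-deployment design decision)."""
    def avg(c):
        return sum(simulate_outcome(twin_params, c) for _ in range(n_runs)) / n_runs
    return max(candidates, key=avg)

def update_twin(twin_params, deployment_data):
    """Close the bidirectional loop: refit the twin from data
    collected during the real deployment."""
    observed = sum(deployment_data) / len(deployment_data)
    twin_params["responsiveness"] = 0.7 * twin_params["responsiveness"] + 0.3 * observed
    return twin_params

# One optimization cycle: simulate, decide, deploy, then update the twin.
twin = {"responsiveness": 0.4}
best = choose_design(twin, candidates=[0.1, 0.3, 0.5, 0.7, 0.9])
twin = update_twin(twin, deployment_data=[0.55, 0.6, 0.5])
```

The key design point the sketch mirrors is that decisions are fixed between deployments (inside `choose_design`) rather than changed mid-deployment, preserving intervention fidelity.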

The use of generative artificial intelligence (GenAI) has grown explosively in recent years. Many have noted its substantial potential to expand access to scalable digital mental health interventions or to provide companionship for socially isolated individuals. At the same time, seeking mental health support from mainstream GenAI models may carry risks. Several recent examples of exacerbated delusions have drawn attention in the popular press, prompting calls for empirical research to document the scope of GenAI interactions among individuals experiencing symptoms of psychosis.

Clinical decision support systems (CDSS) have the potential to improve patient safety and reduce costs in primary care. However, CDSS adoption remains limited due to development and implementation challenges. CDSSs are complex interventions involving multiple interacting components that require technological innovation and behavioral and organizational change. Additionally, the primary care context is considered a complex system with high care demand, fragmented structures, and many independent yet interdependent organizations. Established determinant frameworks for implementing and scaling up complex health care interventions support the identification of implementation determinants. However, they offer limited guidance on the underlying processes of these determinants, such as the implementation processes involved in complex interorganizational collaboration in primary care.

Survey research has the potential to elevate the experiences and opinions of marginalized populations. The rising number of bot attacks, a method of participant fraud that creates multiple records in survey data using automated software, threatens to drown out those voices and produce inaccurate findings. Rapid identification and mitigation of bot attacks are vital; however, there is limited guidance for researchers on scalable approaches to address this problem.
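Rapid identification of bot attacks typically combines several simple screening heuristics. The sketch below illustrates three common ones (implausibly fast completion, many submissions from one IP address, and duplicated open-text responses); the field names, thresholds, and sample records are assumptions for illustration, not rules from the article.

```python
from collections import Counter

def flag_suspect_records(records, min_seconds=60):
    """Return indices of survey records that trip any fraud heuristic."""
    ip_counts = Counter(r["ip"] for r in records)
    text_counts = Counter(r["open_text"].strip().lower() for r in records)
    flagged = set()
    for i, r in enumerate(records):
        if r["duration_s"] < min_seconds:                        # implausibly fast completion
            flagged.add(i)
        if ip_counts[r["ip"]] > 2:                               # many submissions, one IP
            flagged.add(i)
        if text_counts[r["open_text"].strip().lower()] > 1:      # duplicated free-text answer
            flagged.add(i)
    return sorted(flagged)

# Hypothetical records: the first is too fast, and it shares its
# open-text answer verbatim with the third.
records = [
    {"ip": "1.2.3.4", "duration_s": 15, "open_text": "great survey"},
    {"ip": "1.2.3.4", "duration_s": 300, "open_text": "I liked question 3"},
    {"ip": "5.6.7.8", "duration_s": 420, "open_text": "great survey"},
]
suspects = flag_suspect_records(records)
```

In practice, flagged records would be reviewed rather than dropped automatically, since legitimate participants (e.g., household members sharing an IP address) can trip the same heuristics.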

Opioid use disorder (OUD) remains a critical public health crisis in the United States. Despite widespread policy and clinical interventions, early identification of individuals at risk for developing OUD remains challenging due to limitations in traditional screening approaches and a lack of individualized risk stratification methods. Machine learning (ML) methods offer an opportunity to develop timely, high-performing, and explainable predictive models that can enhance OUD prevention strategies in clinical settings.
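The abstract emphasizes explainable predictive models for risk stratification. As a minimal, self-contained illustration of that class of model, the sketch below trains a logistic regression by gradient descent on a tiny hypothetical cohort; the feature names, data, and hyperparameters are invented for the example and do not reflect the study's actual model or variables.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression weights by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted probability
            err = p - yi                     # gradient of log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict_risk(w, b, x):
    """Return the model's estimated probability of the outcome."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical cohort: features [prior_opioid_rx, chronic_pain_dx],
# label = developed OUD. Weights stay inspectable, which is one route
# to the explainability the abstract calls for.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 0, 1]
w, b = train_logistic(X, y)
high = predict_risk(w, b, [1, 1])
low = predict_risk(w, b, [0, 0])
```

Because the learned coefficients map directly onto named clinical features, a clinician can inspect which factors drive an individual's score, unlike with many black-box ML methods.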

Digital addiction, including internet, smartphone, and gaming addiction, has emerged as a significant global health concern. Although a wide range of interventions has been evaluated, the fragmented and siloed nature of existing meta-analyses limits a clear understanding of the comparative effectiveness of different interventions across addiction subtypes.

Artificial intelligence (AI) promises efficiency and equity in health care. However, adoption remains fragmented due to weak foundations of trust. This Viewpoint highlights the gap between intrinsic trust, based on interpretability, and extrinsic trust, based on functional validation. We propose a contractual framework between AI systems and users defined by 3 promises: reliability, scope and equity, and shift and uncertainty. Illustrated through a vignette, we show how health systems can operationalize these promises through structured evidence and governance, translating trustworthy AI into accountable clinical deployment.

Artificial intelligence triage in general practice is developing rapidly within the primary care digital transformation, promising efficiency gains and safety standardization in overwhelmed primary care systems. However, current evidence is drawn from retrospective validations, emergency settings, or vignettes, with scant evaluation of real-world outcomes and almost no equity-stratified safety data, despite known disparities across age, ethnicity, language, and deprivation. From a sociotechnical standpoint, which considers the fit between people, tasks, technology, and organizational context, risks arise not only from algorithmic bias and undertriage but also from human factors, workflow misalignment, governance gaps, and inadequate postdeployment monitoring. We argue that ensuring artificial intelligence triage is safe and equitable requires real-world evaluations in primary care settings, equity-focused performance reporting using theoretically informed frameworks, and rigorous postmarket surveillance. Without these, deployment may widen existing health inequalities rather than moderate them.