Search Results (1 to 10 of 551 Results)
Results by journal:
- 170 Journal of Medical Internet Research
- 71 JMIR Medical Education
- 57 JMIR Formative Research
- 46 JMIR Medical Informatics
- 41 JMIR AI
- 28 JMIR Research Protocols
- 26 JMIR Mental Health
- 21 JMIR Human Factors
- 15 JMIR Dermatology
- 13 JMIR Aging
- 10 JMIR Cancer
- 8 Interactive Journal of Medical Research
- 6 JMIR Nursing
- 5 JMIR Cardio
- 5 JMIR mHealth and uHealth
- 4 JMIR Diabetes
- 3 JMIR Infodemiology
- 3 JMIR Neurotechnology
- 3 JMIR Pediatrics and Parenting
- 3 JMIR Public Health and Surveillance
- 3 JMIR Rehabilitation and Assistive Technologies
- 2 Asian/Pacific Island Nursing Journal
- 2 Iproceedings
- 1 JMIR Biomedical Engineering
- 1 JMIR Perioperative Medicine
- 1 JMIR Serious Games
- 1 JMIRx Med
- 1 Journal of Participatory Medicine
- 1 Online Journal of Public Health Informatics

Many temper this enthusiasm with caution, as the field struggles to genuinely address AI ethics, accountability, privacy, and governance [2].
Along with the hope (and hype) of AI within health care, the public is swiftly taking AI into their own hands. Consumers are at the forefront in this era of AI. A survey conducted in January 2025 by the Imagining the Digital Future Center found that 52% of US adults had used ChatGPT, Gemini, Copilot, or other LLMs.
J Particip Med 2025;17:e75794

One way of building confidence in applying models within health care is the use of explainable artificial intelligence (AI) [13,14]. However, such explanations are difficult to produce and evaluate because of the complexity of how LLMs process and generate data [13,15,16]. Recent work revealed that these models often exhibit high confidence even when presenting incorrect information [17]. This raises questions about the underlying mechanisms that prompt an LLM to label certain statements as “more factual.”
JMIR Med Inform 2025;13:e66917
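The confidence problem described above can be illustrated with the maximum softmax probability, a common (if imperfect) proxy for model confidence. This is a minimal sketch: the logits below are invented for illustration, not taken from any real LLM.

```python
import numpy as np

def softmax(logits):
    """Convert raw logits to a probability distribution (numerically stable)."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Hypothetical next-token logits for two candidate statements. A model can
# assign an equally peaked distribution to a factual and a non-factual claim,
# so the max-softmax "confidence" cannot tell them apart on its own.
logits_factual = np.array([8.0, 1.0, 0.5])
logits_wrong = np.array([7.9, 1.2, 0.4])

conf_factual = softmax(logits_factual).max()
conf_wrong = softmax(logits_wrong).max()

print(f"confidence (factual statement): {conf_factual:.3f}")
print(f"confidence (incorrect statement): {conf_wrong:.3f}")
```

Both statements come out above 0.99 "confidence" here, which is exactly the failure mode the cited work reports: peaked output distributions are not evidence of factuality.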

The integration of AI models such as ChatGPT with GPT-4o into hypothesis generation presents novel ethical and academic challenges that must be carefully addressed. While the use of AI has the potential to accelerate scientific discovery by generating innovative research questions and experimental designs, it also raises concerns regarding scientific attribution, research integrity, and the possible misuse of AI-generated content.
J Med Internet Res 2025;27:e66161

This regulatory gap can lead to uncertainty and hesitation in adopting AI technologies in clinical practice. As the medical community considers adopting AI as a second reader in screening programs, there is an urgent need for a comprehensive, holistic AI governance framework to support this ongoing transition [7,8].
J Med Internet Res 2025;27:e62941

Rapid advances in technology, computing, and artificial intelligence (AI) in recent years have led to a rise in the development of digital interventions aiming to solve this scalability problem, and there are an estimated 10,000-20,000 smartphone apps available for mental health support [6,7].
J Med Internet Res 2025;27:e69351

Generative artificial intelligence (AI), powered by large language models (LLMs), has emerged as a promising tool for enhancing medical decision-making [1]. These AI models, which process vast amounts of text data to generate human-like responses, have demonstrated capabilities in drug discovery and dosing optimization [2,3].
Recent studies have extensively evaluated the performance of generative AI models in medical question-answering scenarios.
JMIR AI 2025;4:e66796

This study offers timely insights for health care leaders, educators, and policymakers considering the responsible adoption of generative AI tools. By reflecting on global perspectives from frontline users, our findings may help shape discussions on how to balance innovation with safety and trust in clinical AI applications.
This study was conducted as a cross-sectional survey between April 20 and July 3, 2023 (Multimedia Appendix 1).
JMIR Med Educ 2025;11:e58801

While various studies developed multimodal AI models for sentiment classification, many major breakthroughs emerged from data competitions hosted by social media companies.
Facebook’s parent company, Meta Platforms, Inc, launched the Hateful Memes Challenge [29], which provided a dataset of memes with “benign confounders,” designed expressly to challenge unimodal models and advance multimodal AI approaches.
J Med Internet Res 2025;27:e72822
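As a sketch of the kind of multimodal approach these competitions encourage, the toy example below performs feature-level (early) fusion: unimodal text and image embeddings are concatenated so a single classifier can model cross-modal interactions, which is precisely what benign confounders are designed to require. All dimensions, features, and the linear head are invented placeholders, not the challenge's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features, standing in for outputs of a text
# encoder and an image encoder; the dimensions are illustrative only.
text_feat = rng.normal(size=(4, 16))   # 4 memes, 16-dim text embeddings
image_feat = rng.normal(size=(4, 32))  # 4 memes, 32-dim image embeddings

# Early fusion: concatenate the unimodal embeddings. A unimodal model sees
# only one of these blocks and is fooled by a benign confounder; the fused
# representation lets a classifier combine evidence from both modalities.
fused = np.concatenate([text_feat, image_feat], axis=1)  # shape (4, 48)

# A linear scoring head stands in for the downstream hateful/benign classifier.
W = rng.normal(size=(fused.shape[1],))
scores = fused @ W  # one score per meme

print(scores.shape)
```

Real entries to the challenge used learned encoders and trained fusion layers rather than random weights, but the fusion step itself has this shape.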

To create the baseline model, we employed fastText (Facebook AI Research) to generate word embeddings, followed by a logistic regression model. Logistic regression has been widely recognized in the literature as an effective classifier for text data due to its simplicity, interpretability, and robust performance across various domains [23].
JMIR Med Inform 2025;13:e63267
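A minimal sketch of such an embeddings-plus-logistic-regression baseline, under stated assumptions: random vectors stand in for trained fastText embeddings, documents are represented by averaging their word vectors (the fastText-style document representation), and the tiny labeled set is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for fastText word vectors: random 50-dim embeddings keyed by word.
# In the study these would come from a trained fastText model.
rng = np.random.default_rng(42)
vocab = ["patient", "reports", "severe", "mild", "pain", "no", "symptoms"]
embeddings = {w: rng.normal(size=50) for w in vocab}

def doc_vector(text):
    """Average the word vectors of in-vocabulary tokens; zero vector if none match."""
    vecs = [embeddings[w] for w in text.lower().split() if w in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

# Tiny illustrative training set (texts and labels are made up).
texts = ["patient reports severe pain", "no symptoms",
         "severe symptoms", "mild pain"]
labels = [1, 0, 1, 0]

X = np.vstack([doc_vector(t) for t in texts])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

print(clf.predict([doc_vector("severe pain")]))
```

The appeal of this baseline, as the excerpt notes, is that the classifier's coefficients are directly inspectable, which makes it a useful reference point before moving to less interpretable models.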