Search Articles

Search Results (1 to 10 of 551 Results)


From E-Patients to AI Patients: The Tidal Wave Empowering Patients, Redefining Clinical Relationships, and Transforming Care

Many blunt this enthusiasm with caution, as the field struggles to genuinely address AI ethics, accountability, privacy, and governance [2]. Along with the hope (and hype) of AI within health care, the public is swiftly taking AI into their own hands. Consumers are at the forefront in this era of AI. A survey conducted in January 2025 by the Imagining the Digital Future Center found that 52% of US adults had used ChatGPT, Gemini, Copilot, or other large language models (LLMs).

Susan S Woods, Sarah M Greene, Laura Adams, Grace Cordovano, Matthew F Hudson

J Particip Med 2025;17:e75794

Benchmarking the Confidence of Large Language Models in Answering Clinical Questions: Cross-Sectional Evaluation Study

One way of building confidence in applying models within health care is the use of explainable artificial intelligence (AI) [13,14]. However, easily explainable outputs are difficult to evaluate due to the complexity of how LLMs process and output data [13,15,16]. Recent work revealed that these models often exhibit high confidence even when presenting incorrect information [17]. This raises questions about the underlying mechanisms that prompt an LLM to label certain statements as “more factual.”

Mahmud Omar, Reem Agbareia, Benjamin S Glicksberg, Girish N Nadkarni, Eyal Klang

JMIR Med Inform 2025;13:e66917

AI-Assisted Hypothesis Generation to Address Challenges in Cardiotoxicity Research: Simulation Study Using ChatGPT With GPT-4o

The integration of AI models such as ChatGPT with GPT-4o into hypothesis generation presents novel ethical and academic challenges that must be carefully addressed. While the use of AI has the potential to accelerate scientific discovery by generating innovative research questions and experimental designs, it also raises concerns regarding scientific attribution, research integrity, and the possible misuse of AI-generated content.

Yilan Li, Tianshu Gu, Chengyuan Yang, Minghui Li, Congyi Wang, Lan Yao, Weikuan Gu, DianJun Sun

J Med Internet Res 2025;27:e66161


Challenges in Implementing Artificial Intelligence in Breast Cancer Screening Programs: Systematic Review and Framework for Safe Adoption

This regulatory gap can lead to uncertainty and hesitation in adopting AI technologies in clinical practice. A comprehensive AI governance framework is critical as the medical community considers adopting AI as a second reader in screening programs. Hence, there is an urgent need to develop a holistic AI governance framework to support this ongoing transition [7,8].

Serene Goh, Rachel Sze Jen Goh, Bryan Chong, Qin Xiang Ng, Gerald Choon Huat Koh, Kee Yuan Ngiam, Mikael Hartman

J Med Internet Res 2025;27:e62941


Combining Artificial Intelligence and Human Support in Mental Health: Digital Intervention With Comparable Effectiveness to Human-Delivered Care

Rapid advances in technology, computing, and artificial intelligence (AI) in recent years have led to a rise in the development of digital interventions aiming to solve this scalability problem, and there are an estimated 10,000-20,000 smartphone apps available for mental health support [6,7].

Clare E Palmer, Emily Marshall, Edward Millgate, Graham Warren, Michael Ewbank, Elisa Cooper, Samantha Lawes, Alastair Smith, Chris Hutchins-Joss, Jessica Young, Malika Bouazzaoui, Morad Margoum, Sandra Healey, Louise Marshall, Shaun Mehew, Ronan Cummins, Valentin Tablan, Ana Catarino, Andrew E Welchman, Andrew D Blackwell

J Med Internet Res 2025;27:e69351


Performance of 3 Conversational Generative Artificial Intelligence Models for Computing Maximum Safe Doses of Local Anesthetics: Comparative Analysis

Generative artificial intelligence (AI), powered by large language models (LLMs), has emerged as a promising tool for enhancing medical decision-making [1]. These AI models, which process vast amounts of text data to generate human-like responses, have demonstrated capabilities in drug discovery and dosing optimization [2,3]. Recent studies have extensively evaluated the performance of generative AI models in medical question-answering scenarios.

Mélanie Suppan, Pietro Elias Fubini, Alexandra Stefani, Mia Gisselbaek, Caroline Flora Samer, Georges Louis Savoldelli

JMIR AI 2025;4:e66796


Global Health care Professionals’ Perceptions of Large Language Model Use In Practice: Cross-Sectional Survey Study

This study offers timely insights for health care leaders, educators, and policymakers considering the responsible adoption of generative AI tools. By reflecting on global perspectives from frontline users, our findings may help shape discussions on how to balance innovation with safety and trust in clinical AI applications. This study was conducted as a cross-sectional survey between April 20 and July 3, 2023 (Multimedia Appendix 1).

Ecem Ozkan, Aysun Tekin, Mahmut Can Ozkan, Daniel Cabrera, Alexander Niven, Yue Dong

JMIR Med Educ 2025;11:e58801


Decoding Digital Discourse Through Multimodal Text and Image Machine Learning Models to Classify Sentiment and Detect Hate Speech in Race- and Lesbian, Gay, Bisexual, Transgender, Queer, Intersex, and Asexual Community–Related Posts on Social Media: Quantitative Study

While various studies developed multimodal AI models for sentiment classification, many major breakthroughs emerged from data competitions hosted by social media companies. Facebook’s parent company, Meta Platforms, Inc, launched the Hateful Memes Challenge [29], which provided a dataset of memes with “benign confounders” for the expressed intent of challenging unimodal models and advancing multimodal AI approaches.

Thu T Nguyen, Xiaohe Yue, Heran Mane, Kyle Seelman, Penchala Sai Priya Mullaputi, Elizabeth Dennard, Amrutha S Alibilli, Junaid S Merchant, Shaniece Criss, Yulin Hswen, Quynh C Nguyen

J Med Internet Res 2025;27:e72822


Transformer-Based Language Models for Group Randomized Trial Classification in Biomedical Literature: Model Development and Validation

To create the baseline model, we employed fastText (Facebook AI Research) to generate word embeddings, followed by a logistic regression model. Logistic regression has been widely recognized in the literature as an effective classifier for text data due to its simplicity, interpretability, and robust performance across various domains [23].
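The baseline pipeline described here (fastText word embeddings averaged per document, then logistic regression) can be sketched in miniature. In the sketch below, small random vectors stand in for trained fastText embeddings, and the four example documents and their labels are invented for illustration; the study's actual vocabulary, data, and hyperparameters are not shown in this snippet.

```python
import numpy as np

# Toy stand-in for trained fastText embeddings: one random vector per word.
# In the actual baseline, fastText (Facebook AI Research) supplies these.
rng = np.random.default_rng(0)
vocab = ["group", "randomized", "trial", "cluster", "cohort", "survey"]
emb = {w: rng.normal(size=8) for w in vocab}

def doc_vector(text):
    """Average the vectors of known words (a fastText-style document embedding)."""
    vecs = [emb[w] for w in text.lower().split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain gradient-descent logistic regression on document vectors."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted probabilities
        grad = p - y                     # gradient of log loss wrt logits
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Invented toy data: class 1 = group/cluster randomized trial abstracts.
texts = ["group randomized trial", "cluster randomized trial",
         "cohort survey", "survey cohort"]
labels = np.array([1, 1, 0, 0])

X = np.vstack([doc_vector(t) for t in texts])
w, b = train_logreg(X, labels)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
```

The averaged-embedding step is the design choice that makes logistic regression viable here: it collapses variable-length text into a fixed-size dense vector a linear classifier can handle.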

Elaheh Aghaarabi, David Murray

JMIR Med Inform 2025;13:e63267