Published in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/71618.
Large Language Models Could Revolutionize Health Care, but Technical Hurdles May Limit Their Applications

1Medical Information Department, Hospices Civils de Lyon, 3 Quai des Célestins, Lyon, France

2Laboratory of Medical Informatics and Knowledge Engineering in e-Health, Sorbonne University, Paris, France

3Public Health and Medical Information Unit, Saint-Étienne University Hospital Center, Saint-Étienne, France

4Laboratoire Inserm Santé Ingénierie Biologie, U1059, Dysfonction Vasculaire et Hémostase, Université Jean Monnet, Saint-Étienne, France

Corresponding Author:

Diva Beltramin, MD, MSc



Zhang et al [1] recently published an article in the Journal of Medical Internet Research titled “Revolutionizing Health Care: The Transformative Impact of Large Language Models in Medicine.” The authors provided an excellent synthesis of the possible applications of large language models (LLMs), not only detailing applications in clinical medicine but also offering examples of LLMs’ potential in the broader hospital environment and in public health policy. Explaining how these applications would be implemented was not the objective of their Viewpoint paper, but we believe that the next steps in their research should also consider the technical hurdles of implementing LLM applications. We also observed a few minor inaccuracies in how the authors distinguished encoder models, such as bidirectional encoder representations from transformers (BERT), from decoder models, such as generative pretrained transformers (GPTs).

In the first figure in their paper (captioned “The architectural designs of LLMs”), the authors reproduced the figure “The Transformer – model architecture” from the famous 2017 paper “Attention Is All You Need” by Vaswani et al [2]. However, they depicted a nonexistent connection between the layers of BERT and GPT models. Encoder models like BERT use encoder-only blocks, while GPT models use decoder-only blocks; therefore, there is no encoder-decoder attention layer in a GPT model.
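
This distinction can be verified programmatically. The following minimal sketch, assuming the Hugging Face transformers library (our own choice of tooling, not one used by Zhang et al), loads both model families and confirms that neither a pretrained BERT nor a pretrained GPT-2 contains an encoder-decoder (cross-)attention module.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library
# and access to the pretrained checkpoints.
from transformers import BertModel, GPT2Model

bert = BertModel.from_pretrained("bert-base-uncased")  # encoder-only
gpt2 = GPT2Model.from_pretrained("gpt2")               # decoder-only

# GPT-2 blocks contain masked (causal) self-attention only; a
# cross-attention module would appear only in a true encoder-decoder.
gpt2_has_cross_attention = any(
    "crossattention" in name for name, _ in gpt2.named_modules()
)
print("GPT-2 has encoder-decoder attention:", gpt2_has_cross_attention)  # False

# BERT, conversely, is configured as an encoder, not a decoder.
print("BERT configured as decoder:", bert.config.is_decoder)  # False
```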

Moreover, while evidence for the medical use of text-only LLMs is still lacking, evidence for multimodal LLMs is scarcer still. Admittedly, multimodal LLMs can ingest many kinds of images and produce a coherent medical report. However, in highly specialized fields such as computed tomography, magnetic resonance imaging, or digital histopathology [3], fine-tuned deep learning models could perform better at image interpretation. LLMs are not necessarily a medical Swiss Army knife, and we should not force their use everywhere, as other technologies exist that perform better on specific tasks.
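
To illustrate the kind of task-specific alternative we have in mind, the sketch below fine-tunes a pretrained convolutional network for image classification with PyTorch and torchvision; the class count and the histopathology framing are hypothetical placeholders, not details taken from the cited study [3].

```python
# A minimal sketch of task-specific fine-tuning with PyTorch/torchvision.
# NUM_CLASSES and the histopathology framing are hypothetical.
import torch
from torch import nn
from torchvision import models

NUM_CLASSES = 4  # hypothetical number of diagnostic categories

# Start from ImageNet weights and swap in a task-specific head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# A common, cheap first step: freeze the backbone, train the head only.
for param in model.parameters():
    param.requires_grad = False
for param in model.fc.parameters():
    param.requires_grad = True

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# ...a standard supervised training loop over labeled images follows.
```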

Another example appears in the authors’ third figure (“Integration of LLMs in health care systems across different scales”), in which they suggest using LLMs for resource allocation, even though such allocation relies not on unstructured text but on structured data describing tasks, actors, and the duration of interventions. Established techniques from operational research use mathematical approaches to identify the optimum corresponding to the highest-performing organization. We believe the authors should evaluate the technical solutions already available before proposing applications based only on LLMs.
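
To make the contrast concrete, this kind of allocation problem is naturally expressed as mathematical optimization rather than text generation. The toy linear program below is a sketch using SciPy with entirely invented numbers: it allocates theatre hours and staffed cases between two hypothetical intervention types.

```python
# A toy linear program with invented numbers, sketching the
# operational-research approach: allocate 40 weekly theatre hours and
# at most 30 cases between two hypothetical intervention types.
from scipy.optimize import linprog

# Decision variables: x0, x1 = number of cases of each intervention.
# Maximize clinical benefit 5*x0 + 3*x1 (linprog minimizes, so negate).
objective = [-5, -3]

A_ub = [[3, 1],   # theatre hours: 3 h/case and 1 h/case, 40 h available
        [1, 1]]   # staffing: at most 30 cases in total
b_ub = [40, 30]

result = linprog(c=objective, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(result.x)  # optimal case mix (LP relaxation): [5. 25.]
```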

Technical details should include resolving interoperability problems between electronic health records and LLMs, given that LLMs must be able to access patient data. Expert systems that support clinicians in the diagnostic process, such as DXplain [4] or Internist-1 [5], already exist; yet despite their high performance, they were abandoned because patient data had to be entered into them manually.
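
One standards-based route to such interoperability, sketched below under our own assumptions (a hypothetical FHIR endpoint and patient identifier; neither appears in the cited works), is to let the application pull structured patient data over a FHIR REST API before constructing the LLM prompt, avoiding the manual re-entry that hampered DXplain and Internist-1.

```python
# A minimal sketch of standards-based access to patient data: fetch a
# FHIR Patient resource before building an LLM prompt. The server URL
# and patient ID below are hypothetical.
import json
import requests

FHIR_BASE = "https://fhir.example-hospital.org"  # hypothetical endpoint
PATIENT_ID = "12345"                             # hypothetical identifier

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()
patient = response.json()

# Only now would the structured record be serialized into a prompt,
# sparing clinicians the manual re-entry that doomed earlier systems.
prompt = "Summarize this patient record for a clinician:\n" + json.dumps(
    patient, indent=2
)
print(prompt[:300])
```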

To conclude, we encourage the authors in their approach and recommend that they delve into the technical details of implementing LLM-based applications.

Conflicts of Interest

None declared.

  1. Zhang K, Meng X, Yan X, et al. Revolutionizing health care: the transformative impact of large language models in medicine. J Med Internet Res. Jan 7, 2025;27:e59069. [CrossRef] [Medline]
  2. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al. Attention is all you need. In: Advances in Neural Information Processing Systems 30 (NIPS 2017). Curran Associates; 2017.
  3. Tward JD, Huang HC, Esteva A, et al. Prostate cancer risk stratification in NRG Oncology phase III randomized trials using multimodal deep learning with digital histopathology. JCO Precis Oncol. Oct 2024;8:e2400145. [CrossRef] [Medline]
  4. Barnett GO, Cimino JJ, Hupp JA, Hoffer EP. DXplain. An evolving diagnostic decision-support system. JAMA. Jul 3, 1987;258(1):67-74. [CrossRef] [Medline]
  5. Miller RA, Pople HE Jr, Myers JD. Internist-1, an experimental computer-based diagnostic consultant for general internal medicine. N Engl J Med. Aug 19, 1982;307(8):468-476. [CrossRef] [Medline]


BERT: bidirectional encoder representations from transformers
GPT: generative pretrained transformer
LLM: large language model


Edited by Tiffany Leung. This is a non–peer-reviewed article; submitted 24.01.25; accepted 23.05.25; published 25.06.25.

Copyright

© Diva Beltramin, Cédric Bousquet, Théophile Tiffet. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 25.6.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.