Published in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/64348.
Token Probabilities to Mitigate Large Language Models Overconfidence in Answering Medical Questions: Quantitative Study