Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51837.
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT


Journals

  1. Schlussel L, Samaan J, Chan Y, Chang B, Yeo Y, Ng W, Rezaie A. Evaluating the accuracy and reproducibility of ChatGPT-4 in answering patient questions related to small intestinal bacterial overgrowth. Artificial Intelligence in Gastroenterology 2024;5(1)
  2. Raman R, Mandal S, Das P, Kaur T, Sanjanasri J, Nedungadi P, Kuhail M. Exploring University Students’ Adoption of ChatGPT Using the Diffusion of Innovation Theory and Sentiment Analysis With Gender Dimension. Human Behavior and Emerging Technologies 2024;2024(1)
  3. Adel A, Ahsan A, Davison C. ChatGPT Promises and Challenges in Education: Computational and Ethical Perspectives. Education Sciences 2024;14(8):814
  4. Ahn S. The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions. The Korean Journal of Physiology & Pharmacology 2024;28(5):393
  5. Orynbay L, Bekmanova G, Yergesh B, Omarbekova A, Sairanbekova A, Sharipbay A. The role of cognitive computing in NLP. Frontiers in Computer Science 2025;6
  6. de Kok T. ChatGPT for Textual Analysis? How to Use Generative LLMs in Accounting Research. Management Science 2025
  7. Zhou W, Zhu X, Han Q, Li L, Chen X, Wen S, Xiang Y. The Security of Using Large Language Models: A Survey with Emphasis on ChatGPT. IEEE/CAA Journal of Automatica Sinica 2025;12(1):1
  8. Omar M, Sorin V, Agbareia R, Apakama D, Soroush A, Sakhuja A, Freeman R, Horowitz C, Richardson L, Nadkarni G, Klang E. Evaluating and addressing demographic disparities in medical large language models: a systematic review. International Journal for Equity in Health 2025;24(1)
  9. Ho J, Hartanto A, Koh A, Majeed N. Gender biases within Artificial Intelligence and ChatGPT: Evidence, Sources of Biases and Solutions. Computers in Human Behavior: Artificial Humans 2025;4:100145
  10. Allan K, Azcona J, Sripada S, Leontidis G, Sutherland C, Phillips L, Martin D. Stereotypical bias amplification and reversal in an experimental model of human interaction with generative artificial intelligence. Royal Society Open Science 2025;12(4)
  11. Spennemann D. Who Is to Blame for the Bias in Visualizations, ChatGPT or DALL-E? AI 2025;6(5):92
  12. Bohren J, Hull P, Imas A. Systemic Discrimination: Theory and Measurement. The Quarterly Journal of Economics 2025;140(3):1743

Books/Policy Documents

  1. Freeburg T, Chang T. Digital Health, AI and Generative AI in Healthcare.
  2. Shafik W. Integrating AI With Haptic Systems for Smarter Healthcare Solutions.
  3. Voutyrakou D, Tsoukalas S, Karelis M, Katsiampoura G, Mikalef P, Avlonitis M. Artificial Intelligence Applications and Innovations. AIAI 2025 IFIP WG 12.5 International Workshops.

Conference Proceedings

  1. Richter M. REGION V ROZVOJI SPOLEČNOSTI 2024 / Region in the Development of Society 2024. Generative Artificial Intelligence (AI) as a Mirror of Gender Biases
  2. Kelly M, Tahaei M, Smyth P, Wilcox L. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. Understanding Gender Bias in AI-Generated Product Descriptions
  3. Gupta I, Joshi I, Dey A, Parikh T. Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency. “Since Lawyers are Males..”: Examining Implicit Gender Bias in Hindi Language Generation by LLMs