Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51837.
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
