Published on in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51837, first published .
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
