Published in Vol 26 (2024). Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51837.
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT