Published in Vol 26 (2024)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/51837.
What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT

Deanna M Kaplan1, PhD; Roman Palitsky2, PhD; Santiago J Arconada Alvarez3, MS; Nicole S Pozzo1, BA; Morgan N Greenleaf3, MS; Ciara A Atkinson4, PhD; Wilbur A Lam5, MD, PhD

1 Department of Family and Preventive Medicine, Emory University School of Medicine, Atlanta, GA, United States

2 Emory Spiritual Health, Woodruff Health Science Center, Emory University, Atlanta, GA, United States

3 Emory University School of Medicine, Atlanta, GA, United States

4 Department of Campus Recreation, University of Arizona, Tucson, AZ, United States

5 Wallace H Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States

Corresponding Author:

  • Deanna M Kaplan, PhD
  • Department of Family and Preventive Medicine
  • Emory University School of Medicine
  • Administrative Offices, Wesley Woods Campus
  • 1841 Clifton Road, NE, 5th Floor
  • Atlanta, GA, 30329
  • United States
  • Phone: 1 520 370 6752
  • Email: deanna.m.kaplan@emory.edu