Published on 21.02.2025 in Vol 27 (2025)

Preprints (earlier versions) of this paper are available at https://preprints.jmir.org/preprint/50708.
Perspectives of Black, Latinx, Indigenous, and Asian Communities on Health Data Use and AI: Cross-Sectional Survey Study

Research Letter

1School of Medicine, University of California, San Francisco, San Francisco, CA, United States

2Department of Dermatology, University of California, San Francisco, San Francisco, CA, United States

*these authors contributed equally

Corresponding Author:

Jenna C Lester, MD

Department of Dermatology

University of California, San Francisco

1701 Divisadero St

San Francisco, CA, 94115

United States

Phone: 1 (415) 353 7800

Email: jenna.lester@ucsf.edu


Despite excitement around artificial intelligence (AI)–based tools in health care, there is work to be done before they can be equitably deployed. The absence of diverse patient voices in discussions on AI is a pressing matter, and current studies have been limited in diversity. Our study examined the perspectives of racial and ethnic minority patients on the use of their health data in AI through a cross-sectional survey of 230 participants who were at least 18 years of age and identified as Black, Latinx, Indigenous, or Asian. While familiarity with AI was high, a smaller proportion of participants understood how AI can be used in health care (152/199, 76.4%), and an even smaller proportion understood how AI can be applied to dermatology (133/199, 66.8%). Overall, 69.8% (139/199) of participants agreed that they trusted the health care system to treat their medical information with respect; however, this varied significantly by income (P=.045). Only 64.3% (128/199) of participants felt comfortable with their medical data being used to build AI tools, and 83.4% (166/199) believed they should be compensated if their data are used to develop AI. To our knowledge, this is the first study focused on understanding opinions about health data use for AI among racial and ethnic minority individuals, as similar studies have had limited diversity. It is important to capture the opinions of diverse groups because the inclusion of their data is essential for building equitable AI tools; however, historical harms have made inclusion challenging.

J Med Internet Res 2025;27:e50708

doi:10.2196/50708

Keywords



Despite excitement around artificial intelligence (AI)–based tools in health care, there is work to be done before they can be safely deployed. Addressing dataset diversity and racism perpetuated by large language models is paramount [1-3]. The absence of diverse patient voices in AI discussions is a pressing matter, and current studies have been limited in diversity [4,5]. Our study examined the perspectives of racial and ethnic minority patients on the use of health data in AI.


Methods

Overview

A cross-sectional survey was administered via Qualtrics to participants aged 18 years or older who identified as Black, Latinx, Indigenous, or Asian. Categorical variables were summarized as frequencies and percentages. Chi-square tests were used to assess the relationships between responses and demographic variables. Statistical significance was defined as P<.05; no adjustment for multiple testing was performed. All analyses were performed in R (version 4.0.5; R Foundation for Statistical Computing).
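To illustrate the analysis described above, the following is a minimal sketch of a chi-square test of independence between one survey response and one demographic variable. The study's actual analyses were run in R; this Python re-implementation and the contingency counts in it are hypothetical, for illustration only.

```python
# Illustrative sketch (not the authors' code): chi-square test of
# independence for a contingency table of response x demographic group.

def chi_square_statistic(table):
    """Return (chi-square statistic, degrees of freedom) for a 2D table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis
            expected = row_totals[i] * col_totals[j] / grand_total
            chi2 += (observed - expected) ** 2 / expected
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, dof

# Hypothetical counts: rows are three income brackets, columns are
# agree / disagree with trusting the health care system.
table = [[40, 25], [55, 14], [44, 21]]
chi2, dof = chi_square_statistic(table)
# For dof=2, the critical value at alpha=.05 is 5.991; chi2 above it
# would indicate a significant association.
print(f"chi2={chi2:.2f}, dof={dof}, significant={chi2 > 5.991}")
```

In practice this is a one-liner in R (`chisq.test`) or Python (`scipy.stats.chi2_contingency`), which also return the P value.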

Ethical Considerations

This study was deemed exempt from approval by the University of California, San Francisco Institutional Review Board (IRB #22-36156). Informed consent was obtained from all participants. All data collected were anonymized. Participants were not compensated.


Results

Overall, 230 participants enrolled, and 199 completed the survey (Table 1). While familiarity with AI was high, a smaller proportion of participants understood how AI can be used in health care (152/199, 76.4%), and an even smaller proportion understood how AI can be applied to dermatology (133/199, 66.8%). These outcomes did not vary significantly by race, age, gender, income, insurance type, or schooling (all P>.05).

Most participants (139/199, 69.8%) agreed that they trusted the health care system to treat their medical information with respect; this varied significantly by income (P=.045). Patients with lower incomes often face more structural barriers to health care, which may be one factor contributing to our finding that they were less likely to trust the health care system. Only 64.3% (128/199) of participants felt comfortable with their medical information being used to build AI tools. Most (181/199, 91%) wanted to be notified if their medical information is used in such a way; this varied by race (P=.002) and age (P=.03). Most (166/199, 83.4%) agreed that they should be compensated if their medical information is used to develop AI, which varied by age (P=.045; Figure 1).

Table 1. Demographics of survey participants.

Demographics — Value (n=199), n (%)

Gender
  Female: 131 (65.8)
  Male: 63 (31.7)
  Other or unknown: 5 (2.5)

Age group (years)
  18-34: 108 (54.3)
  35-54: 55 (27.6)
  55-74: 32 (16.1)
  75+: 3 (1.5)
  Other or unknown: 1 (0.5)

Race and ethnicity
  American Indian or Alaskan Native: 4 (2)
  Asian: 57 (28.6)
  Black: 102 (51.3)
  Hispanic or Latino: 9 (4.5)
  Multiracial: 21 (10.6)
  White: 4 (2)
  Other or unknown: 2 (1)

Schooling
  High school or lower: 19 (9.5)
  Associate’s degree or trade school: 30 (15.1)
  Bachelor’s degree: 85 (42.7)
  Graduate degree: 62 (31.2)
  Other or unknown: 3 (1.5)

Insurance
  Private insurance: 109 (54.8)
  Medicare or Medicaid: 62 (31.2)
  TRICARE or veterans: 7 (3.5)
  Indian Health Service: 2 (1)
  No health insurance: 6 (3)
  Other or unknown: 13 (6.5)

Household income (US $)
  Less than 25,000: 33 (16.6)
  25,000-50,000: 44 (22.1)
  50,000-100,000: 53 (26.6)
  100,000-250,000: 46 (23.1)
  More than 250,000: 12 (6)
  Other or unknown: 11 (5.5)

Geography
  Suburban: 80 (40.2)
  Urban: 93 (46.7)
  Rural: 21 (10.6)
  Other or unknown: 5 (2.5)

Number of visits to the doctor in a year
  0-2: 64 (32.2)
  3-5: 71 (35.7)
  6+: 61 (30.7)
  Other or unknown: 3 (1.5)
Figure 1. Percentage of participants who would like to be compensated if their medical information were used for AI, by age group and race and ethnicity.

Discussion

To our knowledge, this is the first study focused on understanding opinions about health data use for AI among people who identify as Black, Latinx, Indigenous, or Asian; similar studies have had limited diversity [4,5]. It is important to capture the opinions of diverse groups because the inclusion of their data is essential for building equitable AI tools; however, historical harms have made robust inclusion challenging. For racial and ethnic minority communities, historical experiences of racism, discrimination, and exploitation (eg, medical experimentation on populations without informed consent) may contribute to distrust in the health care system and, hence, decrease their comfort with their data being used to develop AI systems. This suggests that AI development may not yet be inclusive, ethical, or aligned with the concerns of racial and ethnic minority communities enough to be safely deployed in health care settings. Ignoring these concerns could deepen distrust in AI, lowering rates of research participation in these already underrepresented communities and further entrenching biases and inequities in industries where AI is deployed.

The educational gaps in AI uncovered by this inquiry should be addressed, as a fundamental understanding of AI is essential for patients to offer fully informed consent for health data use in AI development. Existing initiatives, such as AI4ALL, are working to address these educational gaps [6].

Most participants agreed that they should be notified if their data are used to build AI tools, which is not the current convention. For example, in dermatology, photos taken for clinical monitoring can be repurposed as data in AI tools without patient permission as long as they are “deidentified.”

Over 80% of participants agreed that they should be compensated when their data are used to build AI tools. Community-based participatory research shares findings and benefits with study participants; sharing revenue with patients who contribute their data to build these valuable tools is thematically similar to sharing the benefits of a work product and should be explored.

Limitations include the study population: participants were recruited from an academic medical center and a research database and may not represent the broader population. Future research should include a more diverse and representative sample from various backgrounds and regions to enhance generalizability.

Acknowledgments

We would like to thank Li Zhang for her statistical support.

Conflicts of Interest

None declared.

  1. Omiye JA, Lester JC, Spichak S, Rotemberg V, Daneshjou R. Large language models propagate race-based medicine. NPJ Digit Med. 2023;6(1):195.
  2. Daneshjou R, Vodrahalli K, Novoa RA, Jenkins M, Liang W, Rotemberg V, et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv. 2022;8(32):eabq6147.
  3. Adamson A, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol. 2018;154(11):1247-1248.
  4. Nelson CA, Pérez-Chada LM, Creadore A, Li SJ, Lo K, Manjaly P, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol. 2020;156(5):501-512.
  5. Salvador T, Gu L, Hay JL, Kurtansky NR, Masterson-Creber R, Halpern AC, et al. Consent and identifiability for patient images in research, education, and image-based artificial intelligence. JAMA Dermatol. 2024;160(4):470-472.
  6. AI4ALL. URL: https://ai-4-all.org/ [accessed 2024-11-25]


Abbreviations

AI: artificial intelligence


Edited by K Williams; submitted 08.07.24; peer-reviewed by GK Gupta, J Lopes; comments to author 19.09.24; revised version received 09.11.24; accepted 11.12.24; published 21.02.25.

Copyright

©Fatuma-Ayaan Rinderknecht, Vivian B Yang, Mekaleya Tilahun, Jenna C Lester. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 21.02.2025.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research (ISSN 1438-8871), is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.