Transgender, nonbinary and disabled people more likely to view AI negatively, study shows

  • A new study found that transgender, nonbinary, and disabled people are more likely to view AI negatively than their cisgender and nondisabled counterparts.
  • The study surveyed over 700 people in the US, including a nationally representative sample and an intentional oversample of trans, nonbinary, disabled, and racial minority individuals, and found that these groups reported significantly more negative attitudes toward AI.
  • Nonbinary people had the most negative attitudes toward AI, followed by transgender people, with cisgender women reporting more negative attitudes than cisgender men.
  • Disabled participants also had significantly more negative views of AI than nondisabled participants, particularly those who are neurodivergent or have mental health conditions.
  • The study suggests that policymakers and technology developers should prioritize meaningful consent, data transparency and privacy protections to address marginalized groups' concerns about AI.

Transgender and nonbinary people report negative attitudes toward AI. alvaro gonzalez/Moment via Getty Images

AI seems to be well on its way to becoming pervasive. You hear rumbles of AI being used, somewhere behind the scenes, at your doctor’s office. You suspect it may have played a role in hiring decisions during your last job search. Sometimes – maybe even often – you use it yourself.

And yet, while AI now influences high-stakes decisions such as what kinds of medical care people receive, who gets hired and what news people see, these decisions are not always made equitably. Research has shown that algorithmic bias often harms marginalized groups. Facial recognition systems often misclassify transgender and nonbinary people, AI used in law enforcement can lead to the unwarranted arrest of Black people at disproportionately high rates, and algorithmic diagnostic systems can prevent disabled people from accessing necessary health care.

These inequalities raise a question: Do gender and racial minorities and disabled people have more negative attitudes toward AI than the general U.S. population?

I’m a social computing scholar who studies how marginalized people and communities use social technologies. In a new study, my colleagues Samuel Reiji Mayworm, Alexis Shore Ingber, Nazanin Andalibi and I surveyed over 700 people in the U.S., including a nationally representative sample and an intentional oversample of trans, nonbinary, disabled and racial minority individuals. We asked participants about their general attitudes toward AI: whether they believed it would improve their lives or work, whether they viewed it positively, and whether they expected to use it themselves in the future.

The results reveal a striking divide. Transgender, nonbinary and disabled participants reported, on average, significantly more negative attitudes toward AI than their cisgender and nondisabled counterparts. These results indicate that when gender minorities and disabled people are required to use AI systems, such as in workplace or health care settings, they may be doing so while harboring serious concerns or hesitations. These findings challenge the prevailing tech industry narrative that AI systems are inevitable and will benefit everyone.

Public perception plays a powerful role in shaping how AI is developed, adopted and regulated. The vision of AI as a social good falls apart if it mostly benefits those who already hold power. When people are required to use AI while simultaneously disliking or distrusting it, it can limit participation, erode trust and compound inequities.

Gender, disability and AI attitudes

Nonbinary people in our study had the most negative AI attitudes. Transgender people overall, including trans men and trans women, also expressed significantly negative AI attitudes. Among cisgender people – those whose gender identity matches the sex they were assigned at birth – women reported more negative attitudes than men, a trend that echoes previous research. Our study adds an important dimension by examining nonbinary and transgender people's attitudes as well.

Disabled participants also had significantly more negative views of AI than nondisabled participants, particularly those who are neurodivergent or have mental health conditions.

These findings are consistent with a growing body of research showing how AI systems often misclassify, perpetuate discrimination toward or otherwise harm trans and disabled people. In particular, identities that defy categorization clash with AI systems that are inherently designed to reduce complexity into rigid categories. In doing so, AI systems simplify identities and can replicate and reinforce bias and discrimination – and people notice.

A more complex picture for race

In contrast to our findings about gender and disability, we found that people of color, and Black participants in particular, held more positive views toward AI than white participants. This is a surprising and complex finding, considering that prior research has extensively documented racial bias in AI systems, from discriminatory hiring algorithms to disproportionate surveillance.

Our results do not suggest that AI is working well for Black communities. Rather, they may reflect a pragmatic or hopeful openness to technology’s potential, even in the face of harm. Future research might qualitatively examine Black individuals’ ambivalent balance of critique and optimism around AI.

Black participants in the study reported more positive attitudes toward AI than white participants, despite facing algorithmic bias. Laurence Dutton/E+ via Getty Images

Policy and technology implications

If marginalized people don’t trust AI – and for good reason – what can policymakers and technology developers do?

First, provide an option for meaningful consent. This would give everyone the opportunity to decide whether and how AI is used in their lives. Meaningful consent would require employers, health care providers and other institutions to disclose when and how they are using AI and provide people with real opportunities to opt out without penalty.

Next, provide data transparency and privacy protections. These protections would help people understand where the data that informs AI systems comes from, what will happen to their data after an AI system collects it, and how their data will be protected. Data privacy is especially critical for marginalized people who have already experienced algorithmic surveillance and data misuse.

Further, when building AI systems, developers can take extra steps to test and assess impacts on marginalized groups. This may involve participatory approaches involving affected communities in AI system design. If a community says no to AI, developers should be willing to listen.

Finally, I believe it's important to recognize what negative AI attitudes among marginalized groups tell us. When the people at highest risk of algorithmic harm, such as trans and disabled people, are also the most wary of AI, that's a signal for AI designers, developers and policymakers to reassess their efforts. I believe that a future built on AI should account for the people the technology puts at risk.

The Conversation

Oliver L. Haimson receives funding from the National Science Foundation.

Q. What is the main finding of the study about AI attitudes among marginalized groups?
A. The study found that transgender, nonbinary, and disabled people reported significantly more negative attitudes toward AI than their cisgender and nondisabled counterparts.

Q. Why do researchers believe that marginalized people may have more negative attitudes toward AI?
A. Researchers suggest that marginalized people may hold more negative attitudes toward AI because AI systems often misclassify or otherwise harm these groups, and because they may be required to use AI in high-stakes settings, such as health care or employment, while harboring serious concerns about the technology.

Q. What is a surprising finding of the study regarding racial minorities and AI?
A. The study found that people of color, particularly Black participants, held more positive views toward AI than white participants, despite prior research documenting racial bias in AI systems.

Q. Why do researchers think that negative AI attitudes among marginalized groups are important to consider?
A. Researchers believe that negative AI attitudes among marginalized groups indicate a need for AI designers, developers, and policymakers to reassess their efforts and prioritize the needs of these communities.

Q. What can policymakers and technology developers do to address concerns about AI trust?
A. Policymakers and technology developers can provide meaningful consent options, ensure data transparency and privacy protections, and involve marginalized communities in AI system design.

Q. Why is data privacy critical for marginalized people?
A. Data privacy is critical for marginalized people because they have already experienced algorithmic surveillance and data misuse, and need protection from further exploitation.

Q. What does the study suggest about the role of public perception in shaping AI development and adoption?
A. The study suggests that public perception plays a powerful role in shaping how AI is developed, adopted, and regulated, and that the vision of AI as a social good falls apart if it mostly benefits those who already hold power.

Q. How do AI systems often misclassify or harm marginalized groups?
A. AI systems often harm marginalized groups by reducing complex identities into rigid categories, which can misclassify people and replicate and reinforce bias and discrimination.

Q. What is the significance of recognizing negative AI attitudes among marginalized groups?
A. Recognizing negative AI attitudes among marginalized groups indicates a need for AI designers, developers, and policymakers to prioritize the needs of these communities and reassess their efforts to ensure that AI benefits everyone, not just those who already hold power.

Q. How can researchers and policymakers work together to address concerns about AI trust?
A. Researchers and policymakers can work together by involving marginalized communities in AI system design, providing meaningful consent options, ensuring data transparency and privacy protections, and prioritizing the needs of these communities.