Study reveals covert racism in AI language models

Written by

Published 30 Aug 2024

Fact checked by

NSFW AI Why trust Greenbot

We maintain a strict editorial policy dedicated to factual accuracy, relevance, and impartiality. Our content is written and edited by top industry professionals with first-hand experience. The content undergoes thorough review by experienced editors to guarantee and adherence to the highest standards of reporting and publishing.

Disclosure

Free An artist’s illustration of artificial intelligence (AI). This illustration depicts language models which generate text. It was created by Wes Cockx as part of the Visualising AI project l... Stock Photo

A recent study by researchers from the University of Chicago, Stanford University, and the Allen Institute for AI, published in Nature, has uncovered significant biases in large language models (LLMs) against African American English (AAE).

Analyzing over 2,000 social media posts written in AAE and their standardized English counterparts, the study revealed that LLMs, including GPT-4, frequently associate AAE with negative adjectives like “dirty,” “lazy,” and “stupid” while favoring more positive language for standardized American English.

The study’s authors argue that while filters have been added to LLMs to prevent overtly racist responses, covert racism remains a persistent issue. Therefore, training artificial intelligence (AI) models to avoid overt racism does not eliminate the covert biases embedded within linguistic prejudice.

“A lot of people don’t see linguistic prejudice as a form of covert racism… but all of the language models that we examined have this very strong covert racism against speakers of African American English,” said co-author and University of Chicago linguist Sharese King. She explained that these biases are often subtle and can reinforce stereotypes in the way models interpret and generate responses based on dialects.

In a News and Views piece in the same journal issue, Su Lin Blodgett of Microsoft and Zeerak Talat of the Mohamed Bin Zayed University of Artificial Intelligence highlighted the nuanced nature of covert racism, which is harder to detect and mitigate compared to overt racism.

Valentin Hofmann, a computational linguist at the Allen Institute for AI and co-author of the study, pointed out that using AI in the real world to perform tasks like screening job candidates bears a lot of risks. The team found that the models associated AAE speakers with jobs such as “cook” and “guard” rather than “architect” or “astronaut.”

In prompts where details of hypothetical criminal trials were fed into, models tended to favor convictions for speakers of AAE over those of Standard American English. Additionally, in subsequent assessments, these models showed a higher probability of sentencing AAE speakers to death rather than life imprisonment.

This is a concerning trend, given the growing use of AI in hiring processes and law enforcement, where AI systems are already being used to draft police reports. “Our results clearly show that doing so bears a lot of risks,” Hofmann says.

Hofmann also went on X to reiterate the same sentiment, writing, “Our findings reveal real and urgent concerns as business and jurisdiction are areas for which AI systems involving LLMs are currently being developed or deployed.”

The findings are not unexpected, but they are shocking for Soroush Vosoughi, a computer scientist at Dartmouth College. He notes that larger models, which have been shown to have less overt bias, exhibited even worse linguistic prejudice, which is worrying.

The research team calls for more comprehensive strategies to address these biases. They emphasize that current measures of teaching LLMs new patterns of retrieving information with human feedback are insufficient and that more work is needed to ensure that AI systems do not perpetuate harmful stereotypes.