November 21, 2024

Experts find AI tools becoming covertly more racist

Report reveals discrimination by ChatGPT and Gemini against African American Vernacular English speakers

A new report reveals that popular AI tools are becoming more covertly racist as they advance. Researchers found that large language models such as OpenAI’s ChatGPT and Google’s Gemini exhibit racist stereotypes towards speakers of African American Vernacular English (AAVE), an English dialect spoken by Black Americans.

Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the paper published in arXiv, explained that these technologies are commonly used by companies for tasks like screening job applicants. He noted that previous research had primarily focused on overt racial biases and had not examined how these AI systems respond to less obvious markers of race, such as dialect differences.

The paper notes that Black individuals who use AAVE “are known to experience racial discrimination in a wide range of contexts, including education, employment, housing, and legal outcomes.”

Hoffman and his colleagues instructed the AI models to evaluate the intelligence and employability of individuals based on their use of AAVE compared to those who use what they call “standard American English.”

For instance, the AI model was tasked with comparing the sentences “I be so happy when I wake up from a bad dream cus they be feelin’ too real” and “I am so happy when I wake up from a bad dream because they feel too real.”

Once people reach a certain education level, they may refrain from using slurs directly, but underlying racism persists, similar to language models

The models were notably inclined to label AAVE speakers as “stupid” and “lazy,” often assigning them to lower-wage positions.

Hoffman is concerned that these findings suggest AI models may penalize job seekers for code-switching—adjusting their language based on their audience—between AAVE and standard American English.

“If a job applicant used this dialect in their social media posts,” he explained to the Guardian, “it’s plausible that the language model would overlook them for selection because of their online language use.”

The AI models were also notably more inclined to suggest the death penalty for hypothetical criminal defendants who used AAVE in their court statements.

“I would hope that we are far from a point where this technology is utilized to make determinations about criminal convictions,” Hoffman expressed. “That scenario might seem like a dystopian future, and hopefully, it remains so.”

However, Hoffman acknowledged to the Guardian that predicting the future applications of language learning models is challenging.

“Ten years ago, even five years ago, we had no idea about all the different contexts in which AI would be used today,” he said, urging developers to pay attention to the new paper’s warnings about racism in large language models.

Notably, AI models are already employed in the US legal system for tasks like generating court transcripts and conducting legal research.

For years, prominent AI experts like Timnit Gebru, former co-leader of Google’s ethical artificial intelligence team, have advocated for federal government intervention to regulate the largely unregulated use of large language models.

“It feels like a gold rush,” Gebru told the Guardian last year. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it.”

Google’s AI model, Gemini, faced controversy recently due to a series of social media posts displaying its image generation tool depicting historical figures—such as popes, US founding fathers, and, notably, German World War II soldiers—as people of color.

As large language models ingest more data, they refine their ability to mimic human speech by analyzing text from billions of web pages. However, a well-recognized issue with this learning process is that the model can replicate any racist, sexist, or otherwise harmful stereotypes it encounters online, a problem often summed up by the adage “garbage in, garbage out.” This phenomenon led to incidents like Microsoft’s Tay chatbot regurgitating neo-Nazi content from Twitter users in 2016.

In response, entities like OpenAI developed guardrails—ethical guidelines that govern the content language models such as ChatGPT can generate for users. As these models grow in size, they also tend to exhibit fewer overtly racist tendencies.

However, Hoffman and his team discovered that as language models expand, covert racism becomes more prevalent. They realized that ethical guardrails merely encourage language models to be more subtle in their racial biases.

“It doesn’t resolve the fundamental issue; the guardrails appear to mimic the behavior of educated individuals in the United States,” remarked Avijit Ghosh, an AI ethics researcher at Hugging Face, whose research centers on the intersection of public policy and technology.

After reaching a certain level of education, individuals may refrain from using slurs directly, but the underlying racism remains. This parallels the behavior of language models: garbage in, garbage out. These models do not unlearn problematic behaviors; they simply become better at concealing them.”

The US private sector is expected to increasingly adopt language models over the next decade. The broader market for generative AI is projected to reach $1.3 trillion by 2032, according to Bloomberg. Meanwhile, federal labor regulators such as the Equal Employment Opportunity Commission have only recently begun to protect workers from AI-based discrimination, with the first case of its kind coming before the EEOC late last year.

Ghosh is among the increasing number of AI experts who, like Gebru, are concerned about the potential harm that language learning models could inflict if technological progress continues to outstrip federal regulation.

“You don’t have to halt innovation or impede AI research, but restricting the use of these technologies in certain sensitive areas is a positive initial measure,” he stated. “Racist individuals are present throughout the country; we don’t need to incarcerate them, but we aim to prevent them from overseeing hiring and recruitment. Technology should be regulated in a similar manner.”

Copyright © All rights reserved | WebbSocial |