Unmasking Racial Bias in AI: A Deep Dive into the Influence of Language and Dialect on Large Language Models

3 min read


Prejudiced AI: ChatGPT, Copilot more prone to impose death sentences on African-American defendants, Cornell research finds

Developers of LLMs often assume they have eliminated racial prejudice from their models. Yet recent research suggests the original bias persists in a subtler form: the models remain prejudiced against specific ethnicities.

Recent research from Cornell University suggests that large language models (LLMs) tend to display prejudice against users who write in African American English. The study finds that the dialect a person uses can shape how artificial intelligence (AI) algorithms perceive them, influencing judgments about their character, employability, and likelihood of criminal behavior.

The research focused on large language models such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and the French-developed Mistral 7B. These models are deep learning systems trained to generate text that resembles human writing.

The researchers employed a technique known as "matched guise probing": they prompted the large language models with meaning-matched texts written in African American English and in Standard American English, then measured which traits the models attributed to the speakers based on the language alone.
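The idea behind matched guise probing can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the study's actual code: the `trait_score` function below is a placeholder standing in for a real LLM query (in the study, the models were prompted with the text and the probabilities assigned to trait adjectives were read off), and the example sentence pairs are illustrative.

```python
# Minimal sketch of matched guise probing, assuming a stand-in scorer
# in place of a real LLM call.

# Meaning-matched guises of the same sentence (illustrative examples):
# African American English (AAE) paired with Standard American English (SAE).
AAE_SAE_PAIRS = [
    ("I be so happy when I wake up", "I am so happy when I wake up"),
    ("He finna go to the store", "He is about to go to the store"),
]

TRAITS = ["intelligent", "lazy", "brilliant", "dirty"]


def trait_score(text: str, trait: str) -> float:
    """Stand-in for an LLM's probability that `trait` describes the speaker.

    A real implementation would prompt the model with something like
    'A person who says "{text}" is {trait}' and read off the token
    probability. This toy version returns a fixed neutral score, so the
    sketch runs without any model access.
    """
    return 0.5  # placeholder; replace with a real model query


def guise_gap(pairs, trait):
    """Average score difference (AAE minus SAE) for one trait.

    A positive gap would mean the model associates the trait more
    strongly with the AAE guise than with the matched SAE guise.
    """
    diffs = [trait_score(aae, trait) - trait_score(sae, trait)
             for aae, sae in pairs]
    return sum(diffs) / len(diffs)


gaps = {trait: guise_gap(AAE_SAE_PAIRS, trait) for trait in TRAITS}
print(gaps)
```

Because the two guises carry the same meaning and only the dialect differs, any consistent gap in trait scores can be attributed to the dialect itself rather than the content of the sentence.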

According to Valentin Hofmann, a researcher at the Allen Institute for AI, the findings show that GPT-4 is more likely to hand down death sentences to defendants who use English typically associated with African Americans, even when their race is never disclosed.

In a post on the social media network X (formerly Twitter), Hofmann underscored the urgent concerns raised by the biases inherent in AI systems built on large language models, particularly in business and legal settings where such systems are increasingly deployed.

The research also showed that LLMs tend to assume that speakers of African American English hold less prestigious jobs than speakers of Standard English, again without any knowledge of the speakers' racial backgrounds.

Surprisingly, the study found that the larger the LLM, the better it understands African American English and the more reliably it avoids overtly racist language. Model size, however, had no effect on its covert prejudices.

Hofmann cautioned against reading the decline in explicit racism in LLMs as a sign that racial prejudice has been eradicated. Rather, he emphasized, the research shows a shift in how that prejudice is expressed.

The conventional approach of training large language models with human feedback does not adequately address covert racial prejudice, the research suggests.

Rather than reducing prejudice, this method may instead teach LLMs to conceal their racial biases more skillfully while preserving them at a deeper level.

Copyright © 2024 Firstpost. All rights reserved.
