Unveiling Racial Bias in Large Language Models: A Deep Dive into AI’s Discrimination Against African American English

Activities

Divisions

Performances

Activities

Divisions

Performances

Prejudiced AI: A study by Cornell University reveals that ChatGPT, Copilot, and others are more inclined to recommend capital punishment for African-American defendants

Individuals pursuing LLMs are under the impression that they have successfully eliminated racial prejudice. Nonetheless, recent tests indicate that the initial bias persists and has merely undergone minor alterations. It remains discriminatory towards specific racial groups.

Cornell University's latest research implies that substantial language models (LLMs) may show prejudice towards users who communicate in African American English. The study shows that the particular dialect used can impact the way AI algorithms interpret people, influencing their assessments of individuals' personalities, job suitability, and possible criminal behavior.

The research concentrated on extensive language models such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and French Mistral 7B. These language models are advanced learning algorithms engineered to produce text that resembles human writing.

Scientists carried out a study known as "matched guise probing," where they provided cues in both African American English and Standard American English to the LLMs. Following this, they examined how these models discerned different traits of individuals from the language utilized.

Researcher Valentin Hofmann from the Allen Institute for AI suggests that the study's findings show a tendency of GPT-4 technology to deliver death sentences more frequently to defendants using English typically linked with African Americans, even when their race is not revealed.

In a post on the social media network X, previously known as Twitter, Hofmann underscored the critical need to address biases in AI systems that employ large language models (LLMs). This is particularly essential in areas like business and jurisdiction where the use of these systems is on the rise.

The research also disclosed that LLMs generally presume that individuals who speak African American English occupy less esteemed positions compared to those speaking Standard English, even when they aren't aware of the speakers' racial backgrounds.

Fascinatingly, the study discovered that a bigger LLM exhibited a better comprehension of African American English, and it tended to steer clear of overtly racist terms. Nonetheless, the magnitude of the LLM did not influence its hidden prejudiced tendencies.

Hofmann warned not to view the reduction in blatant prejudice in LLMs as an indication that racial bias has been eliminated. He emphasized that the research showcases a change in how racial bias is expressed in LLMs.

The conventional approach of educating big language models (LLMs) through human feedback falls short in tackling hidden racial prejudice, as suggested by the research.

Instead of reducing prejudice, this method might inadvertently teach LLMs to subtly mask their inherent racial biases while still preserving them at a more profound level.

Look for us on YouTube

Top Programs

Related Topics

NVIDIA's Jensen Huang believes AI hallucinations can be rectified and anticipates artificial general intelligence will be achieved in about 5 years

Apple has at long last introduced its multimodal AI model for generating text and images, MM1

Microsoft has recruited DeepMind's co-founder, Mustafa Suleyman, to head their new consumer AI team

Samsung and Rebellions, South Korean chip makers, intend to outcompete NVIDIA

AI hallucinations can be addressed, and artificial general intelligence is roughly 5 years from now, according to NVIDIA’s Jensen Huang

Apple has recently unveiled MM1, its multimodal AI model designed to generate text and images

Mustafa Suleyman, co-founder of DeepMind, is now leading Microsoft's new consumer-focused AI team

Samsung and Rebellions, chip producers from South Korea, aim to surpass NVIDIA

is available on YouTube

All rights reserved by Firstpost, copyright 2024.

You May Also Like

More From Author

+ There are no comments

Add yours