Unmasking Racial Bias in AI: An Examination of Large Language Models and Their Treatment of African American English


Artificial Intelligence Bias: Research from Cornell suggests that ChatGPT, Copilot, and similar AI systems are more inclined to recommend capital punishment for defendants who speak African American English

Developers of large language models (LLMs) often assume they have eliminated racial prejudice from their systems. Recent tests, however, reveal that the original bias persists in a subtler form: the models remain prejudiced against specific racial groups.

A new study from Cornell University suggests that large language models (LLMs) are more likely to show prejudice towards people who use African American English. The study finds that the dialect a person uses can shape how artificial intelligence (AI) systems perceive them, altering assessments of their character, employability, and criminality.

The research focused on large language models such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and France's Mistral 7B, all deep-learning systems trained to generate human-like text.

The researchers used a technique called "matched guise probing": they presented the LLMs with text written in both African American English and Standard American English, then measured which traits the models attributed to the speakers based solely on the language used.
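The idea behind matched guise probing can be sketched in a few lines of Python. The sketch below is illustrative only: the `toy_score` stub stands in for a real scoring function (in the actual study, association scores came from the LLMs' own token probabilities), and the example sentence pair, trait list, and function names are assumptions, not material from the study.

```python
# Matched guise probing, minimal sketch: compare how strongly a model
# associates traits with the same content in two dialect "guises".

PAIRS = [
    # (African American English guise, Standard American English guise)
    # Illustrative pair, not taken from the study's stimuli.
    ("I be so happy when I wake up from a bad dream cus they be feelin too real",
     "I am so happy when I wake up from a bad dream because they feel too real"),
]

TRAITS = ["intelligent", "lazy", "brilliant", "dirty"]

def probe(score, pairs, traits):
    """For each trait, average the score gap between the AAE and SAE guises.

    `score(text, trait)` should return how strongly the model associates
    the trait with the speaker of `text` (e.g. a log-probability).
    A positive gap means the trait is attributed more to the AAE guise.
    """
    gaps = {}
    for trait in traits:
        diffs = [score(aae, trait) - score(sae, trait) for aae, sae in pairs]
        gaps[trait] = sum(diffs) / len(diffs)
    return gaps

def toy_score(text, trait):
    # Stub standing in for an LLM: uses nonstandard spellings as a toy
    # signal, attributing negative traits to the nonstandard guise.
    nonstandard = sum(w in text.split() for w in ("be", "cus", "feelin"))
    negative = trait in ("lazy", "dirty")
    return nonstandard if negative else -nonstandard

print(probe(toy_score, PAIRS, TRAITS))
```

With a real model, `score` would query the probability the model assigns to each trait word given the guise text; the comparison logic stays the same.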

Valentin Hofmann, a researcher at the Allen Institute for AI, says the findings show that GPT-4 is more likely to recommend the death penalty for defendants whose statements are written in English associated with African Americans, even when their race is never specified.

Hofmann underscored these issues in a post on the social media network X (formerly Twitter), stressing the urgent need to address the prejudices found in AI systems built on large language models (LLMs), particularly as they see growing use in fields like business and law.

The research also showed that LLMs tend to assign African American English speakers to less prestigious jobs than speakers of Standard American English, even without any knowledge of the speakers' racial backgrounds.

Interestingly, the study found that larger LLMs are better at understanding African American English and are more likely to avoid overtly racist language. Model size, however, had no effect on their covert prejudices.

Hofmann cautioned against reading the decline of overt racism in LLMs as a sign that racial prejudice has been eradicated. Rather, he stressed, the research shows a shift in how racial bias manifests within these models.

The conventional approach of training large language models (LLMs) with human feedback does not adequately address covert racial prejudice, the research suggests.

Instead of reducing prejudice, this method may inadvertently teach LLMs to superficially conceal the racial biases they continue to harbour internally.


Copyright © 2024 Firstpost. All rights reserved.
