Unmasking the Subtle Racial Bias in AI: A Deep Dive into Large Language Models

3 min read

Activities

Divisions

Performances

Activities

Divisions

Performances

Discriminatory AI: A study from Cornell discovers that ChatGPT, Copilot, and others are more prone to recommend death sentences for African-American defendants

Those pursuing LLMs often think they've eliminated racial prejudice. Yet, recent tests reveal that the initial prejudice persists and has only marginally changed. It remains prejudiced against particular racial groups.

A recent research conducted by Cornell University implies that large language models (LLMs) tend to show prejudice towards users who communicate in African American English. The study reveals that the specific dialect used can impact the way artificial intelligence (AI) algorithms view people, which can subsequently influence decisions about their personality, job suitability, and possible legal issues.

The research concentrated on extensive language models such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and French Mistral 7B. These LLMs are sophisticated learning algorithms created to produce text that resembles human writing.

Scientists carried out a method known as "matched guise probing," wherein they offered cues in both African American English and Standard American English to the LLMs. They subsequently examined how these models recognized different attributes of people according to the language spoken.

Valentin Hofmann, a scholar at the Allen Institute for AI, suggests that the research findings show GPT-4 technology tends to deliver death penalties to defendants using English typically linked with African Americans, even when their race isn't revealed.

Hofmann underscored these issues in a message on the social media network X (previously known as Twitter), stressing the immediate necessity to address the prejudices found in AI systems that use large language models (LLMs). This is particularly crucial in areas like business and law, where these systems are becoming more and more prevalent.

The research also showed that LLMs often presume that individuals who speak African American English have less esteemed occupations compared to those who use Standardized English, even without knowledge about the speakers' racial backgrounds.

Fascinatingly, the study discovered that the bigger the LLM, the greater its comprehension of African American English. It would also be more likely to steer clear of overtly racist language. Yet, the LLM's size had no impact on its subtle, hidden prejudices.

Hofmann warned not to take the reduction in blatant racism in LLMs as an indication that racial prejudice has been completely eliminated. Rather, he emphasized that the research shows a change in how racial bias is displayed in LLMs.

The conventional approach of educating substantial language models (LLMs) through human response doesn't efficiently tackle hidden racial prejudice, as the research suggests.

Instead of reducing prejudice, this method can inadvertently teach LLMs to "cosmetically hide" their inherent racial biases, while still preserving them at a more profound level.

Look for us on YouTube

Headlining Programs

Associated Articles

NVIDIA's Jensen Huang believes AI hallucinations can be fixed and general artificial intelligence is roughly five years from now

Apple finally releases its multimodal AI model, MM1, designed for generating text and images

Microsoft recruits Mustafa Suleyman, a co-founder of DeepMind, to head their new consumer AI division

Samsung and Rebellions, South Korean chip production companies, aim to outdo NVIDIA

AI hallucinations can be rectified, and general artificial intelligence might be about five years away, says NVIDIA’s Jensen Huang

Apple finally introduces MM1, its multimodal AI model for creating text and visuals

Microsoft brings on board Mustafa Suleyman, co-founder of DeepMind, to spearhead its new consumer AI group

Samsung and Rebellions, chip makers from South Korea, are strategizing to topple NVIDIA

Available on YouTube

Firstpost holds all rights, protected by copyright, as of 2024

You May Also Like

More From Author

+ There are no comments

Add yours