Racial Bias in AI: How Language Models Perpetrate Racial Stereotypes – A Cornell Study Analysis

3 min read

Occasions

Segments

Programs

Occasions

Segments

Programs

Bias in AI: Research reveals ChatGPT, Copilot, and others are more prone to recommend capital punishment for African-American defendants, according to a study by Cornell University

LLM researchers thought they had eliminated racial prejudice. However, recent tests indicate that the original bias persists, albeit with minor changes. It remains prejudiced against specific racial groups.

A recent investigation by Cornell University implies that sizable language models (LLMs) may show more prejudice against individuals who use African American English. The study reveals that the particular dialect used can sway how artificial intelligence (AI) algorithms view people, impacting assessments of their personality, job suitability, and possible law-breaking tendencies.

This research centered on substantial language models such as OpenAI's ChatGPT and GPT-4, Meta's LLaMA2, and French Mistral 7B. These LLMs are advanced learning systems created to produce text that resembles human writing.

Scientists carried out a technique called "matched guise probing," where they gave cues in both African American English and Standardized American English to the language learning models. They then studied how these models determined different traits of individuals through the language they used.

Researcher Valentin Hofmann from the Allen Institute for AI suggests that the results from the study show a tendency for GPT-4 technology to pass death sentences on defendants who use English often linked with African Americans, despite no explicit mention of their race.

In a message shared on the social media site X (previously known as Twitter), Hofmann drew attention to the serious issues surrounding prejudices in AI systems that use large language models (LLMs). He particularly stressed on areas like business and law where these systems are being used more frequently.

The research additionally showed that LLMs often presume that individuals who speak African American English have less esteemed occupations compared to those who use Standardized English, regardless of not knowing the speakers' racial backgrounds.

Fascinatingly, the study revealed that the bigger the LLM, the more comprehensively it grasped African American English and was more likely to steer clear of overtly prejudiced language. Nevertheless, the magnitude of the LLM didn't influence its subtle, hidden biases.

Hofmann warned not to view the reduction in blatant racism in LLMs as an indication that racial prejudice has been eliminated. He emphasized that the research reveals a transformation in the way racial bias is expressed in LLMs.

The conventional approach of educating big language models (LLMs) through human feedback doesn't adequately tackle hidden racial prejudice, as the research suggests.

Instead of reducing prejudice, this method may just teach LLMs to subtly hide their inherent racial biases, while continuing to harbor them on a more profound level.

Locate us on YouTube

Highlighted Programs

Connected Articles

NVIDIA's Jensen Huang believes AI hallucinations can be resolved, predicts artificial general intelligence in around 5 years

Apple finally unveils its MM1, a multimodal AI model for generating text and images

Microsoft recruits Mustafa Suleyman, cofounder of DeepMind, to head their new consumer AI division

Samsung and Rebellions, South Korean chip producers, aim to challenge NVIDIA

AI hallucinations can be dealt with, according to NVIDIA's Jensen Huang, who also anticipates artificial general intelligence in about 5 years

Apple has rolled out its MM1, a multimodal AI model designed for text and image creation

Microsoft brings onboard DeepMind's cofounder Mustafa Suleyman to steer their fresh consumer AI team

Samsung and Rebellions, chip makers from South Korea, have set their sights on overtaking NVIDIA

Available on YouTube

Firstpost holds all rights, protected under copyright, as of 2024

You May Also Like

More From Author

+ There are no comments

Add yours