Microsoft Whistleblower Raises Alarm on OpenAI-Powered Copilot’s Potential for Harmful Imagery Creation




Like Google’s Gemini AI image generator, Microsoft’s Copilot, backed by OpenAI’s ChatGPT, can produce deeply problematic yet convincing fake images. A whistleblower has alerted US regulators and Microsoft’s board about the issue.

A Microsoft employee has raised concerns about harmful and inappropriate images generated by the company's AI image-creation tool. Shane Jones, who considers himself a whistleblower, has written to US regulators and Microsoft's board of directors, urging them to intervene, according to an Associated Press report.

Jones recently met with US Senate staffers to share his concerns and also sent a letter to the Federal Trade Commission (FTC). The FTC confirmed it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and appreciated Jones' efforts in testing the technology, but recommended he use the company's internal reporting channels to investigate and resolve the issues.

Jones, a senior software engineering manager, has spent three months trying to address safety concerns about Microsoft's Copilot Designer. He warned that the tool can generate harmful content even from innocuous prompts. Given the prompt 'car accident,' for instance, Copilot Designer can produce inappropriate, sexually objectified images of women.

In his letter to FTC Chair Lina Khan, Jones stressed that Copilot Designer poses significant risks because it generates harmful content even in response to benign user prompts, citing the 'car accident' example. He also flagged other problematic output, including violent scenes, political bias, underage drinking and drug use, copyright infringement, unfounded conspiracy theories, and religious imagery.

Jones had previously gone public with his concerns. Microsoft initially advised him to take his findings to OpenAI, which he did. In December he posted an open letter to OpenAI on LinkedIn, but Microsoft's legal team demanded he take it down. Jones nonetheless persisted, taking his concerns to the US Senate's Commerce Committee and the Washington State Attorney General's office.

Jones said the core problem lies with OpenAI's DALL-E model, but users who generate images through OpenAI's ChatGPT are less likely to encounter harmful output because the two companies have implemented different safeguards.

In a text message, he said that most of the issues with Copilot Designer are already addressed by ChatGPT's built-in safeguards.

The 2022 debut of impressive AI image generators such as OpenAI's DALL-E 2, followed by the launch of ChatGPT, sparked widespread public interest and prompted major technology companies such as Microsoft and Google to build their own versions.

Without strong safeguards, however, the technology can be used to create harmful "deepfake" images depicting political figures, war scenes, or nonconsensual nudity, falsely attributing them to real people.

Google has temporarily suspended the Gemini chatbot's image-generation feature following a series of problems, notably depictions of race and ethnicity such as showing people of color in Nazi-era uniforms.

(With inputs from agencies)


Related Stories

NVIDIA's Jensen Huang says AI hallucinations are solvable, expects artificial general intelligence in about 5 years

OpenAI's Sora can generate realistic nude videos, developers rushing out a fix

Apple finally unveils MM1, its multimodal AI model for generating text and images

Microsoft hires DeepMind cofounder Mustafa Suleyman to lead its new consumer AI division


Copyright © 2024 Firstpost. All rights reserved.
