Whistleblower Raises Alarm on OpenAI-Powered Microsoft Copilot’s Ability to Generate Harmful Imagery

Microsoft whistleblower warns US regulators that OpenAI-powered Copilot can generate harmful images 'too easily'

Like Google's Gemini AI image generator, Microsoft's Copilot, which is built on OpenAI's technology, can produce highly deceptive and potentially harmful fake images. A whistleblower has contacted US regulators and Microsoft's board of directors to warn them about the issue.

Shane Jones, a Microsoft engineer, has raised concerns about inappropriate and harmful images generated by the company's AI image-generation tool. Considering himself a whistleblower, Jones has written letters to US regulators and Microsoft's board of directors urging them to act, according to an Associated Press report.

Jones recently met with US Senate staffers to discuss his concerns and sent a letter to the Federal Trade Commission (FTC). The FTC confirmed that it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and appreciated Jones' efforts in testing the technology, but it recommended that he use internal reporting channels to investigate and address the issues.

Jones, a lead principal software engineer, has spent three months trying to address safety concerns around Microsoft's Copilot Designer. He warned that the tool can produce harmful content even from innocuous prompts: given the prompt 'car accident', for example, Copilot Designer may generate inappropriate, sexualized images of women.

In his letter to FTC Chair Lina Khan, Jones stressed that Copilot Designer can produce harmful content even in response to benign user prompts. He also flagged other problematic output, including violent imagery, political bias, underage drinking and drug use, copyright violations, conspiracy theories, and religious imagery.

Jones had previously raised these concerns publicly. Microsoft initially directed him to take his findings to OpenAI, and in December he posted an open letter to OpenAI on LinkedIn, which Microsoft's legal team demanded he take down. Undeterred, Jones went on to raise his concerns with the US Senate's Commerce Committee and the Washington State Attorney General's office.

Jones noted that while the core problem lies with OpenAI's DALL-E model, users who generate images through OpenAI's ChatGPT are less likely to encounter harmful output because of the different safeguards the two companies have put in place.

In a written message, he said that many of the issues with Copilot Designer are already addressed by ChatGPT's own safeguards.

The 2022 debut of impressive AI image generators such as OpenAI's DALL-E 2, followed by the launch of ChatGPT, sparked widespread public interest and prompted major technology companies such as Microsoft and Google to build their own versions.

Without strong safeguards, however, the technology poses risks, allowing users to create harmful "deepfake" images of political figures, war zones, or non-consensual nudity, falsely attributing them to real people.

Amid these concerns, Google has temporarily paused the Gemini chatbot's image-generation feature, largely in response to controversies over its depictions of race and ethnicity, such as placing people of color in Nazi-era uniforms.

(With inputs from agencies)
