Microsoft Whistleblower Warns of Harmful Imagery Created by OpenAI-powered Copilot: An Insider’s Perspective on the Risks of AI Image Generators



A Microsoft whistleblower has told US authorities that the company's Copilot, which is powered by OpenAI technology, can easily generate harmful images. Like Google's Gemini AI, which has produced misleading fake images, Copilot can also create troubling and highly realistic counterfeit pictures. The whistleblower raised these concerns in letters to US regulators and Microsoft's board.

Shane Jones, a Microsoft engineer, has raised concerns about offensive and harmful images created by the company's AI image-generator tool. Identifying himself as a whistleblower, Jones has written to US regulators and Microsoft's board urging them to act, according to an Associated Press report.

Jones recently met with US Senate staffers to discuss his concerns and sent a letter to the Federal Trade Commission (FTC). The FTC confirmed it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and that it appreciates Jones' effort in studying and testing the technology, but it recommended that he use the company's internal reporting channels to investigate and resolve the issues.

Jones, a chief software engineering manager, has spent the past three months working to address safety issues with Microsoft's Copilot Designer. He stressed that the tool can produce harmful content even in response to benign prompts.

In his letter to FTC Chair Lina Khan, Jones said that Copilot Designer can produce harmful content even from benign user prompts. A prompt as simple as 'car accident', for example, can sometimes yield sexually objectified images of women. He also flagged other troubling output, including violent themes, political bias, underage drinking and drug use, copyright violations, unfounded conspiracy theories, and religious imagery.

Jones has voiced his concerns publicly before. Microsoft initially directed him to take his findings to OpenAI, which he did. In December, he posted a letter addressed to OpenAI on LinkedIn, which Microsoft's legal department insisted he remove. Despite these setbacks, Jones has persisted, bringing his concerns to the US Senate's Commerce Committee and the Washington State Attorney General's office.

Jones noted that while the underlying problem lies with OpenAI's DALL-E model, users who generate images through OpenAI's ChatGPT face a lower risk of harmful output because the two companies have put different safeguards in place.

"He communicated through a message that numerous worries regarding Copilot Designer are already addressed by the inherent protective measures of ChatGPT."

In 2022, striking AI image generators such as OpenAI's DALL-E 2, followed by the launch of ChatGPT, generated enormous public interest and prompted tech giants like Microsoft and Google to build their own versions.

Without strong safeguards, however, the technology carries risks, letting users create harmful "deepfake" images that depict political figures, war scenes, or nonconsensual nudity and falsely attribute them to real people.

Amid such concerns, Google temporarily suspended the Gemini chatbot's image-generation feature, largely in response to controversy over its depictions of race and ethnicity, such as showing people of color in Nazi-era military uniforms.

(Incorporating information from various sources)
