Whistleblower Warns of Harmful Imagery Risks in OpenAI-Powered Microsoft Copilot: A Deep Dive into AI-Generated Content Concerns


Microsoft insider warns US authorities that OpenAI-backed Copilot can generate damaging visuals 'too effortlessly'

Like Google's Gemini AI image generator, Microsoft's Copilot, which is powered by OpenAI's ChatGPT, can produce deeply troubling yet highly convincing fake images. A whistleblower has alerted US regulators and Microsoft's board to the concern.

A Microsoft employee has raised concerns about harmful and offensive images created by the company's AI image-generation tool. Shane Jones, who considers himself a whistleblower, has written to US regulators and Microsoft's board of directors urging them to act, according to an Associated Press report.

Jones recently met with US Senate staff to discuss his concerns and also sent a letter to the Federal Trade Commission (FTC). The FTC confirmed it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and thanked Jones for his efforts in studying the technology, but recommended he use internal reporting channels to investigate and address the issues.

Jones, a senior software engineering manager, has spent three months raising safety concerns about Microsoft's Copilot Designer. He warned that the tool can produce harmful content even from innocuous prompts: a query such as 'car accident', for example, can return images that sexually objectify women.

In his letter to FTC Chair Lina Khan, Jones described the risks posed by Copilot Designer, flagging not only sexualized imagery but also violent content, political bias, depictions of underage drinking and drug use, copyright violations, conspiracy theories, and religious symbols.

Jones had previously gone public with his concerns. Microsoft directed him to take his findings to OpenAI, and he did. In December, he posted a public letter to OpenAI on LinkedIn, which Microsoft's legal department insisted he remove. He has nevertheless pressed on, taking his concerns to the US Senate's Commerce Committee and the office of the Washington State Attorney General.

Jones noted that although the primary concern lies with OpenAI's DALL-E model, users who generate images through OpenAI's ChatGPT are less likely to encounter harmful results, because the two companies have put different safeguards in place.

"Through a text message, he expressed that ChatGPT's inherent safety measures already handle a lot of the issues associated with Copilot Designer."

The debut of powerful AI image generators in 2022, notably OpenAI's DALL-E 2, followed by the launch of ChatGPT, sparked considerable public interest and prompted technology giants such as Microsoft and Google to build their own versions.

Without strong safeguards, however, the technology carries risks. It lets users generate damaging "deepfake" images of politicians, fabricated war scenes, or nonconsensual explicit content that falsely depicts real people.

Google has temporarily disabled the Gemini chatbot's image-generation feature over concerns about how it portrayed race and ethnicity, including problematic images showing people of color in Nazi-era military uniforms.

(Incorporating information from various sources)


