Whistleblower Alerts: How Microsoft’s OpenAI-Powered Copilot Could Generate Harmful Imagery



Like Google's Gemini AI image generator, Microsoft's Copilot, which is powered by OpenAI technology, can create deeply troubling yet highly believable fake images. An insider has raised the concern with US officials and Microsoft's board in letters.

A Microsoft employee has raised concerns about harmful and offensive images created by the company's AI image-generation technology. Shane Jones, who describes himself as a whistleblower, has written to both US regulators and Microsoft's board of directors, urging them to intervene, according to an Associated Press report.

Jones recently met with US Senate staffers to discuss his concerns and also sent a letter to the Federal Trade Commission (FTC). The FTC confirmed it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and appreciated Jones' efforts in testing the technology, but recommended he use its internal reporting channels to investigate and address the issues.

Jones, a senior software engineering manager, has spent the past three months trying to address safety concerns with Microsoft's Copilot Designer. He warned that the tool can produce harmful content even from innocuous prompts.

In his letter to FTC Chair Lina Khan, Jones said Copilot Designer can generate harmful material even when the user's prompts are benign: given the term 'car accident', for example, it occasionally produces images that sexually objectify women. He also flagged other troubling outputs, including violent scenes, political bias, underage drinking and drug use, copyright infringement, conspiracy theories, and religious imagery.

Jones has raised these concerns publicly before. Microsoft initially directed him to take his findings to OpenAI, which he did: he wrote to OpenAI in December and posted the letter on LinkedIn, prompting Microsoft's legal team to demand its removal. He has nonetheless persisted, taking his concerns to the US Senate's Commerce Committee and the Washington State Attorney General's office.

Jones noted that although the underlying problem lies with OpenAI's DALL-E model, users who generate images through OpenAI's ChatGPT are less likely to encounter harmful outputs, because the two companies have implemented different safeguards.

He said via text that many of the issues with Copilot Designer are already covered by ChatGPT's built-in safeguards.

The arrival of prominent AI image generators in 2022, including OpenAI's DALL-E 2 and the subsequent launch of ChatGPT, sparked broad public interest and prompted major technology companies such as Microsoft and Google to develop their own versions.

Without robust safeguards, however, the technology poses risks, allowing users to generate harmful "deepfake" images depicting political figures, war scenes, or non-consensual nudity, falsely attributed to real people.

Amid similar concerns, Google temporarily paused the Gemini chatbot's image-generation feature following controversy over its depictions of race and ethnicity, such as showing people of color in Nazi-era military uniforms.

(Incorporating information from various sources)


Firstpost holds the copyright, with all rights reserved, as of 202
