Unmasking Vulnerabilities: How Hackers Can Intercept Conversations with AI Chatbots like ChatGPT – A Study by Ben-Gurion University


Study reveals hackers can effortlessly intercept your conversations with ChatGPT and other AI services

Researchers in Israel have identified serious privacy risks in multiple AI chatbots, including ChatGPT. Despite the encryption developers employ, attackers can still intercept conversations using side-channel attacks.

Recent research from Ben-Gurion University in Israel has uncovered substantial privacy weaknesses in several AI chatbots, raising concerns about the confidentiality of personal conversations.

Yisroel Mirsky, who leads the Offensive AI Research Lab at Ben-Gurion University, warns that malicious actors can exploit these weaknesses to spy on conversations conducted on platforms such as ChatGPT.

Mirsky stressed that anyone sharing the same Wi-Fi network or LAN as a chat participant, or even a remote attacker on the internet, can covertly read and track the conversation.

The study classifies these vulnerabilities as "side-channel attacks": rather than breaching security defences directly, a third party gleans information indirectly from metadata and other incidental signals.

Unlike conventional hacking, which relies on breaking through firewalls, side-channel attacks exploit weaknesses in how encryption is applied. Even with the protections put in place by AI developers such as OpenAI, Mirsky's group found flaws in the implementation that leave the content of messages open to interception.

Although side-channel attacks are less invasive than direct intrusions, they can be highly damaging: the researchers were able to infer chat prompts with 55 per cent accuracy, making it easy for malicious actors to detect sensitive topics.

The study focuses mainly on OpenAI's encryption practices, but it indicates that, with the exception of Google's Gemini, most major chatbots are likely vulnerable to similar attacks.

At the core of the weakness is the chatbots' use of "tokens", the units of text that let AI models stream responses to users smoothly. Although the messages themselves are encrypted, the way tokens are transmitted creates an overlooked security risk.

Because tokens are delivered in real time as a reply is generated, anyone observing the traffic can infer conversational cues, much like eavesdropping on a discussion through a closed door.
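As a rough illustration of the principle (a minimal sketch, not the researchers' code), assume each streamed token travels in its own encrypted record with a roughly constant overhead; the record sizes alone then trace the "shape" of the hidden reply:

```python
# Minimal sketch of a token-length side channel (illustrative only,
# not the study's code). Assumption: each token is sent in its own
# encrypted record, with a roughly constant per-record overhead.

TLS_OVERHEAD = 54  # assumed fixed overhead per record, in bytes (hypothetical)

# Hypothetical sizes of successive encrypted records captured off the network.
observed_record_sizes = [57, 61, 58, 63, 60]

# Encryption hides the characters of each token, but not how many there are.
token_lengths = [size - TLS_OVERHEAD for size in observed_record_sizes]
print(token_lengths)  # [3, 7, 4, 9, 6] -- the length pattern of the reply
```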

To validate their findings, Mirsky's team used a second AI model to analyse the raw data captured from the side channel. In testing, it predicted conversation prompts with a high degree of accuracy, underscoring how serious the weakness is.
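The sketch below stands in for that analysis in a deliberately simplified way: naive matching against a handful of hypothetical phrases, with word lengths as a crude proxy for token lengths, rather than the trained model the researchers used.

```python
# Simplified stand-in for the study's AI-based analysis (illustrative only).
# A recovered sequence of lengths is compared against candidate phrases;
# word character counts serve as a crude proxy for token lengths.

def length_signature(phrase: str) -> list[int]:
    return [len(word) for word in phrase.split()]

# Hypothetical candidate prompts an attacker might test against.
candidates = [
    "how do I get an abortion",
    "what is the weather today",
    "tell me a joke about cats",
]

# Pretend this signature was recovered from the side channel.
observed = length_signature("how do I get an abortion")

matches = [p for p in candidates if length_signature(p) == observed]
print(matches)  # ['how do I get an abortion']
```

Even this naive matching shows how a distinctive length pattern can single out a sensitive query; the team's AI-assisted analysis extracts far more from the same signal.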

Responding to the concerns, Microsoft said the vulnerability in its Copilot AI is unlikely to expose users' personal details, but the company committed to addressing the issue promptly with updates to protect its customers.

The implications are significant for sensitive subjects such as abortion or LGBTQ issues, where confidentiality is paramount. Exploiting the flaw could have severe consequences, potentially endangering people who seek information about these topics.

The ongoing debate over AI ethics and privacy underscores the urgent need for robust security safeguards to protect users' privacy in AI-powered interactions.
