Privacy at Risk: How Hackers Can Infiltrate AI Chatbots Like ChatGPT – A Revealing Study

Study reveals that hackers can effortlessly intercept communications with ChatGPT and other AI services

Research by a group of scientists in Israel has identified serious privacy weaknesses in several AI chatbots, including ChatGPT. Despite developers' efforts to secure their platforms with encryption, hackers can still sidestep those protections using side-channel attacks.

A new study by researchers at Israel's Ben-Gurion University has highlighted substantial privacy weaknesses in multiple AI chatbots, raising concerns about the confidentiality of personal conversations.

Yisroel Mirsky, who leads the Offensive AI Research Lab at Ben-Gurion University, says attackers could exploit these weaknesses to eavesdrop on conversations on platforms such as ChatGPT.

Mirsky noted that anyone on the same Wi-Fi or local area network (LAN) as a chatbot user, and even a malicious actor elsewhere on the internet, can covertly intercept and monitor these exchanges.

The study describes these as "side-channel attacks," a technique in which a third party passively gleans information from metadata or other indirect signals rather than breaking through security defences directly.

Rather than breaking through firewalls or cracking encryption outright, side-channel attacks exploit flaws in how security is implemented. Even with the encryption applied by AI providers such as OpenAI, Mirsky's group found weaknesses in how it was deployed that left the content of messages open to inference.

Although side-channel attacks are less intrusive than direct breaches, they pose serious risks: the researchers were able to infer the topic of intercepted chatbot responses with roughly 55 per cent accuracy, making it easy for a malicious observer to identify sensitive subjects.

While the research mainly examines OpenAI's encryption practices, it indicates that most chatbots, with the exception of Google's Gemini, are susceptible to similar attacks.

The weakness centres on the "tokens" chatbots use: small units of text that a model generates one at a time and streams to the user as it produces its reply. Although the traffic itself is encrypted, this token-by-token delivery introduces a risk that had not previously been considered.

With real-time access to this token data, an attacker can infer what a conversation is about, much like eavesdropping on a muffled conversation through a closed door.
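To make the risk concrete, here is a minimal, hypothetical Python sketch (not the researchers' code) of how a streamed, encrypted reply can betray token lengths. It assumes each token is sent in its own encrypted record with a fixed byte overhead, a simplification of the behaviour the study describes; the 29-byte overhead and packet sizes are illustrative values.

```python
# Illustrative sketch: how streamed, encrypted replies can leak token lengths.
# Assumes one token per encrypted record with a constant per-record overhead.

RECORD_OVERHEAD = 29  # assumed ciphertext overhead per record, in bytes

def token_lengths_from_packets(ciphertext_sizes):
    """Recover the plaintext length of each streamed token from record sizes."""
    return [size - RECORD_OVERHEAD for size in ciphertext_sizes]

# A passive observer on the same Wi-Fi/LAN only ever sees ciphertext sizes...
observed = [33, 36, 32, 37, 34]              # hypothetical captured record sizes
print(token_lengths_from_packets(observed))  # -> [4, 7, 3, 8, 5]
# The sequence of token lengths is the side channel: it narrows down which
# words the reply could contain, even though the bytes themselves stay encrypted.
```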

To validate their findings, Mirsky's group fed the raw data gathered through the side channel into a second AI model trained to reconstruct the text. Their tests showed it could predict the subject of a conversation with high likelihood, underscoring the seriousness of the weakness.
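As a rough, hypothetical stand-in for that reconstruction step, the sketch below ranks candidate replies by how closely their word lengths match the sequence recovered from the traffic; the researchers' actual system used a trained AI model rather than this simple matching.

```python
# Hypothetical follow-on to the sketch above (not the study's method): rank
# candidate replies by how well their word lengths match the length sequence
# recovered from the encrypted traffic.

def length_fingerprint(text):
    # Whitespace split is a stand-in; real chatbots use subword tokens.
    return [len(word) for word in text.split()]

def match_score(candidate, observed):
    fp = length_fingerprint(candidate)
    if len(fp) != len(observed):
        return 0.0
    return sum(a == b for a, b in zip(fp, observed)) / len(observed)

observed = [4, 7, 3, 8, 5]  # recovered token lengths (hypothetical)
candidates = [
    "This answers the question asked",
    "Sorry that was not allowed",
    "I cannot help with that",
]
for text in sorted(candidates, key=lambda c: -match_score(c, observed)):
    print(f"{match_score(text, observed):.2f}  {text}")
# The best-scoring candidate reveals the gist of the reply without decrypting
# a single byte, which is the essence of the side-channel risk.
```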

In response to these concerns, Microsoft assured users that the vulnerability affecting its Copilot AI is unlikely to expose personal details. The company nonetheless committed to rolling out updates promptly to protect its customers.

The implications of these weaknesses are significant for sensitive topics such as abortion and LGBTQ issues, where confidentiality is crucial. Exploiting them could have severe consequences, potentially endangering people who seek information on such subjects.

The growing debate over the ethics and privacy implications of AI underscores the urgent need for robust safeguards to protect user privacy in AI-powered interactions.
