Exposing the Silent Threat: How Hackers Can Intercept Your Private Conversations with AI Chatbots


Study reveals hackers can effortlessly decipher your conversations with ChatGPT and other AI services

A group of researchers in Israel has uncovered significant privacy flaws in numerous AI chatbots, including ChatGPT. Despite developers' efforts to secure these services with encryption, attackers can still eavesdrop on conversations using side-channel attacks.

A recent study conducted at Ben-Gurion University in Israel has revealed considerable privacy risks in various AI chatbots, raising concerns about the safety of private conversations.

Yisroel Mirsky, who leads the Offensive AI Research Lab at Ben-Gurion University, says that cybercriminals can exploit these weaknesses to intercept conversations on platforms such as ChatGPT.

Mirsky emphasized that anyone on the same Wi-Fi or local area network (LAN) as a chat participant, or even a remote attacker on the internet, can silently read these conversations without being detected.

The paper describes these intrusions as "side-channel attacks," a technique in which outsiders passively glean information from metadata or other secondary channels rather than breaking through security measures directly.

Rather than breaching firewalls the way conventional hacks do, side-channel attacks exploit weaknesses in how encryption is deployed. Although AI developers such as OpenAI encrypt their traffic, Mirsky's team found flaws in the way that encryption is applied, leaving the content of messages open to eavesdropping.
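
To see why encryption alone does not close the channel, note that standard ciphers hide content but preserve length. The minimal sketch below, using Python's widely available cryptography library (the library choice and sample messages are illustrative assumptions; the chat services themselves use TLS), shows that ciphertext size tracks plaintext size exactly:

```python
# Sketch: authenticated encryption hides content but not length.
# Ciphertext size = plaintext size + a fixed overhead (the 16-byte
# GCM tag here), so packet sizes remain readable metadata.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)

for msg in [b"Hi", b"Hello", b"Hello there"]:
    nonce = os.urandom(12)
    ciphertext = aead.encrypt(nonce, msg, None)
    print(len(msg), "->", len(ciphertext))  # 2->18, 5->21, 11->27
```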

Although side-channel attacks are less intrusive than direct breaches, they pose a substantial threat: the researchers were able to infer the content of chat responses with 55 per cent accuracy. That makes confidential subjects readily identifiable to malicious actors.

The research focuses mainly on OpenAI's encryption, but suggests that most chatbots, with the exception of Google's Gemini, are vulnerable to similar attacks.

These vulnerabilities stem from the way chatbots use "tokens," the small units of text that enable efficient interaction between users and AI systems. Although chatbot traffic is generally protected by encryption, many services stream each token to the user as it is generated, and this token-by-token delivery introduces a side channel that had not been considered before: the size of each encrypted packet mirrors the length of the token inside it.
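
Here is a short sketch of what that leak looks like in practice, using OpenAI's open-source tiktoken tokenizer (the sample reply and the assumption of one token per packet are illustrative):

```python
# Sketch: the sequence of token lengths forms a fingerprint of the
# reply. An eavesdropper sees only encrypted packet sizes, but since
# encryption preserves length, subtracting the fixed per-packet
# overhead recovers these token lengths without any decryption.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # GPT-3.5/4-era tokenizer

reply = "Yes, you should speak to a doctor about these symptoms."
token_ids = enc.encode(reply)

token_lengths = [len(enc.decode([t]).encode("utf-8")) for t in token_ids]
print(token_lengths)  # the fingerprint visible on the wire
```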

Real-time access to that token data lets attackers infer what a conversation is about, much like overhearing a discussion through a closed door.

To validate their findings, Mirsky's group used a second AI model to analyze the raw data captured through the side channel. Their tests reconstructed conversation content with significant accuracy, underscoring the severity of the flaw.
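
The researchers' reconstruction step relies on a trained language model; as a deliberately simplified stand-in, the sketch below matches an observed length sequence against a hypothetical candidate list, which conveys the idea without reproducing their method:

```python
# Toy stand-in for the reconstruction step: the observed sequence of
# token lengths acts as a fingerprint that can be matched against
# guessed phrases. (The actual study trains an LLM to translate
# length sequences into text; the candidates here are hypothetical.)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fingerprint(text: str) -> list[int]:
    """Byte length of each token -- all an eavesdropper can observe."""
    return [len(enc.decode([t]).encode("utf-8")) for t in enc.encode(text)]

candidates = [
    "Here is information on abortion clinics in your area.",
    "Here is information on local support groups.",
    "Today's weather will be sunny and warm.",
]

observed = fingerprint(candidates[0])  # lengths sniffed from the wire
matches = [c for c in candidates if fingerprint(c) == observed]
print(matches)  # narrows the encrypted reply to matching candidates
```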

Responding to these concerns, Microsoft reassured users that personal details are unlikely to be compromised by the vulnerability affecting its Copilot AI. Nonetheless, the company committed to patching the problem promptly to protect its customers.
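
Microsoft has not published the details of its fix, but one standard countermeasure against this class of leak is to pad or batch streamed tokens so packet sizes no longer track token lengths. A minimal sketch of the padding idea (the block size and padding byte are assumptions):

```python
# Sketch: pad every streamed chunk to a multiple of a fixed block
# size so tokens of different lengths look identical on the wire.
def pad_to_block(chunk: bytes, block: int = 32) -> bytes:
    return chunk + b"\x00" * ((-len(chunk)) % block)

for token in [b"Hi", b" there", b", how are you today?"]:
    padded = pad_to_block(token)
    print(len(token), "->", len(padded))  # 2 -> 32, 6 -> 32, 20 -> 32
```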

The stakes are especially high for sensitive subjects such as abortion or LGBTQ issues, where confidentiality is crucial. Exploiting these weaknesses could have severe consequences, potentially endangering people who seek information on such topics.

The growing debate over AI ethics and privacy underscores the urgent need for robust security measures to protect users in AI-driven interactions.
