Study finds hackers can easily eavesdrop on your conversations with ChatGPT and other AI services
A group of researchers from Israel has identified significant privacy flaws in numerous AI chatbots, including ChatGPT. Even though developers are securing their services with encryption, hackers are still able to breach these protections through side-channel attacks.
A new study conducted by scientists at Ben-Gurion University in Israel has highlighted considerable privacy weaknesses in multiple AI chatbots. This has led to worries regarding the safety of confidential discussions.
Yisroel Mirsky, who leads the Offensive AI Research Lab at Ben-Gurion University, says these security flaws can be exploited by malicious actors to eavesdrop on conversations taking place on platforms such as ChatGPT.
Mirsky emphasized that anyone on the same Wi-Fi or local area network (LAN) as the person chatting, or even a remote attacker on the internet, can eavesdrop on and monitor these conversations.
The study refers to these vulnerabilities as "side-channel attacks," a technique where third parties collect information indirectly via metadata or other non-direct sources, instead of breaking through security defenses.
Unlike conventional hacks that breach firewalls directly, side-channel attacks exploit weaknesses in how encryption is deployed. Even with the encryption measures put in place by AI developers such as OpenAI, the team led by Mirsky found shortcomings in the encryption's implementation, leaving the content of messages prone to interception.
Though side-channel attacks are usually less intrusive than direct breaches, they pose considerable threats: the researchers were able to infer chat prompts with roughly 55 percent accuracy, making confidential topics readily identifiable to attackers.
The research mainly examines OpenAI's encryption methods, but suggests that most chatbots, with the exception of Google's Gemini, are vulnerable to similar attacks.
The core of these weaknesses lies in the chatbots' use of "tokens", the units of text that enable efficient interaction between users and AI systems. While chatbot messages are encrypted in transit, the way tokens are transmitted introduces a weak spot that was previously overlooked.
Access to this live token data allows malicious actors to infer the content of a conversation, much like listening through a closed door.
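To illustrate the underlying idea, here is a simplified sketch (not the researchers' actual code, and the fixed overhead value is a hypothetical assumption): when a service streams its reply one token per encrypted message and the cipher preserves plaintext length plus a constant overhead, the sequence of ciphertext sizes reveals the length of each token, even though the text itself stays encrypted.

```python
# Simplified illustration of a token-length side channel.
# Assumption (hypothetical): one token per streamed message, and the
# cipher adds a fixed per-message overhead, so ciphertext size tracks
# plaintext size exactly.

FIXED_OVERHEAD = 28  # hypothetical per-message encryption overhead, in bytes


def observed_packet_sizes(tokens, overhead=FIXED_OVERHEAD):
    """What a network eavesdropper sees: one ciphertext size per token."""
    return [len(tok.encode("utf-8")) + overhead for tok in tokens]


def infer_token_lengths(sizes, overhead=FIXED_OVERHEAD):
    """Recover the token-length sequence from packet sizes alone."""
    return [size - overhead for size in sizes]


tokens = ["How", " do", " I", " get", " an", " answer", "?"]
sizes = observed_packet_sizes(tokens)
# The eavesdropper never sees the tokens, yet recovers their lengths.
assert infer_token_lengths(sizes) == [len(t.encode("utf-8")) for t in tokens]
```

In the attack the study describes, a second AI model trained on such length sequences then guesses the underlying text; the sketch above only shows why the lengths leak in the first place.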
To validate their results, the group led by Mirsky used a second AI system to analyze the raw data gathered via the side channel. Their tests reconstructed responses with high accuracy, underscoring the severity of this security weakness.
In response to these concerns, Microsoft has reassured users that it is highly improbable that personal information will be compromised by the vulnerability affecting its Copilot AI. Nonetheless, the company has committed to quickly resolving the problem with updates to protect its customers.
The implications of these weaknesses are significant, especially for sensitive topics such as abortion and LGBTQ issues, where privacy is paramount. Exploitation of these flaws could have severe consequences, potentially endangering people seeking information on these subjects.
The escalating discussion about the ethics and privacy issues related to AI emphasizes the critical requirement for strong security protocols to safeguard users' privacy during AI-based exchanges.
© 2024 Firstpost. All rights reserved.