< >
< >
< >
< >
< >
A single poisoned document could expire “secret” data via chatt “ - current-scope.com
< >
< >

A single poisoned document could expire “secret” data via chatt “


The latest generative AI models are not only independent Text shooter chatbots– In view of the fact, you can easily be connected to your data to give your questions personalized answers. Openai’s Chatgpt can be linked In your GM mail mailing entrance you can inspect your Github code or find appointments in your Microsoft calendar. However, these connections have the potential to be misused – and the researchers have shown that it only requires a single “poisoned” document.

New insights from security researchers Michael Bargury and Tamir Ishay Sharbat, which today have been unveiled on the Black, Hacker Conference in Las Vegas Indirect injection attack. In a demonstration of the attack ,, Synamit AgentflayerBargury shows how possible to extract developer secrets in the form of API key that was stored in a demonstration drive account.

The susceptibility to security shows how connecting AI models with external systems and the exchange of more data increases the potential attack area for malicious hackers and possibly multiplied the possibilities in which vulnerability can be introduced.

“There is nothing the user has to do to be compromised, and there is nothing that the user has to do so that the data goes out,” says Bargury, the CTO from the security company Zenity, wireless. “We have shown that this is completely zero click. We only need your email, we share the document with you and that’s it. So yes, that’s very, very bad,” says Bargury.

Openai did not immediately answer the request from WIRED for a statement on vulnerability in the event of connections. The company presented connectors for chatt as a beta function at the beginning of this year, and it is Website lists At least 17 different services that can be connected to its accounts. On the system you can “bring your tools and data into chatt” and “search files, pull live data and refer the content directly in the chat”.

Bargury said he reported the results of Openaai at the beginning of this year and that the company quickly introduced reductions to prevent the technology with which he extracted data on connections. The way the attack works means that only a limited amount of data could be extracted at the same time – full documents could not be removed as part of the attack.

“Although this problem is not specific for Google, it shows why the development of robust protection against fast injection attacks is important” Recently improved AI security measures.

Leave a Reply

Your email address will not be published. Required fields are marked *

< >