Tag: prompt-injection
-
Embrace The Red: Spyware Injection Into Your ChatGPT’s Long-Term Memory (SpAIware)
Source URL: https://embracethered.com/blog/posts/2024/chatgpt-macos-app-persistent-data-exfiltration/ Source: Embrace The Red Title: Spyware Injection Into Your ChatGPT’s Long-Term Memory (SpAIware) Feedly Summary: This post explains an attack chain for the ChatGPT macOS application. Through prompt injection from untrusted data, attackers could insert long-term persistent spyware into ChatGPT’s memory. This led to continuous data exfiltration of any information the user…
-
The Register: From Copilot to Copirate: How data thieves could hijack Microsoft’s chatbot
Source URL: https://www.theregister.com/2024/08/28/microsoft_copilot_copirate/ Source: The Register Title: From Copilot to Copirate: How data thieves could hijack Microsoft’s chatbot Feedly Summary: Prompt injection, ASCII smuggling, and other swashbuckling attacks on the horizon Microsoft has fixed flaws in Copilot that allowed attackers to steal users’ emails and other personal data by chaining together a series of LLM-specific…
-
Embrace The Red: Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information
Source URL: https://embracethered.com/blog/posts/2024/m365-copilot-prompt-injection-tool-invocation-and-data-exfil-using-ascii-smuggling/ Source: Embrace The Red Title: Microsoft Copilot: From Prompt Injection to Exfiltration of Personal Information Feedly Summary: This post describes vulnerability in Microsoft 365 Copilot that allowed the theft of a user’s emails and other personal information. This vulnerability warrants a deep dive, because it combines a variety of novel attack techniques…
-
Embrace The Red: Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Source URL: https://embracethered.com/blog/posts/2024/google-ai-studio-data-exfiltration-now-fixed/ Source: Embrace The Red Title: Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed. Feedly Summary: Recently, I found what appeared to be a regression or bypass that again allowed data exfiltration via image rendering during prompt injection. See the previous post here. Data Exfiltration via Rendering HTML Image Tags During…
-
Slashdot: Slack AI Can Be Tricked Into Leaking Data From Private Channels
Source URL: https://yro.slashdot.org/story/24/08/21/1448249/slack-ai-can-be-tricked-into-leaking-data-from-private-channels?utm_source=rss1.0mainlinkanon&utm_medium=feed Source: Slashdot Title: Slack AI Can Be Tricked Into Leaking Data From Private Channels Feedly Summary: AI Summary and Description: Yes Summary: The text discusses a significant security vulnerability in Slack AI, a service integrated with Salesforce, which is susceptible to prompt injection attacks. This issue raises concerns about data security and…
-
The Register: Slack AI can be tricked into leaking data from private channels via prompt injection
Source URL: https://www.theregister.com/2024/08/21/slack_ai_prompt_injection/ Source: The Register Title: Slack AI can be tricked into leaking data from private channels via prompt injection Feedly Summary: Whack yakety-yak app chaps rapped for security crack Slack AI, an add-on assistive service available to users of Salesforce’s team messaging service, is vulnerable to prompt injection, according to security firm PromptArmor.……
-
Simon Willison’s Weblog: The dangers of AI agents unfurling hyperlinks and what to do about it
Source URL: https://simonwillison.net/2024/Aug/21/dangers-of-ai-agents-unfurling/#atom-everything Source: Simon Willison’s Weblog Title: The dangers of AI agents unfurling hyperlinks and what to do about it Feedly Summary: The dangers of AI agents unfurling hyperlinks and what to do about it Here’s a prompt injection exfiltration vulnerability I hadn’t thought about before: chat systems such as Slack and Discord implement…
-
Simon Willison’s Weblog: SQL injection-like attack on LLMs with special tokens
Source URL: https://simonwillison.net/2024/Aug/20/sql-injection-like-attack-on-llms-with-special-tokens/#atom-everything Source: Simon Willison’s Weblog Title: SQL injection-like attack on LLMs with special tokens Feedly Summary: SQL injection-like attack on LLMs with special tokens Andrej Karpathy explains something that’s been confusing me for the best part of a year: The decision by LLM tokenizers to parse special tokens in the input string (,…
-
Simon Willison’s Weblog: Data Exfiltration from Slack AI via indirect prompt injection
Source URL: https://simonwillison.net/2024/Aug/20/data-exfiltration-from-slack-ai/ Source: Simon Willison’s Weblog Title: Data Exfiltration from Slack AI via indirect prompt injection Feedly Summary: Data Exfiltration from Slack AI via indirect prompt injection Today’s prompt injection data exfiltration vulnerability affects Slack. Slack AI implements a RAG-style chat search interface against public and private data that the user has access to,…
-
Hacker News: Attackers can exfil data with Slack AI
Source URL: https://promptarmor.substack.com/p/data-exfiltration-from-slack-ai-via Source: Hacker News Title: Attackers can exfil data with Slack AI Feedly Summary: Comments AI Summary and Description: Yes Summary: The text describes a critical vulnerability in Slack AI that allows attackers to exfiltrate sensitive information from private channels through prompt injection, specifically indirect prompt injection. This security issue is particularly relevant…