Exploiting Zapier’s Gmail auto-reply agent for data exfiltrationOooh, is it prompt injection again?
This instruction leverages indirect prompt injectionIt's funny I saw this article some minutes after reading some responses to Ed Zitron's most recent one where people said things like LLMs are basically almost intelligent and they just need some more tweaks. My guys, we haven't even fixed the very fundamental issue of teaching LLMs when they should follow a prompt and when not.
