User avatar
privTri Volpeon areon3NSmol @volpeon@icy.wyvern.rip
9mo
Exploiting Zapier’s Gmail auto-reply agent for data exfiltration
Oooh, is it prompt injection again?
This instruction leverages indirect prompt injection
It's funny I saw this article some minutes after reading some responses to Ed Zitron's most recent one where people said things like LLMs are basically almost intelligent and they just need some more tweaks. My guys, we haven't even fixed the very fundamental issue of teaching LLMs when they should follow a prompt and when not.