User avatar
privTri Volpeon areon3NSmol @volpeon@icy.wyvern.rip
11mo
I wonder which one is more unfixable: Hallucinations since they're the result of the fundamental way LLMs work; or prompt injections because every input — no matter if it's an instruction by you, or the system prompt, or a data source — is literally just text without any semantic difference to the model so it can't "know" that a data source isn't supposed to tell it instructions.