@volpeon@icy.wyvern.rip
What has changed since the Transformer paper, which is the foundation for current LLMs? Some technological improvements that lower resource usage and improve response quality, which, while nice, don't really lead to practical changes for users. You still use it in the exact same way. The rest are adjustments to the training process, which brought us "chain of thought" and "agentic AI". That's fucking it. If anything, I'd expect more to happen with all the money getting pumped into it.