We recently got a phishing mail at work which would've been incredibly convincing if they had spoofed a proper sender and made the link target domains more plausible. There were no weird grammar errors and they used your full name.
Context is that GMail crippled newsletter emails from that website. It took German emails, interpreted them as English and "translated" them into German.
I remember when a teacher played a game with us in 5th grade where we had to send a text through google translate over and over again to get hilarious results. GMail gives you that automatically now.
When you run an LLM for one user, and then a second instance for a different user, you use twice the VRAM and twice the compute to get the same per-user performance as the original single run.
Let's say you have a database server used by one application, and then you add another application. How much do the resource requirements increase? Not by another 100%, that's for sure.
The problem tools like Cursor have is that unlike classic software, AI is horrible to run at scale. With something like a social network, the cost per user goes down as the number of users increases. With AI you don't get that kind of resource sharing, so costs grow roughly linearly with usage. Computations on the GPU are specific to one model invocation, and a model invocation can't handle multiple requests at once.
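The scaling argument above can be sketched with toy numbers. This is a minimal illustration, not real pricing data; every cost figure here is invented, and it assumes the comment's premise that LLM serving cost per user stays flat rather than being amortized:

```python
def shared_infra_cost_per_user(users, fixed_cost=10_000.0, marginal_cost=0.05):
    """Classic web service: a big fixed cost (servers, database) is
    amortized across all users, plus a tiny marginal cost per user."""
    return fixed_cost / users + marginal_cost

def llm_cost_per_user(users, gpu_cost_per_user=2.0):
    """LLM serving under the comment's assumption: each user's requests
    need their own slice of GPU time/VRAM, so per-user cost stays flat
    no matter how many users you add."""
    return gpu_cost_per_user

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} users: shared ≈ ${shared_infra_cost_per_user(n):.2f}/user, "
          f"LLM ≈ ${llm_cost_per_user(n):.2f}/user")
```

With the shared-infrastructure model, per-user cost collapses as users grow; with the flat model it never does, which is why total cost grows linearly with the user base.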
The coding applications built on those models, like Cursor, are going to keep improving to make better use of the models.
This part is funny, though: Cursor is being forced to enshittify precisely because Anthropic upped their prices for enterprise customers (most likely because they're in trouble themselves).
@sun Yeah, from what I've read in comments, AI tools help people get started with things they aren't familiar with, but as they gain experience (provided they're willing to learn from what the AI produced) they may be better off writing things themselves. Makes sense to me.
Note that the takeaway isn't "AI sucks" but rather that developers felt it made them faster even though the numbers showed the exact opposite. That may be due to the output quality, but also due to inexperience with using these tools.