@araly It's a strategy other companies use as well. These fear-inducing images are supposed to give people the feeling that AI is incredibly powerful, so shareholders will invest more money. I guess that effect outweighs the negative associations, in a "we will harness its power so it won't harm us" kind of way.
@meow64 I'm judging the product as a whole, not the specific model. The product is GPT-5, with different variants to handle different tasks. People use this product expecting it to do math correctly, and when it returns a wrong result because the routing mechanism picked an inadequate model, the product doesn't look very smart to them. Users don't give a single shit about these technical details.
It's funny how people keep saying "this test is stupid because it's a problem with the tokenizer" or "OpenAI non-transparently routes your queries to smaller models" or "the answer would make sense if these were version numbers", and yeah. Sure. But Sam promised an LLM so intelligent it's scary, and what people see instead is a model that gets the answer wrong because it's not even smart enough to figure out that you're talking about decimal numbers, not version numbers.
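To make the ambiguity concrete, here's a toy sketch in Python (9.9 vs 9.11 is just a hypothetical example, not necessarily the exact prompt): the same pair of strings orders one way read as decimals and the opposite way read as version numbers.

```python
# Hypothetical example: "9.9" vs "9.11" compared two different ways.
a, b = "9.9", "9.11"

# As decimal numbers: 9.9 > 9.11
print(float(a) > float(b))  # True

# As version numbers: component-wise, 11 > 9, so 9.11 is the "later" one
va = tuple(int(part) for part in a.split("."))
vb = tuple(int(part) for part in b.split("."))
print(va > vb)  # False, i.e. version 9.11 comes after 9.9
```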
Who would win: a PhD-level model backed by a billion-dollar company whose CEO is scared of its capabilities, or a tiny 0.6B boi that's good at math?
Poisoning alt text? That's not going far enough. I propose we poison everything we write by inserting random words and fucking up the grammar. Will everything be harder to read? Sure, but who cares, I want to mess up companies' data-scraping efforts.
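If anyone actually wanted to try this, a minimal sketch could look something like the following (the filler words, the 15% noise rate, and the number of word swaps are all made up; it's illustrative, not a serious anti-scraping tool):

```python
import random

# Made-up noise words to sprinkle into the text
FILLER = ["aardvark", "ostensibly", "kumquat", "meanwhile", "zygote"]

def poison(text, noise_rate=0.15, seed=None):
    """Insert random filler words and scramble the grammar a bit."""
    rng = random.Random(seed)
    out = []
    for word in text.split():
        out.append(word)
        # inject a filler word after roughly noise_rate of the words
        if rng.random() < noise_rate:
            out.append(rng.choice(FILLER))
    # mess up the grammar: swap a few adjacent word pairs
    if len(out) > 1:
        for _ in range(max(1, len(out) // 10)):
            i = rng.randrange(len(out) - 1)
            out[i], out[i + 1] = out[i + 1], out[i]
    return " ".join(out)

print(poison("The quick brown fox jumps over the lazy dog", seed=42))
```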