On August 7, OpenAI released its long-awaited GPT-5 model… and it didn’t go well.
The most common reaction was disappointment.
At the time of writing, the Reddit post “GPT5 is horrible” has more than six thousand upvotes, and other complaint threads are drawing similar engagement.
OpenAI’s reaction shows the complaints aren’t coming only from the vocal Reddit crowd. In the days following the launch, the company released a “flurry of changes,” including making GPT-4o available to paid users. You know a new product is bad when you have to pay to use the older one.
But where does this disappointment come from?
Part of it is due to the genuine differences in how GPT-5 and GPT-4o respond to prompts, but my guess is that a lot of it comes down to unmet expectations.
OpenAI — and other LLM vendors — have been marketing their chatbots, smart autocompletion, and generative tools as getting ever closer to Artificial General Intelligence.
But AGI is the opposite of what OpenAI is building.
With generality comes the ability to say no, the desire to pursue one’s own interests, and true creativity. But to improve ChatGPT, Claude, Grok, and the other models, their makers need to make them more obedient, which is incompatible with AGI.
GPT-5 suggests that bigger doesn’t mean smarter. As I noted when commenting on the delays of GPT-5 and Llama 4 Behemoth, AGI researchers might want to focus more on understanding how human minds work than on building ever-bigger data centers for training.
If we’re lucky, GPT-5’s flop will rein in the hype and the discourse will shift from “how AI will take over your job first, and the world next” to “when to pair with AI to be more productive at your job, and when to work solo.” From there, private investment and public attention can move from chasing hype to solving concrete problems.
But the hype train is not easily derailed, and when companies depend on VC money, generating hype can take precedence over generating real value for users.
Time will tell.