GPT-5: Doom Delayed?
Are we doing this apocalypse thing or what?
When a product launch leads people to revise their thoughts about the end of the world, the least you can say is that we live in interesting times.
Here’s the short version of what just happened (as I understand it!).
GPT-5 arrives in a cloud of hype. It… disappoints. Which is to say that rather than feeling like a great leap into the frontier, it feels like an incremental improvement.
The dominoes start to fall:
If GPT-5 is lame, maybe progress is stalling, maybe we're approaching a plateau that's well short of AGI, maybe scaling laws aren't a fundamental force, maybe there are diminishing returns and things become brittle, and maybe we're much, much further from a recursively self-improving silicon god than we've been led to believe.
Maybe we don’t get superintelligence or even AGI, but we get something like what we have: a clever and helpful intelligence with frustrating limitations, one that has to be wedded to various tools and scaffolding (much of which still needs to be built) in specific domains to really unleash massive productivity gains.
And maybe that’s… great? If you code, AI is already a huge productivity unlock, because you work in a domain where it’s optimized to be of service. And maybe the law, healthcare, accounting, etc., will all benefit in time, but not from one incredible model that can ‘do anything a human can do’ but from new tools built to take advantage of the intelligence that AI more or less currently offers, funneled into very specific areas where it can provide something akin to the boost coders get today.
For a minute, like a year ago, this was being called something like the AI Tools scenario - think AlphaFold, an AI orders of magnitude beyond human capability in a single domain, predicting protein structures, that’s useful for that purpose and only that purpose. The argument was that a bunch of specialized AlphaFolds was a better outcome than an all-knowing, self-improving machine god that we might not be able to align with our goals and values (like staying alive). Maybe that scenario is back on the table!
Hold on though. Before we stop sweating the machine god, is there any evidence OTHER than a so-so product launch?
Kinda? It’s not just that GPT-5 wasn’t a mega leap forward, it’s that it seemed specifically tuned to shore up OpenAI’s position in a couple of markets - coding and consumer. We can get into the specifics, but the general idea is that if you think you’re on the road to superintelligence, you don’t stop to pick up pennies in MARKETS, you plow everything into taking the next step toward the one model that will rule them all. The fact that OpenAI almost seems to be ‘pivoting to profit’ is being seen by some as a sort of top signal on AI progress.
Specifically, GPT-5 beefed up its coding abilities to compete with Claude Code from Anthropic, which was rapidly becoming the default choice for coders. I don’t code, so all I can tell you is that the reaction to whether GPT-5 closed the gap is mixed. The point is that they saw the gap as worth closing.
The other thing GPT-5 did was bring the best model to the masses. Rather than locking the ‘good one’ behind a paywall while free users talked to last year’s big brain, now everyone is hitting the same model (with some confusing exceptions). Again, the thinking goes that this is a sign OpenAI wants users to have the best possible experience when interacting with its product, even if they’re doing it for free, because the better model is stickier, and OpenAI wants its users to become loyal and locked in. Not necessarily to sell subscriptions (you’re getting the best model for free - sort of) but to begin selling advertising, the same thing that made Google and Meta blockbuster revenue machines.
Again, the argument goes, why bother worrying about generating billions in ad revenue when there are trillions in ‘automating away all knowledge work’ in the pipeline - unless you’re losing faith that the latter is possible.
But wait! There’s more! The fallout isn’t just confined to the AI companies, it’s geopolitical!
Remember when there was a new space race between America and China over who would create the machine god, and whose values would be embedded in it and rule or destroy the world? Well, if GPT-5 means we’re plateauing and all anyone is building is just another PRODUCT, then maybe we don’t have to go out of our way to deny China cutting-edge chips. In fact, maybe we can sell them chips but tax the sales (okay, the Constitution says we can’t do that, but we’re not getting hung up on stuff like that these days). The argument goes that if we’re not in a race to superintelligence, but something more like a race to build out the equivalent of the next Internet-level tech stack, then let’s not deny China our chips, forcing them to build up their own; instead, let’s make sure that this whole thing, worldwide, runs on our gear!
So… that appears to be what we’re doing. Based on, or at least supported by, ONE shitty product launch.
When you put it that way, is there a chance we’re overreacting to ONE shitty product launch? PROBABLY!
First, there’s a whole debate about just which WAYS the GPT-5 launch was shitty. See, it’s not one model but several models of varying abilities, and initially the process of deciding which one you were using was handled automatically by a router. This made things simple for us simple folk, but frustrating for the power users who knew which model their query needed. It all led to some confusion over GPT-5’s actual abilities, since GPT-5 is really a suite of models, not a single thing.
Further, there’s debate about OpenAI’s idiotic naming conventions, which seem intentionally confusing, and honestly I’m not going to hash it all out. What it boils down to is that if you take the names off the models and just look at when they were released, there’s an argument that GPT-5 is EXACTLY where it should be in terms of capabilities according to scaling laws, and that if they’d just launched it as an upgrade rather than announcing it like a mind-blowing new model, everyone would have reacted accordingly and the apocalypse would still be nigh.
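(For reference, the scaling laws people invoke here are usually the power-law fits from Kaplan et al. (2020), which relate pretraining loss to training compute roughly as

L(C) = \left(\frac{C_c}{C}\right)^{\alpha_C}, \quad \alpha_C \approx 0.05

meaning each 10x of compute buys a predictable but modest drop in loss. That’s a sketch of the general form only; the exact constants, and where any given OpenAI model actually sits on the curve, are not something this argument pins down.)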
We also know there’s a delta between the models AI companies are working on internally and those they’re ready to serve to the public. GPT-5 tells us what OpenAI is ready to give consumers, but not what raw power and capabilities the models still behind the curtain may or may not have. Maybe we’re just setting ourselves up for an existential crisis all over again when GPT-6 hits and actually blows minds. WHO KNOWS!
So… what have we learned?
Well, even the doomiest of AI prognosticators seem to have pushed their timelines for AGI back a little, so perhaps we can all agree to enjoy a little more time as the apex intelligence on the planet. To get to superintelligence we basically need recursive self-improvement, meaning the models do the research into the next model, and it all starts to go faster than we can imagine because it’s being done by them instead of us. There seems to be agreement that GPT-5 is at least a data point in favor of the idea that we’re not on the verge of that happening.
As for export controls and policy moves, these feel more like people using GPT-5 as evidence to support whatever action they already wanted to take than like justifiable responses to a product launch. It seems way too soon to conclude that GPT-5 tells us how this is going to go, and decisions made in its wake seem highly likely to be shortsighted.
What about, like, just, you know, the economy?
You may have noticed it’s in a weird state. Basically a contraction that might well be a recession, save for the fact that we’re experiencing phenomenal growth in the AI sector. If AI were to turn out to be a bust, well, we’d be in real trouble (hopefully we didn’t just pass a bill to explode the deficit or anything). In fact, some economists are arguing that the impact of Trump’s tariff policy is being offset and hidden by the growth in the AI sector, allowing him to continue to defy both economic and political gravity.
So, does GPT-5 mean it’s all a bubble and we’re about to go deaf from the pop? Maybe not. In fact, some argue it’s the exact opposite: OpenAI pivoting to profit, turning itself into the next Meta by becoming an ad-revenue monster focused on sucking up all consumer attention until the end of time, is at least a business model we understand. And by taking some of the doomier scenarios off the table, we only have to worry about people becoming addicted to their AI boy/girlfriends and choosing artificial interaction over being part of society, which… is not good, but at least you can see where the profit centers are.
Or maybe it’s the internet all over again, and today’s data centers are yesterday’s fiber-optic cable: a buildout way ahead of its time that leads to a massive financial meltdown before we actually figure out how to put it all to productive use.
We just don’t know. Maybe GPT-6 will tell us.

