Is This The End Of The Superintelligence Bandwagon?
AI scaling laws fail to deliver, but is this enough to stop prophets and preachers?
The Wall Street Journal recently reported that both Meta and OpenAI have delayed their new flagship models. Reacting to the piece, Cal Newport remarked that the scaling laws AI companies have been banking on no longer seem to hold.
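For readers who haven't followed the jargon: a "scaling law" is an empirical fit relating a model's training loss to its size and data. The best-known form comes from the 2022 Chinchilla paper; the notation below follows that paper, and the point here is purely illustrative, not a claim about any lab's internal numbers:

L(N, D) = E + A/N^α + B/D^β

where N is the parameter count, D the number of training tokens, E the irreducible loss, and A, B, α, β fitted constants. Even when the fit holds, loss falls only as a power law, so each further increment of capability demands multiplicatively more data and compute. If the fit stops holding, even that expensive path is gone.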
If you’ve been worrying about AGI, perhaps after listening to the many prophets of doom, I hope this news gives you a reason to reconsider.
I don’t claim to be an AI expert, but this looks like further evidence that general intelligence can’t be brute-forced with data and compute alone.
As I wrote back in 2023, in Bigger Doesn’t Mean Smarter:
It’s reasonable to expect [LLMs will] get even more refined as the datasets grow. But that is not grounds for expecting them to give off a spark of Artificial General Intelligence.
[…]
Whether instantiated on grey matter or silicon, there’s more to a mind than the number of neurons. It’s possible we’ll stumble our way into AGI by throwing more hardware at the problem. But I’d like to think that if we ever get there, it’ll be thanks to having cracked the code of our own minds.
AGI’s failure to materialize likely has less to do with dataset size and more to do with the absence of a good explanation for how intelligence works.
As disappointing as a product-development setback might be, maybe it’s exactly the reality check commentators need to step back and stop worrying about AI alignment.
And it would do the industry good!
Without the unfounded fear of “AI stealing people’s jobs,” eager regulators might get off developers’ backs, and we’d see more competition and innovation.
Alas, I fear that’s just wishful thinking on my part.
Regulators will regulate.
Plus, there’s a buck to be made—and plenty of podcasts to guest on—by hyping up AGI, whether as the savior or destroyer of humanity.
But one shall not prophesy. We’ll wait and see…