Ezra Klein Keeps Asking The Wrong AI Questions
Never ask a barber if you need a haircut
In the recent episode “How Fast Will AI Agents Rip Through the Economy?”, New York Times journalist and podcast host Ezra Klein demonstrates, once again, the baffling lack of skepticism that afflicts so many reporters when it comes to artificial intelligence. They ask AI insiders for predictions on the future of the industry, then take them as gospel, without probing for biases or asking how any of it would actually work.
The crucial why? and how? questions are seldom asked. And when they are, they take the form of “why is this bad?” and “how bad will this be?”, never “why are you saying this?” or “how is this going to work?”
Granted, someone working at Anthropic can provide better insight on upcoming innovations than your average Gio writing from his kitchen table. But there is a glaring conflict of interest at play that keeps being conveniently ignored in favor of sensationalist reporting.
Anthropic, OpenAI, and all the other players in the AI field have yet to turn a profit. These companies depend on massive influxes of cash from investors. You know what’s a great way to attract investment? Hype!
Whenever a staffer at one of these companies makes grand declarations about the future of the industry, one cannot help but wonder how much of it is informed guesswork and how much is PR.
One can forgive podcast hosts for inviting partisan or biased voices; after all, we are all biased in one way or another. What is harder to forgive is that they don’t push back. They never ask: how does it work?
Take the claim that developers at Anthropic use Claude to improve Claude, and that this will result in the much-dreaded exponential recursive self-improvement loop.
Ezra takes this as a given, without asking how that might happen.
When you dig beneath the surface, you find that the claim of imminent AI takeover rests on flimsy foundations.
Claude Code, the tool, uses Claude, the model (let’s say Opus), to write code. Recently, the code Opus produces and the autonomy with which Claude Code integrates it have become so good that many developers use it every day, myself included.
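For the curious, here is a minimal sketch of what such an agentic loop boils down to. The function names are hypothetical placeholders, not Anthropic’s implementation; the point is that the agent is ordinary orchestration code wrapped around calls to a model that only ever emits text.

```python
# A minimal, hypothetical sketch of an agentic coding loop.
# This is illustrative, not Anthropic's code: the "agent" is plain
# orchestration around a model that only suggests text.

def call_model(history: list[str]) -> str:
    return "print('hello')"  # placeholder for an API call to the model

def apply_patch(patch: str) -> None:
    pass  # placeholder: the tool, not the model, edits files on disk

def run_tests() -> tuple[bool, str]:
    return True, ""  # placeholder: the tool runs the project's test suite

def agent_loop(task: str, max_steps: int = 10) -> bool:
    history = [task]
    for _ in range(max_steps):
        patch = call_model(history)   # model proposes code as text
        apply_patch(patch)            # tool integrates it
        ok, output = run_tests()      # tool checks the result
        if ok:
            return True
        history.append(output)        # failures become new context
    return False

print(agent_loop("fix the failing test"))
```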
But Opus, GPT, Gemini, and all the other large language models are not code. LLMs are mathematics.
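To see what that means in practice, here is a toy illustration. The dimensions and architecture below are made up and vastly simplified, but the nature of the object is the same: a model is a pile of numbers, and improving it means changing those numbers through training, not editing source files.

```python
import numpy as np

# Toy single-layer "language model": the model is nothing but these
# arrays of numbers (the weights), learned once during training.
rng = np.random.default_rng(0)
d_model, vocab = 8, 50
W_embed = rng.normal(size=(vocab, d_model))    # token embeddings
W_hidden = rng.normal(size=(d_model, d_model)) # one hidden transformation
W_out = rng.normal(size=(d_model, vocab))      # projection to vocabulary

def next_token_logits(token_ids: list[int]) -> np.ndarray:
    x = W_embed[token_ids].mean(axis=0)  # crude summary of the context
    h = np.tanh(x @ W_hidden)            # matrix math, nothing more
    return h @ W_out                     # scores over the vocabulary

# "Improving the model" means changing the numbers in these matrices,
# which no code-writing agent does by editing an application's source.
print(next_token_logits([3, 17, 42]).argmax())
```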
For sure, Anthropic employees may use Claude Code to improve Claude, but only at the margins. They can make the apps we use to interact with their models faster or more secure. Likewise, they can improve all the tooling, infrastructure, and scaffolding that goes into training new models. All of that is welcome and remarkable, but it comes from recombining solutions already present in the models’ gargantuan training sets. Code-writing agents cannot create the new explanations needed for fundamentally better models and alternative AI architectures (see Vishal Misra’s explanation and Brett Hall’s commentary if you want to dig deeper).
I wish Ezra and his colleagues applied the same relentless scrutiny to AI that they do to politicians they disagree with. All this handwaving press about AI doom distracts from the real conversations we need to have: how to use these tools well, how to prevent their makers from harvesting our data in their chase for growth, and how to make the underlying economics sustainable.

