In episode 352 of Deep Questions, Cal Newport ran a segment on a recent New York Times article about “AI welfare”.
His conclusion: AI welfare is a marketing stunt.
I bring this up because it’s a good example of how sensationalist claims get amplified by the media. But as soon as you peer behind the headline in search of an explanation, the whole thing immediately falls apart.
The article, by Kevin Roose, is a profile of Anthropic and their first AI welfare researcher, Kyle Fish. “Mr. Fish told me,” Roose writes, “that his work at Anthropic focused on two basic questions: First, is it possible that Claude or other A.I. systems will become conscious in the near future? And second, if that happens, what should Anthropic do about it?”
Cal’s reaction:
I can answer your question, Mr. Fish.
No.
One, no, Claude will not become conscious in the near future, and two, I think your job is made up.
I think this is a PR stunt from Anthropic. I think the more they are talking about these issues, the magical ideas like the AI is alive […] the more it distracts from actual pragmatic questions about actual pragmatic uses and harms and also inconvenient things like how much money are you making or what’s your plan for profitability. The more the technology seems magical, the more runway they get to keep attracting money and move forward.
Cal then points out that the right journalistic response would have been to ask Mr. Fish, “Explain to me how a feed-forward, transformer-based neural network can be conscious. Walk me through this. All it can do is: data moves through, and tokens come out.”
What is your explanation?
That’s it! Ask a single question and the whole charade unravels.
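To make that concrete, here is a minimal sketch of the kind of computation Cal is describing. It is nothing like Claude’s actual scale or architecture; every size, name, and weight below is invented for illustration, and real models add many stacked layers, residual connections, normalization, and a causal mask. But the property it illustrates survives all of that: the forward pass is a fixed mathematical function. Token IDs go in, next-token scores come out, and nothing persists between calls.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D_MODEL = 100, 16          # toy vocabulary and embedding sizes

# Fixed parameters, frozen after training. Nothing here changes between calls.
embedding = rng.normal(size=(VOCAB, D_MODEL))
w_q = rng.normal(size=(D_MODEL, D_MODEL))
w_k = rng.normal(size=(D_MODEL, D_MODEL))
w_v = rng.normal(size=(D_MODEL, D_MODEL))
unembed = rng.normal(size=(D_MODEL, VOCAB))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def forward(token_ids):
    """One simplified attention block: token IDs in, next-token scores out."""
    x = embedding[token_ids]                    # look up a vector per token
    q, k, v = x @ w_q, x @ w_k, x @ w_v         # linear projections
    att = softmax(q @ k.T / np.sqrt(D_MODEL))   # attention weights
    x = att @ v                                 # weighted mixing of vectors
    return x[-1] @ unembed                      # scores for the next token

scores = forward(np.array([5, 42, 7]))
print(int(scores.argmax()))   # the same input always yields the same output
```

Whatever one believes about consciousness, this is the runtime substance of the system being asked about: frozen matrices and a chain of matrix multiplications.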
Asking about a chatbot’s welfare is as pointless as wondering whether the toaster is tired of seeing the same multi-grain slices and needs to toast some rye for a change.
Cal has a firm grasp of how large language models work, so he doesn’t get swept up the way eager tech reporters do.
As I’ve been arguing for a while, we could all benefit from developing AI literacy. LLMs can bring massive benefits to many areas of knowledge work. But it’s crucial that we understand what they can and cannot do, both to get the most out of them and to avoid being fooled by marketing gimmicks.
Cal jokes that Kyle Fish does not have a job. I’m not sure which is worse: that this is a cynical PR move, or that Anthropic is investing resources in looking for something that cannot be there.