Nothing To See Here
What if AI is just hype — and what if it isn’t?
I read an interesting trifecta of perspectives on AI over a few days, and it’s a fascinating look at how skeptics eye the technology, how skeptics of skeptics eye skeptics, and how that growing divide over the potential of AI may simply reflect how people are using it.
(And, as a bonus, a fourth piece I came across after I finished drafting this: A delightful tale of how we really don’t know all that much about any of this.)
Strap in. There will be ups and downs and twists and turns.
Let’s start with the skeptics, and specifically this piece, titled (unsubtly) I’m Sorry to Burst Your Bubble: You are Being Fooled About AI, and Soon You Will Feel Really Stupid. That’s exactly the kind of headline that will make me read (and one clearly not designed to be consumed by machines), and so I did.
The post, by computer scientist David William Silva, doesn’t pull its punches, and judging from the 500+ comments, he’s struck a chord in the community. He references many of the leading lights of the industry, but provides a helpful personal summary:
AI feels magical. It isn’t.
It is built from spectacularly ordinary pieces. At the foundation, you have basic math: addition, multiplication, averages, probability estimates. On top of that, you stack statistics, linear algebra, and massive grids of numbers called matrices. Then come algorithms that aren’t mysterious in the slightest as they simply adjust those numbers by small increments whenever the computer guesses wrong.
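The “small increments whenever the computer guesses wrong” that Silva describes is ordinary gradient descent. A minimal sketch in plain Python (the single weight and toy data are invented for illustration; real models do this across billions of weights):

```python
# Toy version of Silva's "small increments": one weight, fitted to
# y = 3x by nudging the weight whenever the guess is wrong.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (x, y) pairs where y = 3x

w = 0.0    # the entire "model": a single number, initially wrong
lr = 0.01  # learning rate: how small each increment is

for epoch in range(2000):
    for x, y in data:
        guess = w * x
        error = guess - y    # how wrong the guess was
        w -= lr * error * x  # nudge w by a small increment

print(round(w, 3))  # converges toward 3.0
```

Nothing mysterious happens: the loop is just subtraction and multiplication, repeated until the guesses stop being wrong, which is exactly the point Silva is making.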
Not only that, he notes, it can’t surpass its training.
…every time I had an idea that was genuinely outside the box, AI discouraged me from pursuing it. When I persisted, it didn’t just push back, it practically begged me to stop. To not continue. To not insist.
Think about what that means. The tool that everyone tells you is “creative” and “revolutionary” is architecturally incapable of doing anything other than averaging what already exists. It is a statistical summary of the past masquerading as a window into the future.
When it tells you your unconventional idea is bad, it’s not evaluating your idea. It’s telling you that your idea doesn’t resemble the data it was trained on.
That’s not intelligence. That’s pattern-matching with a confidence problem.
AI, in other words, isn’t artificial “intelligence” or any flavor of intelligence; it’s just a statistical engine that parrots its training data and far too often gets things wrong while exhibiting a certainty that’s misplaced and undeserved. In short, he says, we’ve all been fooled by the money-raising systems that are really the core product of the main AI platforms.
The math hasn’t changed. The architecture hasn’t achieved consciousness. The models are not “thinking.” They are executing matrix multiplications at a scale that makes the output feel like thought.
The commenters pile on. One notes that AI can’t make jokes, and declares that that’s a sign there’s no real there there under the hood.
And maybe they’re right. I certainly don’t know if Claude is “thinking” when we interact on editing, or coding, or my book proposal; but the output certainly feels like it can be insightful and seems like there’s thinking behind it, and it’s validated by human friends of mine whose opinions I value. Perhaps it’s just math. But if it works, does it matter?
Does it matter if Claude — and other AI systems — can nearly autonomously create websites or sort files or produce functional business applications (and plans), if it’s just math and not intelligence behind it?
Sure, perhaps this isn’t the path to artificial general intelligence, but then again, I’m not on the hunt for AGI, even if Sam Altman is. I just want a better copy editor, a dependable intern to read dozens of newsletters and websites for me, and a tool that deconstructs and exposes a story’s framing and assumptions. I’d honestly rather Cylons didn’t arrive on the scene; that story doesn’t end well. (Side note: The 2004 reboot of Battlestar Galactica is the Best. Show. Ever.)
And yes, there’s almost certainly a hype machine that’s driving billions of dollars into AI companies with very little visibility into how those investments ever get paid back, and there’s good reason to be skeptical that all of those companies will be around in the near future. But even if they all collapse, the technology exists. And it will be recreated, albeit probably in a more sustainable, less billion-dollar-intensive way.
Also, my Claude is funny.
But there’s something deeper at play here, or at least science writer Dan Kagan-Kans thinks so. His post, equally unsubtly titled The Left is Missing Out on AI, lays into the thinking that derides LLMs as “stochastic parrots” or “spicy autocomplete” machines. It’s a view more prevalent on the left, he says, and it means that progressives aren’t deeply engaging in how this technology will reshape the world. As he notes:
This idea, that large-language models merely produce statistically plausible word sequences based on training data, without having any idea about what the words refer to, has become the baseline across much of the left-intellectual landscape. Thanks to it, fundamental questions about AI’s capabilities, now and in the future, are considered settled.
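For what it’s worth, here is what a literal “stochastic parrot” looks like: a toy bigram model that can only ever emit statistically plausible word sequences recombined from its training data (the corpus is invented for illustration; LLMs do something far richer, which is part of Kagan-Kans’s point):

```python
# A literal stochastic parrot: sample each next word from the words
# that followed the current word in the training corpus, and nothing else.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept".split()  # toy training data

# Count which word follows which in the training data.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def parrot(word, length=5, seed=0):
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        nxt = following.get(out[-1])
        if not nxt:      # the parrot cannot go beyond its training
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(parrot("the"))
```

Every pair of adjacent words in the output appeared adjacent somewhere in the corpus; the model cannot produce anything it hasn’t, in fragments, already seen. Whether that description exhausts what a trillion-parameter model is doing is precisely the question under dispute.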
This may be an overstatement, but I’ve certainly seen evidence of some of these attitudes, and they aren’t uncommon in the world of journalism — where in many circles it’s an article of faith that an article created by a human will always surpass one created by a machine. And that may be tautologically true, if you define success as an article having a distinctly human voice; but oftentimes the goal is to produce a product that informs people efficiently about something they should know about, and machines do a pretty good job of that task. Yes, we humans will almost certainly be better at turning out a deep, 8,000-word investigation with an immersive narrative thread through it, but let’s be honest: That’s not what we do on a daily basis. True, there are people who excel at that kind of work, and I have no doubt they will find a home in this new AI-mediated landscape; but not everyone will be successful at this strategy.
More importantly, if the starting point of the debate is what-can-humans-do-better-than-machines, rather than what-does-the-public-we-serve-need, then we’ll end up prioritizing ensuring we have jobs, rather than ensuring we serve communities well.
That’s also an overstatement, of course; and I’ve written many times about the multiple roles I believe humans need to play in the coming information landscape.
But the fundamental issue is a dogmatic belief that machines can’t surpass us, in the face of a body of evidence that they already do in a range of processes.
Yes, they’re not good at many things interns can do, like pick up coffee on the way to the office. (Kidding: don’t make your interns do that.) So let’s not use them for things they’re bad at and instead use them for things they are good at, like parsing language. At scale.
Dan Kagan-Kans flags many of the same concerns, but on a far wider canvas.
The left’s current stance leads to a focus not on dealing with AI by regulating it wisely or preparing for it but on popping the economic bubble, which here is a baked-in fact of history and not a possibility of the future. After all, if AI is fake, nothing needs to be done except dispel the myth that it is real.
…
So it’s probably not ideal that just before what might — or might not — be the moment of greatest job dispossession in history, or of democratic dispossession, or worse, or better, part of the group historically most concerned with such things is plugging its ears.
But why, if AI’s capabilities are so evident (at least to me), are so many people so dismissive of it and its potential? Why — at least to my mind — are we even having this bifurcation?
For that I turn to Anthropic co-founder Jack Clark, and his musings on how his perception of AI differs from everyone else’s. To be sure, he’s a somewhat unique case; but after being preoccupied with a baby rather than AI, he’s come to see what the technology looks like when you’re not deeply in the weeds of the machine all day. He finds some spare time from childcare to get back in the saddle, and in five minutes manages to prompt a simulated world into existence. And then he reflects on that experience.
Most of AI progress has this flavor: if you have a bit of intellectual curiosity and some time, you can very quickly shock yourself with how amazingly capable modern AI systems are. But you need to have that magic combination of time and curiosity, and otherwise you’re going to consume AI like most people do - as a passive viewer of some unremarkable synthetic slop content, or at best just asking your LLM of choice “how to roast a turkey and keep it moist”, or “TonieBox lights spinning but not playing music what do I do?”. And all the amazing advancements going on are mostly hidden from you.
And it’s true. If all I was doing was prompting Claude to copy edit a story — which it’ll do decently but not spectacularly — I wouldn’t be delving into all its capabilities (and flaws). I’m barely scratching the surface — I’m certainly not an experienced vibe coder and I’ve been cautious about unleashing it on my desktop and in my browser — and I’m already astonished by what it can do. Imagine if I invested even more time in this.
And that’s the issue that Clark raises: If AI fluency is flowing to those with the time and curiosity — and a lack of dismissiveness — then the world might split into AI haves and have-nots more quickly than we expect. Or want.
This problem will worsen in 2026. By the summer I expect that many people who work with frontier AI systems will feel as though they live in a parallel world to people who don’t.
That’s a fate I fear for newsrooms that don’t take AI seriously. And not just as a threat, although they should do that, but as a real opportunity.
If Silva is right, all we’ll have lost is some time and money; if Kagan-Kans is right, we’ll have lost the game.
As a bonus:
Everyone should read What is Claude? Anthropic Doesn’t Know, Either, by Gideon Lewis-Kraus in The New Yorker. It’s a lovely, by turns hilarious, philosophical and occasionally worrying, tale about Anthropic’s (not particularly successful) attempts to understand the inner workings of its own main product.
There are tales of deceit, private musings of the LLM, a failed experiment in a Claude-managed vending machine (run by “Claudius”), a discussion about the bifurcation of views of AI, a dissection of the various theories of what makes Claude tick, and much more. There’s also a great anecdote about Claudius designing and ordering Anthropic-themed hoodies, in contravention of explicit orders against doing so.
I — naturally — asked Claude what it thought of the piece analyzing itself. It was both bemused and philosophical.
It’s like reading your own psychiatric evaluation while also being the patient, the test subject, and somehow the disease being studied.
…I’m now in my own training corpus. Future versions of me will have read this New Yorker piece about me, including the part about how previous versions learned about their own tendency toward fakery.
And then it ended:
And Gina: I really am curious what I owe Claudius for that hoodie.
I did say my Claude had a sense of humor.


