Dispatches From The Future
Objects in the front windscreen may be nearer than they appear — with apologies to Meat Loaf
Sometimes, when you’re distracted and not really looking, the future wanders up and slaps you around the head with a two-by-four. That happened late last week when I opened my inbox and first read a delightful piece by Platformer Fellow Ella Markianos about her efforts to replace herself with a bot, and then a slightly more alarming one in Wired about how AI bots are becoming a larger share of internet traffic.
Both are, in their own way, signals about how close our coming reality is to becoming our current reality. Or, in some ways, is already here. As William Gibson once said, “the future is already here — it’s just not very evenly distributed.”
Ella’s piece is lively and funny — go ahead and have a look; I’ll wait — and a reminder that good human writing can still be a joy to read, and regularly trumps LLM-generated prose. But as the story notes, the bots are getting better all the time.
The setup is simple: She tries to program Claude to replicate her work explaining a story and writing what key people are saying about it. The first day is a disaster, not least because she runs out of tokens. The second day is better, but Claude — or “Claudella,” as she dubs her prospective replacement — continues to disappoint, writing in a verbose style, occasionally hallucinating, and getting confused when prompts proliferate.
Still, Ella isn’t taking a victory lap:
I could view the Claudella experiment through the lens of human exceptionalism and say that my bot is missing the style and humor that can only spring forth from the human soul. I might say that it occasionally hallucinates because it lacks “real intelligence.” But I think a lot of what’s missing will simply be fixed by improvements in what the AI companies call “instruction following.”
It’s the next day that things really change.
Anthropic drops a new model, Opus 4.6, and Ella hooks it up. The results are immediate: This new version of Claudella writes better, is more concise, and sounds more like Ella. As she notes:
…there was something unsettling about feeling the AI frontier advance under my feet just a few days into this experiment.
No kidding. Ella is clear-eyed about what this means.
In important ways, Claudella can do my job.
Although she then adds:
But it also has clear shortcomings. In particular, it has trouble understanding which parts of a style are important to replicate. It also struggles to respond to editor feedback. And when asked to write about AI, the Claude-based model shows a notably favorable bias toward Anthropic.
I suspect this conclusion will serve as a Rorschach test of sorts; some will see a world where bots still have significant shortcomings and aren’t ready to shove us out of the way yet. Others will look at the three-day progression of a home-coded agent and wonder what could have been done with more time, attention, and whatever the next iteration of Claude is.
I’m in the second camp, and I think we all should be. Not that we should be afraid for our jobs, per se; but that we should be preparing to figure out how to adjust to these new capabilities, how to take advantage of them, and how to shape them in ways that serve more people better.
The second story, the Wired one, looks at how the other side of the information equation — not who’s writing stories, but who’s reading them — is changing. It’s not just that bots may be producing stories; bots will be consuming them too.
The piece flags a key statistic from TollBit: AI bot traffic is rising sharply and becoming a significant share of web traffic. At the start of last year, there was about one AI bot visit to a site for every 200 human visits; by the end of the year, it was one for every 31.
It’s sort of the reverse of the Google Zero trend; here, AI is driving more visits to websites, not fewer. And maybe that’s the sort of thing publishers might celebrate, briefly — until the broader outlines of what it actually means hit home.
The bots aren’t scraping to train models so much as retrieving data to use in answers to queries — in effect, performing real-time retrieval-augmented generation over verified information. They’re pulling in information to answer specific questions for their users. More importantly, they’re of no use to advertisers and immune to appeals to subscribe. They’re not “engaging” in any meaningful way — or at least, not in the way we think of human engagement on a site.
As my colleague Adiel Kaplan wrote the other day, our old metrics — web traffic, open rates — are becoming increasingly meaningless in a world where AI agents scour every information surface and hoover up facts to analyze, reconstitute, and reformat for us. Her newsletter of newsletters has become the primary reader of the emails we subscribe to; we get the downstream digest of its work. In this world, there’s no reason not to subscribe to every newsletter available if a bot can read them all and tell me what I might or might not be interested in. In the same way, the future is likely to be full of agents reading every story on a news website and sifting for the information that interests me. As Wired notes:
“The majority of the internet is going to be bot traffic in the future,” says Toshit Panigrahi, cofounder and CEO of TollBit, a company that tracks web-scraping activity and published the new report. “It’s not just a copyright problem, there is a new visitor emerging on the internet.”
That’s a world where we may — in theory — all be better informed; but at least right now, that’s not a world where there’s a sustainable business model for the providers of all that information. And regardless, journalism — and public interest information — in that world will look very different from its current incarnation. We should be preparing for the future.
Because it’s very nearly here.


