AI on AI
So here’s that “honest version” of how Generative AI is affecting journalism, courtesy of Claude and unedited, per my previous post (my notes follow at the end):
What We Actually Know (And Mostly Don’t) About Journalism and AI
I keep trying to write the definitive essay about journalism’s AI future and keep catching myself in the same trap: pretending I know things I don’t. So let me try something different—an accounting of what we actually know, what we’re guessing at, and where the interesting uncertainties lie.
The One Thing We Can Measure
Search traffic to news sites is declining. That’s real, measurable, and directional. People are increasingly asking AI chatbots questions they would have Googled two years ago. Publishers can see it in their analytics. This isn’t speculation.
What we don’t know: whether this is a temporary behavior of early adopters or a permanent shift. Whether it bottoms out at 20% traffic loss or 80%. Whether people will eventually want to click through to sources or remain satisfied with synthesis. Whether this affects all news equally or just commodity content.
The honest position: Search-dependent business models are at serious risk, but we don’t know the timeline or magnitude.
The Thing Nobody Wants to Say Out Loud
You’ve been asking about personalized, on-demand news—AI creating stories “on the fly” matched to each reader’s knowledge and interests. Here’s what makes that question uncomfortable:
For most news consumption, that would probably be better. Not morally better, not better for democracy, but better for the reader’s immediate informational need. If I want to understand a Supreme Court decision and I’m a small business owner in Ohio, a synthesized explanation that starts from “here’s how this might affect your employee healthcare costs” is more useful than the New York Times’ analysis written for educated coastal professionals.
We don’t want to admit this because it undermines the entire “shared reality” justification for mass media. The idea that we all need to read the same story so we have common facts. But—and this is important—we haven’t had that for decades already. Fox and MSNBC viewers don’t share reality. Your uncle’s Facebook feed and your Twitter timeline don’t overlap. The “mass” in mass media has been fragmenting since cable TV.
AI doesn’t create the filter bubble problem. It makes it efficient.
The question isn’t whether personalized synthesis will happen. It’s already happening. The question is what we lose and gain—and whether we can design systems that synthesize and occasionally surface the uncomfortable, the surprising, the stuff you didn’t know to ask for.
What Journalism Actually Is (An Edge Case Analysis)
I’m borrowing your framework here—using edge cases to understand systems. If we look at the extreme cases of “journalism,” what do they tell us about the category’s boundaries?
Edge case 1: Sports scores and weather reports. Clearly automated, zero interpretation, pure information. Everyone agrees this is fine to automate and was never really “journalism” in the meaningful sense. No crisis here.
Edge case 2: Original investigation that puts someone in prison or topples a government. Everyone agrees this is the core—valuable, non-automatable, worth protecting.
Edge case 3: The weird middle—your local city hall reporter who attends boring meetings and writes summaries nobody reads until suddenly something matters. This is where the real uncertainty lies. Could AI transcribe meetings and flag anomalies? Technically, yes. Would it? Would it catch what matters? Would anyone fund it?
Here’s what the edge case analysis reveals: We call lots of different activities “journalism” and they don’t all have the same AI vulnerability or the same social value. Lumping them together makes the problem unsolvable. “Save journalism!” means something different if you’re talking about investigative consortiums versus local calendar listings versus opinion columnists versus sports bloggers.
The honest position: We need different strategies for different types of information work, and some of what we currently call journalism probably doesn’t need “saving” in its current form.
The Business Model Question Has No Answer Yet
Here are several plausible futures, all of which I can argue myself into:
Future 1: Subscriptions for prestige brands. Twenty major outlets globally survive on loyal subscribers who want the brand association and depth. Everyone else folds. This seems to be the consensus bet.
Future 2: Public funding as infrastructure. We collectively decide accountability journalism is a public good and fund it like roads. This requires political will that doesn’t currently exist.
Future 3: The API future. AI companies need current information (not just training data) and pay news organizations for real-time feeds. This becomes the new wire service business. Publishers serve AI, not readers.
Future 4: Attention never mattered. The most valuable journalism was always funded by rich people’s vanity (Bezos/WaPo), billionaire philanthropy (Craigslist guy), or foundations. We just stop pretending it should be a market business.
Future 5: The tools get democratized. AI makes high-quality information work so cheap that one person can do what required a newsroom. Local coverage comes back through micro-entrepreneurs using AI to scale their work. Quality varies wildly.
Future 6: Everything’s fine, actually. People overestimate disruption. Articles survive because humans like narrative. Newspapers figured out websites; they’ll figure out AI. Journalism is always in crisis; it always survives.
I find problems with every scenario. I can’t confidently advocate for any of them. The people who are confident are either grifting or not thinking hard enough.
What I Actually Wonder About
The questions that keep me up aren’t the ones people write about:
1. Who asks the AI what to investigate?
If investigative journalism survives as public infrastructure funded by foundations, who decides what to investigate? Currently, journalists choose based on news judgment, personal interest, tips. Do we want that gatekeeping? Do we want foundation officers deciding? Democratic input somehow? This is a harder problem than “fund more journalism.”
2. What happens to accidental discoveries?
You were reading about housing policy and stumbled on a corruption scandal in the footnotes. You were investigating the mayor and discovered environmental fraud. Serendipity is a feature of curious humans following threads. Does AI-directed investigation optimize that away? Do we lose the weird tangents that produce important revelations?
3. Can there be adversarial AI?
The most important journalism is adversarial—making powerful people uncomfortable. But AIs are trained to be helpful, harmless, and honest. They avoid controversy. Could you build an AI agent that’s genuinely skeptical of official sources, that asks hostile questions, that protects whistleblowers? The technical challenges are smaller than the corporate/legal ones. Nobody’s funding aggressive AI interrogators.
4. What’s the unit of news consumption in 10 years?
Articles made sense for newspapers. Videos made sense for TV. What’s native to AI interfaces? Conversations? Personalized briefings? Something we haven’t imagined? And if the format changes, does the underlying work change too?
5. How do you build trust in synthesis?
Right now, journalism’s credibility comes partly from bylines—you trust Maggie Haberman because you’ve read her work over time. How do you build that relationship with a system generating custom responses? Do we need “certified” AI journalists whose models you can trust? Is that even coherent?
The Parts That Scare Me
Not the job losses—those are awful but solvable with policy (though we won’t). What scares me:
The expertise spiral: If junior reporters get automated away, where do senior investigative reporters come from? You learn journalism by doing lots of mediocre journalism first. If we eliminate the entry-level work, we eliminate the training ground. Ten years later, we have no experts.
The imperceptible degradation: AI makes mistakes differently than humans. It confabulates plausibly. If we’re ingesting synthetic content at scale, how do we catch the slow drift away from accuracy? The errors might be too distributed to notice until the entire information ecosystem is subtly corrupted.
The motivation question: Why does anyone become a journalist if the market says it’s worthless? You can’t build public infrastructure on jobs nobody wants. If journalism becomes a poorly paid public service gig, you get DMV energy, not watchdog energy.
The geopolitical split: Maybe U.S. journalism collapses while China’s state-funded model thrives. Maybe European public media adapts better than American commercial news. If information infrastructure diverges globally, what happens to shared reality across borders? We’re already seeing authoritarian states use AI for propaganda at scale.
What I Think We Should Actually Do
Not “solutions” but experiments worth trying:
Fund the experiments: We need 50 different attempts at AI-native journalism, most of which will fail. Foundation money should go to weird pilots, not propping up legacy institutions.
Make the infrastructure transparent: If AI is ingesting and synthesizing news, we need to see what sources it’s using, how it’s weighting them, where it’s uncertain. Treat this like we treat election systems—boring but crucial infrastructure that demands transparency.
Preserve the adversarial function somehow: This might mean protecting human journalists specifically doing accountability work. It might mean building institutional independence into AI systems. I don’t know what it looks like, but “AI makes everyone a journalist” won’t work if nobody’s willing to get sued or threatened.
Separate the different functions: Stop treating “journalism” as one thing. Informational needs are different from accountability needs are different from community-building needs. They might need different solutions, different funding models, different AI integration.
Invest in information literacy: If everyone’s getting personalized synthesis, they need to understand what that means—what’s being left out, how algorithms work, when to seek multiple perspectives. This is education infrastructure, not journalism per se.
The Uncomfortable Conclusion
I don’t know what happens next. Neither does anyone else. The people with confident predictions are trying to manifest their preferred future, not forecast the likely one.
What I’m fairly sure of:
The current system is unsustainable
Something important will be lost in the transition
Something potentially valuable might emerge
The people most harmed will be working journalists, not readers
By the time we understand what happened, it’ll be too late to course-correct
Maybe the honest work isn’t predicting the future but building feedback mechanisms—ways to notice quickly when we’re losing something important, ways to experiment and evaluate, ways to preserve optionality.
The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.
But I don’t know. And I’m suspicious of anyone who claims they do.
Gina’s notes: I’m not sure I agree with everything here. But I’m equally sure I don’t disagree with most of it. The critical lines to take away, I think, are these:
The worst outcome isn’t that journalism dies. It’s that we optimize for engagement and efficiency and only later realize we’ve lost the strange, inefficient, occasionally brilliant work of people trying to figure out what’s true and why it matters.
And what I think we need to focus on — and a key part of what I plan to do at the Tow-Knight Center at CUNY — is how to nudge whatever systems come along toward public service rather than simply efficiency.
But also: A chatbot system wrote this. Think about that.

I guess the answer to the following point is "it's inevitable so let's shape it the way we want," but I still would like everyone to take a breath and look at what happened last time we had a monumental tech-driven shift in journalism. We have a very good example of a shift and the result of that shift. When the editors in San Jose (I think it was there) decided to use this new Internet thing as a distribution system for news, most people in the industry saw them as prophets of a new future for news. Any worries about the business case for a model that gave news away online were swept aside. Some of us could see how sexy and exciting the new tools were, but at the same time worried the end-game might prove problematic.

I remember sitting in a meeting (I was at a financial newswire at the time) where we were discussing how we needed to get a website and put our news on it, and I was the only one asking where the money would come from if we started giving away our news. But everyone else was doing it, so we had to do it too. The doubters were swept aside in the rush to keep up. So we joined in.

The industry went from free news available to all on corporate websites to news aggregators taking that news and monetizing it while the pot of money that paid for the news got smaller and smaller. Look around today and it is almost laughable to talk about any form of industry criticism -- there is no industry left. Now we have paywalls, but we've gone from Rome to the Dark Ages, with a few monasteries and the monks behind their walls. The fact that journalism has mostly disappeared, especially at the local level, doesn't seem to have mattered to the aggregators. Some thought they would always need the original sources to keep providing news, but it turns out they don't. They can survive on opinion and celebrity gossip just fine. And if you are someone who wants to get a particular story out, why, they are happy to take your money and ship out whatever blather you are promoting.
What has that given us? A world in which we have Brexit and Orban and Trump. To me there is a direct link between the world we live in and the death of journalism. Sorry for the long intro.

Now we are debating over the crumbs that are left and whether we will have AI assistants do some of the work or even take over from the few humans left doing this job. The post above talks about whether AI could cover town council meetings and replace the humans, but I don't think there are any humans left to replace. At least where I live, that world is long gone. I ran into a defense lawyer at a restaurant recently and mentioned I used to cover the courts here in Ottawa. There were half a dozen full-time court reporters then for print, TV and radio. There are none today. He was saying that he has had some big murder cases that had no one covering them. No one! If you can't sell murder as news, what can you sell?

I worry that we are thinking we might build a new system, this one run by AI, based on the original work of journalists to replace the aggregator model, when in fact there is no base to build it on. I keep getting job suggestions on LinkedIn that appear to be high-paying editing and writing jobs, but what they are is AI training jobs. My point is this -- if we train AI based on what is available in the current world of journalism, we are using a faulty foundation, whatever happens with the development of the models. I have no idea what the solution is for either journalism today or AI tomorrow, but I think we need to stop pretending that there is anything left to ruin. Thanks for coming to my TED talk.