Money Matters
What business are we in, if not the content business?
We definitely make content, and that’s central to how we see ourselves and our place in the world. But that’s not how we paid the bills historically.
And what if, in an AI age, the way we create value is through what we do, not what we make?
First, some quick history: Decades (literally) ago, when I was running The Asian Wall Street Journal, I met with a pair of BCG consultants to talk about the then-newfangled internet and what it might do to (and for) companies and the economy. I listened to their analysis, then expressed relief that I was safely ensconced in the content business.
To which I got something between a condescending and pitying look. “You’re not in the content business,” one of them corrected me. “You’re in the eyeball business.”
Ouch. I bring up that encounter, not just because they were right — which they were — but as a reminder that clarity about what we create and derive value from is critical, especially when the world is being remade around you. And so yes, back then the Asian Journal did get about 20% of its revenues from people paying for subscriptions — our content business — but the vast bulk of our money came from renting out our readers’ eyeballs (and attention) to advertisers. I was justly proud of the newsroom’s work — our stories — but the truth was that the money we made directly from that output only covered a fraction of our costs.
The advent of the internet, as the consultants correctly predicted, changed a lot of those dynamics: it shifted the balance of power away from publishers and towards advertisers; it ripped up the newspaper “bundle” and the internal cross-subsidies between content types; and while it increased our reach, it lowered barriers to competition. These days, newsrooms probably get more than half their revenue from subscriptions, but that’s as much a function of falling advertising dollars as rising subscription income.
And now Gen AI threatens to tear it all up again.
So what business are we in?
It’s tempting to say, once again, that we’re in the content business. That’s the view that drives the passionate discussions about getting paid by the AI platforms, either to license our news for training LLMs, or for access to it as a way to ensure models have a source of current information. And I’m all for extracting more money from AI companies. But neither path seems to me a sustainable source of revenue by itself.
News content as training data seems an edge case at best (at worst, a payoff to keep us from complaining too much). Training is a now-and-then activity and isn’t intended to keep models current; it’s more concerned with formats and structures. Giving LLMs access to news to ground their responses, on the other hand, is a real thing. It’s functionally what Perplexity does when you ask it a question: It does a search, finds appropriate sites, and summarizes what it finds. So if you want to know what happened in the Gulf yesterday, and (say) Reuters has the best information on that, then Reuters can and should charge for that access. That could be a real source of ongoing and steady revenue.
Except. Much of what news organizations publish is commodity information; Reuters may well have some truly unique information — or analysis — about the Gulf, but it’s likely a fraction of the tens of thousands of words it produces every day. And for the information that’s more or less common, AI companies can access it from a plethora of sources (other news orgs, government sources, NGOs, independent journalists, etc) — which means, as a practical matter, that no one really has pricing power. And that means it’s unlikely that any license fee will cover anything more than a fraction of the cost of creating the information.
(To be sure, there will be people and organizations who can make a living out of unique information or analysis or voice. Those are the people who will occupy what Tony Haile calls “archipelagos of trust.” Think Joe Rogan, Ezra Klein, Fareed Zakaria. But it’s hard to scale those operations.)
Great reporting — deep investigations, impactful scoops — does command value, but often also costs a lot to produce. And much as we’d love to have more of it, those stories aren’t, by and large, daily occurrences at most publications. And, as Francesco Marconi noted in a much-discussed LinkedIn post, it’s nearly impossible to protect that information from being aggregated, matched or rewritten once it’s published — again, undercutting pricing power.
One more factor: News, unlike music, has little staying power; you might want to listen to the same Taylor Swift song dozens of times (well, I do), but it’s unlikely you want to read the same New York Times story about Donald Trump’s taxes more than once. It’s what made the micropayments model for news fraught; it’s hard to collect significant revenue when your customers are essentially one-time purchasers — at least compared to the relatively high fixed costs of news creation.
That’s not to say content — our reporting and stories — is unimportant. But it is to say it’s unlikely to be the revenue generator that we would like it to be. And, to be clear, it rarely was.
Back in the news-on-dead-trees era, the value we extracted was from the bundle of stories (the print paper) that aggregated the eyeballs we could rent. In the internet era, we sold the “container” — the individual story — that we tried to get into as many hands as possible, both to build subscription and advertising revenue. In the AI age, stories will become far less permanent fixtures; the information they contain will be extracted, combined with other facts, and reconstituted into new formats and products.
The bundle is gone, and the container is going.
So if content itself will cover only part of a news organization’s expenses, where will the rest of the money come from?
Perhaps not from what we produce, but what we do for people. From an earlier post:
If you’re covering a school system, you might not need world-beating scoops; you might just need to know enough about the inner workings of those schools — which schools are underfunded, which principals are underperforming, which special programs are being expanded — to create personalized stories for parents with children in that school system: what the cut in budgets means for their child’s cherished art program, for example.
That gives them content they want, and can’t get anywhere else — not because the information isn’t available elsewhere, but because the publisher knows what each reader is looking for.
In other words, the real value for publishers may be in truly understanding and engaging with a community, and reporting news about things that matter to them, rather than competing for a broad audience.
In this world, it isn’t our content, per se, that we derive value from; it’s how we use it to meet the specific — individual! — needs of the communities we serve. And that requires us to really understand their needs, at a personal level — to create value mainly from truly knowing them, rather than from writing for them.
Or, more boldly — or controversially — what if we took that idea of service a step further? Instead of just reporting on what we think a community cares about, and providing personalized news to them, what if we helped them find and understand information they want, when they want it? What if we could — and I grant I’m only just starting to explore the technology here — build systems that were avatars of journalist-researchers who could help users navigate the mass of data, news and information on specific topics?
In other words, what if, in effect, we tried to create value from the journalistic process? From the particular ways we’ve learned to question assertions, understand context, cross-reference information? What if we could embed that process into a system?
Imagine a trusted news site, say on education, offering personalized agents to their readers to help them dig through minutes of school board meetings, education budgets, tax proposals and the like, and married to the organization’s own (verified) reporting, to get answers to reader-posed queries. A parent could ask: How does the new proposed school budget affect the art classes my child loves?
And the system could interrogate school board minutes, dive into the budget, look at the site’s own reporting, and then engage the parent in a conversation about how best to understand all that information. It could ask what school the child went to; it might note a planned cut in art teachers but flag a resolution to bring on a part-time coordinator. Would it be able to come up with a pat story with simple answers? Likely not — but a human researcher probably won’t be able to either. And that’s not really the goal.
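To make that a little more concrete, here is a minimal sketch of the retrieval step such an agent would need: score a small pile of source documents (board minutes, budget proposals, the site’s own reporting) against a parent’s question, then assemble the best matches into a grounded context an LLM could reason over. Everything here — the document names, their contents, the keyword-overlap scoring — is hypothetical and deliberately toy-sized; a real system would use proper embeddings and a vector store.

```python
# Toy retrieval sketch for a "journalist-researcher" agent.
# All documents and identifiers below are invented for illustration.
import re
from collections import Counter

DOCUMENTS = {
    "board-minutes-2025-03": "The board voted to reduce art teacher positions from five to three districtwide.",
    "budget-proposal-fy26": "Proposed budget cuts arts funding by 12 percent but adds a part-time arts coordinator.",
    "our-reporting-2025-04": "Parents at Lincoln Elementary protested the planned cuts to the school's art program.",
    "press-release-mayor": "The mayor announced a new downtown bike lane initiative for the spring.",
}

def tokenize(text: str) -> Counter:
    """Lowercase and split into word counts (a stand-in for real embeddings)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, docs: dict, k: int = 2) -> list:
    """Return the ids of the k documents sharing the most words with the question."""
    q = tokenize(question)
    scored = sorted(docs, key=lambda d: sum((tokenize(docs[d]) & q).values()), reverse=True)
    return scored[:k]

def build_context(question: str, docs: dict) -> str:
    """Label each retrieved source by origin, so the answer can cite its grounding."""
    sources = "\n".join(f"[{d}] {docs[d]}" for d in retrieve(question, docs))
    return f"Question: {question}\nSources:\n{sources}"
```

Even this crude overlap scoring surfaces the reporting and board minutes for a question like “How does the proposed budget affect my child’s art program?” while ignoring the mayor’s bike-lane press release — the point being that the organization’s own verified documents, not the open web, are what the agent reasons over.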
What it could do — in theory — is help the parent understand the forces at play and possible outcomes, not unlike if they could engage a reporter in a one-on-one conversation. The goal isn’t to build a magic answer machine, but a trusted guide to how to think about and interrogate information.
It’s true, I haven’t actually tried to build this — yet. But we’ve experimented with similar interfaces — building specialized knowledge to guide users through a RAG (retrieval-augmented generation) system — for another project at CUNY (more on this later). And I’ve been playing with extending my deconstruction bot to see if it could function more effectively as a news literacy tool; how much work would it take to turn it loose on not just news stories, but press releases, minutes and other forms of information? It could help readers train a journalistically skeptical eye on the mayor’s latest pronouncements and the school board’s optimistic press release.
Will any of this pay the bills? I honestly don’t know; we’re entering uncharted territory. But I’d argue these are paths we should explore. In a world where machines can create content at scale (and are getting better at it all the time), we should be leaning into the areas where we have real advantages: in what we do, how we do it, and who we are — trusted guides who have our communities’ interests front and center.
Gen AI presents an existential business challenge to us; but there are opportunities, too, if we’re clear-eyed about what value we create.
And what business we’re in.