The Blind Spot Machine
Who is the news for?
Or rather, when we make decisions about whether something is a story or not, or when we decide that one angle on a story matters more than another — in other words, when we exercise “news judgment” — who do we imagine are our readers? And how varied are they?
And how can we serve more of them better?
Back in the day, when we printed the news on dead trees and trucked it to our readers, we were aiming to reach as broad an audience as possible — so we looked for issues that would be of interest to the largest subsection of our readership and often framed stories from their point of view. That made sense commercially, even if it ended up underserving minorities and compressing perspectives.
(I have a lot of thoughts about how we’ve consistently failed to see so many members of our communities, which I’ll get to at some point, but this post by Delano Massey covers some of them.)
But we don’t do dead trees any more, and while it’s still critical to have a clear sense of the audience you serve, we’re in a much better position to think more broadly about what’s newsworthy, and for which communities. And how to serve more of them.
One of the core assumptions I have about the coming — or already here — age of AI-intermediated news is that audiences will come to expect to be superserved with personalized news, and that we’ll have to compete in that space. I’ve been focusing on how we could do that by deepening our engagement with them and understanding — at an individual level — what their needs and backgrounds are.
I still think we need to do that, but I recognize that’s a heavy lift, both technologically and logistically. And it struck me, when I was talking to another AI journalism nerd, that there is an in-between step that’s easier to take.
Enter the army of interns again. The last time we met them, they were busy reading Semafor’s news budget and picking out stories that might be of interest to different parts of the newsroom. If they do that well — and they can — why can’t they trawl through documents, story ideas and drafts, newsroom budgets and the like and suggest angles in that same information that might be of interest to selected subsets of that publication’s audience?
If a business desk reporter is writing about a businessman’s financial woes, can a bot programmed to be an avatar of the arts-focused readership of that publication flag a question about whether that affects the funding of the museum he’s on the board of? That could open up a news story that might have been ignored in the past. Can a bot primed to think about particular neighborhoods ask the writer of a story on a new highway about the impact on minority communities near the planned route, and not just the wealthy ones that are more likely to protest?
(Or, to pick on a particular bugbear of mine: Could stories about restrictions on gender care for minors be oriented to reflect the views of the children and their parents who want that care? Too many of those stories are framed around the possible harm to minors who make the “wrong” decision to access care; very few center the children who suffer harm because they can’t get the care they — and their parents — want. Those stories embed a notion of “normal” that unconsciously takes the point of view of non-trans children and their parents.)
Those bots could serve two purposes: One is simply to remind reporters to think of the broader communities they’re writing for, which would be a good thing by itself. The other is to help generate fresh stories that matter to readers but might otherwise be overlooked. (True, we’ll need more reporters to write more stories; but this post is less about resources and more about blind spots.)
How hard would it be to program those avatars? I haven’t tried yet (hey, I’m busy!), so I don’t have any real-world experience. But we already have avatars for other purposes, so we know it can be done. These wouldn’t be perfect; they’d be products of our assumptions about what our readers were interested in, and would doubtless incorporate some stereotypes about their interests and disinterests.
But we already do that; we just don’t put our biases — unconscious or otherwise — down in writing. This at least has the benefit of making us articulate what we think our readers care about, and allowing the disparate communities (or avatars of them) to advocate for their interests before we publish any stories.
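To make this concrete: here is a rough sketch of what “putting our biases down in writing” might look like in practice — a small function that composes a system prompt for an audience avatar from a newsroom’s stated assumptions about one community. Everything here (the function name, the fields, the wording) is hypothetical, not an existing tool; the point is only that the assumptions become explicit, inspectable text.

```python
# Hypothetical sketch of an "audience avatar" prompt builder.
# All names and fields are assumptions for illustration, not a real API.

def build_avatar_prompt(community: str, interests: list[str], caveat: str) -> str:
    """Compose a system prompt asking an LLM to read a story draft
    as a member of one audience community and flag missing angles."""
    interest_list = "\n".join(f"- {topic}" for topic in interests)
    return (
        f"You are reading a draft news story on behalf of {community} readers.\n"
        f"Topics this community tends to care about:\n{interest_list}\n"
        f"Caveat: {caveat}\n"
        "Suggest angles or questions the draft overlooks for this community. "
        "If nothing relevant is missing, say so."
    )

# Example: the arts-focused avatar from the business-story scenario above.
prompt = build_avatar_prompt(
    community="arts-focused",
    interests=["museum funding", "board membership of cultural institutions"],
    caveat=(
        "These interests are the newsroom's stated assumptions; "
        "keep testing them against real reader feedback."
    ),
)
```

The resulting string would be passed to whatever model the newsroom uses, alongside the draft. The useful property isn’t the code — it’s that the community description, the interest list, and the caveat all sit in version-controllable text that the disparate communities (or their advocates) can read and push back on.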
Think of it as a blind spot machine.
None of this will be easy. Just articulating which communities we want to represent as avatars will require some soul searching; and then trying to program them will be even harder — how do you compress the viewpoints and interests of an entire group of people into a prompt? And we’d want to keep testing the results against real-world feedback from the people we’re trying to serve.
But it seems like a project worth trying, at least with a few avatars. We may learn they aren’t useful at all. Or we may learn we’ve failed to serve so many of our readers for so long. We may learn that the “news judgment” that we’ve all internalized was as much a product of the news product and the technology of the day as it was a reflection of the greater calling of our mission. And we may learn we can do news better for more people.
Meanwhile, I hope everyone is having a good holiday season, at least those who mark the holiday season. I plan to eat a lot. (Also, catch up on work, which is really what the holiday season is for.)


