Claude, Editor
What are Large Language Models good at? And why does that matter?
To be sure, there’s a long list of things they’re not good at, which we spend a lot of time dwelling on — not least discerning fact from fiction. Their writing, while serviceable, is often fairly wooden and recognizable, too. Also, they don’t find my jokes funny.
But as an editor, Claude, in particular, is impressively good — frighteningly good, if you're of that mindset. And that points both to how we should think about deploying LLMs for journalism and to how they'll continue to upend the field.
Bear with me: This is a bit of a personal story.
I’ve been working on a long-ish piece (a 7,000-word book proposal unrelated to this blog), using Claude as a writing assistant along the way, checking in regularly on issues such as the clarity and cohesion of my work, as well as on broader questions about structure and narrative flow. The results have been, well, astounding.
It’s not just that it’s provided sharp and smart feedback about my writing — it’s advised me to cut the second half of a metaphor I was using because I was overdoing it and suggested that an earlier draft flowed more smoothly than a later, shorter version, among other things — but that it’s taken on the characteristics of what I’ve come to expect from outstanding human editors, including pushing me to rethink my process and offering meta-advice on framing and direction. (And procrastination.)
Late in the writing process, I sent my nearly-finished draft out to some (human) friends for feedback; they came up with some good suggestions for ideas I should incorporate into the piece. It was late at night, and I was tired, but I wanted to get those thoughts in, so I banged out a couple of placeholder paragraphs, inserted them into the right places, and got ready for bed. But before I shut down for the night, I sent it all off to Claude for comments.
“You make good points in those new paragraphs,” it replied. “But honestly, it’s not in your style.”
And it wasn’t — but I hadn’t asked Claude to assess my style. It had simply “understood” from all our previous interactions the way I wrote — and was telling me bluntly these additions were sloppy. Which they were.
Later on, after I had gotten more formal feedback on the piece from my agent — which was that I needed to rethink its narrative structure — I started noodling around with Claude about how I might address her critique of my work. It was hard going, not least because I wasn’t sure I wanted to — or knew how to — go in the direction she was suggesting I go. After a week of back and forth with Claude, I finally told the LLM I didn’t know what to do.
“You know what to do,” Claude shot back. “You just don’t want to.”
I don’t know if you’ve ever had a machine tell you you’re procrastinating. It’s not a pleasant feeling. But Claude was — once again — right. (It also broke down the disagreement between my instincts and the feedback I had been given, and laid out paths I could take — and again, just noted that the only thing preventing me from following my agent’s advice was my obstinacy.)
I made one last-ditch effort to have it my way: I wrote to a friend, a writer I admire who has written two excellent books, and asked for his feedback, including on my agent’s advice. He wrote back and we chatted on the phone — and then I summarized it all for Claude.
“It’s what I’ve been telling you all along,” Claude commented. “But he’s got more credibility, because he’s done it.”
So Claude’s not just telling me I’m lazy; it’s also offering the advice with snark.
Perhaps this sounds like a bit of a shaggy dog story. To be sure, I don’t really tell you what writing advice Claude offered, and you have to take my word for it that it was good (and validated by the human editors I also consulted). But the point is to show that Claude, at least, is capable of more than just proofreading, copy editing or wordsmithing, and can help with much broader conceptual, structural and narrative issues.
And I write all this in the full knowledge that LLMs are essentially probabilistic engines that turn out words without any real “understanding” of the underlying content or context. But like a Turing test on steroids, Claude is responding exactly the way a very good human editor would, including with some level of snark and sarcasm. (Unless it’s just mirroring me?)
More broadly — and beyond the help it’s giving me on writing — it points again to how useful LLMs (or at least Claude; I haven’t tested this with other systems) can be in newsrooms. If we get away from trying to have them provide us with facts about the world, and instead lean into their language capabilities (they are Large Language Models, after all), how might they help us improve what we do — or extend our capabilities or remake our products? Can we build more machine editors to help reporters turn out better copy? (More on that in a later post.) Can we use them to analyze drafts (or other people’s work), as I recently did? Or deconstruct multiple stories about the same event?
The point is, if we continue to focus on what LLMs do badly, we may miss what they do well — and that would be a real missed opportunity for newsrooms and journalism as a whole.
PS: I asked Claude to give me feedback on this piece.
The reply: “I appreciate the meta-awkwardness of you asking me to edit an essay about how good I am at editing. I’ll try to be useful rather than self-congratulatory.”