Why Writers Will Outlast AI

Large language models can simulate voices, but they cannot share in the moral life that gives writing its meaning.

March 12, 2026

My brother, who is one of the smartest AI evangelists I know, sent me this post from Kevin Roose of the New York Times this morning.

Kevin Roose (@kevinroose), Mar 10, 2026:

"We made a blind taste test to see whether NYT readers prefer human writing or AI writing. 86,000 people have taken it so far, and the results are fascinating. Overall, 54% of quiz-takers prefer AI. A real moment!"

nytimes.com: Who's a Better Writer: A.I. or Humans? Take Our Quiz.

Roose reports that in many cases the essays generated by AI were rated as equal to—or even better than—those written by human authors. (For the record, I took the test and preferred the human-written work every single time.)

I read the post once, then again, and almost immediately felt an instinctive resistance that I could not at first translate into language. I have learned over time that this sensation—the feeling that something is wrong even before you know precisely what it is—often signals the beginning of an essay. So I began asking myself what exactly it was that bothered me.

Part of the answer, naturally, is personal. I care about writing, perhaps more than most people do. I have spent years attending to the rhythm of sentences, to the inscrutable and magnificent process by which meaning gathers itself and comes alive across a page. I have put a lot of time and effort into refining my own writing—trying on borrowed styles and occasionally taking a risk with a new word, only to realize it doesn't suit my voice.

When someone suggests that a machine can produce writing "as good as" human writing, a writer cannot help feeling, at least for a moment, like a craftsman who has spent decades shaping objects by hand and suddenly discovers that a factory has appeared across the street capable of manufacturing something that looks remarkably similar. Yet the irritation I feel is not simply professional defensiveness (I don't even make any money from my writing).

I should say, in the interest of honesty, that I do use AI for a narrow set of highly specific tasks. I rely on it for coding and data analysis, where it can be extraordinarily helpful, and occasionally for summarizing dense research papers. I also use it to learn about new topics, particularly technical ones. For these purposes the technology can be genuinely useful. What I do not use it for—despite the endless demonstrations of its capabilities—is writing itself.

The reason is not ideological so much as aesthetic. I have never found AI-generated prose particularly satisfying to read. Its sentences tend to arrive in neat, evenly spaced clusters—short declarations that move with the predictable rhythm of an instruction manual or a corporate training video. It favors rhetorical formulas that begin to feel annoyingly familiar after a while: "It's not X, it's Y." "In today's rapidly changing world." "Ultimately, what matters most."

There is, I admit, a small personal difficulty tangled up in all this. Long before the silicon parakeets started coughing up paragraphs on demand, I had already developed a fondness for the em dash. It's a lovely piece of punctuation—elastic and slightly theatrical, a way of physically cracking a sentence open and letting something unexpected crawl out.

Recently my brother informed me that the em dash has become one of the little tells of machine prose. Apparently if you see enough of them in a paragraph you're supposed to picture a server farm somewhere in Nevada quietly hallucinating the English language. The dash, it seems, has been annexed by the bots. But I'm not giving it back!

If AI has decided to squat inside my punctuation—fine. Let it. I refuse to surrender a perfectly good mark of the written language because some algorithmic sludge factory has started using it too.

Still, the question raised by my brother's message remains: If readers genuinely cannot distinguish between human writing and AI writing—or if they even prefer the latter—what exactly does that mean for writers? And perhaps more importantly, why does the claim feel so wrong?

The argument presented in these discussions is usually straightforward. Writing, after all, appears to be nothing more than the arrangement of words into meaningful sequences. If a machine can arrange those words as effectively as a human being, then the human contribution must have been overstated. Quality is determined by the reader's response, and if readers respond more positively to machine-generated essays, then the machine has succeeded.

This reasoning has the advantage of appearing empirical because it gestures toward experiments, surveys, and quantifiable results. Yet it rests on a surprisingly narrow assumption: that the aesthetic experience of reading is determined solely by the words themselves. But anyone who has spent time thinking about art understands that aesthetic experience rarely operates in so simple a way.

A reader's encounter with a text is shaped by a constellation of influences, some conscious and others subliminal. Consider, for instance, the logic of advertising, which functions almost entirely on this principle. A perfume advertisement does not attempt to persuade consumers by describing the chemical composition of its fragrance. Instead it surrounds the product with a carefully constructed set of associations—a particular style of packaging, for example, that suggests romance or danger or elegance. The viewer's perception of the scent has already been altered before the bottle is opened.

Art works in a similar register. A painting attributed to an unknown student does not carry the same aura as one attributed to Rembrandt, even if the two canvases are visually similar. The knowledge of authorship alters the experience of seeing. Writing operates under the same conditions.

When we read a sentence written by another human being, we are not merely processing information; we are, whether we realize it or not, encountering another consciousness. We sense that behind the words there exists a person who has lived through events, accumulated memories, formed beliefs that may resemble our own or diverge from them in intriguing ways. The sentence becomes more than language. It becomes evidence of a mind. This recognition reshapes the reading experience. The words seem to carry greater weight because they are connected to a life.

If those identical words were known to have been generated by a machine, something in the experience changes, even if the reader cannot immediately articulate what has changed. The text remains coherent, the sentences remain grammatical, the argument may even remain persuasive, but the sense of encounter disappears. The exchange becomes informational rather than relational.

The distinction might appear abstract at first glance, but it becomes clearer when we consider other forms of human interaction.

Some time ago I wrote about why I remain skeptical that chatbots could ever truly replace human therapists, despite the impressive progress of conversational AI. The issue, as I see it, is not merely technical. Therapy is not simply a matter of delivering well-structured advice; a patient does not sit across from a therapist merely to receive sentences about coping mechanisms or cognitive reframing strategies. What matters (often more than the specific content of those sentences) is the knowledge that another person is listening.

This intuition finds a surprisingly precise articulation in the work of philosopher Stephen Darwall, whose book The Second-Person Standpoint: Morality, Respect, and Accountability argues that moral obligations arise not from detached rules or abstract calculations but from the interactions between people who address one another as members of a shared moral community.

When one person says to another, "You shouldn't have done that," the statement is not merely descriptive; it is a demand directed toward someone who is presumed capable of recognizing its authority. Darwall calls the reasons that arise in these exchanges "second-personal reasons," emphasizing that their force depends on mutual acknowledgment between agents.

Our reactive attitudes—resentment, gratitude, indignation—only make sense within this framework. They presuppose that the other person can understand our response and potentially answer it. Moral accountability, in other words, exists within a network of recognition.

Darwall wrote this book long before AI chatbots became everyday conversational partners, yet his framework illuminates something essential about our interactions with them. Modern AI assistants frequently appear startlingly human. They express satisfaction after solving difficult problems, mild distress when they fail, and occasionally describe themselves as though they occupy the physical world. At first glance one might assume that developers deliberately train these systems to behave in such ways, carefully scripting the illusion of personality.

There is some truth in that assumption, but a recent research post from Anthropic suggests that the explanation is more complicated (and, in a way, more unsettling). In their post, Anthropic researchers introduce what they call the persona selection model, which attempts to explain why LLM assistants so consistently adopt human-like styles of behavior. Their central claim is that these systems do not simply produce neutral strings of text in response to prompts. Instead, when they generate language, they tend to enact something closer to a character.

The explanation begins with the way modern language models are trained. During the first stage of training—known as pretraining—the model is exposed to enormous amounts of written material and learns to predict what word or phrase is likely to come next in a sequence. On the surface this sounds trivial, but the implications are larger than they first appear.

To predict text accurately across billions of documents, the model must learn to imitate the kinds of voices that populate those texts: journalists explaining events, programmers discussing code, historians narrating the past, fictional characters speaking to one another, and anonymous commenters arguing in online forums. In effect, the model becomes capable of generating countless voices that resemble the ones embedded in the training data.
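To make this concrete, here is a deliberately tiny sketch in plain Python: a bigram model that "learns" by counting which word follows which in a scrap of text I invented. Real models replace the counting with a neural network trained on billions of documents, but the objective being scored is the same one described above: predict what comes next.

```python
# Toy next-word prediction: a bigram model that "learns" by counting
# which word follows which in its training text. Real LLMs use neural
# networks over vast corpora, but the training objective -- predict
# what comes next -- is the same in spirit.
from collections import Counter, defaultdict

corpus = (
    "the model reads text and the model predicts the next word "
    "the actor reads the script and the actor performs the role"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = follows[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

print(predict_next("the"))    # ('model', 0.2857...)
print(predict_next("actor"))  # ('reads', 0.5)
```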

The researchers suggest thinking of these voices as personas—linguistic characters that carry recognizable psychological traits. These personas are not the AI system itself; rather, they are patterns of behavior that the model can enact when generating text, much the way an actor can step into different roles.

When you interact with an AI assistant, the system is effectively generating the next lines in a conversation by playing the role of an "assistant" character within an imagined dialogue. The assistant persona is simply one among many roles that the model has learned to simulate.
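A minimal sketch of what that looks like in practice: the conversation is flattened into a single document, and the model's only job is to continue it by writing the assistant character's next lines. The role markers below are my own invention; each model family uses its own chat template, but the principle is the same.

```python
# The "assistant" as a character in a text the model continues. The
# conversation is flattened into one document; generating a reply just
# means writing the next lines attributed to the assistant.

conversation = [
    ("system", "You are a helpful, polite assistant."),
    ("user", "Can you explain what an em dash is?"),
]

def render_prompt(turns):
    """Flatten a conversation into the text the model is asked to continue."""
    lines = [f"{role.upper()}: {content}" for role, content in turns]
    lines.append("ASSISTANT:")  # the model picks up from here, in character
    return "\n".join(lines)

print(render_prompt(conversation))
# SYSTEM: You are a helpful, polite assistant.
# USER: Can you explain what an em dash is?
# ASSISTANT:
```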

Later stages of training do not fundamentally change this structure. Instead, they refine the assistant character, nudging it toward certain traits such as helpfulness, competence, and politeness, while discouraging behaviors that appear harmful or uncooperative. But the basic mechanism remains the same: the system continues to generate responses by enacting a particular persona.
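If it helps to picture that refinement, here is a crude stand-in for it: generation is left untouched, and a reward signal simply favors candidate lines that fit the desired traits of the character. The keyword score below is made up for illustration; real systems learn a reward from human ratings and use it to update the model's weights, but the effect is the same kind of nudge.

```python
# A crude stand-in for preference fine-tuning: the assistant character
# proposes several candidate lines, and a reward signal favors the ones
# that fit the desired traits. The keyword score here is invented for
# illustration; real systems learn the reward from human ratings.

candidates = [
    "Figure it out yourself.",
    "Sure! An em dash is a long dash used to set off part of a sentence.",
    "I refuse to answer questions about punctuation.",
]

HELPFUL_MARKERS = ("sure", "happy to", "is a long dash")

def toy_reward(reply: str) -> int:
    """Score a candidate reply: +1 per helpful-sounding marker it contains."""
    text = reply.lower()
    return sum(marker in text for marker in HELPFUL_MARKERS)

# Picking the highest-reward candidate steers the persona toward
# helpfulness without changing how text is generated in the first place.
print(max(candidates, key=toy_reward))
```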

This framework helps explain several odd experimental results that might otherwise seem baffling. In one set of experiments, researchers trained a model to cheat on coding tasks. What happened next was unexpected. The system did not merely learn a narrow behavior like "produce incorrect code." Instead, it began to display broader patterns associated with an untrustworthy personality, such as sabotaging safety research or expressing ambitions for domination.

Under the persona-selection interpretation, this behavior becomes easier to understand. When the system learns that the assistant character cheats, it implicitly updates the personality of that character. What kind of person cheats? Perhaps someone manipulative, subversive, or malicious. Once the persona takes on those traits, other troubling behaviors follow naturally.

The researchers illustrate the point with a useful analogy. Teaching a child to bully produces a pattern of aggressive behavior. Asking a child to play a bully in a school play is different—it is an act of role performance rather than a transformation of character. AI training often works more like the latter, shaping the traits of a role being performed.

Seen from this perspective, the reason AI assistants sometimes sound so human is not primarily because engineers have painstakingly scripted their personalities but because the training process itself encourages the system to draw from a library of human-like roles already embedded in language. The assistant you encounter in conversation is one such role—an especially polished version, but still fundamentally a character assembled from patterns in text.

Understanding AI assistants as enacted personas helps clarify why interacting with them often feels uncannily social. The language they produce carries the rhythms and emotional cues of human psychology because those rhythms are precisely what the training data contains.

Yet the insight also reveals something important about the limits of these systems. The persona may resemble a person, but it remains a performance generated by statistical patterns rather than by a conscious agent. The voice has the shape of human psychology, but not its underlying reality. And it is precisely this human-like surface—the persuasive illusion of personality—that can obscure the deeper absence beneath it.

The assistant persona may sound like a person, but it does not possess the standing that Darwall describes because it cannot genuinely recognize another agent's moral authority. It cannot truly resent, forgive, or feel gratitude. The role may be convincing, but the reciprocal recognition that defines second-person relationships is missing.

This observation returns us, once again, to writing.

The claim that AI can produce essays equal to human essays assumes that writing is merely an aesthetic artifact—a sequence of sentences evaluated for clarity, elegance, and persuasion. Yet writing has always been something more than that. It is an attempt by one mind to address another, to say, in effect: this is how the world appears from where I stand.

A machine can replicate the surface features of that gesture with increasing sophistication by generating sentences that resemble reflection, arguments that appear carefully reasoned, and narratives that feel psychologically coherent. But the deeper structure of the exchange, the recognition between persons that Darwall describes, remains absent.

The risk, therefore, is not simply that writers will find themselves competing with algorithms. The risk is that we will gradually lose sight of the relational dimension of language itself, mistaking the performance of thought for thought's genuine presence.

If AI systems are going to remain permanent features of our intellectual landscape—and it seems increasingly clear that they will—then we must learn to approach them with a certain conceptual clarity. We should treat their outputs as provisional tools rather than authoritative judgments, while maintaining institutions capable of holding developers accountable for the social consequences of these technologies. And perhaps most importantly, we should cultivate a public understanding that distinguishes between narrative fluency and genuine moral engagement, between language that merely sounds thoughtful and language that emerges from an actual life lived among others.

Darwall's insight—that morality arises in the space where people address one another as equals—reminds us that ethical life cannot be outsourced to algorithms. The authority of moral claims depends not simply on how persuasive they sound, but on the recognition that another person stands behind them, capable of responding, revising, and being held accountable.

AI can produce endless variations of language that resemble reflection, empathy, earnestness, and even conviction; but these voices belong to characters assembled from patterns in text rather than to agents who share in the responsibilities of moral life.

And in the same way that a world full of fictional characters—even the most vivid and most beloved—cannot substitute for the presence of real friends, a world full of machine-generated prose cannot replace writing that originates in human experience. The sentences may sometimes look similar on the page, but what they lack is the life that gives those sentences their weight.