Narrating the Machine
AI and the Fictions of Fairness
April 25, 2025
Two roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference.

—Robert Frost, "The Road Not Taken"
Amanda Askell, Anthropic's "in-house philosopher"—tasked with training the company's large language model Claude to "be honest" and exhibit "good character traits"—argues in her essay "AI Bias and the Problems of Ethical Locality" that efforts to reduce bias in AI systems are constrained by two forms of "ethical locality": practical and epistemic. The practical locality problem refers to how social structures and institutional limitations shape the choices available to us, meaning that even well-intentioned decisions can reflect and reinforce systemic bias. The epistemic locality problem concerns the limits of moral understanding itself: Our definitions of fairness and discrimination are historically contingent and likely incomplete. Through a 19th-century case study of a hiring manager named Jenny, Askell shows how decisions that appear procedurally fair can still perpetuate injustice due to upstream inequalities or evolving ethical frameworks. For Askell, AI systems inherit both of these problems. Rather than attempting to solve bias outright, she advocates for building systems that reflect current values while remaining open and responsive to future moral progress. AI bias, in this view, is not a fixed technical flaw but a dynamic ethical challenge inseparable from the broader project of AI alignment.
Yet Askell might have begun with an even more foundational concern: language itself. Large language models (LLMs) like Claude are not only shaped by social and ethical locality—they are built entirely out of language. As Nietzsche warned, language imposes artificial order on experience, flattening complexity into familiar forms. Language does not so much reveal reality as reduce it to persuasive, overly simplistic metaphors—what Nietzsche called "illusions we have forgotten are illusions." Language is not a transparent vessel of meaning but a distorting mirror. Moreover, even the most careful use of language can produce unintended consequences: words often escape a speaker's intended meaning, setting off chains of interpretation (and action) beyond their control.
Robert Frost's "The Road Not Taken" powerfully illustrates this tension between language and reality. The poem's narrator reflects on a seemingly decisive moment—choosing between two paths in a yellow wood—and imagines that, years later, he will say his choice "made all the difference." But Frost subtly undermines the myth of individualism the poem is often taken to celebrate by admitting that both roads were "really about the same." Crucially, the final, often-quoted stanza is in the future tense: "I shall be telling this with a sigh / Somewhere ages and ages hence." The difference the speaker claims is not the result of the choice itself but of the narrative he constructs after the fact. Read this way, the poem becomes a meditation on how language reshapes experience retroactively—how we use words not to describe the truth of a moment, but to impose meaning on a moment that has long since passed.
Frost's poem is so often misunderstood because its final lines sound decisive and comforting. We tend to embrace the myth of the "less traveled" path because it affirms our preference for individualism and moral clarity. Nietzsche might say this misreading reveals our need for language to impose coherence on what is essentially ambiguous. Frost, however, destabilizes that desire. The roads are identical; the choice is arbitrary. What matters is the story we later tell ourselves to make that choice feel necessary, meaningful, and even fated—"meant to be."
Frost's insight has profound implications for large language models. LLMs are trained on precisely these kinds of retrospective human narratives—stories shaped by selective memory and self-justification. They excel at producing coherent, fluent text that sounds true, even when it reflects deeply ambiguous or morally fraught realities. As Nietzsche and Frost caution, such fluency can be a vehicle for self-deception. LLMs do not merely inherit language's limitations; they amplify its most seductive quality—the ability to obscure uncertainty with plausible-sounding answers. The risk is that users will confuse linguistic polish for ethical reliability, mistaking confidence for wisdom and narrative for truth.
Ultimately, acknowledging the instability of language challenges us to approach large language models with critical humility. These systems can simulate decisiveness—just as Frost's traveler imagines recounting his choice with clarity—but beneath that illusion lies the same ambiguity Nietzsche exposed. Ethical use of LLMs demands that we recognize their outputs as provisional, not authoritative—tools for thought, not substitutes for it. Fluency must not be mistaken for moral insight; to deploy these technologies responsibly, we must keep human judgment at the center, and remain vigilant against the comforting myths that language (and machines trained on it) so readily provide.