Confessions of an AI Convert
On AI, education, and the seduction of inevitability
When I first heard about artificial intelligence, I was immediately skeptical; I had read Frankenstein and thought I knew, in the abstract at least, where this was going. I turned to it anyway, as most people do, out of some mixture of desperation and curiosity, at a moment when desperation seemed practical and curiosity could still be mistaken for judgment.
What Would an FDA for AI Look Like?
Private incentives built the models; public consequences now demand a regulator
The United States created the FDA because the market, left to its own devices, had shown itself far more gifted at invention than at restraint. By the time the harm could be counted, described, and photographed, the sale had already been made, the product had already moved on, and the public had already been left to absorb the consequence.
Who Gets to Study AI (And Why That Matters)
Why the people who explain AI are often the same ones who profit from it (and why that matters)
In AI, "interpretability" names the effort to describe what happens inside a model when it reaches a decision. The term, however, carries a larger promise as well, which is that the system can, with enough expertise, be made readable. But who gets to make that promise? And what happens when it is made by the very companies that own the models and sell access to them?
Why AI Will Never Write Good Literature
On bodies, feeling, and the metabolic limits of machine prose
Given how good large language models are at so many things—coding, summarizing, explaining complex material—why can't they write well? By "writing well," I do not mean producing competent reports or serviceable summaries. I mean writing with human feeling. Their sentences are sound, their paragraphs coherent, yet something essential is missing. The words convey meaning, but not felt life. You understand what is being said and almost immediately sense that no one is there.
How AI Made Monomania Look Like Progress
On startup culture, where obsession becomes strategy and inevitability becomes excuse
A recent investigation accused Delve, a YC-backed startup run—as so many YC-backed startups now seem to be—by people barely past adolescence, of rapidly producing fake compliance certifications for startups eager to close enterprise deals. In a December TikTok interview, Delve's co-founder carried herself with the earnest authority of someone newly entrusted with it.
The New Idols of Silicon Valley
Why the AI assistant is replacing God (and human judgment)
Whenever the tech world invents something strange, it immediately dresses the thing up in a human costume and pretends it isn't strange at all. The machine must have a name; it must be friendly; it must address you in the tone of a patient schoolteacher who has just finished a mindfulness course and now believes every question is "a great insight."
Why Writers Will Outlast AI
Large language models can simulate voices, but they cannot share in the moral life that gives writing its meaning.
My brother, who is one of the smartest AI evangelists I know, sent me a post reporting that 54% of NYT readers preferred AI-written essays to human ones. I read it once, then again, and almost immediately felt an instinctive resistance I could not initially translate into language. I have learned over time that this sensation often signals the beginning of an essay.
The Logic of the Lesser Evil
War, artificial intelligence, and the comfort of what we call necessity
Arendt's observation captures a pattern that remains as visible in modern politics and economic life as it was in her own time: the gradual normalization of decisions that seem defensible in isolation but prove corrosive when accumulated. A leader chooses the lesser evil because the alternative appears worse; a company adopts a cheaper technology because its competitors already have; a market slides toward a new equilibrium because each participant believes there is no real choice.
We Are All Victor Frankenstein Now
After Anthropic refused Washington and OpenAI signed on, the real question is who gets to control the creature.
It is a singular felicity (and a singular terror) of the human mind that it can mistake its own desires for the decrees of fate. Nearly two centuries ago, nineteen-year-old Mary Shelley cast into narrative form a warning so vivid that it has haunted every generation that followed. In her tale of Victor Frankenstein, she traced the perilous arc from curiosity to catastrophe. That arc extends with dreadful symmetry into our own age of artificial intelligence.
A City in Cyberpsychosis
The profit motive behind the AGI apocalypse narrative
In 2018, Tarek Mansour and Luana Lopes Lara founded Kalshi, now one of the largest regulated prediction markets in the U.S. Their proposal was to standardize uncertainty and sell it in one-dollar increments. The future was not a mystery to endure but a signal to extract; and wagering on it was not indulgence but discipline—a way to convert intuition into conviction with money behind it.
San Francisco's Story of Inevitability
On conviction, consequence, and the 2026 tech economy
Recently, I received a message on LinkedIn from a woman who introduced herself as "a fellow Yalie in the Bay Area." She asked whether I might be "feeling a jump to an early-stage AI startup." Then she explained what they were doing: "We build voice AI agents that sell mortgages over the phone." What unsettled me was not the technology itself, but the absence of hesitation in the way she described the work.
Some Dreamers of the Silicon Dream
Reflections on San Francisco in 2025
In 1966, Joan Didion wrote about a woman who burned her husband to death in a car on a lonely road in San Bernardino. She called the essay "Some Dreamers of the Golden Dream." But her piece is less about the murder and more about the Californian faith that life, if pursued with enough intensity, could be remade. Having just moved to San Francisco, I can tell you this: Didion's California still exists.
Ash: The First AI for Therapy
Is counterfeit help truly better than no help at all?
It is hard to ignore that over the past several decades, the world has experienced a subtle (though rapid) desocialization driven by technology. What makes this shift particularly insidious is the way its creators disguise their products as tools that enhance human connection when, in reality, they have the opposite effect, isolating us from those who are physically present.
Holding a Mirror up to Nature
Why LLMs Cannot Replace Human Artists
Every time I read Stevens's "The Snow Man," the word "beholds" leaps off the page and grips me. What does it mean to behold something, as opposed to merely observe it? To behold is not simply to witness, but to participate—to actively shape the meaning of what is seen. Observation is passive; beholding carries intention, presence, and responsibility.
Can an LLM Know Itself?
Applying Antonia Peacocke's Philosophy of Self-Knowledge to AI
What does it mean to know oneself? For human beings, Antonia Peacocke argues, self-knowledge is not a matter of passively observing our minds from the outside. Instead, when we judge that p or decide to act, we are not just noticing our beliefs or intentions—we are actively forming them. This kind of knowledge is authoritative because it is based on our capacity to engage in mental action.
Narrating the Machine
AI and the Fictions of Fairness
Amanda Askell, Anthropic's "in-house philosopher," argues that efforts to reduce bias in AI systems are constrained by two forms of "ethical locality": practical and epistemic. Yet Askell might have begun with an even more foundational concern: language itself. LLMs are not only shaped by social and ethical locality—they are built entirely out of language.
Is Art Still Worth Making?
A Response to Yascha Mounk's "The Third Humbling of Humanity"
Yascha Mounk argues that AI's creative prowess constitutes humanity's third "great humbling"—after Copernicus and Darwin. But perhaps Mounk underestimates the enduring power of belief, and the depth of motivation that drives people to create in the first place. Artistic expression is rarely about the final product alone; it often emerges from a need to make sense of pain, to resist despair, or simply to stay alive.
The Algorithmic Eye
Large Language Models and Hume's Standard of Taste
In his essay "Of the Standard of Taste," David Hume insists that a standard of taste exists, discoverable through the consensus of critics endowed with "delicacy of taste." This essay proposes that large language models may represent the most faithful realization of Hume's true critics to date—not because they feel, but because they have access to more examples, fewer personal prejudices, and the capacity for instantaneous comparison.
A Blue and Gold Mistake
On Emily Dickinson and nostalgia
The word nostalgia was coined in 1688 by Johannes Hofer, a Swiss medical student, to name a malady of homesick soldiers. It comes from the Greek nostos—return—and algos—pain. Nostalgia is the ache of returning, or rather, of longing to return to a place or time no longer reachable. It is not memory itself, but sorrow sewn into memory's hem.