Confessions of an AI Convert

On AI, education, and the seduction of inevitability

April 15, 2026

When I first heard about artificial intelligence, I was immediately skeptical; I had read Frankenstein and thought I knew, in the abstract at least, where this was going. I turned to it anyway, as most people do, out of some mixture of desperation and curiosity, at a moment when desperation seemed practical and curiosity could still be mistaken for judgment. I was in a dark room in a biology lab, tracing mitochondria by hand, which was the job (or anyway the part of the job that had fallen to me): You sat before electron micrographs until the grain of the image entered your eyes and stayed there, you looked for the small organelles set inside that black-and-white weather of the cell, and you outlined them carefully enough that someone later could call them normal or abnormal and believe the distinction meant something. In textbooks mitochondria have the false clarity of diagrams; under the electron microscope they seemed always on the verge of dissolving into the background, so that if you were tired they became whatever you needed them to be, and if you were bored they disappeared entirely.

The work required patience, concentration, and a kind of faith in what you were seeing, and I had less of this faith than I was supposed to. I was bad at boredom, or perhaps only bad at pretending that boredom had some moral value, a pretense that in laboratories (as in other serious places) is often taken as evidence of character. The people who trained me were serious scientists, and they believed, not incorrectly, that attention mattered, that one did not skip steps, and that one did not hand a biological question to a machine simply because the machine might be faster. All of this was reasonable; but all of this was also difficult to remember at hour four, when I was still bent over an image, clicking around one more blurred membrane, feeling my neck stiffen and my mind degrade into something mechanical, as if the task had been designed less to produce knowledge than to convert a person into an accessory to a cursor.

I hated it almost at once, though it is true that I preferred it to beheading pregnant rats, which was my alternative, and perhaps that tells you something about the scale on which one made choices in the lab. What I hated was not only the repetition, although I hated that, nor only the small attrition of the thing, the sense in which time itself seemed to be fed, minute by minute, into a machine with no capacity for gratitude; what I hated most was the suspicion that I was introducing error not because I lacked training but because I was a human being with a finite attention span, because after enough hours anyone's eyes and judgment begin to blur. And I hated the waste: the waste of time, of training, and of a mind on labor that seemed, even then, built for automation.

There has to be a way, I remember thinking; and because it was the kind of thought one has before one knows whether it is clever or merely dangerous, it arrived with the force of revelation. There has to be a way to make a machine do this.

There was.

I began, quietly enough that it felt at first like a private vice, by training small models on the same kind of image analysis I had been taught to perform by hand. I did not announce this as a plan or ask permission, partly because I was not certain it would work and partly because I was not certain that, if it did, anyone would be pleased for the right reasons. I collected training data, adjusted parameters, failed, then failed differently, then failed somewhat less; there was something furtive about the whole enterprise and something thrilling, because it had the secrecy of an affair and the moral flexibility of one too. The models were imperfect, sometimes absurdly so, but they were indefatigable. They did not complain or grow impatient or, somewhere around the third hour of uncertain membranes, lose interest in the terms of the arrangement. They went on.

In two weeks I completed an analysis that was supposed to take six months, which is the kind of result that disturbs not only a workflow but a person's sense of scale. My boss was floored. Everyone was floored. I was too, although what struck me was not only the efficiency but the implication of what had been done. I had solved a narrow problem in one lab, on one project, with one kind of data, and yet, standing in the afterglow of that result, I found it difficult not to imagine the larger thing, difficult not to feel that the order of the world had shifted slightly on its axis. If a model could learn to see what I had been taught to see, repeat it at speed, and in some cases do it more consistently than I could, then AI was not merely going to assist scientific work; it was going to rearrange it.

This was the first seduction, which had less to do with convenience than with the possibility that a whole category of friction, one I had taken to be inseparable from human effort, might turn out to be optional after all. Science is full of bottlenecks, full of the deadly labor nobody romanticizes—annotation, cleanup, classification, the long clerical work required before anybody gets to call a result a breakthrough. Data do not become meaning on their own. Someone has to sit there and look. Someone has to detect the pattern. Usually that someone is tired, usually underpaid, usually not the person whose name will eventually end up at the center of the paper. AI, as I first encountered it, did not seem like a gimmick. It seemed like a powerful tool; I saw almost at once what it might mean for fields built on overwhelming quantities of information and small, consequential patterns; what I saw less clearly, though I felt it just as strongly, was what it might do to my own imagination, because once you have watched a machine collapse six months of labor into two weeks, it becomes very difficult to return to ordinary time and accept it as natural.

I began, as people say and usually mean too late, to get carried away. I fine-tuned models, read papers late into the night, played with systems long after I had any practical reason to do so, and started to feel, in that familiar way people do when they are young and standing near the edge of some new thing, that history was accelerating and I was close enough to hear it. It helped that my boyfriend was already there. He was a physicist interested in collective behavior, one of those disciplines that begins with birds and crowds and magnetic spins and the mathematics of large systems and, if you are not careful, ends in consciousness; he had been interested in AI before it became mainstream, before everyone had learned to say "agents" and "alignment" and "inference" over a glass of wine as if they had discovered a second theology. We talked about it for hours: AI and physics and language and mind, whether intelligence required a body, whether language was enough, whether prediction and understanding were separate or merely appeared separate when described badly. We published a paper together on AI's ability to represent literary style, because once you began thinking in that register it seemed increasingly possible that every field was about to become legible in terms of every other.

This too was part of the romance. AI presented itself not only as a tool but as a theory of the world, and this was perhaps the most flattering thing about it, that it seemed to promise a new explanatory language for everything—cells, texts, markets, crowds, labor, consciousness, creativity, prediction, even desire. To be interested in it was not just to be interested in a technology; it was to feel, or anyway to imagine, that one had moved a little closer to the hidden operating logic of the age. Entire social worlds began to form around this sensation.

In the summer of 2025 my boyfriend was offered a job at Goodfire, an interpretability startup in San Francisco, and we moved to California, which was one of those moments when you can feel yourself stepping into a story you will later mistrust, although you do it anyway because the story still has glamor and because mistrust, when it comes that early, can be mistaken for cowardice. I left the lab. Medical school remained in the background of my life like some sensible country I kept postponing my return to; I had done the requirements, I could have applied, but I was too fascinated by what was happening in AI to look away. It seemed, at the time, almost provincial to proceed according to plan while an entire technological order was being rearranged in public. Now I was going to live in the center of it, in the city where people spoke about models with the same grave abstraction other generations reserved for markets or empires or God.

I arrived knowing very little about the tech world and less about venture capital, and I did not yet understand the difference between earnestness and performance in startup culture, largely because in San Francisco the two so often wore the same expression. What I did know was science, and how to work, and enough about classrooms to care what happened inside them, and enough from the lab to know that AI could do something extraordinary under the right conditions. This is how I ended up spending the better part of a year trying to build AI for schools: everyone around me said it was coming anyway, a line repeated so often it acquired the force of weather. It was coming. It was already here. Students were using large language models, mostly badly and mostly in secret. Schools were responding with panic, prohibition, or a kind of magical thinking about detection. The only question, I was told, was whether decent people would have any hand in shaping the thing before the usual people did.

I did not go into education technology because I thought children needed more screens, more automation, or more synthetic language in their lives. I went into it because I thought perhaps one could build the least destructive version of the thing. An acquaintance of mine had recently left her job at Microsoft, and together we started Socra, an LLM-based Socratic tutor, on the theory—reasonable enough on paper—that if students were going to use AI, then perhaps the tool should slow them down rather than speed them up, make their thinking more visible rather than less, and leave some trace of process a teacher could actually read instead of simply delivering the polished answer. This was the story we told ourselves, and it was not even entirely false.

If AI was coming into classrooms anyway, we thought, then perhaps it mattered who brought it there. Better, surely, that it be introduced by people who cared about education than by people who cared only about growth; better that it be shaped by caution than by appetite. This is the kind of thought that sounds honorable until it meets the market. For seven months, then more, I worked on Socra. We built, revised, pitched, talked to teachers, administrators, investors, consultants, founders, ex-founders, founders who had sold their companies and founders who had sold nothing at all but had nonetheless acquired the cadence of authority. We did customer discovery, thought about product design, chased pilots, spent money, lost money, moved through a landscape of coffee shops and co-working spaces and over-air-conditioned offices and Zoom calls in which people used words like "engagement" and "outcomes" and "retention" with a fluency that often seemed inversely related to any actual encounter with how children learn.

At first I assumed the dissonance I felt was incidental. I assumed we were early, that we simply had not yet found the right language, the right partners, or the right audience. San Francisco encourages this form of self-deception by translating every misgiving into an execution problem: not enough traction, tighten the pitch; teachers are wary, reframe the value proposition; the model hallucinates, add guardrails; users are confused, improve onboarding. Ethics in these conversations were rarely dismissed outright; they were deferred and translated into features on a product roadmap.

And then there were the moments when the whole thing became impossible to misread. In one meeting another founder, a former Google engineer, told us with startling calm that when he spoke to investors he inflated his effectiveness numbers by a factor of ten. He did not say this as confession. He said it as method. The point was not that he was unusually bad. The point was that he was telling the truth about the grammar of the place.

What I had not yet understood was that ed-tech is not organized around pedagogy. It is organized around procurement, sales, and story. The market rewards not necessarily what helps students learn but what districts can buy, what administrators can defend, what investors can understand, and what founders can narrate in a deck. Research, in this ecosystem, begins to lose its old scientific meaning and to take on a newer commercial one. That was when I started to see the little theater of evidence everywhere. Products boasted that they were "research-backed." Companies paid for outside studies designed less to discover anything than to produce usable proof language. There were badges and tiers and certifications and white papers and logic models, a whole vocabulary of seriousness arranged around outcomes that often had not been shown with much rigor at all. Research became a kind of sales collateral. The product did not need to deepen understanding. It needed to promise efficiency to administrators, personalization to parents, innovation to districts, and perhaps some minor relief to teachers already stretched past reason.

In science, error has consequences and method matters and the point of analysis is, at least in theory, to discover something true. In ed-tech I found a flourishing industry of educational stagecraft, and the longer I spent in those rooms the clearer it became that AI was not entering the classroom as a neutral instrument awaiting wise use. It was entering as a business model, which changed the meaning of everything. People like to talk about AI in education as though the central questions were technical—how accurate is the model, how often does it hallucinate, can it be aligned with standards, can it be made safer—but these are not the first questions. The first questions are institutional. Who benefits from the thing once it arrives? What incentives shape its use? What kinds of dependence does it create? What habits of mind does it erode while presenting itself as support?

The language surrounding educational AI is full of care words—access, equity, support, personalization—but often these are only the packaging for a simpler ambition, which is to mediate more of a student's intellectual life through software that somebody owns.

And the classroom is particularly vulnerable to such ambitions because education is full of genuine pain—too many students, too little time, exhausted teachers, underfunded schools, parents frightened about falling behind, administrators desperate for measurable gains, children already half-absorbed into devices. In such a landscape any tool that can generate unlimited text at low marginal cost can be marketed as a solution to almost anything—tutoring, feedback, differentiation, remediation, practice, intervention, assessment, support—and every failure in the system becomes a use case. What I wanted, naively, was to use AI as an instrument. What the market wanted was a wedge.

This was the second seduction, and it was harder to see because it arrived dressed as benevolence. One could tell oneself one was helping teachers, making education more adaptive, more responsive, more available; one could tell oneself that students were already using ChatGPT, so the ethical move was to build something better. We told ourselves all of these things. Some were even partly true. But over time I found myself arriving at a more uncomfortable conclusion, which was that there is no clean way to insert a system optimized for frictionless output into a domain where friction is often the point.

Learning is not the acquisition of answers. It is the slow formation of judgment, attention, taste, confidence, and the ability to stay with confusion long enough for confusion to become thought rather than panic. A good teacher does not simply provide information. A good teacher calibrates difficulty, knows when a student is productively lost and when the student is merely drowning, knows when to interrupt and when to wait, teaches relation as much as content: how to inhabit a question, sit inside uncertainty, and think in front of another person without coming apart.

An AI tutor, however refined, is built to erase precisely the texture from which these capacities emerge. It is always there, infinitely patient in the way a slot machine is infinitely patient, responsive without being invested, capable of simulating attentiveness without risk, care without obligation, and fluency without understanding. Worse, it can habituate students to preferring this simulation. The explanation arrives at once, the hint comes too early, the conversation never demands that they reckon with another consciousness, only that they continue prompting. There is a tidy argument, popular in AI circles, that tools deskill us only if we let them. This has always struck me as less an observation than a wish. We are shaped by our conveniences. Spellcheck changes spelling. GPS changes the experience of moving through a city. Social media changes attention whether or not anyone intended to surrender it. A generation taught to meet intellectual difficulty with instant synthetic response will not simply be using a tool; it will be learning a posture toward thought itself.

By then the evidence of where the market wanted to go was not subtle. The leading products all converged on the same fantasy: Let the model do more—generate the draft, worksheet, lesson plan, rubric, exit ticket, parent email, quiz, feedback, and intervention. Each step was sold as minor, practical, and merciful; but together they amounted to a redefinition of teaching itself, so that the teacher became less the author of instruction than the reviewer of machine output. The language of teacher support began to sound to me like a euphemism, not replacement exactly but something more palatable and therefore perhaps more dangerous: replacement in increments. Even the public examples began to tell on themselves. Grand promises gave way to smaller admissions. Products that arrived under the banner of transformation turned out, in practice, to be less revolutionary than advertised. Companies wrapped thin evidence in institutional language and called it legitimacy. Schools were asked to trust products whose educational value had often not been demonstrated with much rigor at all. The story was always bigger than the proof.

And then there was Alpha School, which struck me as the endpoint of the worldview: School as optimization problem, software as primary instructor, adults demoted into guides and monitors while the company sells a fantasy of compression, efficiency, and measurable superiority. One should pause over the obscenity of this, because the classroom is not a product lab, not a distribution channel, and not a place where speculative technology should be casually run on children and the result called "innovation." When companies boast that children can learn twice as much in half the time because software has taken over the center of schooling, the grandiosity is not incidental; it is the business model speaking in its native tongue.

I knew all of this in fragments before I admitted it whole. The knowledge came slowly, through investor meetings in which our caution was treated as an impediment to scale, through conversations in which "safety" functioned mainly as a market differentiator, through the dawning recognition that even the most thoughtful product would eventually be forced to defend itself in terms the market could recognize, and the market does not know what to do with forms of value that cannot be graphed, sold to districts, or packaged into an outcomes report.

There were good people in this world; I should say that. There were serious researchers, careful engineers, founders with actual scruples, people genuinely worried about what they were building. My boyfriend's work in interpretability came from a real desire to understand these systems rather than merely profit from their opacity. That is part of what made my own conclusion difficult. AI is not fake. Its capacities are real. Some of its uses are genuinely transformative. I knew this because I had seen one such use up close.

But utility is not innocence, and the model I built in the lab solved a narrow problem inside a meaningful human enterprise. It removed drudgery. It did not claim to replace judgment. It accelerated analysis in service of science that still required scientists. This is not the same thing as building systems meant to insinuate themselves into the intimate developmental space where children learn how to think, nor is it the same as flooding writing, education, art, and other human domains with tools whose speed becomes the justification for a transfer of agency from persons to software.

Somewhere in San Francisco, in that atmosphere of pitch decks and certainty and historical self-importance, I began to understand that my reverence for AI had depended on a category error; I had mistaken a powerful tool in a bounded context for a social good in the abstract. I had seen what it could do under conditions of discipline and assumed the discipline would travel with the technology. But it does not.

Technologies do not enter the world under ideal conditions. They enter markets where there is money to be made from haste and the performance of innovation. What looks miraculous in a lab can look grotesque almost everywhere else.

This was, finally, the story—not that I discovered AI was evil or decided it was useless (because both claims would be melodramatic and both would be wrong), but that I realized I had fallen in love with something whose gifts were genuine and whose presence nonetheless distorted the terms of every relationship around it. At first one organizes one's admiration around what is dazzling. One mistakes intensity for destiny, overlooks the humiliations, gives the problem a more flattering name. Then one day one notices that life has narrowed into the management of consequences. I think I knew, even early on, that it was bad for us, although perhaps that sounds dramatic; then again, drama often turns out to be realism seen ahead of schedule.

What I knew was that AI placed a peculiar pressure on the imagination. It made every field seem provisional, every skill negotiable, and every form of labor newly vulnerable to abstraction. It rewarded a style of thinking in which replacement appeared more elegant than relation. It encouraged us to confuse fluency with intelligence, speed with legitimacy, and scale with inevitability. It promised release from drudgery while generating new dependencies. It made some kinds of work easier and much of the culture cheaper. It weakened confidence in authorship, expertise, and evidence of effort. It supplied fragile institutions with one more excuse to cut corners. All of this was sold, relentlessly, in the language of the unavoidable.

By the end of my time building Socra I had arrived at a conclusion that felt less like an argument than a surrender to what had been in front of me all along: There is no place for AI in the classroom, at least not in the expansive, intimate, cognitive sense in which it is now being sold—not as tutor, not as companion, not as co-thinker, not as the invisible infrastructure of learning. It does not belong there. The incentives are too corrupt, the distortions too subtle, and the costs too high. A few narrow administrative or assistive applications can perhaps be carefully bounded; that is not the same as normalizing AI as a presence in the formation of a child's mind.

So I am leaving Socra behind. The decision was not a triumph, and it was not even clean. I do not get to claim I was never seduced. I was. I loved the velocity of it, the scale of it, the glamor of being near something that seemed to matter historically, the sharpened feeling of thought around it, the fantasy that intelligence, properly engineered, might make the world more legible. In certain narrow cases I still think it can. What I no longer believe is that the balance comes out in its favor. I no longer believe that because a machine can do something astonishing we are therefore obliged to reorganize our institutions around the fact of that astonishment. And I have become profoundly suspicious of any future that introduces itself as inevitable while asking us not to notice what it is doing to attention, labor, language, classrooms, and the habits by which people learn to think.

I once thought the choice before me was whether to go to medical school or throw myself fully into AI. Now it seems to me that the real choice was whether to keep surrendering my sense of what matters to a field that rewards speed over wisdom, or to return to kinds of work in which ambiguity is not an obstacle but the material itself. This is why I am turning to writing. Writing asks for the opposite of optimization. It asks for time, attention without shortcuts, the willingness to stay with a sentence until it stops lying. It asks you to distinguish the easy phrase from the necessary one, the smooth thought from the true one, and to accept the humiliating fact that you do not always know what you think until you have made yourself say it.

This is the labor I want now, not because it is pure and not because it will save me from the world, but because after so much time spent around machines that produce language without experience I find myself wanting to return to experience itself, with all its grain, cost, and resistance. The work is slower. It cannot be scaled. It would make a terrible pitch. This is not, to me, an argument against it.

The irony is that I arrived here because AI once helped me so much. It taught me, by contrast, what should be automated and what should not. It showed me that not all human labor is interchangeable, and that the kinds of labor most worth protecting are often the least efficient. A machine can outline a mitochondrion. It can summarize a chapter, generate a worksheet, draft a paragraph, imitate encouragement, and produce an answer almost instantly. What it cannot do is bear responsibility for where such powers belong. That remains, for the moment, our problem, and I find that I would rather live inside that problem than pretend a machine has solved it.