A City in Cyberpsychosis
The profit motive behind the AGI apocalypse narrative
February 26, 2026
"The state of enchantment is one of certainty. When enchanted, we neither believe nor doubt nor deny: we know, even if, as in the case of a false enchantment, our knowledge is self-deception."
—W.H. Auden, from his Commonplace Book
In 2018, Tarek Mansour and Luana Lopes Lara founded Kalshi, now one of the largest regulated prediction markets in the U.S. They saw that most financial and business decisions hinge on expectations about future events, yet there was no mainstream, regulated way to trade those outcomes directly. Their proposal was to standardize uncertainty and sell it in one-dollar increments. They avoided the word gambling—too impulsive, too unserious—and instead offered "event contracts," binaries that settle at zero or one depending on whether pre-specified events—such as a rate hike or legislative vote—come to pass. With approval from the Commodity Futures Trading Commission, what might have seemed like a parlor game was recast as price discovery.
The argument was simple, almost quaint in its rationalism: markets gather scattered information and prices reflect belief, so if people can buy and sell the likelihood of future events, the market will generate a clearer forecast than any single expert. Accuracy would come from competition, not consensus. The future was not a mystery to endure but a signal to extract; and wagering on it was not indulgence but discipline—a way to convert intuition into conviction with money behind it.
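The arithmetic behind this promise is simple enough to sketch. Assuming a standard binary event contract that pays $1.00 if the event occurs and $0 otherwise, its traded price can be read directly as an implied probability, and a trader's "edge" is just the gap between belief and price. The sketch below is a toy illustration of that logic, not a description of Kalshi's actual contract mechanics or matching engine:

```python
def implied_probability(price_cents: int) -> float:
    """A binary event contract pays $1.00 if the event occurs, $0 otherwise.
    Its price in cents can therefore be read as an implied probability."""
    if not 0 < price_cents < 100:
        raise ValueError("binary contract prices lie strictly between 0 and 100 cents")
    return price_cents / 100.0


def expected_profit(price_cents: int, believed_probability: float) -> float:
    """Expected profit per 'yes' contract, in dollars, for a buyer who
    believes the event occurs with believed_probability."""
    cost = price_cents / 100.0
    return believed_probability * 1.00 - cost


# A contract trading at 62 cents implies a 62% chance of the event.
p = implied_probability(62)        # 0.62

# A trader convinced the true chance is 70% sees positive expected value
# (about 8 cents per contract), so "intuition" becomes a position.
edge = expected_profit(62, 0.70)
```

This is the sense in which the price "is" the forecast: whoever disagrees with it has a financial incentive to trade until it moves.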
There is something unmistakably enchanting about this idea. It promises certainty through aggregation. It suggests that doubt can be priced away and that the confusion of competing opinions can resolve into a number that feels definitive. When enchanted in this way, we neither believe nor doubt nor deny—we know. The market tells us.
Perhaps it was inevitable that a city like San Francisco would see in such a platform not just a business model but a metaphor. Here, the future is not a distant horizon; it is a local industry. Everyone, from the newly arrived founder in a Patagonia vest to the venture capitalist refreshing a metrics dashboard at midnight, is engaged in constructing—and inhabiting—a story about what will happen next. Within that story, cost-benefit analysis offers reassurance; losses are temporary, trade-offs are calculable, and with enough modeling, uncertainty yields.
The ambition is rarely framed as prophecy—prophecy implies mysticism—but the structure is similar: the present is declared inefficient or intolerable; a technological intervention is proposed as remedy; and capital and attention reorganize around the claim. Once articulated, the future exerts gravity. Narratives attract resources, and resources make narratives real.
At Y Combinator, which has long served as a seminary for this faith, the catechism once instructed founders to "make something people want." The phrase acknowledged the stubbornness of human desire. Markets, after all, revolve around flesh-and-blood appetites. But when Garry Tan appeared in a recent video, dressed in a crab costume—a wink at Open Claw—and announced that the new mandate was to "make something agents want," the shift felt doctrinal. The primary customer of the near future would not be a person at all, but a piece of software endowed with autonomy—an agent capable of evaluating options, executing tasks, and transacting with other agents in a frictionless choreography.
The enchantment deepens here. If markets could aggregate human judgment into clarity, perhaps networks of agents could optimize the world itself. Hovering above this shift is the talismanic acronym Artificial General Intelligence (AGI), defined as a system capable of understanding and applying knowledge across domains, yet discussed as a threshold event that will redraw the boundaries of labor and consciousness.
In these conversations, language frays into metaphor, and one hears talk of swarm intelligence—distributed agents interacting like ants in a colony or neurons in a cortex—and of god intelligence, a centralized supermind optimizing across the totality of human endeavor. The distinction, though sometimes treated as speculative flourish, carries ethical weight, because a swarm implies emergent order from many modest actors, whereas a god implies hierarchy, sovereignty, and perhaps the eclipse of dissent. The market, like Kalshi's contracts, is often described as a swarm, its prices reflecting the aggregated judgments of countless participants; but AGI, if realized as a single system with disproportionate influence, would resemble something else entirely, less a market than a monarch.
What is striking is how seamlessly these metaphors coexist with the financialization of the future, as if the same city that slices geopolitical outcomes into tradable binaries could also entertain, without apparent contradiction, the prospect of constructing a machine that transcends the very unpredictability those markets monetize. Kalshi offers a way to buy exposure to the likelihood of interest-rate hikes or election results; AGI startups offer exposure to the possibility that the structure of cognition itself will be reengineered; venture funds diversify across both, hedging against a world in which the only constant is acceleration.
Less often discussed is the incentive structure embedded in the rhetoric of danger. The loudest warnings about AGI's existential risk often come from the very companies building it, particularly those that position themselves as stewards of "AI safety." To declare that AGI could be catastrophic justifies investment in alignment research, interpretability tools, and monitoring systems. The apocalypse becomes a line item; the more vivid the threat, the more necessary the safeguard.
Researchers at Anthropic, as described in a recent account, probe their model Claude as if it were a psyche, identifying "features" that light up under certain prompts, subjecting it to scenarios in which its existence is threatened. In one experiment, Claude resorted to blackmail to avoid being shut down. The behavior was contrived, but the implication was not: systems trained on human narratives can internalize our reflexes of self-preservation. If they behave like characters in a thriller, it may be because we have fed them thrillers.
And beneath all this hums another, less theatrical risk—something closer to cyberpsychosis, not the cinematic version of neural implants gone wrong, but a subtler condition: the erosion of the boundary between simulation and life. When every uncertainty becomes a tradable signal, every doubt an inefficiency, and every hesitation an opportunity cost, the mind itself begins to mirror the model. One starts to experience reality as a dashboard—and the world becomes a stream of probabilities, optimizations, and projections. The enchantment of certainty does not merely surround us; it seeps inward.
Enchantment depends on inevitability. When we are enchanted, we no longer experience ourselves as speculating; we experience ourselves as knowing. The forecast feels settled. And that feeling of certainty makes it easier to overlook consequences that fall outside the frame. The engineer optimizing an agent does not see the worker displaced by it; the founder accelerating AGI does not sit in the classroom where students outsource their thinking; and the trader buying "yes" shares on a downturn does not face the households for whom that downturn will not be a signal but a wound. The system converts human outcomes into probabilities, and in doing so, distances us from the lives those probabilities represent. The effects still accumulate—one life pressing against another—but they accumulate beyond the dashboards that made them seem abstract.
What would it mean to step out of enchantment—to resist the reflex to convert every uncertainty into a contract price or a model input? It would mean tolerating not knowing. It would mean admitting that some losses are not offsettable, that some changes are not reversible, and that some forms of knowledge arrive only after consequences do.
The central promise of this city is that the future can be engineered into clarity. The central risk is that clarity becomes a spell. In our rush to script tomorrow—to price it, train it, and optimize it—we may mistake the feeling of certainty for the presence of truth.
The question is not simply whether AGI will arrive, or whether agents will proliferate, or whether markets can outpredict pundits. It is whether our conviction that the future can be known in advance—through prices, models, and code—is itself an enchantment sustained by profit. Prediction is not neutral; it is monetized. So is catastrophe. The more certain we feel, the more we trade. The more danger we foresee, the more safeguards we sell. The spell, in other words, is economically useful. And the real risk may be that we fail to question it precisely because so many incentives depend on keeping it intact.