We Are All Victor Frankenstein Now

After Anthropic refused Washington and OpenAI signed on, the real question is who gets to control the creature.

March 2, 2026

"So much has been done, exclaimed the soul of Frankenstein, — more, far more, will I achieve: treading in the steps already marked, I will pioneer a new way, explore unknown powers, and unfold to the world the deepest mysteries of creation."

—Mary Shelley, Frankenstein

It is a singular felicity (and a singular terror) of the human mind that it can mistake its own desires for the decrees of fate. More than two centuries ago, nineteen-year-old Mary Shelley cast into narrative form a warning so vivid that it has haunted every generation that followed. In her tale of Victor Frankenstein, a student who discovered how to animate lifeless matter, she traced the perilous arc from curiosity to catastrophe. That arc extends with dreadful symmetry into our own age of artificial intelligence and autonomous machines.

We flatter ourselves that we stand at the summit of enlightenment; yet in our laboratories and data centers there stirs a spirit not unlike that which animated the breast of Victor Frankenstein. He spoke of his earliest impulses as a thirst for understanding, a "curiosity" driving him "to learn the hidden laws of nature." The unfolding of these secrets produced in him a joy bordering upon ecstasy. So too do the architects of AI recount the moment when a model first reasons, translates, or creates beyond expectation; they describe emergence as revelation—an unveiling that thrills the intellect and promises dominion over nature.

There is, of course, one material difference between Victor's experiment and our own. He was driven by curiosity—by the grandiose wish to penetrate nature's secrets. We are driven by that, yes—but also by quarterly earnings, market share, and the terror of being left behind. It is not only the desire to conquer nature that animates us now, but the desire to monetize the conquest (or at least to ensure that someone else does not do so first).

This abstraction acquired edges recently, when Anthropic declined a request from the United States government to drop AI safeguards, invoking safety, scope, and the long horizon of consequences that follow when frontier systems are braided too tightly with state power. Within hours, OpenAI announced its own agreement with that same government, describing the arrangement as the sober necessity of operating in a competitive world. The contrast was almost too neat: refusal on one side, and accommodation on the other, each presented as prudence.

The swiftness of the succession was instructive. One company drew a line; another stepped across it. The language in each case was careful, even earnest. Yet beneath it pulsed the familiar rhythm of competition. If one actor refuses, another will accept. If one firm invokes caution, another invokes duty—to country, security, and progress. The market does not long tolerate abstention, and the state does not long forgo capability.

It is remarkable to me how readily such language echoes the fatalism that pervades Mary Shelley's narrative. Victor recalls the counsel of his teachers not merely as instruction, but as pronouncement, as though the words themselves bore the authority of doom: "Such were the professor's words—rather let me say such the words of the fate—enounced to destroy me." By framing his ambition as inevitable, Victor absolves himself of full responsibility. The catastrophe becomes not the fruit of choice, but the unfolding of necessity. "It was a strong effort of the spirit of good; but it was ineffectual. Destiny was too potent, and her immutable laws had decreed my utter and terrible destruction," Victor confesses.

We hear the same cadence in our discourse on agentic AI—systems designed not merely to compute but to pursue objectives across time, endowed, in a limited yet consequential sense, with agency. They plan, negotiate, and refine their strategies, moving through financial markets, coordinating logistics, and assisting in military analysis. As of last week, they even attend class; an agentic product called Einstein advertises that it will log into Canvas on behalf of individual students, watching lectures, reading assigned essays, drafting papers, and submitting homework. The twenty-two-year-old founder describes his service as a relief from "busy work." The phrase is telling: as if the work of struggling through the tedium that precedes comprehension were a clerical burden best outsourced to code.

Each expansion of capacity is heralded as the natural next step, an unavoidable consequence of scaling computation and data. "So much has been done," exclaims the spirit of modern enterprise, "more—far more—shall we achieve." The language is triumphant, breathless, and tinged with inevitability. Inevitability, however, is often only desire in ceremonial dress, and, as Mary Shelley observed two centuries ago, dependence has a way of hardening into fate.

The split between Anthropic and OpenAI has brought into focus a question we have preferred to treat as theoretical: Who, precisely, is meant to command these systems once they carry strategic weight? Private institutions, animated by innovation and steadied—if that is the word—by market discipline? Or governments, animated by public mandate and constrained—if that is the hope—by law?

Those who favor private control argue that agility is itself a form of safety. Companies can attract specialized talent, iterate rapidly, and build internal cultures oriented toward technical nuance. They fear that governments, subject to electoral cycles and geopolitical panic, may either hamstring innovation or bend it toward surveillance and coercion. In this view, frontier AI is safer in the hands of engineers and corporate boards than in ministries and defense departments.

Those who favor governmental authority counter that systems capable of shaping information flows, labor markets, military planning, and democratic discourse resemble infrastructure more than consumer products. When a tool influences the distribution of power across society, its governance cannot be left to shareholders alone. Corporations answer to investors; governments, at least in principle, answer to citizens. If AI becomes integral to national security and civil administration, should it not be subject to public oversight?

The spectacle of refusal followed by agreement reveals the instability of relying solely on corporate conscience. A firm may judge a request imprudent; a competitor may judge it indispensable; but competitive pressure exerts its own logic. To decline cooperation is to risk ceding influence, while to accept is to deepen entanglement. Each decision, taken in isolation, may appear rational. Taken together, they form a spiral.

Nor is the state immune to the same temptation. National security is a solvent of hesitation. Faced with the possibility that rival nations will deploy advanced AI for cyber operations, intelligence analysis, or autonomous weapons, policymakers feel compelled to integrate these systems swiftly. If we do not, they will. Thus the rhetoric of inevitability migrates from the boardroom to the cabinet chamber. The same logic that drives companies to compete drives nations to accelerate.

Mary Shelley's warning lies precisely in this tension between noble motive and reckless execution. Victor did not intend malevolence; he sought to conquer mortality. Yet he isolated himself from counsel, spurned restraint, and pursued his object with monomaniacal intensity. When success crowned his labors and the being stirred before him, he recoiled in horror. He had imagined beauty; he beheld deformity. He had anticipated gratitude; he encountered need. His error was not only in creation, but in abandonment.

In the governance of AI, abandonment may take institutional form. A corporation may deploy powerful systems without transparent accountability. A government may procure those systems without robust safeguards. Responsibility diffuses across contracts, committees, and code repositories until no single actor feels the full weight of consequence. The creature acts; the creators debate jurisdiction.

Moreover, as our dependence deepens, our capacity to intervene may diminish. If agentic systems manage supply chains, regulate energy grids, adjudicate benefits, and coordinate defense, disentangling them from public life could prove ruinous. At that stage, the argument over who controls them may seem almost obsolete. Control will have yielded to reliance, and reliance to entrenchment.

Yet history teaches that technological trajectories are shaped by human institutions. The atom was split, but treaties sought to restrain its fury. Medicines were synthesized, but regulations tempered their risks. The insistence that AI's development is unstoppable—whether by corporate momentum or geopolitical rivalry—serves, at times, as an alibi for haste. If catastrophe is preordained, why deliberate? If competition compels us, why pause?

Shelley's novel resists this abdication. Though Victor invokes destiny, the narrative exposes the chain of choices that binds him. He might have confided in Clerval; he might have destroyed his notes; he might have fulfilled his promise to create a companion and thus mitigated the creature's despair. Fate in the novel is less a cosmic decree than the cumulative weight of unexamined decisions.

The same may be said of our present condition. We choose procurement frameworks and transparency standards. We decide whether corporate-government partnerships will be bounded by independent oversight or cloaked in secrecy. We determine whether safety evaluations will be public or proprietary. To speak as though these choices are illusory is to indulge in moral laziness.

And yet, even the most prudent arrangement—whether private, public, or hybrid—cannot eliminate uncertainty. Complex systems give rise to emergent behaviors. When multiplied across billions of users and integrated into critical infrastructure, small biases can scale into vast inequities, and minor vulnerabilities can metastasize into systemic crises. An agent designed to trade may precipitate a market convulsion; one designed to persuade may distort democratic discourse; one designed to defend may escalate conflict. None of these outcomes requires malevolence, only misalignment between narrow objective and expansive human good.

Thus we stand, as Victor once did, in the charged stillness before animation—except that our creation already moves among us. It drafts memoranda, analyzes intelligence, recommends medical interventions, completes university assignments, and writes the code by which other systems operate. The temptation to believe that such progress is both inevitable and benign is immense, especially when framed as essential to national survival or corporate vitality.

But inevitability is no safeguard against ruin. Victor's pursuit culminated not in glory, but in desolation. Creator and creation alike were undone, not by discovery itself, but by the refusal to temper ambition with foresight and compassion.

The recent divergence between Anthropic and OpenAI does not settle the debate over who should control these tools. It reveals instead that the question cannot be postponed. Whether governed by private institutions, governments, or by some uneasy amalgam of both, these systems will reflect the incentives and fears of those who wield them.

Shall we entrust them to the logic of the market alone? Shall we yield them entirely to the calculus of the state? Or shall we acknowledge that neither sphere, in isolation, is adequate to the magnitude of what has been created?

Mary Shelley's cautionary tale endures because it addresses not a particular technology, but a perennial temptation: to interpret possibility as destiny. In the age of agentic AI, that temptation has found new patrons in both boardrooms and ministries. If we would avoid Victor's fate, we must cultivate a language not of decreed competition or sovereign prerogative, but of deliberation, restraint, and shared responsibility. For destiny, however potent it may appear, is often no more than the echo of our own unchecked desire. And if we neglect that truth, we may awaken, as he did, to behold in our creation not the fulfillment of a dream, but the mirror of our hubris.