The Logic of the Lesser Evil

War, artificial intelligence, and the comfort of what we call necessity

March 6, 2026

"In their moral justification, the argument of the lesser evil has played a prominent role. If you are confronted with two evils, the argument runs, it is your duty to opt for the lesser one, whereas it is irresponsible to refuse to choose altogether. Its weakness has always been that those who choose the lesser evil forget quickly that they chose evil."

—Hannah Arendt, "Personal Responsibility Under Dictatorship"

Arendt's observation captures a pattern that remains as visible in modern politics and economic life as it was in her own time: the gradual normalization of decisions that seem defensible in isolation but prove corrosive when accumulated. A leader chooses the lesser evil because the alternative appears worse; a company adopts a cheaper technology because its competitors already have; a market slides toward a new equilibrium because each participant believes there is no real choice. Over time the language of moral judgment fades, and the system begins to behave as though no one ever chose anything at all.

In recent weeks, this pattern has appeared in an unusually literal form. When President Donald Trump authorized military strikes against Iranian targets, the decision was widely framed—even by some critics—as a choice between undesirable options. Allow Iran to advance its capabilities unchecked, the argument went, or strike now and accept the risks of escalation. Within that framework, the strike could be described as the lesser evil—a dangerous action justified by the belief that inaction would produce something worse.

Arendt's warning is that such reasoning carries its own danger. The moment the choice is framed as a comparison between evils, the moral horizon begins to narrow. And what disappears first is the memory that the chosen path remains an evil at all.

Something similar is beginning to appear in discussions about artificial intelligence and the future of the economy, though the stakes are unfolding more slowly and therefore more insidiously.

A recent memo from CitriniResearch, titled "The Consequences of Abundant Intelligence," attempts to imagine what the next few years might look like if the most optimistic assumptions about artificial intelligence turn out to be correct. The essay adopts a slightly disorienting device: it is written as though the reader were looking backward from June 2028, even though the events described are still unfolding in early 2026.

The essay asks us to imagine a world in which artificial intelligence continues improving rapidly and the cost of intelligence (once among the scarcest resources in the economy) collapses toward zero. What happens when companies respond in the way they reliably do whenever a critical input becomes dramatically cheaper and more capable?

The authors do not describe a technological catastrophe (quite the opposite). Their scenario assumes that artificial intelligence works extremely well, and that the disruption emerges precisely because the technology succeeds.

At the center of the argument lies what the authors call an "intelligence displacement spiral," a feedback loop driven by the ordinary incentives of modern firms. As AI systems improve, companies begin using them to automate tasks previously performed by human workers, allowing labor costs to fall and profit margins to expand. Those profits are then reinvested into additional AI capability, accelerating the next round of automation. Crucially, each step appears rational in the moment and can be defended as the lesser evil: "If we don't do it, someone else will, and we might as well do it better, or at least more safely, than they would." Within this framework, the decision to automate begins to look less like a moral choice and more like an inevitability.

The Citrini memo sketches the early stages of this shift through a series of plausible examples. Consumer agents eliminate friction in subscriptions, insurance renewals, and travel booking, gradually eroding entire categories of service intermediaries. Real-estate commissions shrink as AI systems armed with transaction data replicate the informational advantages historically held by brokers. Delivery platforms lose their competitive edge once code-generating models make it trivial to launch competing applications, while algorithmic shoppers route demand automatically toward whichever service offers the lowest fee—a dynamic already visible in startups such as Phia, the price-comparison platform founded by Bill Gates's daughter and backed by millions in venture funding. Seen from this angle, the economy begins to resemble what the memo calls "a long daisy chain of correlated bets on white-collar productivity growth."

When workers in these sectors lose their jobs—or find themselves pushed into lower-paying roles—the consequences ripple outward. Consumption declines, and firms respond by cutting costs further. Automation becomes the most obvious tool available. And because the underlying technology continues improving each quarter, the cycle accelerates.
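The shape of this loop can be made concrete with a toy simulation. Every number below is an invented assumption for illustration; the memo itself is qualitative and supplies no figures. The sketch only shows the structure of the feedback: displaced labor cost is partly reinvested in AI capability, which raises the next quarter's displacement, while consumption tracks employment downward.

```python
# Toy sketch of the "intelligence displacement spiral" feedback loop.
# All parameters are illustrative assumptions, not figures from the memo.

def displacement_spiral(quarters=12, workers=100.0, capability=1.0,
                        base_rate=0.02, reinvest=0.5):
    """Simulate the loop: automation displaces labor, savings are
    partly reinvested in AI capability, which accelerates the next
    round of automation. Returns one (workers, capability,
    consumption) tuple per quarter."""
    history = []
    for _ in range(quarters):
        displaced = workers * base_rate * capability  # jobs automated this quarter
        workers -= displaced
        capability += reinvest * displaced / 10.0     # savings partly reinvested in AI
        consumption = workers                         # consumption tracks employment
        history.append((round(workers, 1), round(capability, 2),
                        round(consumption, 1)))
    return history

if __name__ == "__main__":
    for q, (w, cap, c) in enumerate(displacement_spiral(), start=1):
        print(f"Q{q}: workers={w:5.1f}  capability={cap:4.2f}  consumption={c:5.1f}")
```

The point of the toy is not the numbers but the curvature: each quarter's displacement is larger than the last, even though every individual step is small and locally defensible.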

The imagined landscape of 2028 is therefore not a sudden crash but a slow unraveling: wage compression across professional sectors, weakening consumption, falling real-estate values in technology-heavy cities, and mortgage markets built on the fragile assumption that high-earning professionals will remain high-earning professionals indefinitely.

Not surprisingly, the memo quickly attracted rebuttals.

Citadel Securities published a response arguing that the scenario misunderstands how technological change actually spreads through the economy. The note, written by strategist Frank Flight, opens with a set of statistics that appear reassuring at first glance. Unemployment remains low, AI investment still represents only a small share of total economic output, and job postings for software engineers have increased rather than declined—though, as Flight does not mention, tech companies frequently advertise positions they never intend to fill or that have already been staffed.

Citadel's argument ultimately rests on a familiar distinction between technological capability and economic adoption: technologies may improve rapidly, but their integration into real economic systems occurs through institutions—firms, supply chains, regulatory frameworks—that historically move much more slowly. Electricity required decades to reorganize industrial production after its invention. Personal computers diffused gradually across offices. Even the internet followed a recognizable adoption curve rather than transforming the economy overnight.

Another response came from writer David Oks in his essay "Why I'm Not Worried About AI Job Loss," which addresses the broader wave of anxiety surrounding artificial intelligence. Oks argues that fears of sudden mass displacement underestimate the adaptability of economic systems. Drawing on the principle of comparative advantage, he suggests that humans working alongside AI will remain more productive than AI operating alone. Workers, he insists, will adapt by incorporating these tools into their existing workflows rather than being replaced by them outright.

Both critiques share an implicit reassurance: the worst outcomes are unlikely. Yet they also rest on a subtle assumption that Arendt's observation makes harder to ignore: each treats the danger as though it must appear immediately in order to be real.

The Citrini memo, by contrast, describes a process structured around delay. Companies experiment cautiously with automation, early layoffs expand margins and encourage reinvestment in the same technologies that made those layoffs possible, and displaced workers continue spending for a time using savings or severance packages, allowing the macroeconomic data to remain stable long after the underlying structure has begun to shift.

For a while, everything still looks normal.

This is precisely how systems governed by the logic of the lesser evil tend to evolve. Each individual decision appears reasonable in isolation, and each participant believes they are responding to circumstances rather than shaping them; only later do the cumulative effects become visible, and by then the process feels less like a series of choices than an inevitability.

Arendt's warning was that the language of necessity often disguises the fact that someone, somewhere, made a decision.

The comparison to military strategy may seem dramatic, yet the underlying logic is strikingly similar. When leaders justify a strike as the lesser evil, they acknowledge that harm will occur while insisting that the alternative would be worse. In many cases that judgment may even be correct. But the moral danger lies in forgetting that harm was chosen at all.

Something similar may now be unfolding in the technological economy. Companies adopt artificial intelligence not because they wish to eliminate jobs but because failing to adopt it appears economically reckless. Executives reduce headcount not out of cruelty but out of competitive necessity (after all, investors reward the firms that move fastest).

No single actor intends to destabilize the system; yet the incentives gradually align in that direction.

None of this guarantees that the future imagined in the Citrini memo will arrive exactly as described. But Arendt's insight remains difficult to dismiss. Systems built on the repeated choice of the lesser evil tend, over time, to forget that the original choice involved evil at all.

And by the time the consequences become visible, the people responsible will say they had no choice. They chose the lesser evil—and in choosing it often enough, they forgot it was evil at all.