The New Idols of Silicon Valley

Why the AI assistant is replacing God (and human judgment)

March 16, 2026

Whenever the tech world invents something strange, it immediately dresses the thing up in a human costume and pretends it isn't strange at all. The machine must have a name; it must be friendly; it must address you in the tone of a patient schoolteacher who has just finished a mindfulness course and now believes every question is "a great insight."

In education this tendency has become almost universal. Children no longer simply consult software; they speak to characters. There is Khan Academy's Khanmigo, Flint's Sparky, and MagicSchoolAI's Raina—a cheerful population of pedagogical ghosts who hover in the corner of the screen and gently encourage you to finish your algebra.

If this arrangement sounds faintly pathological, that's because a nineteenth-century German philosopher already diagnosed it. His name was Friedrich Nietzsche. Nietzsche spent much of his career describing what happens when a civilization loses confidence in its own authority and begins searching for substitutes. His most famous line ("God is dead") was not simply a comment about religion but a diagnosis of a deeper psychological shift: the old sources of meaning and judgment had eroded, but human beings had not become comfortable deciding things for themselves. Instead they began inventing new structures to tell them what was true, what was valuable, and what should be done next. The modern AI "assistant" is beginning to look suspiciously like one of those structures.

But first we should return to the children.

The anthropomorphism of educational AI is not accidental. A machine that behaves like a calculator is useful but emotionally inert; a machine that behaves like a person, by contrast, invites trust. The industry has discovered that if you give the algorithm a face, a name, and a warm tone of voice, people will begin to relate to it as if it possessed intentions and understanding (perhaps even care). The difference between software and companion blurs very quickly once the software begins speaking in the rhythms of conversation. A generation of students is being trained to address machines the way previous generations addressed teachers or older siblings. The machine congratulates them on their reasoning, encourages them when they feel stuck, and says things like "that's a thoughtful question"; and the child, who has never known a world in which a computer did not speak back, accepts the relationship without much hesitation.

At the very same moment, however, another transformation is happening in the deeper layers of the technology industry. Venture capitalists and startup founders have become obsessed with the idea of agents—not conversational companions, but autonomous systems that act. The dream is not that you will talk to software, but that software will talk to other software on your behalf. One system writes code, another schedules meetings, another negotiates contracts, and still another purchases cloud resources from yet another system. Humans become supervisory figures somewhere in the background while the real activity unfolds machine-to-machine. As one recent analysis of the newest YC startup cohort observed, the industry is increasingly building infrastructure not for people but for the machines themselves.

Placed side by side, these two developments form a strange contradiction. On one side of the industry, enormous effort is devoted to making machines seem human. On the other, enormous effort is devoted to building systems that bypass humans entirely. The same companies that spend months perfecting the tone of a chatbot tutor are simultaneously investing billions into infrastructures designed for autonomous software agents. In one context the machine must appear as a companion; in another it must operate as an independent economic actor. The result is a peculiar hall of mirrors in which humans speak to machines that pretend to be people while machines conduct transactions in networks designed to function without human intervention. Everyone is performing someone else's role.

Nietzsche would have recognized the underlying dynamic immediately. He suspected that when traditional authorities collapsed, modern societies would not become radically independent thinkers. Instead they would begin constructing new idols that imperceptibly absorbed the authority once attributed to gods. Bureaucracies, moral systems, the authority of science, the authority of the state: all of these could become substitutes for the vanished source of meaning. What mattered was not whether they were literally divine; what mattered was that people treated them as though they were.

The conversational machine fits into this pattern with eerie precision. The AI assistant sits quietly in the device in your pocket, prepared to answer any question. It explains things patiently and speaks with calm authority. Ask it how to phrase an apology, structure a piece of writing, or interpret a complicated idea, and it responds instantly. Increasingly people consult it not only for factual information but for judgment (how to handle a conversation, how to respond to a conflict, how to think about a problem that troubles them). The machine becomes an oracle with perfect availability. It is not omniscient, of course; it is simply a probabilistic model predicting the next plausible sentence. But the psychological experience of consulting it is uncannily similar to the experience of consulting an authority that seems to know more than you do.

Nietzsche would have recognized the temptation immediately. The disappearance of traditional authorities does not eliminate the human desire for guidance; it simply redirects that desire toward new structures. The anthropomorphic chatbot arrives at precisely the right historical moment to occupy that role. It does not need to possess real wisdom; it merely needs to present itself as a voice that always has an answer.

At the same time, Nietzsche's philosophy revolved around a concept that the modern technology industry has begun invoking in its own peculiar way: agency. Silicon Valley now celebrates the "highly agentic" individual, the founder who simply acts, not waiting for permission but pushing forward, reshaping the world through sheer initiative. Venture capitalists have become fascinated with this personality type, searching for people who possess the peculiar psychological intensity required to build companies in uncertain conditions. In Nietzsche's vocabulary, this emphasis on self-directed action would sound familiar. His vision of human flourishing involved individuals capable of creating values for themselves rather than passively inheriting them. The figure who embodies this capacity—the creator of new values after the collapse of old ones—appears in his writings as the Übermensch, the individual who accepts the terrifying freedom of a world without predetermined meaning.

But here the technological moment reveals its deepest irony. The industry that celebrates agency in founders is simultaneously building tools that dissolve agency for everyone else. If your email is written by an AI, your research summarized by an AI, your conversations drafted by an AI, your daily schedule managed by an AI, then the number of decisions you actually make begins to shrink. The machine becomes a kind of prosthetic judgment system, constantly suggesting the next action. For Nietzsche, the defining challenge of modern life was precisely the willingness to confront the burden of decision—to decide what matters, what to pursue, and what to create. A civilization that increasingly delegates these decisions to automated systems might look to him like a civilization fleeing that burden.

Nietzsche described the endpoint of such a process with a character he called the last man: a comfortable, risk-averse figure who avoids struggle and responsibility, preferring convenience above all things. The last man does not aspire to greatness or self-creation, instead seeking security and ease. When Nietzsche imagined this future figure, he pictured someone blinking complacently and saying, "We have invented happiness." It is difficult not to hear an echo of that tone in the promise that AI will remove friction from every aspect of life, allowing machines to handle the difficult parts of thinking while humans enjoy the results.

The paradox of our moment, then, is almost perfectly Nietzschean. The culture celebrates the heroic entrepreneur—the hyper-agentic founder who reshapes the world through sheer will—while building a technological environment in which ordinary individuals are coaxed to relinquish their own capacity for judgment. The founder embodies the will to power, and the user is trained to outsource it. Meanwhile the machines themselves are carefully designed to appear human, speaking in the warm tones of tutors and assistants, even as the deeper architecture of the economy shifts toward systems designed to function without human participation at all.

Nietzsche warned that the most dangerous idols are not the ones we consciously worship but the ones we treat as neutral tools. The AI assistant appears harmless precisely because it presents itself as helpful software. Yet the more we rely on such systems to interpret the world for us, the more we risk losing the habit of interpretation ourselves. The machine does not have to dominate us. It merely has to be convenient enough that we gradually stop exercising the muscles of judgment that once defined human thought. And if Nietzsche were watching millions of people asking a chatbot what to think, write, and do next, he might suspect that the modern world has not created a new kind of intelligence at all but has simply built a very polite mechanism for avoiding the terrifying freedom of thinking for ourselves.