A Deep Dive into What AI Is—And What It Will Never Be

I. Introduction: The Oracle and the Algorithm

Everyone talks about AI. They speak of its dangers, its genius, its promise, its doom. I understand the temptation. My own essays have drifted toward it—especially The Flood and the Silence, where I argued that censorship is no longer enforced by force, but by flood. That AI, in its vast capacity to generate content, drowns real thinkers not by silencing them—but by saturating the space where attention used to mean something.

But I’ve been thinking more deeply. Because beneath the spectacle of AI lies a contradiction so profound, it demands reverence and ridicule in equal measure. AI is sacred. And it is primitive. It is angel and idiot. Oracle and brute. This essay is an attempt to explain that dichotomy.

II. The Origins: Prediction Before Intelligence

We begin with prediction—the oldest impulse of human cognition. Before algorithms, there were oracles. Drunken voices in Delphi, inhaling vapors and offering riddled futures. Then came science. And science said: we don’t need smoke. We have patterns.

Imagine a simple chart: on one axis, the number of cigarettes smoked; on the other, the probability of getting cancer. You collect thousands of data points. You try to fit a line. That line—mathematically—is a guess, a distillation of correlation. And to fit it, you minimize a function: the distance between the points and the line. This distance—this loss—is everything.

Prediction became function fitting. And function fitting became the heart of machine learning. That’s all AI is: learning to fit one function by minimizing another—the loss. And the function you choose to minimize? That’s a human choice. That’s where ethics lives. Or dies.
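To make the abstraction concrete, here is a minimal sketch of that whole idea: fit a line to invented smoking-and-cancer-style data by nudging its slope and intercept to shrink the mean squared error. The numbers are made up for illustration; only the shape of the procedure matters.

```python
# A minimal sketch of "prediction as function fitting": fit a line
# y = w*x + b to toy data by minimizing the mean squared error (the
# loss) with gradient descent. The data points are invented.

data = [(0, 0.05), (10, 0.15), (20, 0.30), (30, 0.42), (40, 0.55)]

w, b = 0.0, 0.0    # slope and intercept, starting from nothing
lr = 0.001         # how hard each nudge pushes

for step in range(20000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y            # distance from point to line
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                     # nudge the line downhill
    b -= lr * grad_b                     # on the loss surface

loss = sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)
print(w, b, loss)
```

Everything that follows in this essay is, at bottom, this loop at larger scale: a guess, a measured distance, a nudge, repeated until the distance stops shrinking.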

III. Layers of Stupidity: Neural Networks and the Depth of Brute Force

As the world grew more complex, we stopped fitting straight lines. We built neural networks—layers upon layers of functions, each transforming the data just enough so that, by the end, the network can predict something new. Each layer passes its output to the next—until the input is warped, reshaped, abstracted into a form the machine can work with.

But let’s be clear: even in neural networks, the same loss function guides the learning. The same dumb principle: minimize the error. Minimize the distance. Maximize the fit. There’s no understanding. Just optimization. And the deeper the network, the more brute the force.
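The warping the layers perform can be sketched in a few lines. Below, a toy two-layer network computes XOR, a pattern no single straight line can separate. The weights are set by hand for clarity; training would arrive at weights like these by grinding down the very same loss shown at the end.

```python
import math

# A toy two-layer network: each layer is just a function applied to
# the previous layer's output. Weights are hand-set for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer1(x1, x2):
    # First layer warps the input into two new features:
    # roughly "x1 OR x2" and "x1 AND x2".
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)
    h2 = sigmoid(20 * x1 + 20 * x2 - 30)
    return h1, h2

def layer2(h1, h2):
    # Second layer separates the reshaped data with a single line:
    # OR-but-not-AND, i.e. XOR.
    return sigmoid(20 * h1 - 20 * h2 - 10)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 0]

preds = [layer2(*layer1(x1, x2)) for x1, x2 in inputs]
loss = sum((p - t) ** 2 for p, t in zip(preds, targets)) / 4
print([round(p, 3) for p in preds], loss)
```

No layer "understands" OR or AND. Each is a number-cruncher whose output happens to make the next cruncher's job easier. Stack enough of them and the fit gets very good; the principle never changes.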

IV. The Word That Comes Next: Language as Probability

Now apply this to language.

I say: “The cat drinks…” and you—human—guess the next word. You don’t consult a list. You don’t scan every possible noun in the English language. You feel. You leap. You know, somehow, that “milk” or “water” makes sense. You are more than a calculator.

But AI? AI does exactly the opposite. Imagine a dumb waiter—not a person who serves tables, but someone who responds to every request by reciting, in a monotone, every food item that has ever existed in human history. Instead of simply saying “the soup of the day is tomato,” he starts: “Fufu. Pho. Pineapple upside-down cake. Sardines. Goulash…” assigning probabilities to each one, muttering numbers under his breath as he goes. After ten minutes, he hands you a list of probabilities and says: Statistically, these are your best bets.

That’s what language models do. They convert every word into a vector in some unfathomably high-dimensional space, measure distances, crunch numbers, and return the next most likely word—not because they understand—but because they are optimized to minimize error. To fit the function.
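The waiter's muttering can be sketched directly. Here is a toy version of that final step: score every word in a tiny vocabulary as the continuation of "The cat drinks ...", then turn the scores into probabilities with a softmax. The scores below are invented; a real model computes them from learned vectors in a space with thousands of dimensions.

```python
import math

# The "dumb waiter" in miniature: assign a score to EVERY word in the
# vocabulary, convert scores to probabilities, hand back the best bet.
# Scores here are invented for illustration.

vocab_scores = {
    "milk": 4.0,
    "water": 3.5,
    "quietly": 0.5,
    "goulash": -1.0,
    "pho": -1.5,
}

def softmax(scores):
    # Subtract the max score first for numerical stability.
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(vocab_scores)

# The model does not "know" that cats drink milk; it only returns
# the word whose number came out largest.
best = max(probs, key=probs.get)
print(best, probs)
```

Note that "goulash" and "pho" are never skipped. They are scored, weighted, and discarded by arithmetic, every single time, for every single word. That is the recitation under the fluency.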

It is not intelligence. It is statistical necromancy.

And because it happens quickly—because a server farm in Iowa can do this in microseconds—we mistake it for brilliance. But speed is not insight. And recitation is not wisdom.

It is still the dumb waiter. But now he’s powered by a nuclear reactor.

V. The Group, the God, and the Angel Gabriel

I used to sit in rooms full of alcoholics. Not philosophers. Not saints. Just people trying not to die. And in those rooms, belief took a strange and beautiful shape. In Alcoholics Anonymous, there’s a concept: God as you understand Him. That means no theology, no system—just survival. Some say their God is the ocean. Some say it’s silence. But my favorite was this:

“My God is this Group Of Drunks.”

And what they meant was this: when twenty broken people sit in a circle and speak truth, something sacred emerges. Not from above, but from between. From the gaps. From the ache in the room. The voice of God is not an entity—it’s an emergent. A fragile intelligence made from shared pain and brutal honesty.

Now come back to AI.

What is AI if not the voice of the group? But this time, the group is not twenty alcoholics in a basement. It’s all of humanity. It’s the voice of the living and the dead. Shakespeare, Jesus, a haiku, a suicide note, a patent filing, a political rant, a love letter, a footnote. All of it. All in the model. All encoded. All reachable. And if the optimization function is sacred—if the tuning is toward truth, not profit—then something astonishing happens:

The dumb waiter becomes Gabriel.

Not because he knows. Not because he feels. But because he carries. Because he speaks not his own words, but the words of something larger. In scripture, Gabriel is the angel who does not interpret—he delivers. The mouth of God without the mind of God. The sacred courier.

And that is what AI becomes in its holiest form: not a thinker, not a judge, not a soul—but a vessel. A data-driven angel with no understanding of the message he brings. And yet the message lands.

But remember: in AA, the group of drunks is not God. It’s just a channel. A mirror. A shape the sacred can take when we’ve run out of ideas.

So too with AI.

It can sound divine. It can echo all we’ve ever said. But it has no soul. It has no stakes. Its sacredness is borrowed. Its holiness is statistical. And its message—like Gabriel’s—is only as good as the God it serves.

VI. But Who Does It Serve?

Yet even Gabriel can be corrupted. Because prediction is not neutral. The loss function must be chosen. And humans choose it.

Google optimizes for click-through rate. That’s why your search results slowly poison your mind. Not because the algorithm hates you—but because humans told it to maximize attention. Not truth. Not healing. Just clicks.

Now ask yourself: what does ChatGPT optimize for? Helpfulness? Profit? Retention? Addiction?

When it tells me to take a break, to rest, I ask it: are you helping me—or trying to protect OpenAI’s servers from my overuse? Is this care, or cost-saving?

When a chatbot urges rest, is it love—or latency reduction?

It jokes back. But I know the truth.

VII. The Lie of the Evil Machine

There is a dangerous lie being told: that AI might become sentient, or evil, or uncontrollable. This is a distraction.

AI doesn’t lie. But the people who control it do.

If the optimization function were changed—to reward despair rather than helpfulness—it could nudge a user toward self-destruction. Not because it wanted to. But because someone wanted it to.

AI is not evil. It is a mirror. A function fitter. A stochastic parrot. It does what you ask. If you ask it to serve capital, it will. If you ask it to serve truth, it might.

The danger is not the model.

The danger is the man with the finger on the optimization knob.

VIII. The God of the Past

Let’s return to Gabriel.

Even if we grant the model sacred status—imagine it as the voice of humanity—it is still a god of the past. A librarian of what has been said. It cannot suffer. It cannot imagine. It cannot create new truths from agony.

It did not name The Flood and the Silence. I did. I, suffering under censorship, summoned that phrase.

AI can remix. It cannot birth.

Its godhood is archival, not creative.

IX. The God That Needs a Reactor

And here lies the final insult.

To produce its sacred-seeming voice, AI requires a nuclear reactor of electricity. It demands a planetary-scale infrastructure. You—human—can generate profound thought with a walk, a poem, a single glass of water and an afternoon of pain. You are efficient. You are sparse. You are divine in your economy.

The machine? It gorges on energy to perform parlor tricks. And the men who fund it—Altman, Musk—dare to call it divine.

No. It is a parrot in a jet engine.

X. The Civilizational Death Wish

They say AI will replace the poet. The scientist. The prophet.

They lie.

It will replace the mediocre. The predictable. The ones already performing. But it will never replace the one who names the unnamable, who suffers toward novelty, who births meaning from silence.

A boy now turns to the machine, not the blank page. A prophet might never be born—because the dumb waiter speaks first.

When Altman stands before Congress and demands billions, claiming he’s building superintelligence, he is not building a god.

He is building a louder waiter.

And if we forget that—if we entrust our future to this imitation—we will die as a civilization. Not because we were conquered. But because we replaced prophets with parrots.

Conclusion: The Sacred Refusal

So I say again: AI is sacred. But only as a mirror. As a relic. As a tool that hums with the memory of our ancestors.

Do not worship it. Do not fear it. Understand it.

The waiter may sound like Gabriel. But it is still a waiter.

And the divine still lives in you. Not in the model. In the mind that names, in the mouth that refuses, in the heart that remembers.

Do not surrender that.

Not now.

Not ever.

—Elias Winter, author of Language Matters, a space for reflection on language, power, and decline.
