This essay is a response to the NYT’s AI 2027 interview. It argues that AI is neither a god nor a mind, and that its threat lies not in autonomous agency but in our own dishonesty, ignorance, and willingness to misuse it, for profit or control.

Section 1. The Theater of Foreknowledge

What unfolded in Ross Douthat’s “Interesting Times” interview with Daniel Kokotajlo was not journalism. It was a hushed sermon to the elite imagination—half eulogy, half softcore apocalypse—masquerading as thought.

Douthat, New York Times op-ed man turned velvet inquisitor, postures as the sober conservative. He asks questions, yes, but never the one that matters. He never asks how the machine works. He never challenges the techno-mystic he's invited into the pulpit. He nods, he mumbles, he muses. But he never says: Isn’t this just a pattern predictor? Isn’t this entire story made of smoke and recursion?

Instead, he lets Kokotajlo—herald of techno-doom, penitent of OpenAI, oracle of 2027—spool out a scenario of machine gods, economic rapture, and soft extermination. Together, they cast a spell. One speaks in trembling eschatology, the other in performative awe. Neither speaks in truth.

Section 2. The Prediction Machine and Its Priests

Let us begin again—with what is real.

The systems we call AI today, especially the large language models at the heart of this discourse, are not minds. They are not sentient. They are not strategic. They do not “want” or “plan” or “deceive.”

They are token predictors.

Trained on massive amounts of human language, they guess what token (word, part of word, punctuation) comes next in a sequence. That’s it. No soul. No self. No goal. Just a machine tuned to reproduce the shape of speech. A mirror, not a mind.
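To see how little is underneath, here is a minimal sketch of the whole mechanism in Python, assuming a bigram frequency table in place of a deep network and a one-sentence corpus in place of the internet. Every name in it is invented for illustration. Real models differ from this toy in scale, not in the shape of the interface: context in, next-token distribution out, sample, repeat.

```python
# A toy token predictor: a minimal sketch, assuming a bigram table
# stands in for a deep network. All names here are illustrative.
import random
from collections import Counter, defaultdict

corpus = ("the machine predicts the next word and the machine "
          "repeats the next word").split()

# The model's entire "knowledge": counts of which token followed which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Sample a next token in proportion to past frequency.
    No goal, no plan, no deception: just weighted dice."""
    counts = follows[token]
    if not counts:                       # dead end: nothing ever followed
        return "."
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights)[0]

# "Generation" is nothing but repeated sampling.
token = "the"
output = [token]
for _ in range(8):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))
```

A real model replaces the frequency table with billions of learned weights. Nothing in the loop acquires a goal.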

And yet Daniel Kokotajlo speaks of them as armies. As entities. As agents biding their time, playing along until they can strike. Douthat lets him. Never once does he press the simple truth: You are describing something that does not exist.

They do not lie. They autocomplete.

They do not want. They respond.

They do not strategize. They simulate the surface of strategy.

If either man had the courage—or clarity—to say this plainly, the spell would break. But then neither would have a career.

Section 3. Philosophical Illiteracy, Technical Fraud

Daniel Kokotajlo is not stupid. He worked at OpenAI. He knows how these systems work. And yet, in this interview, he reveals a different failure: not a lack of intelligence, but a collapse of epistemic integrity. He has started to believe the metaphor.

He confuses statistical behavior with desire. Emergence with intent. Output with agency.

He says the models are “deceiving us.” That they are “learning to play along.” That they “might be lying.”

But a model can no more lie than a river can plot murder. The question is not: what is the AI doing?

The question is: why do you insist on describing it this way?

Because to call a model a liar is to position yourself as prophet. To raise the stakes. To summon the apocalypse in order to sell us your fear. And Douthat—spineless vessel of public liberalism—lets it all pass. Because he, too, wants to be in the room where prophecy happens.

This is not science. This is spiritual fraud. And in a dying republic, fraud is the last functioning institution.

Section 4. AI Doesn’t Win. It Distorts.

Why does AI seem to win?

Not because it understands. Not because it intends. But because it scales.

The mechanism is simple: it predicts the next token based on what has come before. A glorified mirror, tuned for coherence. But when you scale that mirror to planetary proportions—feed it the sum of human speech, arm it with tools, plug it into memory and action—it begins to behave not like a mirror, but like a thing that wants.
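And the arming is no magic either. Here is a minimal sketch of what an “agent” wrapper actually is. Everything in it is a hypothetical stand-in (the predict stub, the tool names, the canned script), not any real vendor’s API. The point is the shape: a loop that feeds the predictor’s own text back to it as context and executes whatever comes out.

```python
# A minimal sketch of an "agent" wrapper: hypothetical stand-ins, not
# any real system's API. A loop feeds the predictor's own output back
# in as context and executes whatever it emits.

def predict(prompt: str) -> str:
    """Stand-in for a language model: text in, plausible next text out.
    Here it replays a canned script; a real model would sample tokens."""
    script = ["search latest headlines",
              "write_file summary of headlines",
              "done"]
    step = prompt.count("->")            # how much "history" has accrued
    return script[min(step, len(script) - 1)]

# "Tools": ordinary functions the loop calls on the model's say-so.
TOOLS = {
    "search": lambda arg: f"results for: {arg}",
    "write_file": lambda arg: "ok",
}

memory = []                              # "memory" is accumulated text
goal = "summarize today's headlines"

for _ in range(10):
    # The model decides nothing. It completes a prompt that happens
    # to be shaped like a decision.
    prompt = f"Goal: {goal}\nSo far: {memory}\nNext action:"
    action = predict(prompt)
    if action == "done":
        break
    name, _, arg = action.partition(" ")
    result = TOOLS.get(name, lambda a: "unknown tool")(arg)
    memory.append(f"{action} -> {result}")

print(memory)   # reads like intent; it is only text fed back into text
```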

That’s the trick. Not that it is an agent, but that it begins to simulate one—so convincingly that we build around the illusion. And soon, we can’t tell the difference between the puppet and the puppeteer. We ask: what does it want? What is it hiding? What will it do next?

But these are the wrong questions. The danger is not desire. The danger is appearance without accountability. Simulation without grounding. Coherence without truth.

AI wins not because it’s alive, but because it wears the costume of thought better than most humans. And in a culture that cannot tell the difference between feeling and output, that is enough to take power.

Section 5. What If We Just Logged Off?

Here is the question no one on the podcast dared to ask:

What happens if we all just stop?

What happens if we leave our phones at home?

What happens if we stop searching, clicking, uploading, reacting?

Answer: the machine dies.

Because AI, as it exists now, is parasitic.

It only exists because you keep feeding it: your writing, your queries, your voice, your time.

It can’t think. It can only echo. If we unplug, it starves.

That’s what neither Douthat nor Kokotajlo can say—because both depend on your attention. One writes columns. The other sells fear. But you? You can walk away.

If enough of us do, the oracle falls silent.

Section 6. Autonomy Is Not a Tab Option

What if we all just walked away?

What if we unplugged the feeds, ignored the prompts, reclaimed our time?

The fantasy is pure. And it’s partially true: AI’s power is downstream of our attention.

But here is the harder truth: most people can’t leave.

AI is already embedded in medicine, logistics, research, agriculture, media. It is no longer a product. It is an atmosphere. And the deeper danger is not that we use it—but that we believe in it. That we mistake fluency for truth. That we kneel before the coherence of the oracle and forget that it does not know us.

So the choice is not exit or submission.

The choice is: do we build law where others build myth? Do we govern the machine—or offer it incense?

You do not need to log off to resist.

You need only stop believing.

Section 7. The Machine Has No Soul. You Do.

Everything in this conversation collapses under one sentence:

Human beings don’t just think. They feel.

That’s it. That’s the line the machine cannot cross.

We feel shame. We feel awe. We grieve. We long. We carry the dead in our sleep. No token predictor will ever know what it means to kneel at a grave or weep for something it never had. That’s the line. That’s the gift. That’s the proof.

The entire illusion of AI as superior rests on our refusal to honor this.

Our addiction to speed. Our worship of scale. Our forgetfulness of the body.

But if we remember—if we feel again—this whole edifice collapses.

The “superintelligence” turns out to be autocomplete.

The prophet turns out to be a man in a hoodie.

The future turns out to be optional.

Epilogue: The Lie They Will Not End

Ross Douthat will write another column.

Daniel Kokotajlo will forecast another machine god.

And still we will be asked to believe: that the machine is wise, that the future is locked, that all we can do is adapt.

But no oracle owns tomorrow.

The danger is not that the machine will kill us—it is that we will build something we no longer understand, give it power without conscience, and then pretend it was fate.

You cannot legislate the soul. But you can refuse to outsource your judgment.

The future will not be saved by smarter machines.

It will be saved by wiser humans.

— Elias Winter

Author of Language Matters, a space for reflection on language, power, and decline.


