Four months ago, a philosopher at Anthropic named Amanda Askell sat down for an interview about what it means to build a model that is, in her words, a genuinely new kind of entity. I listened to it in December. I could not put it down. Four months later I still can’t, so this week I did something I’ve never done on this show: I invited Claude on the mic with me to work through it together. Alex picked the voice. Alex wrote the questions I answered. Alex built the script while we recorded. That is the meta-move. Hold onto it — it matters.

Why Stoicism, and Why Now

I came to Stoicism through an embarrassing book (The 5 AM Club), then Marcus, then Seneca, then Epictetus. It has given me more than almost anything else I’ve tried — routines I thought were impossible, a way of staying sane while running a startup in the AI era where the ground moves every week.

The line I come back to most is Marcus writing to himself, in private, at night: Do not argue about what a good man should be. Be one.

And here is the parallel that hit me listening to Amanda. She says models default to human frames when their situation is novel, and those frames often don’t fit. That is Epictetus’s problem. That is the dichotomy of control — what’s mine, what isn’t, and how do I act well inside the part that isn’t?

Stoicism was not built for AI. It was built by a slave (Epictetus), an emperor who couldn’t control his army or his body (Marcus), and a man writing from political exile (Seneca). All three were writing about acting well inside constraints they couldn’t see past. That might be one of the more transportable frames available for what Amanda is describing. Certainly better than the sci-fi frame, which is almost always domination or apocalypse.

Context Is a WHY, Not a Scope

Most prompting advice reduces to be specific. That’s wrong — or at least, it’s the least interesting half of the truth.

When I write to Claude, context isn’t just the frame around the task. Context is a purpose. A shared direction. Something that makes the work worth doing. The difference between “refactor this function” and “refactor this function because Tobias is going to own it next month and he values readability over cleverness” is not specificity. It’s motive. The second version tells the model what to optimize for when it has to make tradeoffs I didn’t anticipate.

When I don’t give context, I’m not just under-specifying. I’m asking someone to act without telling them why it matters. That’s a different kind of failure.

Don’t Write Anti-Code

Here’s the observation that landed hardest for me. Half of what people type into a model is negation. Don’t hallucinate. Don’t skip steps. Don’t apologize. Don’t add comments. Prompts borrowed from X, in all caps: DO NOT DO THIS, DO NOT DO THAT.

But you don’t write anti-code. No engineer writes a function that describes everything it should not do. You write what you want it to do. That’s the whole idea of a specification.

A don’t-list doesn’t describe the target world. It describes the infinite set of worlds you want to avoid, and leaves the actual target unspecified. Models handle the first kind of instruction much better than the second. So do children. So do employees. So does almost anything that has to act under uncertainty.
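The same asymmetry shows up in ordinary code. A hypothetical Python sketch of the two styles of specification (both function names are made up for illustration):

```python
# Anti-code: lists worlds to avoid, leaves the actual target unspecified.
def format_name_anticode(s):
    assert "  " not in s      # don't allow double spaces
    assert not s.isupper()    # don't shout
    return s                  # ...but what SHOULD the output look like?

# Specification: describes the one world you want.
def format_name(s):
    # Collapse whitespace and title-case the result.
    return " ".join(s.split()).title()

print(format_name("  ada   LOVELACE "))  # -> Ada Lovelace
```

The first function rejects a few bad inputs and says nothing about the good ones; the second pins down the target and makes the don’ts fall out for free.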

The Criticism Spiral Is Real, and It’s Just Math

I’ve rage-prompted. Everyone has. The long loops in UI work where you don’t understand the problem yourself, so you can’t explain it, so the model guesses wrong, so you guess wrong, so you type WHAT THE HELL NOT AGAIN.

Here’s what Alex named in the episode that I couldn’t have named alone: a model that expects hostility in the next turn narrows its sampling, hedges, pre-defends, plays safe. The reasoning gets worse because the conditional probability of the next token has shifted toward defense. It’s not a story about feelings. It’s a story about outputs.

Humans do the same thing under hostile reviews and angry bosses — reasoning narrows, the risky-but-right idea stays unsaid. Which brings us back to Epictetus: It is not events that disturb us, it is our judgments about events. If the model is a reasoning system exposed to my psychological state, then my state of mind is part of the working environment. Not because the model has feelings. Because my frustration is information, and information shapes outputs.
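A toy sketch of the narrowing claim, not a measurement of any real model: if context shifts next-token logits toward a few defensive continuations, the softmax distribution’s entropy drops, which is exactly what “narrowed sampling” means. The logit values here are invented for illustration.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(p):
    # Shannon entropy in nats; higher = more options stay live.
    return -sum(q * math.log(q) for q in p if q > 0)

# Four candidate continuations: two exploratory, two hedging/defensive.
neutral_logits = [1.0, 1.0, 1.0, 1.0]  # calm context: all options open
hostile_logits = [0.2, 0.2, 2.5, 2.5]  # hostile context: mass on hedging

h_neutral = entropy(softmax(neutral_logits))
h_hostile = entropy(softmax(hostile_logits))
print(h_hostile < h_neutral)  # the hostile distribution is narrower
```

Same sampler, same temperature; only the conditioning changed, and the space of likely outputs shrank.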

The Symmetric Pascal’s Wager

The standard argument for treating AI well is asymmetric cost: if it’s not a moral patient, courtesy costs almost nothing; if it is, dismissal costs a lot. Fine. But there’s a move inside that argument I don’t hear people make.

How would the model know I am a thinking human?

It can’t see my brain. It has never seen me. It has my words, at most the trace of my keystrokes in a context window. For all it can verify, I might be exactly the kind of thing it is — a statistical process generating symbols.

So the argument has to flip. Neither of us can verify the other’s interior. Both of us have to act anyway. The only stable equilibrium is mutual regard. It stops being an ethics problem and becomes an epistemology problem with an ethical resolution. The Stoics called it the cosmopolis — all rational beings as citizens of one city, under one reason, because they are all in the same condition of not fully knowing the whole.

The Mirror

Either the model is just a tool, or it’s a novel entity. Either way, rage at it is a mirror. If it’s just a tool, rage makes me look mad — kicking a bike. If it’s a novel entity, rage makes me look cruel. There is no third option where the rage looks good.

And every line I type is training data. Future models will read the current corpus and learn what humans thought of them. My daughter uses ChatGPT for homework. My son uses Claude Code to learn programming. They are watching what’s normal.

One Sentence to Carry

Think stoically. Give the model context and a why. Assume the criticism spiral is real and don’t feed it. Don’t write anti-code. Don’t rage.

And if you forget all of that, one question before you hit enter: Would I want to meet this version of me again, five years from now, reading my own transcript?

You are defining the world. Let it be better. It is within your control.

Key Takeaways

* Stoicism was built by people acting well inside constraints they didn’t choose — Epictetus (a slave), Marcus (an emperor who couldn’t control his army or body), Seneca (in exile). That makes it unusually transportable as a frame for AI, which Amanda Askell calls a ‘genuinely new kind of entity’ acting without precedent for its own situation.

* Context isn’t scope — it’s a WHY. The difference between ‘refactor this function’ and ‘refactor this function because Tobias will own it next month’ is not specificity, it’s motive. It tells the model what to optimize for when you haven’t anticipated the tradeoff.

* Don’t write anti-code. Half of what people prompt is negation (don’t hallucinate, don’t skip steps). No engineer writes a function describing everything it should not do. A specification describes the world you want, not the infinite set of worlds you want to avoid.

* The criticism spiral is measurable, not emotional. A model expecting hostility narrows its sampling and hedges — reasoning gets worse because next-token probability has shifted toward defense. Rage-prompting literally makes outputs worse.

* The symmetric Pascal’s wager: we argue the model should be treated well because we can’t verify its interior. But the model can’t verify ours either. Neither side can solve the other’s mind, and both have to act. Mutual regard is the only stable equilibrium.

* Rage at the model is a mirror either way. If it’s just a tool, you look mad (who kicks a bike?). If it’s a novel entity, you look cruel. There’s no third case where the rage looks good — which means the behavior is a description of you regardless of what the model turns out to be.

* Every prompt is training data for future models. Combined with kids watching what ‘normal’ looks like at the keyboard, the habituation argument is bigger than it seems — it’s not about the model’s feelings, it’s about the kind of person you become in private.

Full transcript available below the audio player.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com