Hi there,
Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. This week, I’m sharing my conversation with Evan Ratliff, journalist and host of the thought-provoking podcast, Shell Game.
As always, please send me feedback on today’s post by replying to this email. I read and respond to every note.
On to the show!
🔦 Untangled HQ
I launched The Facilitators’ Workshop, a community of practice for leaders who want to perfect the craft of facilitating groups through conflict and ambiguity—so they can actually achieve their purpose. Our event on January 23, “From Conflict to Clarity & Connection,” will give you a structured process for diagramming conflict—a way to slow down, make invisible dynamics visible, and understand what’s actually happening before deciding what to do next.
I’m spinning up a lot of new things that I’m excited to tell you about. The best way to stay up to date on upcoming events and workshops is by joining The Untangled Collective.
In season 1 of Shell Game, Evan cloned his voice, hitched it to an AI agent, and then put it in conversation with scammers and spammers, a therapist, work colleagues, and even his friends and family. You can listen to that conversation here.
In season 2 of Shell Game, Evan explores what it’s like to run a company with AI agents as employees—a real company building a real product, with users and interest from venture capitalists. This is the future that Silicon Valley is actively trying to bring into existence. Sam Altman recently shared that some of his fellow tech CEOs are literally betting on when the first one-person, billion-dollar company will appear.
Now, all the hype would make you believe that we should welcome this future with open arms. Productivity will skyrocket. Time will feel abundant. Work will become frictionless and maximally efficient. That’s the story, anyway. You won’t be surprised to find that the gap between the hype and reality is, uh, massive. Evan and I talk about that gap, but Shell Game helps us see around the corner to what it might actually feel like to work with AI agents. It’s a story about:
* What’s lost when an organizational culture becomes sycophantic.
* What it’s like when your colleague regularly makes stuff up, commits it to memory, and then repeats it later as if it’s real.
* Why words like ‘agent’ and ‘agentic’ belie the reality that these large language models don’t really do anything on their own.
* The costs and complexities of anthropomorphizing agents, and how we’re voluntarily tricking ourselves.
* What humans are uniquely good at, and what it means for automation and the evolution of work.
* What Silicon Valley misunderstands about the world it’s creating, and what’s at stake in confusing fluency with judgement.
Shell Game is smart, thought-provoking, and really funny. I can’t recommend it enough. I hope you enjoy my human-to-human conversation with Evan Ratliff.
🧶Want to go deeper?
If you finished our conversation thinking, “Okay… I need to think about this more,” let me help.
* Flattery as a Feature: Rethinking ‘AI Sycophancy’
* There’s no such thing as ‘fully autonomous’ agents
* It’s okay to not know the answer
* AI isn’t ‘hallucinating.’ We are.
That’s it for now,
Charley