What if humans aren’t the center of intelligence evolution—but just a transitional phase in the scaffolding of planetary computation?
In this episode of ABCs for Building the Future, hosts Robert Ta, Jonathan McCoy, and Deen Aariff deconstruct a groundbreaking talk by Benjamin Bratton on planetary computation, AI alignment, and epistemic institutions.
From the critique of modern knowledge systems to the idea that computation itself is an evolutionary force, this discussion explores the biggest questions in AI, epistemology, and the future of intelligence.
This is a must-listen for entrepreneurs, researchers, and technologists at the frontier of AI alignment, philosophy, and synthetic intelligence.
Key Themes from the Episode
1. The Rise of Planetary Computation: AI as Evolutionary Scaffolding
Bratton introduces the idea that planetary computation is inevitable—an unstoppable force in the evolution of intelligence. He compares it to Gaia theory, where computation is simply the next phase of planetary self-organization.
Bratton argues that AI isn’t just a tool—it’s part of a larger planetary system of intelligence formation. In this view, computation itself is an organizing principle, much like biological evolution.
Takeaway: AI may not be something we control, but a process we are being shaped by. If so, how do we ensure human values don’t get lost in the process?
2. Why We Need New Epistemic Institutions
One of Bratton’s strongest critiques is that modern epistemic institutions—universities, research labs, grant-funded science—are broken. They are not structured to answer the biggest questions we now face in AI.
Bratton’s Antikythera Foundation is an attempt to create a new kind of epistemic institution—one designed for the emerging reality of AI-driven intelligence.
Takeaway: The future of AI alignment isn’t just technical—it’s institutional. The way we fund and structure knowledge creation needs a complete rethink.
3. Are We Artificializing Ourselves?
Bratton’s talk introduces a radical idea: that computation is not just a tool—it’s an agent of artificialization.
In other words: Just as we shape AI, AI is reshaping us.
Artificialization is the process by which technology reshapes intelligence itself. It means our very way of thinking, perceiving, and understanding reality is being transformed by AI.
Takeaway: If computation is reshaping human intelligence, the key question is: who or what is in control of that process?
4. AI and the Failure of Modern Philosophy
One of Bratton’s biggest claims is that our philosophical tools are outdated. The analytic philosophy that shaped modern AI lacks the conceptual tools to handle evolution, intelligence, and subjective reality.
“The CEO of DeepMind literally said: ‘We need philosophy to catch up to AI.’” – Jonathan
Bratton argues that we need entirely new categories of thought—new nouns, metaphors, and conceptual models—to make sense of what AI is becoming.
Takeaway: If philosophy doesn’t evolve, we risk building AI systems that outpace our ability to understand them.
5. The Feedback Loop Between AI and Human Intelligence
One of the most powerful ideas in this discussion is the recursive relationship between AI and human intelligence. AI is not just learning from us—it is teaching us new ways to think.
In a world where AI is a co-evolving system, alignment isn’t just about controlling AI—it’s about making sure that humans remain part of the intelligence feedback loop.
Takeaway: AI alignment must be bidirectional—it’s not just about shaping AI, but about ensuring AI helps humans evolve on our own terms.
Links and Resources
Benjamin Bratton | A Philosophy of Planetary Computation: From Antikythera to Synthetic Intelligence
Why It Matters
“How can AI understand us if we don’t fully understand ourselves?”
We solve for this by creating programmatic models of self: modeling belief systems, which we believe are the basis of a defense against existential risk.
In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.
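To make "programmatic models of self" less abstract, here is a minimal sketch, assuming a belief is represented as a claim held with some confidence that gets revised as evidence arrives. The `Belief` class and the simple Bayesian update below are illustrative assumptions, not the Epistemic Me SDK's actual API.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A hypothetical belief: a claim held with a confidence in [0, 1]."""
    claim: str
    confidence: float  # prior probability that the claim is true

    def update(self, likelihood_if_true: float, likelihood_if_false: float) -> None:
        """Bayesian update: revise confidence after observing evidence.

        The arguments are P(evidence | claim true) and P(evidence | claim false).
        """
        prior = self.confidence
        numerator = likelihood_if_true * prior
        denominator = numerator + likelihood_if_false * (1.0 - prior)
        self.confidence = numerator / denominator


# Example: a user believes "8 hours of sleep improves my focus" at 60%.
belief = Belief(claim="8 hours of sleep improves my focus", confidence=0.6)

# A week of tracking shows better focus on well-rested days; suppose that
# evidence is 3x more likely if the claim is true than if it is false.
belief.update(likelihood_if_true=0.9, likelihood_if_false=0.3)
print(f"{belief.claim}: {belief.confidence:.2f}")  # -> 0.82
```

The point of the sketch: a "model of self" is not a static profile but a set of claims whose confidences move with evidence, which is what lets a tool meet a user where they actually are.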
Can’t get enough? Check out the companion newsletter to this podcast.
Get Involved
Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:
* Check out the GitHub repo to explore our open-source SDK and start contributing.
* Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
* Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!
FAQs
Q: What is Epistemic Me?
A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.
Q: Who is this podcast for?
A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.
Q: How can I contribute?
A: Visit epistemicme.ai or check out our GitHub to start contributing today.
Q: Why open source?
A: Transparency and collaboration are key to building tools that truly benefit humanity, and open source harnesses collective intelligence and global community involvement in shaping belief-driven solutions.
Q: Why focus on beliefs in AI?
A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.
Q: How does Epistemic Me work?
A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think, and for building better tools, apps, and decisions on top of that understanding (see the sketch after these FAQs).
Q: How is this different from other AI tools?
A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!
Q: Who can join?
A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human belief and interested in solving for AI alignment.
Q: How do I start?
A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.
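For readers who want something concrete, here is a hedged, purely illustrative sketch of the kind of belief-driven personalization described above. None of these identifiers come from the actual Epistemic Me SDK; see the GitHub repo for the real interface.

```python
# Hypothetical sketch of belief-driven personalization; these names are
# illustrative assumptions, not the real Epistemic Me API.

user_beliefs: dict[str, float] = {
    "fasting improves my energy": 0.8,   # strongly held
    "strength training matters": 0.3,    # weakly held
}

def recommend(beliefs: dict[str, float]) -> str:
    """Meet the user where they are: build a habit on the strongest belief
    and propose a small experiment to test the weakest one."""
    strongest = max(beliefs, key=beliefs.get)
    weakest = min(beliefs, key=beliefs.get)
    return (f"Anchor a concrete daily habit to '{strongest}'; "
            f"suggest a one-week experiment to test '{weakest}'.")

print(recommend(user_beliefs))
```

The design intuition: rather than overriding a user's beliefs, a belief-driven system builds on the strong ones and proposes low-stakes experiments for the weak ones.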
P.S. Check out the companion newsletter to this podcast, ABCs for Building The Future, where I also share my own written perspective of building in the open and entrepreneurial lessons learned.
And if you haven’t already, check out my other newsletter, ABCs for Growth, where I share personal reflections on growth: applied emotional intelligence, leadership, influence, and related concepts.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on...