“If we’re not aligned on AI alignment, we’re pretty non-aligned.” – Robert Ta
What if our survival depends not on smarter machines, but smarter belief systems?
As we enter the era of superintelligence, one question looms large: What belief systems are equipped to guide us through the next chapter of human evolution?
In this episode of ABCs for Building the Future, Robert Ta and Jonathan McCoy unpack how religion has historically evolved to ensure human survival—and why we may need a new kind of belief system to align ourselves, and our AI, for the future.
1. The Utility of Dogma: Why We Need Useful Beliefs
“Dogma gets a bad rap, but it’s a powerful tool when we admit what we can’t know.” – Jonathan McCoy
Dogma—often dismissed as rigid—is a mental model that can help us function in the face of uncertainty. Religion has historically offered dogmas that enabled people to act coherently when answers were out of reach.
In the age of AI, uncertainty is multiplying. From black-box models to geopolitical instability, we need belief systems that provide clarity, cohesion, and ethical grounding—especially for things we can't fully understand or predict.
Why it matters: If we don't agree on what matters, we can't align the tools we're building. And misaligned tools at scale create existential risk.
2. The Axial Age: A Precedent for Systemic Transformation
“All the major religions emerged at the same time. That’s not an accident—it’s an evolutionary moment.” – Jonathan McCoy
Between 800 and 300 BCE, nearly every major philosophical and religious tradition arose independently—Confucianism, Buddhism, Greek philosophy, monotheism. This period, known as the Axial Age, was marked by civilizational upheaval, new technologies (like the chariot), and a need for coherence in chaotic times.
These belief systems unified fragmented societies. They created shared values, norms, and narratives that made it possible for civilizations to grow, stabilize, and survive.
Why it matters: If we’re now entering a similar moment—this time driven by AI—we may need a modern Axial response: new belief systems that match the complexity and scale of today’s challenges.
3. Don’t Die: A Universal Operating System?
“Everything in nature plays the game of ‘Don’t Die.’ What if that became our universal principle?” – Robert Ta
The “Don’t Die” philosophy, proposed by Bryan Johnson, suggests that longevity—avoiding death—is the one universal value shared by all living systems. It cuts across culture, religion, and politics.
It’s being framed as more than just a health protocol. It’s a belief system. A full-stack ideology. A possible candidate for a “religion of the AI age” that is actionable, measurable, and highly aligned with individual and collective survival.
Why it matters: If AI alignment is the problem, perhaps longevity-based alignment is the simplest and most unifying solution. It provides a shared fitness function: don’t die.
4. The Modern Threat Landscape: More Than Just Algorithms
“We don’t have real defenses against nuclear missiles. That should be part of the AI alignment conversation too.” – Jonathan McCoy
Beyond philosophical questions, the conversation turns practical: AI is already intersecting with warfare, misinformation, and national security. If we don’t define a coherent moral framework, AI may amplify our worst tendencies—or accelerate our self-destruction.
The hosts explore how existential risk from nuclear war or autonomous weapons may demand not just technical safeguards—but societal alignment.
Why it matters: Alignment isn't just about preventing AI from going rogue. It's about ensuring the humans guiding AI are operating from shared, reality-aligned values.
Why Epistemic Me Matters
“How can AI understand us if we don’t fully understand ourselves?”
We address this by creating programmatic models of self: by modeling belief systems, which we believe are the basis of our defense against existential risk.
In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.
Get Involved
Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:
* Check out the GitHub repo to explore our open-source SDK and start contributing.
* Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
* Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!
FAQs
Q: What is the Axial Age?
A: A period from 800–300 BCE during which the world's major philosophical and religious systems independently emerged across different civilizations.
Q: What is “Don’t Die”?
A: A belief system focused on health, longevity, and existential risk reduction—proposed by Bryan Johnson as a candidate for a modern philosophy (or religion) aligned with survival.
Q: Why does this matter for AI?
A: Because without shared values, we can’t align AI. Belief systems that scale and unify are essential to building tools that serve humanity, not destroy it.
Q: Can AI become a source of dogma?
A: Potentially. As people ask AI questions they can’t answer themselves, they may start to treat its output as belief-worthy—even when it’s probabilistic or uncertain.
Q: What is Epistemic Me?
A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.
Q: Who is this podcast for?
A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.
Q: How can I contribute?
A: Visit epistemicme.ai or check out our GitHub to start contributing today.
Q: Why open source?
A: Transparency and collaboration are key to building tools that truly benefit humanity.
Q: Why focus on beliefs in AI?
A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.
Q: How does Epistemic Me work?
A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.
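As a rough illustration of what "belief-driven" modeling can mean in code, here is a minimal sketch. Note this is a hypothetical toy, not the actual Epistemic Me API: the `Belief` and `SelfModel` classes and their methods are invented for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    """A single belief, held with a confidence between 0 and 1."""
    statement: str
    confidence: float

@dataclass
class SelfModel:
    """A minimal programmatic model of self: a named collection of beliefs."""
    name: str
    beliefs: list = field(default_factory=list)

    def add_belief(self, statement: str, confidence: float) -> None:
        self.beliefs.append(Belief(statement, confidence))

    def predict_action(self, supporting_belief: str) -> float:
        """Score how likely this person is to act, based on the
        confidence of the belief that would support that action."""
        for belief in self.beliefs:
            if belief.statement == supporting_belief:
                return belief.confidence
        return 0.0  # no supporting belief, so no predicted uptake

# Example: a user who strongly believes sleep drives longevity
# is predicted to adopt sleep tracking with high likelihood.
model = SelfModel("user-123")
model.add_belief("Sleep quality drives longevity", 0.9)
print(model.predict_action("Sleep quality drives longevity"))  # 0.9
```

The design point is that predictions flow from the user's modeled beliefs rather than from population averages, which is one way to read "hyper-personalized."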
Q: How is this different from other AI tools?
A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!
Q: How can I get involved?
A: Glad you asked! Check out our GitHub.
Q: Who can join?
A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human belief and interested in solving AI alignment.
Q: How to start?
A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.
Q: Why open-source?
A: It’s about harnessing collective intelligence for innovation, transparency, and global community involvement in shaping belief-driven solutions.
P.S. Check out the companion newsletter to this podcast, ABCs for Building The Future, where I also share my own written perspective on building in the open and entrepreneurial lessons learned.
And if you haven’t already checked out my other newsletter, ABCs for Growth—that’s where I share reflections on personal growth, including applied emotional intelligence, leadership, and influence concepts.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on…