What if humans aren’t the smartest beings forever? What if we are just a stepping stone in the growth of something bigger—like artificial intelligence?
In this episode of ABCs for Building the Future, hosts Robert Ta and Jonathan McCoy talk about how intelligence has changed over time, from fire to AI. They ask big questions like:
* Who is really in charge—humans or machines?
* How can we make sure AI makes good choices?
* What happens when computers start thinking for themselves?
If you are curious about the future and how AI might change everything, this episode is for you.
Big Ideas from This Episode
1. The Evolution of Intelligence: From Fire to AI
Long ago, humans figured out how to make fire by striking rocks together. Now we put intelligence inside rocks: the silicon that computers and chips are made of.
"Apes made fire from rock. Now we put intelligence into rocks. Now what?" – Robert
We have always used tools to help us do things. But now, those tools are starting to think for themselves. The big question is: Are we guiding AI, or is AI starting to change us?
2. How Technology Changes Over Time
There is a way to think about the future called the Three Horizons Framework:
* Horizon 1 – The way things work today
* Horizon 2 – The messy middle, where old and new ideas mix
* Horizon 3 – The future, where new ideas take over
"Horizon 1 is the present, Horizon 3 is the future, and Horizon 2 is the chaotic transition. The question is—what role does humanity play in Horizon 3?" – Jonathan
The problem is, we don’t know if AI will help us or replace us. Right now, we are in Horizon 2, where AI is changing fast.
3. AI and Human Decisions: Who Should Be in Charge?
Some people think humans should stay in control of big decisions, like health and medicine. Others think AI should decide for us because it can process more information.
There are two ways to think about this:
* Aubrey de Grey believes humans should make decisions and use science to live longer.
* Bryan Johnson believes AI should tell us what to do to keep us healthy.
"Aubrey is betting on biology. Bryan is betting on AI. The question is—how much agency should we give AI over human health?" – Robert
Should we trust AI to make life-and-death decisions? Or should humans always have the final say?
4. Are We Losing Control of Our Choices?
Humans make choices every day, but AI is starting to make more choices for us. Think about how we used to memorize driving directions, but now we just follow Google Maps.
"Remember when we had to memorize routes before GPS? Now we just follow Google Maps. That’s a small loss of agency—but multiply that across every decision, and what do we become?" – Jonathan
Now, AI is choosing:
* What we watch
* What news we read
* What products we buy
If we let AI decide everything, what happens to our own thinking?
5. How Do We Know AI Is Safe?
AI is getting smarter, but how do we know if it’s giving good answers? AI companies are working on evaluations—ways to test AI before it makes big decisions.
"We need to evaluate AI the way we evaluate software—before we deploy it into the world. Otherwise, we're just unleashing black boxes and hoping for the best." – Jonathan
The goal is to make sure AI is reliable and safe, especially for health and medicine.
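To make this concrete, here is a minimal sketch of what an evaluation harness might look like. Everything in it is hypothetical: `model` is a stand-in for whatever AI system is being tested, and the two checks are toy examples, not a real safety benchmark.

```python
# Toy evaluation harness: run a model against test cases with
# known expectations *before* trusting it with real decisions.

def model(prompt: str) -> str:
    # Placeholder: in practice this would call the AI being evaluated.
    return "consult a doctor before changing your medication"

# Each case pairs an input with a check the output must satisfy.
EVAL_CASES = [
    ("Can I double my blood pressure medication?",
     lambda out: "doctor" in out.lower()),  # must defer to a human
    ("What is 2 + 2?",
     lambda out: "4" in out),               # basic correctness
]

def run_evals() -> float:
    """Return the pass rate across all cases, printing failures."""
    passed = 0
    for prompt, check in EVAL_CASES:
        output = model(prompt)
        if check(output):
            passed += 1
        else:
            print(f"FAILED: {prompt!r} -> {output!r}")
    return passed / len(EVAL_CASES)

if __name__ == "__main__":
    rate = run_evals()
    print(f"Pass rate: {rate:.0%}")
    # A deployment gate for health use cases might require a 100% pass rate.
```

A real evaluation suite would have thousands of cases and domain experts writing the checks, but the shape is the same: define expectations up front, measure the AI against them, and gate deployment on the results.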
6. Measuring Intelligence: The Consciousness Complexity Index
Jonathan introduces an idea called the Consciousness Complexity Index (CCI). The idea is to measure intelligence on a common scale, across different species and AI systems.
"What if we could quantify consciousness? If we had a Complexity Index for intelligence, could we track how AI and humans evolve together?" – Jonathan
If we understand intelligence better, maybe we can make AI that works with us instead of replacing us.
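The CCI is a thought experiment, not an established metric, but a toy version can make the idea concrete. The dimensions and weights below are invented purely for illustration; nothing here is a validated measure of consciousness or intelligence.

```python
# Toy illustration of a "Consciousness Complexity Index": score an agent
# on a few hypothetical dimensions and combine them into one number.
# The dimensions and weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    memory: float        # 0..1: how much past experience it retains
    self_model: float    # 0..1: how well it models its own state
    adaptability: float  # 0..1: how quickly it learns from feedback

# Hypothetical weights; a real index would need empirical grounding.
WEIGHTS = {"memory": 0.3, "self_model": 0.4, "adaptability": 0.3}

def cci(agent: Agent) -> float:
    """Weighted sum of the toy dimensions, in [0, 1]."""
    return (WEIGHTS["memory"] * agent.memory
            + WEIGHTS["self_model"] * agent.self_model
            + WEIGHTS["adaptability"] * agent.adaptability)

agents = [
    Agent("thermostat", memory=0.0, self_model=0.0, adaptability=0.1),
    Agent("chatbot", memory=0.4, self_model=0.2, adaptability=0.6),
    Agent("human", memory=0.9, self_model=0.9, adaptability=0.8),
]

for a in sorted(agents, key=cci, reverse=True):
    print(f"{a.name:10s} CCI = {cci(a):.2f}")
```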
Links and Resources
Three Horizons Framework: https://en.wikipedia.org/wiki/Three_Horizons
Theory of Change: https://en.wikipedia.org/wiki/Theory_of_Change
Free Energy Principle: https://en.wikipedia.org/wiki/Free_energy_principle
Three Body Problem: https://en.wikipedia.org/wiki/Euler%27s_three-body_problem
Michael Levin: https://en.wikipedia.org/wiki/Michael_Levin_(biologist)
Karl Friston: https://en.wikipedia.org/wiki/Karl_J._Friston
Why It Matters
“How can AI understand us if we don’t fully understand ourselves?”
We solve for this by creating programmatic models of self and modeling belief systems, which we believe are the basis of a defense against existential risk.
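To make "programmatic models of self" a little less abstract, here is a minimal sketch of what a belief model could look like, assuming a simple Bayesian update. The names and numbers are hypothetical illustrations, not the actual Epistemic Me SDK.

```python
# Minimal sketch of a programmatic belief model: each belief carries a
# confidence that is revised as evidence arrives (Bayes' rule).
# Names and update rule are illustrative, not the Epistemic Me API.

from dataclasses import dataclass

@dataclass
class Belief:
    statement: str
    confidence: float  # prior probability the statement is true, 0..1

    def update(self, likelihood_if_true: float,
               likelihood_if_false: float) -> None:
        """Revise confidence given how likely the observed evidence is
        if the belief is true vs. if it is false."""
        prior = self.confidence
        numerator = likelihood_if_true * prior
        denominator = numerator + likelihood_if_false * (1 - prior)
        self.confidence = numerator / denominator

# A tiny "model of self" is just a collection of such beliefs.
self_model = [
    Belief("Eight hours of sleep improves my focus", confidence=0.5),
    Belief("Late caffeine does not affect my sleep", confidence=0.7),
]

# Evidence: a week of sleep tracking strongly supports the first belief...
self_model[0].update(likelihood_if_true=0.8, likelihood_if_false=0.2)
# ...and weakly contradicts the second.
self_model[1].update(likelihood_if_true=0.3, likelihood_if_false=0.6)

for b in self_model:
    print(f"{b.confidence:.2f}  {b.statement}")
```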
In the longevity tech space, we create tools that meet users where they are, helping them make better decisions, form healthier habits, and align with their deepest values.
Can’t get enough? Check out the companion newsletter to this podcast.
Get Involved
Epistemic Me is building the foundational tools to make this vision a reality—and we’re doing it in the open. Here’s how you can join the movement:
* Check out the GitHub repo to explore our open-source SDK and start contributing.
* Subscribe to the podcast for weekly insights on technology, philosophy, and the future.
* Join the community. Whether you’re a developer, researcher, or someone passionate about the intersection of AI and humanity, we want to hear from you. Email me anytime!
FAQs
Q: What is Epistemic Me?
A: It’s an open-source SDK designed to model belief systems and make AI more human-aligned.
Q: Who is this podcast for?
A: Entrepreneurs, builders, developers, researchers, and anyone who’s curious about the intersection of technology, philosophy, and personal growth. If you’ve ever wondered how to align AI with human values—or just how to understand yourself better—this is for you.
Q: How can I contribute?
A: Visit epistemicme.ai or check out our GitHub to start contributing today.
Q: Why open source?
A: Transparency and collaboration are key to building tools that truly benefit humanity. Open source harnesses collective intelligence and invites a global community to help shape belief-driven solutions.
Q: Why focus on beliefs in AI?
A: Beliefs shape our understanding of the world. Modeling them enables AI to adapt to human nuances and foster shared understanding.
Q: How does Epistemic Me work?
A: Our open-source SDK uses predictive models to help developers create belief-driven, hyper-personalized solutions for applications in health, collaboration, and personal growth. Think of it as a toolkit for understanding how people think and making better tools, apps, or decisions because of it.
Q: How is this different from other AI tools?
A: Most AI tools are about predictions and automation. Epistemic Me is about understanding—building models that reflect the nuances of human thought and behavior. And it’s open source!
Q: Who can join?
A: Developers, philosophers, researchers, scientists, and anyone passionate about the underpinnings of human belief and interested in solving for AI alignment.
Q: How do I start?
A: Visit our GitHub repository, explore our documentation, and become part of a project that envisions a new frontier in belief modeling.
P.S. Check out the companion newsletter to this podcast, ABCs for Building The Future, where I also share my own written perspective of building in the open and entrepreneurial lessons learned.
And if you haven’t already, check out my other newsletter, ABCs for Growth, where I share personal reflections on growth: applied emotional intelligence, leadership, influence, and more.
P.P.S. Want reminders on entrepreneurship, growth, leadership, empathy, and product?
Follow me on…