In this episode of the Inspiring Brave Leaders Podcast, Sabine Gromer speaks with Gregor Sieber – a software industry veteran with over 20 years of experience, most recently Managing Director at CloudFlight Austria. Gregor brings a rare combination of deep technical expertise and sharp strategic thinking to one of the most pressing questions facing leaders today: Are you truly ready for what AI is about to demand of you?
Together, Sabine and Gregor challenge some of the most dangerous myths and biases keeping executives passive in the face of exponential change. From status quo bias and optimism bias to confirmation bias, they name what's getting in the way and make the case for why traditional three-to-five-year strategies are no longer fit for purpose.
The conversation then turns to scenario thinking, antifragility, and what it means to lead organizations that don't just survive uncertainty but are built to benefit from it.
Whether you're just starting your AI journey or rethinking your entire organizational model, this episode is for you.
Our Guest
https://www.linkedin.com/in/gsieber/
https://www.postdigitalleader.blog/
Shownotes
AI Sources – Recommendations by Gregor
Gregor Sieber suggests following these blogs and people for AI insights. Since he doesn't have time to listen to every Lex Fridman or Dwarkesh episode, he uses those podcasts to spot trending topics and guests, then researches them through conventional channels. He also recommends using AI tools to discover more good AI sources.
https://www.deeplearning.ai/the-batch/
https://www.deeplearning.ai/the-batch/tag/data-points/
https://huggingface.co/blog
https://developer.nvidia.com/blog
https://techcrunch.com/category/artificial-intelligence/
https://hai.stanford.edu
https://bair.berkeley.edu/blog/
https://machinelearning.apple.com
https://openai.com/news/
https://openai.com/research/index/
https://research.google/blog/
https://ai.google/research/
https://deepmind.google/blog/
https://blog.google/innovation-and-ai/models-and-research/google-deepmind/
https://www.distillabs.ai/blog
https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026
http://yann.lecun.com
https://www.linkedin.com/in/yann-lecun/
https://www.linkedin.com/in/andrewyng/
https://www.linkedin.com/in/demishassabis/
https://www.darioamodei.com
https://www.linkedin.com/in/fei-fei-li-4541247/
https://lexfridman.com/podcast/ – if not for listening, then for identifying topics and people
https://karpathy.ai
https://simonwillison.net
https://www.dwarkesh.com – Dwarkesh Patel podcast – if not for listening, then for identifying topics and people
https://twimlai.com/podcast/twimlai
https://neurips.cc – one of the most important conferences in the field
https://arxiv.org – find papers, e.g.: https://arxiv.org/list/cs.AI/recent
https://www.platformer.news/
https://garymarcus.substack.com/
https://datasociety.net/
https://jack-clark.net/
https://thegradient.pub
https://www.alignmentforum.org/
https://www.lesswrong.com/
https://www.latent.space
https://www.oneusefulthing.org
https://www.ben-evans.com/essays