Sunk cost in the AI era: John and Eric define the bias, share candid stories, and show how identity, tech debt, and market shifts demand pivots, reality checks, and the freedom of starting over.
John and Eric unpack the sunk cost fallacy through personal stories and clean definitions, and explain why it intensifies in the fast-moving worlds of AI and software. They contrast stubbornness-as-craft with market reality, show how identity and ego can cloud pivots, and offer practical checks: external feedback, tighter problem framing, and willingness to start over.
Name the bias: Prior investment should not drive future investment. Always optimize for present and future ROI, not the past.
Identity check: Notice when a project becomes “part of me,” because that’s when impartial judgment collapses.
Use outside calibration: Ask trusted, domain-relevant peers to sanity-check your assumptions.
Accept utilitarian wins: AI-generated code may be inelegant yet commercially superior. Tests and agents will raise quality over time, so accept where software development is heading.
Freedom is the willingness to start over: if you can let go of valuable things and start again from zero, you avoid getting bogged down by sunk costs.
The sunk cost fallacy is defined as the bias of using prior investment (time, money, effort) to justify continued investment, even when doing so impairs present decision-making.
Thinking, Fast and Slow, by Daniel Kahneman, is referenced for its System 1 / System 2 lens, which explains why sunk cost reasoning can feel emotional and irrational.
Steamboats and Morse's telegraph are cited as cases where stubborn persistence eventually met enabling technology, with a caveat about survivorship bias.
The "rich young ruler" story from Matthew 19 in the Bible is used to illustrate identity attachment and how letting go of things core to oneself can be the real barrier to change.
Elon Musk, via Walter Isaacson's biography, is referenced as an anti–sunk-cost archetype, repeatedly risking everything and switching when needed.
Benn Stancil's framing (LLMs read fast and summarize "roughly") is echoed to explain why AI coding feels transformative: machines don't slow down on code reading/writing.