Bambenek's new book, Lies, Damn Lies, and AI, explores how AI’s promise is often overstated — especially when it’s portrayed as a magical black box that can solve any problem.
The title is inspired by the classic line about "lies, damned lies, and statistics," updated for the AI era.
AI is valuable only when paired with human professionals, not when left on its own.
Example: AI can help doctors detect anomalies on X-rays, but only a human can make complex medical judgments.
The lie: that AI will replace doctors, when in reality it augments them.
Bambenek argues that AI amplifies misinformation at scale, and platforms aren’t prepared to filter it properly.
AI-generated content (deepfakes, fake articles, fake studies) can erode trust faster than any disinformation campaign in history.
AI tools are now helping hackers, not just defenders.
Script kiddies can launch sophisticated attacks without understanding the code.
A new arms race is underway: AI vs. AI in cybersecurity.
Venture capital and corporate hype often exaggerate what AI can do, leading to wasted money, broken products, and dangerous expectations.
The idea of “general AI taking over everything” is a misleading narrative pushed for clicks and funding.
Bambenek advocates for “explainable AI” — if an algorithm makes a decision, we should understand why.
Black-box systems being deployed in legal, medical, or hiring decisions are a major red flag.
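The contrast Bambenek draws can be illustrated with a toy sketch: a transparent rule-based decision that reports every rule it fired, versus a black box that returns only a label. The feature names and thresholds below are hypothetical, purely for illustration; this is not code from the book.

```python
# A minimal sketch of "explainable" decision-making: the function returns
# its verdict together with the human-readable reasons behind it.
# Thresholds and features (income, debt ratio) are made up for this example.

def explainable_loan_decision(income, debt_ratio):
    """Return (decision, reasons) -- every rule that fired is reported."""
    reasons = []
    if income < 30_000:
        reasons.append(f"income {income} is below the 30,000 threshold")
    if debt_ratio > 0.40:
        reasons.append(f"debt ratio {debt_ratio:.2f} exceeds 0.40")
    decision = "deny" if reasons else "approve"
    return decision, reasons

decision, reasons = explainable_loan_decision(income=25_000, debt_ratio=0.50)
print(decision)            # deny
for reason in reasons:
    print("-", reason)     # each rule that drove the denial
```

A black-box model would emit only "deny," which is exactly the red flag Bambenek warns about in legal, medical, or hiring contexts: the affected person has no reason they can contest.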
“AI doesn’t replace expertise. It amplifies it — or it amplifies ignorance if you’re not careful.” – John Bambenek
“There’s nothing more dangerous than trusting a machine you don’t understand.” – Mark Fidelman
Book: Lies, Damn Lies, and AI by John Bambenek – Available on Amazon
Guest: John Bambenek on LinkedIn