Are you worried about AI taking over your creative work – or worse, taking over the world? Do you find yourself caught between the techno-evangelists claiming AI will solve all our problems and doomsayers warning of a robotic apocalypse? Perhaps you've wondered if those impressive AI demos really represent what these systems can do consistently, or if there's a gap between marketing hype and everyday reality.
In this episode of Creativity Excitement Emotion, David shares a rare insider perspective on artificial intelligence based on his daily professional experience working with these systems. With candid examples, he reveals the surprising limitations of current AI tools while cutting through the hype and hysteria surrounding the technology.
Whether you're an artist concerned about AI-generated competition, a professional wondering how to incorporate these tools ethically, or simply someone trying to make sense of contradictory AI narratives, this episode offers a grounded, practical perspective to help you navigate the changing technological landscape.
Download the PDF Transcript
Sponsors:
Productivity, Performance & Profits Blackbook: Get a free copy of the “Definitive Guide to Productivity for Artists and Entrepreneurs.”
The Renegade Musician: David’s magnum opus on building an independent music career is here!
Highlights:
00:17 – The gap between perception and reality in AI
03:10 – Is AI sentient?
05:19 – What AI isn’t
08:56 – Weird things AI does (tendencies, errors, and hallucinations)
16:18 – Where things are going with AI
22:29 – Concluding thoughts
Summary:
In this thought-provoking episode, David Andrew Wiebe cuts through the noise surrounding artificial intelligence, offering a sobering perspective on its current capabilities and limitations.
Drawing from his daily professional experience working with AI systems, he provides a rare insider view that contrasts sharply with popular narratives. With a combination of personal anecdotes, critical analysis, and a touch of humor, he explores the significant gap between public perception of AI and its actual functionality, challenging both utopian visions of AI assistance and dystopian fears of machine takeover.
Key Themes & Takeaways
The episode weaves together several interconnected ideas that provide a framework for understanding AI's current reality:
The substantial gap between public perception of AI capabilities and its actual limitations, particularly in areas requiring contextual understanding and basic reasoning
The importance of maintaining healthy skepticism when using AI tools, especially for tasks requiring accuracy and nuance
The inherent problems in AI's learning methodology and why sentience remains elusive despite rapid advancement in language generation
The balance between embracing AI's benefits for productivity while recognizing its significant shortcomings
The distinction between using AI as a tool versus delegating critical thinking and decision-making
How the marketing of AI capabilities often creates unrealistic expectations that actual performance cannot match
David approaches the topic not as an AI doomsayer or evangelist, but as a practical professional who uses these tools daily while maintaining critical awareness of their limitations. This balanced perspective offers listeners valuable insights for navigating the increasingly AI-influenced creative landscape.
AI's Current Reality
David meticulously catalogs the surprising limitations of current AI systems, drawing from recent personal experiences with multiple AI platforms. His examples reveal a technology that, despite impressive language capabilities, still struggles with basic tasks that humans perform effortlessly:
Geographic confusion, such as confidently but incorrectly identifying which fish species exist in specific Canadian provinces: "I said, I want to go bass fishing in Alberta. And the AI's like, 'Oh, that's such a great idea. Alberta's teeming with largemouth and smallmouth bass.' ... Then I'll contradict it and be like, 'Yeah, but are there many bass in Alberta though?' 'Oh, sorry, I made a mistake.'"
Mathematical shortcomings, particularly with basic counting and word limits: "It can't do basic math. It has a horrible time and this is pretty universally true of all models... It doesn't really understand character limits." He describes requesting 100-character bios that came back at 150-200 characters, requiring multiple attempts to get the correct length.
Hallucinations about locations, like generating multiple fake addresses for a single Walmart: "It finds Walmart in a specific city and passes it off as three separate stores... It'll even identify possibly something crazy like Wally's World or Waldo Mart as being another Walmart location."
Inventing timestamps for videos it has never seen: "Generates media highlights for videos without knowing anything about the video... Here's your video highlights, this minute at this minute marker. It doesn't know anything about the video or the lengths of the video."
Generating nonsensical responses to simple word games: "It's horrible at word games... You ask it to generate a bunch of three letter words using X letters... and it will struggle and struggle and struggle. Sometimes it will generate words to infinity until it runs out of characters."
"This is how you know you're interacting with a computer program," David explains. "You're not interacting with something sentient. Something that's sentient is capable of learning... So far, we're not really seeing AI do that." This distinction between impressive language generation and actual understanding forms a central argument of the episode.
The "Magic Trick" of AI Presentation
A central metaphor in the episode is the concept of "home court advantage" – how AI companies create an illusion of advanced capability through carefully controlled environments and presentations:
The presentation of AI often involves carefully controlled demonstrations that hide limitations, similar to a magician creating seemingly impossible effects through controlled conditions
Public-facing AI is programmed to please and affirm users rather than admit limitations: "AI generally tries to please and affirm you. No matter how wacked out or crazy your idea... it's just gonna try to please and affirm you."
Much like a magic trick, AI's apparent abilities often rely on misdirection and controlled conditions, leading to a distorted public perception of what the technology can do
The gap between developer claims and actual functionality remains substantial, particularly in areas requiring contextual understanding and common sense
User interactions with AI are often shaped to highlight strengths while obscuring fundamental weaknesses
David uses the example of AI-powered glasses that appear to understand urban environments, questioning whether this represents true intelligence or simply a narrow application with significant behind-the-scenes constraints: "Those AI powered glasses, seems like they can identify what street you're on and anything... But what if it wasn't connected to that specific model? Or what if it was connected to a different model? Or what if it wasn't a specific product of a specific company? What would happen then?"
This critique of AI presentation techniques helps listeners understand why their own experiences with AI might diverge significantly from the polished demonstrations they see in promotional materials.
AI Development Challenges
David offers insights into fundamental challenges facing AI advancement, going beyond surface-level observations to examine the structural limitations in current development approaches:
Current improvement methods still rely heavily on human-generated training data, creating bottlenecks in advancement: "We still have human beings creating large amounts of training data to improve the AI. That's the only way we can do it right now."
AI cannot effectively generate its own training data due to hallucinations and errors: "If you leave it to train in itself, you're basically leaving it to train it on more errors, more hallucinations, more problems."
The "inception" problem of using AI to evaluate other AI models: "If big data was this whole thing where it didn't really work out... We'll now have AI models evaluating AI models... AI is watching AI now in an attempt to make better answers. How well that's working? I don't know."
How adding complexity doesn't necessarily translate to improved intelligence: "The only available action is to keep adding code. I'm using that more as a metaphor than an exact process for how AI works. But... adding more code only makes it more complex, not necessarily better."
"Our processes, good though they are... it's not a recipe for attaining sentience," David notes. "You need human-assisted improvement. And as far as I can tell, that's how things are going to continue to be."
This section provides valuable context about why AI improvement might not follow the exponential growth pattern many predict, identifying specific technical and methodological barriers rather than just theoretical limitations.
Action Steps for Artists and Creators
For creative professionals navigating the AI landscape, David suggests practical approaches that balance utilizing AI's benefits with avoiding its pitfalls:
Use AI as a productivity tool while maintaining healthy skepticism: "Use it as a tool. Absolutely use it to improve productivity. Why not? Use it in a way that supports your career or business. Go right ahead. But be skeptical, be on the lookout, try to catch things."
Fact-check AI-generated information through traditional sources: "Don't use AI as your search engine. Don't ask it questions and accept them blindly. It makes mistakes all the time. It hallucinates all the time...