[Apple Podcasts] | [Spotify] | [YouTube] | [Simplecast]
Victoria frames a common failure mode: organizations bolt AI onto existing workflows instead of questioning whether those workflows make sense in the first place, especially in knowledge work.
Practical reframing for leaders:
- What is the purpose of this workflow?
- What decision is a human actually making here?
- What can be simplified or deleted before we automate anything?
The episode highlights how quickly AI is moving from specialist territory to broad accessibility (no-code tools, conversational interfaces), which raises the stakes for shared understanding, norms, and guardrails.
We discuss the reality that many people are already using AI quietly, which means schools and organizations need clarity and psychologically safe training environments.
A central point: if AI removes routine tasks but leaders refill that time with more routine tasks, nothing improves. The higher-order shift is using reclaimed capacity for work that builds culture and learning (coaching, reflection, feedback, relationship-rich instruction, better decisions).
Victoria repeatedly returns to “run the reps” thinking: pick a small use case, test it quickly, learn, and stack wins as data points.
The conversation connects explicitly to school realities: the goal is not to “win AI,” but to move the mission forward in a world where AI is embedded in everything.
**Run a 2-week “AI workflow audit”**
- Pick one recurring task (newsletter, family comms, lesson resource creation, feedback bank).
- Map the current steps.
- Ask: which steps require human judgment, and which are simply human labor?

**Create a “safe sandbox” norm**
- Protect one time block per week for staff to try a use case and report back.
- Focus on learnings, not performance.

**Name and support champions (formal or informal)**
- Champions are often self-appointed momentum makers; don’t wait for a committee.

**Reinvest reclaimed time into the most human work**
- Student conferencing, richer feedback loops, community-building routines, coaching conversations.
Silicon Valley Executive Academy (SVEA) — program model centered on immersion and experience-based knowledge sharing.
Victoria Mensch (LinkedIn) — leadership and AI transformation writing.
Microsoft / LinkedIn Work Trend Index (AI at work + BYOAI) — useful framing for why hidden adoption and governance matter.
216: Designing Trustworthy AI in K-12: NASA, Ethics, and Teacher Voice (David Lockett) — a direct complement on governance, ethics, and implementation realities in schools.
222: From Burnout to Better Questions – Human-Centered AI Adoption (Jackie Celske) — closely aligned with the burnout-to-redesign theme and the “people and process over tools” framing.
218: Teaching What Can’t Be AI’d (John “Camp”) — matches the “reinvest in what’s human” thread (presence, discourse, competency-based learning).
If you found this episode valuable, please share it with a colleague and leave a review. Your support helps other educators and leaders discover the show.