Jake and Nathan just got back from their third Stanford AI + Education Summit — The AI Inflection Point: What, How, and Why We Learn — and a week later, they still can't stop talking about it. In this episode they dig into the tension at the heart of AI in schools right now: how do you protect the human skill development that education exists to build, while letting AI do the things it's actually good at? They get into the AI Assessment Scale, why cheating is the wrong frame, what it means when kids turn to AI for emotional connection, and whether the "perfect tutor" is the answer anyone thinks it is. Honest, critical, and grounded in classroom reality.
Referenced in this episode
Stanford AI + Education Summit 2026: The fourth annual summit, held February 11, 2026. The full conference is available on the Stanford HAI YouTube channel.
AI Assessment Scale (AIAS): Developed by Mike Perkins, Leon Furze, Jasper Roe, and Jason MacVaugh. Five levels of acceptable AI use — from no AI to full AI with the student as director and evaluator. First published 2023, updated to Version 2 in 2024. Adopted by hundreds of institutions worldwide and translated into 30+ languages.
Matt Miller — AI for Educators: Source of the 12 cheating scenarios Jake has been using to poll educators across the country. Miller also runs Ditch That Textbook.
Google AI Quests: Free, code-free, game-based AI literacy tool for students ages 11–14. Students step into the role of Google researchers solving real-world problems in climate, health, and science. Co-developed by Google Research and the Stanford Accelerator for Learning; complete lesson plans and teacher guides included.
Ethan Mollick — Co-Intelligence: Living and Working with AI (Penguin, 2024): Source of the centaur/cyborg framing. The centaur divides labor strategically between human and AI; the cyborg integrates the two fluidly within the same task. Mollick's Substack, One Useful Thing, is one of the more practically useful ongoing resources for educators thinking about AI.
Cheating research: Jake references "Cheating in the Age of Generative AI: A High School Survey Study of Cheating Behaviors Before and After the Release of ChatGPT," published in Computers and Education: Artificial Intelligence (2024). Note: Jake mis-attributes the study to Stanford in the episode; the journal cited here is the actual source. Key findings: overall cheating volume stayed stable after ChatGPT launched; students who self-reported higher AI competence cheated less; and clear boundaries and consequences remained the strongest deterrent.
A note on homo technologicus: In the episode, the term was attributed to Yuval Noah Harari. It circulates in academic commentary on Harari's work but doesn't appear to be a direct Harari coinage. The concept maps to themes in Homo Deus, but we can't confirm the specific term originated there. We're leaving it as spoken and flagging it here.
Got a question? We'd love to answer it! Leave us a voicemail on SpeakPipe: https://www.speakpipe.com/whatteachershavetosay
Want more EduProtocols from Jake? Check out his book at Amazon, Barnes & Noble, and other booksellers.