OpenAI announced parental controls for ChatGPT on September 29, 2025, but after testing them myself, I found critical flaws that make them ineffective at protecting children. In this video, I break down exactly what's wrong with the current system and what effective parental controls should actually look like.

Key Problems Covered:

- Children can bypass controls by simply logging out

- Settings are not safe by default (backwards safety engineering)

- Zero visibility for parents (not even conversation summaries)

- Controls are not tamper-proof

- System appears designed for PR, not actual child protection

What Real Controls Would Include (sketched in code below the list):

- Tamper-proof architecture

- Safe-by-default settings

- Tiered monitoring (metadata + summaries, not full surveillance)

- Age-appropriate developmental stages
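
For readers who think in code, here is a minimal, purely hypothetical sketch (in Python) of what a settings model built around these principles could look like. The class, field, and function names are my own illustration, not OpenAI's actual API.

```python
import hashlib
import hmac
from dataclasses import dataclass, replace
from enum import Enum


class MonitoringTier(Enum):
    METADATA_ONLY = "metadata_only"   # usage times, topic categories
    SUMMARIES = "summaries"           # periodic conversation summaries
    # Deliberately no full-transcript tier: summaries, not surveillance.


class AgeTier(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "teen_13_15"
    TEEN_16_17 = "teen_16_17"


@dataclass(frozen=True)  # frozen: settings cannot be silently mutated in place
class ParentalControls:
    age_tier: AgeTier
    parent_pin_hash: str                                    # set at enrollment
    monitoring: MonitoringTier = MonitoringTier.SUMMARIES   # safe default
    content_filter_on: bool = True                          # safe default
    allow_logged_out_use: bool = False                      # closes the log-out bypass


def _pin_matches(pin: str, pin_hash: str) -> bool:
    """Constant-time check of a SHA-256 PIN hash (illustration only)."""
    return hmac.compare_digest(hashlib.sha256(pin.encode()).hexdigest(), pin_hash)


def update_controls(current: ParentalControls, pin: str, **changes) -> ParentalControls:
    """Return new settings only if the parent PIN verifies (tamper resistance)."""
    if not _pin_matches(pin, current.parent_pin_hash):
        raise PermissionError("Parent verification required to change settings")
    return replace(current, **changes)
```

The point of the sketch: the default state is already safe, and loosening any setting requires an explicit, verified parent action rather than a child logging out or toggling an option.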

Do not rely on these controls. Kids should use ChatGPT only in shared spaces with ongoing parental engagement and clear family agreements.