Is Your AI Calling the Police On You? | ChatGPT Privacy, AI Monitoring & Debates
OpenAI has confirmed it will scan ChatGPT conversations for “problematic” content and, in extreme cases, report users to law enforcement. What does this mean for your privacy, your rights, and the future of AI safety? In this episode of They Might Be Self-Aware, Hunter Powers and Daniel Bishop dig into the growing concerns around ChatGPT privacy and whether AI tools are quietly becoming digital informants.
We debate whether AI monitoring is a necessary safety measure or a slippery slope toward AI Big Brother. Along the way, we share stories about suspicious emails, bug bounty scams, and how even innocent prompts could one day be used to build a legal case against you. We also explore Columbia University’s experiment with Sway AI, an artificial intelligence “moderator” for debates, and ask whether it is a preview of political debates run by robots.
🎧 Listen & Subscribe
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1
⏱️ CHAPTERS
00:00 Welcome to the AI Jungle
02:00 Suspicious “AI Bug Bounty” Email
06:45 ChatGPT Privacy Concerns & Monitoring
09:00 OpenAI Scanning & Reporting to Police
14:30 Should AI Intervene on Self-Harm?
18:00 The Slippery Slope of AI Surveillance
22:00 Sway AI – Debates Moderated by Artificial Intelligence
27:30 Could an AI Moderate U.S. Presidential Debates?
34:00 Wrap-Up & Subscribe
📢 Engage
Do you trust AI companies with your private conversations? Should AI report dangerous behavior, or is that the ultimate violation of privacy? Share your thoughts in the comments and we might feature them in a future episode.
#ChatGPT #Privacy #ArtificialIntelligence