Should profit be part of the calculation in developing safe AI? The future of artificial general intelligence (“AGI”) hinges on how well we balance innovation with safety. In this episode, Tyler Whitmer, founder, president, and CEO of Legal Advocates for Safe Science and Technology (LASST), talks about his work to protect OpenAI’s original mission: ensuring that AGI is safe and benefits all of humanity. Drawing on his background as a commercial litigator and nonprofit leader, Tyler explains why OpenAI’s unique corporate structure was designed to safeguard the mission against profit motives and how a proposed restructuring could weaken those protections. He outlines the legal and ethical risks of shifting control away from the nonprofit, the coalition effort behind an open letter to the California and Delaware attorneys general, and the changes still needed to keep the mission ahead of money. The conversation also explores broader concerns about the democratization of harmful technologies, the role of legal advocacy in tech safety, and advice for lawyers who want to work in this critical space. Listen in for a timely look at the intersection of law, technology, and the public interest!
Links Mentioned in Today’s Episode:
Tyler Whitmer
Tyler Whitmer on LinkedIn
Legal Advocates for Safe Science and Technology (LASST)
Encode Amicus Brief
'Not for Private Gain: An Open Letter to OpenAI' | April 2025
'Not for Private Gain: An Open Letter to OpenAI Update' | May 2025