What does it actually take to run an AI company? And can OpenAI afford it? This week, Cindy Goodwin-Sak and Jaime Peters kick off a multi-part series diving into the business of being an AI company, not just using one.
It all started with a CNBC article on OpenAI’s data center pivot that had them talking for days. So they decided to break it down for you, starting from the ground up: what goes into a data center, why AI requires specialized (and expensive) computing hardware, and just how much energy and water these facilities consume. Spoiler: think the combined electricity usage of Chicago, LA, New York, Dallas, and St. Louis, and then double it.
They also unpack OpenAI's original strategy for scaling its infrastructure (the Stargate initiative, its partnerships with Microsoft, Oracle, Amazon, and SoftBank) and what any manager can learn from those moves about vertical integration, vendor diversification, and cost control.
But here’s the catch: OpenAI generated $20 billion in revenue last year, while its projected compute bill through 2030 is $600 billion. The math isn’t mathing. Tune in next week to find out what happens when reality sets in.