“We want every layer — chip, system, software — because when you own the stack you can outrun a GPU cluster by 40-70x,” Cerebras CEO Andrew Feldman says. In this episode of Tech Disruptors, Cerebras returns to the Bloomberg Intelligence podcast studios as Feldman joins Bloomberg Intelligence’s Kunjan Sobhani and Mandeep Singh to explain the company’s progress from “biggest chip” to “fastest inference cloud.” Feldman unpacks the WSE-3 upgrade, six new data-center builds and fresh Meta and IBM deals that aim to deliver sub-second answers at a fraction of GPU cost, and shares his views on scaling laws, synthetic data and the looming power crunch.