Is synthetic data the solution to "jagged" enterprise AI... or the fast track to Model Collapse?
We just got used to "Agentic AI." Now Salesforce is defining the next frontier of automation with a new term, Enterprise Generalized Intelligence (EGI), and betting big on synthetic data to train its Agentforce solutions. But is this the right path for enterprise trust?
In this episode of Leading Change of the Wild, I dig into Salesforce's move and the massive risks involved in training AI on "fake" data.
Here’s what I explore:
The goal is 100% accurate, trustworthy AI. But training models on data designed to mimic human output may be the opposite of what's needed for lasting organizational trust.
👇 Let’s discuss:
Do you believe synthetic data is a viable path to increasing AI trust and accuracy in the enterprise?
Should models be honed on proprietary data or in a specialized synthetic environment before deployment?