This paper proposes a novel approach to the challenging problem of forecasting consumer demand for new brands whose appeal is driven largely by intangibles. The central framework integrates a structural demand model, which first recovers the utilities of existing brands, with a fine-tuned large language model (LLM). Trained on textual descriptions of products and markets, the LLM can predict mean utilities ($\delta_{jt}$) for brands it has never encountered, significantly outperforming conventional models based on text embeddings. The author probes the internal mechanics of the LLM with techniques such as sparse autoencoders to identify interpretable features that influence predicted choices, thus offering guidance on brand positioning. Furthermore, the methodology allows researchers to perform advanced market simulations, such as solving for optimal prices by combining the LLM's utility predictions with causal price-sensitivity estimates derived from instrumental variable methods.
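The two bookend steps of this pipeline can be illustrated with a minimal sketch, assuming a simple logit demand system (the paper's actual structural model may be richer, e.g. random-coefficients logit); the function names and the price-sensitivity parameter `alpha` below are illustrative, not taken from the paper:

```python
import math

def berry_logit_delta(shares, outside_share):
    """Invert observed market shares into mean utilities delta_jt.
    Under simple logit demand (Berry, 1994): delta_jt = ln(s_jt) - ln(s_0t).
    These deltas are the targets the fine-tuned LLM learns to predict
    from product and market text."""
    return [math.log(s) - math.log(outside_share) for s in shares]

def logit_shares(deltas, alpha, prices):
    """Predict market shares at candidate prices, given (LLM-predicted)
    mean utilities net of price and a price sensitivity alpha estimated
    causally, e.g. via instrumental variables. Used for counterfactual
    simulations such as searching over prices."""
    utils = [d - alpha * p for d, p in zip(deltas, prices)]
    denom = 1.0 + sum(math.exp(u) for u in utils)
    return [math.exp(u) / denom for u in utils]

# Round trip: inverting shares and re-simulating at the same prices
# (folded into delta here, so alpha = 0) recovers the original shares.
deltas = berry_logit_delta([0.2, 0.3], outside_share=0.5)
recovered = logit_shares(deltas, alpha=0.0, prices=[0.0, 0.0])
```

For a new brand, `berry_logit_delta` cannot be applied (no shares exist yet); the framework instead substitutes the LLM's predicted $\delta_{jt}$ into the share equation to simulate demand.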