The May 2025 academic paper introduces **BurstGPT**, a real-world workload dataset of over ten million traces collected over 213 days from regional Azure OpenAI GPT services, intended to support the optimization of Large Language Model (LLM) serving systems. The authors argue that existing LLM serving optimizations are often evaluated with **unrealistic synthetic or non-LLM workloads**, which leads to performance degradation in real-world deployments. BurstGPT provides empirical data on **user concurrency patterns, conversation structures, model response lengths, and system failures** to enable more accurate system evaluation and to refine scheduling, caching, and resource-provisioning strategies. The paper also presents **BurstGPT-Perf**, a benchmark suite built on the dataset that demonstrates how realistic, bursty workloads expose declines in efficiency, stability, and reliability in serving systems such as vLLM. Ultimately, the work advocates **data-driven methodologies** for optimizing LLM serving to improve efficiency and quality of service.
Source:
https://arxiv.org/pdf/2401.17644
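
To illustrate the kind of trace-driven evaluation the paper advocates, here is a minimal Python sketch of a replay harness. This is not the authors' BurstGPT-Perf code: the CSV column names (`Timestamp`, `Request tokens`, `Response tokens`), the trace path, the endpoint URL, and the model name are illustrative assumptions; it presumes a vLLM server exposing an OpenAI-compatible completions API is running locally.

```python
"""Minimal sketch (not the authors' BurstGPT-Perf): replay a BurstGPT-style
trace against a locally hosted vLLM OpenAI-compatible endpoint, preserving
the bursty arrival pattern recorded in the trace.

Assumed (hypothetical) schema: a CSV with columns
  Timestamp, Request tokens, Response tokens
"""
import asyncio
import csv
import time

import aiohttp

TRACE_PATH = "trace.csv"                            # assumed trace file path
ENDPOINT = "http://localhost:8000/v1/completions"   # assumed vLLM endpoint
MODEL = "facebook/opt-125m"                         # placeholder model name


def load_trace(path):
    """Read arrival times and token counts from the (assumed) CSV schema."""
    with open(path) as f:
        return [
            {
                "ts": float(r["Timestamp"]),
                "req_tokens": int(r["Request tokens"]),
                "resp_tokens": int(r["Response tokens"]),
            }
            for r in csv.DictReader(f)
        ]


async def send_request(session, prompt_len, max_tokens):
    """Issue one completion sized to roughly match the trace entry; return latency."""
    payload = {
        "model": MODEL,
        "prompt": "x " * prompt_len,   # synthetic prompt, ~prompt_len tokens
        "max_tokens": max_tokens,
    }
    start = time.perf_counter()
    async with session.post(ENDPOINT, json=payload) as resp:
        await resp.read()
    return time.perf_counter() - start


async def replay(trace):
    """Dispatch requests at their recorded relative arrival times."""
    async with aiohttp.ClientSession() as session:
        t0 = trace[0]["ts"]
        start = time.perf_counter()
        tasks = []
        for row in trace:
            # Sleep until this request's arrival offset within the trace.
            delay = (row["ts"] - t0) - (time.perf_counter() - start)
            if delay > 0:
                await asyncio.sleep(delay)
            tasks.append(asyncio.create_task(
                send_request(session, row["req_tokens"], row["resp_tokens"])))
        return await asyncio.gather(*tasks)


if __name__ == "__main__":
    latencies = asyncio.run(replay(load_trace(TRACE_PATH)))
    print(f"requests: {len(latencies)}, "
          f"mean latency: {sum(latencies) / len(latencies):.3f}s")
```

Replaying the recorded timestamps, rather than sampling arrivals from a Poisson process, is what surfaces the burst-induced latency spikes and failures the paper highlights; aggregate statistics such as mean or tail latency can then be compared across scheduling, caching, or provisioning configurations.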