In this July 2024 paper, the Meta Llama Team introduces Llama 3, a new family of large language models with 8B, 70B, and 405B parameters, designed with native support for multilinguality, coding, reasoning, and tool use. Development emphasizes data quality and diversity, with extensive filtering, de-duplication, and heuristic cleaning of both English and multilingual data, and with scaling laws used to choose model size and training budget. The models use a standard dense Transformer architecture with minor adaptations such as grouped query attention and an attention mask that blocks attention across document boundaries in packed multi-document sequences, and they perform comparably to leading models such as GPT-4 across a range of benchmarks. The paper also explores adding multimodal capabilities (image, video, and speech) through a compositional approach, in which specialized encoders and adapters are trained via multi-stage pre-training and fine-tuning. Finally, a significant focus is placed on safety and responsible development: comprehensive data cleaning, iterative safety finetuning with reward models and DPO, and red teaming against risks such as insecure code generation and prompt injection, alongside the public release of Llama Guard 3 as a system-level safety classifier.
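As a minimal sketch of the intra-document attention mask mentioned above (not the paper's implementation), the snippet below builds a causal mask over a packed sequence in which each token may only attend to earlier tokens from the same source document; the `doc_ids` layout and helper name are illustrative assumptions.

```python
import torch

def intra_document_causal_mask(doc_ids: torch.Tensor) -> torch.Tensor:
    """doc_ids: (seq_len,) integer tensor marking which document each token
    belongs to in a packed sequence. Returns a (seq_len, seq_len) boolean
    mask where True means 'attention allowed'."""
    seq_len = doc_ids.shape[0]
    # Standard causal (lower-triangular) constraint.
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Only allow attention between tokens that share a document id.
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc

# Example: two documents of lengths 3 and 2 packed into one sequence.
doc_ids = torch.tensor([0, 0, 0, 1, 1])
mask = intra_document_causal_mask(doc_ids)
print(mask.int())
# Tokens of document 1 (positions 3-4) cannot attend to document 0's tokens.
```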
Source:
https://arxiv.org/pdf/2407.21783