Meta AI released the open-weight Llama 4 model family on Wednesday, making the weights freely available for download on Hugging Face and via Meta's developer portal. The release marks the company's most significant AI model update since Llama 3, and arrives amid intensifying competition with OpenAI, Google DeepMind, and Anthropic for dominance in the large language model space.
According to Meta's technical report published alongside the release, the flagship Llama 4 variant — a 400-billion-parameter mixture-of-experts architecture — achieves top scores on the MMLU Pro reasoning suite and outperforms OpenAI's GPT-4o on the MATH and HumanEval coding benchmarks, as measured by Meta's internal evaluations. Independent researchers at several university labs and AI benchmarking organisations said they had begun replication tests within hours of the weights becoming available.
Meta Chief AI Scientist Yann LeCun framed the release as a continuation of the company's open-source philosophy, arguing at an online briefing that broadly accessible foundation models accelerate safety research and reduce industry concentration. "Closed ecosystems create blind spots," LeCun said. "When thousands of researchers can inspect and stress-test a model, you find failure modes faster."
The release drew immediate commentary from the AI developer community. Prominent researchers on social media noted that the mixture-of-experts design — routing inputs to specialised sub-networks — allows Llama 4 to achieve high capability with lower per-token inference costs compared to dense models of equivalent benchmark performance. Several cloud infrastructure providers, including AWS and Microsoft Azure, confirmed they would make Llama 4 available through managed API endpoints within days.
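The cost argument rests on sparse activation: a gating network scores every expert for each token but only the top few actually run. Meta has not released reference code for Llama 4's router, so the following is only a toy sketch of top-k expert routing in general, with made-up dimensions and simple linear maps standing in for the expert sub-networks, not the model's actual implementation.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route token vector x to the top-k experts by gate score,
    then combine their outputs weighted by a softmax over those scores."""
    scores = x @ gate_w                       # one gate score per expert
    top = np.argsort(scores)[-k:]             # indices of the k best-scoring experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only k expert sub-networks execute per token; the rest stay idle,
    # which is why per-token inference compute is lower than in a dense
    # model of comparable total parameter count.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4                           # toy sizes, not Llama 4's
gate_w = rng.normal(size=(d, n_experts))
# Hypothetical "experts": plain linear maps in place of real FFN blocks.
expert_mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]

y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
print(y.shape)  # (8,)
```

In a production system the router is trained jointly with the experts and typically adds load-balancing terms so tokens spread across experts; this sketch omits all of that to show only the routing arithmetic behind the cost claim.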
Privacy advocates and AI safety organisations cautioned that open weights also lower barriers to misuse, including fine-tuning for disinformation campaigns or for generating malicious code. The EU AI Office indicated it was reviewing whether Meta had fully met its obligations for Llama 4's release under the EU AI Act's general-purpose AI provisions, citing questions about Meta's documentation of training data provenance. Meta said it had complied with all applicable disclosure requirements and published a detailed model card and acceptable-use policy alongside the weights.