Mark Zuckerberg introduced the first models in the Llama 4 collection and shared that you can try Llama 4 today in Meta AI across WhatsApp, Instagram, Messenger, and meta.ai. Llama 4 Scout and Llama 4 Maverick (both open sourced) are our most advanced models yet and the best in their class for multimodality. These new models boast industry-leading performance thanks to distillation from Llama 4 Behemoth, our most powerful model, which we're previewing today and which is one of the world's smartest LLMs.

Quick facts on the models
- Llama 4 Scout is a 17B-active-parameter Mixture-of-Experts (MoE) model with 16 experts. It offers an industry-leading 10M-token context window and outperforms Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely accepted benchmarks.
- Llama 4 Maverick is a 17B-active-parameter MoE model with 128 experts. It features best-in-class image grounding and outperforms GPT-4o and Gemini 2.0 Flash across a broad range of widely accepted benchmarks. It achieves results comparable to DeepSeek v3 on reasoning and coding at half the active parameters, and it offers an unparalleled performance-to-cost ratio, with a chat version scoring an Elo of 1417 on LMArena.
- Llama 4 Behemoth is our most powerful model yet, and Scout and Maverick are our best models to date thanks to distillation from it. Llama 4 Behemoth is still in training and not yet released, but it is already outperforming GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro on STEM-focused benchmarks.
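The Scout and Maverick entries above describe sparse Mixture-of-Experts models: a router sends each token to only a few expert feed-forward networks, so the number of parameters *active* per token (17B) is far smaller than the total parameter count. Below is a minimal, illustrative sketch of top-k expert routing in NumPy; all names, shapes, and the ReLU expert are assumptions for demonstration, not Llama 4's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Toy sparse MoE layer: a learned router picks the top-k experts
    per token, so only a fraction of the layer's parameters run per token."""
    def __init__(self, d_model, d_hidden, n_experts, top_k=1, seed=0):
        rng = np.random.default_rng(seed)
        self.top_k = top_k
        self.router = rng.normal(0, 0.02, (d_model, n_experts))        # gating weights
        self.w_in = rng.normal(0, 0.02, (n_experts, d_model, d_hidden))
        self.w_out = rng.normal(0, 0.02, (n_experts, d_hidden, d_model))

    def __call__(self, tokens):
        # tokens: (n_tokens, d_model)
        gates = softmax(tokens @ self.router)                          # (n_tokens, n_experts)
        top = np.argsort(-gates, axis=-1)[:, :self.top_k]              # chosen experts per token
        out = np.zeros_like(tokens)
        for i, tok in enumerate(tokens):
            for e in top[i]:
                h = np.maximum(tok @ self.w_in[e], 0.0)                # expert FFN (ReLU)
                out[i] += gates[i, e] * (h @ self.w_out[e])            # gate-weighted mix
        return out, top

# With 16 experts and top-1 routing, each token touches ~1/16 of the
# expert parameters in this layer -- the mechanism behind a large total
# parameter count with a much smaller active count per token.
moe = MoELayer(d_model=8, d_hidden=16, n_experts=16, top_k=1)
x = np.random.default_rng(1).normal(size=(4, 8))
y, chosen = moe(x)
```

This is why a model like Scout can cite "17B active parameters with 16 experts": per-token compute scales with the routed experts, not with the total stored weights.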
We aim to develop the most helpful models while making sure we have the right protections in place. As part of this work, we're continuing to make Llama more responsive so that it answers more questions, can address a variety of viewpoints without passing judgment, and doesn't favor some views over others. You can read more about the approach for this release in our blog [HERE].
More on Meta AI with Llama 4
Meta AI with Llama 4 is rolling out in more than 40 countries and 13 languages across WhatsApp, Messenger, Instagram Direct, and meta.ai. The multimodal features are limited today to English in the US, consistent with our existing multimodal feature availability in our apps. We're working to bring Meta AI with Llama 4, including multimodal features, to more people around the world this year.
“Thanks to model improvements, Meta AI with Llama 4 is the assistant you can count on to provide helpful, factual responses without judgment. It responds conversationally and shares informative answers to more requests on a range of topics like personal advice, opinions and recommendations, and more. Meta AI is more steerable, following your explicit instructions more precisely. And it takes text replies to the next level, with better and more effective formatting to improve structure, readability, and clarity.” – a Meta spokesperson.