
In an aggressive move to stake its claim in the generative AI sector, Meta unveiled the first models of its open-source AI arsenal, Llama 4, over the weekend. CEO Mark Zuckerberg proclaimed the company’s ambition in an Instagram video: “Our goal is to build the world’s leading AI, open source it, and make it universally accessible.”
The initial models, Llama 4 Scout and Llama 4 Maverick, are already downloadable on the Llama website and Hugging Face. They form the core of Meta AI, the AI-driven assistant integrated across WhatsApp, Instagram, Messenger, and the web. Meta also previewed Llama 4 Behemoth, which it describes as one of the most advanced large language models (LLMs) developed so far and which will be used to train and teach future models.
The release marks Meta’s first use of a mixture-of-experts (MoE) architecture, which divides a model into specialized subnetworks, or “experts,” each focused on a domain such as physics or computer coding. Only the experts relevant to a given task are activated, which makes the model more efficient and cheaper to run.
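To make the idea concrete, here is a toy sketch of MoE routing in plain NumPy. It is an illustration of the general technique only, not Meta's implementation; the expert count, dimensions, and function names are invented for the example. A small gating network scores each expert for a given input, and only the top-scoring experts actually run:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route input x to the top_k highest-scoring experts and
    combine their outputs, weighted by normalized gate scores.
    Unselected experts are never evaluated, which is the source
    of MoE's efficiency."""
    scores = x @ gate_w                    # one gating score per expert
    chosen = np.argsort(scores)[-top_k:]   # indices of the top_k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()               # softmax over chosen experts only
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

# Toy setup: 4 "experts", each just a linear map (hypothetical shapes).
rng = np.random.default_rng(0)
dim, n_experts = 8, 4
expert_mats = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]
experts = [lambda x, M=M: x @ M for M in expert_mats]
gate_w = rng.normal(size=(dim, n_experts))

x = rng.normal(size=dim)
y = moe_forward(x, experts, gate_w, top_k=2)
print(y.shape)
```

With `top_k=1` the gate weight collapses to 1.0 and the output is simply the single best expert's output; production systems like Llama 4's are far more elaborate (learned gates, load balancing across experts), but the routing principle is the same.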
Llama 4 Scout features 17 billion active parameters and 16 experts, and is designed to run on a single GPU. Llama 4 Maverick, also with 17 billion active parameters but 128 experts, is positioned as a versatile model for a wide range of applications. Meta claims Maverick outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash on several benchmarks.
Llama 4 Behemoth, still in training, will feature 288 billion active parameters, according to Zuckerberg, who also previewed Llama 4 Reasoning, a model aimed at tackling harder problems, with more releases to follow.
Meta’s 2025 AI infrastructure spending is projected at between $60 billion and $65 billion, underwriting the company’s ambitious AI push as Llama model downloads surpassed one billion prior to the Llama 4 release.