
Meta launches Llama 4 multimodal AI model to reshape the open source AI landscape

Author: LoRA | Time: 07 Apr 2025 10:26

Meta has officially launched Llama 4, a new generation of open-source AI models focused on multimodal processing and efficient computing, reclaiming the technological high ground among open-source models. Compared with previous generations, Llama 4 is not only a qualitative leap in model architecture but also shows strong performance in language understanding, image recognition, code assistance, and more.

[Image: Meta launches Llama 4 multimodal AI model]

The Llama 4 series released this time includes three versions: Scout, Maverick, and Behemoth, which is still in training. Llama 4 Scout has 17 billion active parameters, supports ultra-long context input (10 million tokens), and can run on a single H100 GPU, outperforming most lightweight models on the market. Llama 4 Maverick uses a mixture-of-experts (MoE) architecture with 400 billion total parameters, focuses on tasks such as image understanding and creative writing, and ranks among the top models on the LMSYS leaderboard.
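The distinction between total and active parameters comes from the MoE design: a router activates only a few experts per token, so per-token compute tracks the active parameter count rather than the total. The sketch below illustrates that idea with toy values; the expert count, top-k, and parameter sizes are illustrative assumptions, not Llama 4's actual configuration.

```python
# Illustrative mixture-of-experts (MoE) routing sketch. Only top_k
# "active" experts run per token, so compute scales with active
# parameters rather than total parameters. All sizes below are toy
# values, not Llama 4's real configuration.
import random

NUM_EXPERTS = 8         # assumed toy value
TOP_K = 2               # experts activated per token (assumed)
PARAMS_PER_EXPERT = 50  # toy per-expert parameter count

def route(token_scores, top_k=TOP_K):
    """Return indices of the top_k experts by router score."""
    ranked = sorted(range(len(token_scores)),
                    key=lambda i: token_scores[i], reverse=True)
    return ranked[:top_k]

# Router scores for one token (normally output by a learned gating network).
scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route(scores)

total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
print(f"experts used for this token: {active}")
print(f"active/total parameters: {active_params}/{total_params}")
```

This is why a model like Maverick can have 400 billion total parameters while keeping per-token compute far smaller: each token touches only the routed experts.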

Notably, Llama 4 Behemoth is a super-large-scale model still in training, with 2 trillion parameters. It performs particularly well on STEM tasks and is highly anticipated by developers.

Official Resources: