
IBM launches Granite 3.2, small and efficient reasoning AI models: multimodal and practical

Author: LoRA  Time: 06 Mar 2025

IBM has released Granite 3.2, the latest generation of its language models, aimed at giving enterprises and the open source community a "small, efficient, and practical" enterprise AI option. The new models add multimodal and reasoning capabilities while improving flexibility and cost-effectiveness, making them easier for users to adopt.


Granite 3.2 introduces a vision-language model (VLM) for processing documents and performing data classification and extraction. IBM says the new model matches or exceeds significantly larger models, such as Llama 3.2 11B and Pixtral 12B, on several key benchmarks. In addition, the Granite 3.2 8B model matches or surpasses larger models on standard mathematical reasoning benchmarks.
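The announcement itself includes no code, but a document-extraction call with a vision-language model of this kind typically looks like the sketch below, written with Hugging Face transformers. The model identifier, file name, and prompt are assumptions for illustration; consult IBM's model cards for the actual identifiers and usage.

```python
# Minimal sketch: asking a vision-language model to extract fields from a
# scanned document. Model id, file name, and prompt are assumptions.
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = "ibm-granite/granite-vision-3.2-2b"  # assumed model id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id)

image = Image.open("invoice.png")  # hypothetical scanned document
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "Extract the invoice number and total amount."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(images=image, text=prompt, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```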

To improve reasoning, some Granite 3.2 models also include a chain-of-thought capability that makes intermediate reasoning steps explicit. Because this feature consumes significant compute, users can enable or disable it on demand to optimize efficiency and keep overall costs down. Sriram Raghavan, vice president of IBM AI Research, said at the launch that the focus of next-generation artificial intelligence is efficiency, integration, and practical impact, allowing enterprises to achieve strong results without overspending.
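As a rough illustration of toggling the reasoning mode per request, the sketch below again uses Hugging Face transformers. The model identifier and the `thinking` chat-template flag are assumptions based on IBM's published model cards rather than details from this announcement, so verify them against the official documentation.

```python
# Minimal sketch: switching intermediate reasoning ("thinking") on or off per
# request. The model id and the `thinking` template flag are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3.2-8b-instruct"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user",
             "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}]

# Enable step-by-step reasoning for hard questions; set thinking=False for
# routine ones to save tokens and compute.
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    thinking=True,          # assumed flag exposed by the chat template
    return_tensors="pt",
)
output = model.generate(prompt, max_new_tokens=512)
print(tokenizer.decode(output[0][prompt.shape[1]:], skip_special_tokens=True))
```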

Beyond the reasoning improvements, Granite 3.2 also ships a slimmed-down version of the Granite Guardian safety model. Although it is about 30% smaller, its performance remains at the level of the previous generation. IBM has also introduced a capability called "verbalized confidence," which enables more nuanced risk assessment and accounts for uncertainty in safety monitoring.

Granite 3.2 was trained with the help of IBM's open source Docling toolkit, which lets developers convert documents into the specific data formats required by customized enterprise AI models. During training, 85 million PDF files and 26 million synthetic Q&A pairs were processed to strengthen the VLM's ability to handle complex document workflows.
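For context, a minimal Docling conversion looks roughly like the following, based on the toolkit's documented quickstart; the input file name is hypothetical.

```python
# Minimal sketch: convert a PDF into structured text with IBM's open source
# Docling toolkit (the input file name is hypothetical).
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("quarterly_report.pdf")  # hypothetical document

# Export the parsed document as Markdown, ready to feed into downstream
# enterprise AI pipelines such as fine-tuning or retrieval.
print(result.document.export_to_markdown())
```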

IBM also announced the next generation of its Tiny Time Mixers (TTM) models, compact pre-trained models for multivariate time series forecasting, with forecasting horizons of up to two years.

Official blog: https://www.ibm.com/new/announcements/ibm-granite-3-2-open-source-reasoning-and-vision