At a recent launch event, DataDirect Networks (DDN) announced Infinia 2.0, the latest version of its object storage system designed for artificial intelligence (AI) training and inference. DDN claims the system delivers up to 100 times faster AI data acceleration and 10 times the cost efficiency of cloud data centers, drawing attention from many industries.
Alex Bouzari, CEO and co-founder of DDN, said: “85 of the world’s top 500 companies run their AI and high-performance computing (HPC) applications on DDN’s data intelligence platform. Infinia will help customers achieve faster model training and real-time insights in data analytics and AI frameworks, while ensuring future adaptability in GPU efficiency and energy consumption.”
Paul Bloch, co-founder and president of DDN, added: “Our platform is already in use in some of the world’s largest AI factories and cloud environments, demonstrating its ability to support mission-critical AI operations.” Elon Musk’s xAI is reported to be among DDN’s customers.
AI data storage is at the core of Infinia 2.0’s design. Chief Technology Officer Sven Oehme stressed: “AI workloads require real-time data intelligence that eliminates bottlenecks, accelerates workflows, and scales seamlessly across complex model training, pre-training and post-training, retrieval-augmented generation (RAG), agentic AI, and multimodal environments.” Infinia 2.0 aims to maximize the value of AI by providing real-time data services, efficient multi-tenant management, intelligent automation, and a powerful AI-native architecture.
The system features event-driven data mobility, multi-tenancy, and a hardware-independent design, ensuring 99.999% uptime and delivering up to 10 times the quality of service (QoS) through always-on data reduction, fault-tolerant network erasure coding, and automation. Infinia 2.0 integrates with NVIDIA’s NeMo framework, NIM microservices, GPUs, BlueField-3 DPUs, and Spectrum-X networking to accelerate AI data pipelines.
DDN claims Infinia delivers multi-TB/s bandwidth with sub-millisecond latency, far exceeding the performance of AWS S3 Express. Other notable figures, based on independent benchmarks: Infinia achieves 100-fold improvements in AI data acceleration, AI workload processing speed, metadata handling, and object listing, along with 25 times faster AI model training and inference queries.
The Infinia system scales from terabytes to exabytes and can support over 100,000 GPUs and 1 million simultaneous clients, providing a solid foundation for large-scale AI innovation. DDN emphasizes that its systems perform well in real data center and cloud deployments, delivering efficiency and cost savings at scales from 10 to over 100,000 GPUs.
Charles Liang, CEO of Supermicro, said: “By combining DDN’s Infinia 2.0 data intelligence platform with Supermicro’s high-end server solutions, the two companies have collaborated to build one of the world’s largest AI data centers.” This partnership may be related to the expansion of xAI’s Colossus data center.