
Nvidia CEO: Our AI chips are improving faster than Moore’s Law

Author: LoRA | Date: 08 Jan 2025

At the recent CES show, Nvidia CEO Jensen Huang said that the performance of the company's AI chips is improving faster than the historical pace set by Moore's Law.

Moore's Law, proposed by Intel co-founder Gordon Moore in 1965 (and revised in 1975 to a two-year cadence), predicts that the number of transistors on a chip will roughly double every year, bringing a corresponding increase in performance. In recent years, however, progress under Moore's Law has slowed significantly.
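As a back-of-the-envelope illustration (my arithmetic, not from the article), doubling every fixed period compounds to 2^(years/period) growth:

```python
# Illustration of Moore's Law-style compounding:
# doubling every `period` years yields 2 ** (years / period) growth.

def moores_law_growth(years: float, period: float = 1.0) -> float:
    """Projected performance multiple after `years`, doubling every `period` years."""
    return 2.0 ** (years / period)

# Yearly doubling (Moore's original 1965 projection): ~1024x in a decade.
print(moores_law_growth(10, period=1.0))  # 1024.0
# The revised two-year cadence: ~32x in a decade.
print(moores_law_growth(10, period=2.0))  # 32.0
```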


Jensen Huang pointed out that Nvidia's latest data center superchip runs AI inference workloads more than 30 times faster than the previous generation. "We're able to build architectures, chips, systems, libraries and algorithms at the same time, and if we can do that, we can surpass Moore's Law because we can innovate across the entire technology stack," he said.

This claim carries particular weight amid widespread questions about whether progress in AI has stalled. Leading AI labs such as Google, OpenAI, and Anthropic train and run their models on Nvidia's AI chips, so advances in these chips translate directly into more capable AI models.

Jensen Huang also noted that three AI scaling laws are now in play: pre-training, post-training, and test-time compute. He emphasized that Moore's Law mattered so much in the history of computing because it drove down the cost of computing, and that performance gains in inference will likewise drive down the cost of inference.

Although some have raised concerns about whether Nvidia's expensive chips can continue to lead in inference, Huang said the latest GB200 NVL72 system is 30 to 40 times faster than the H100 on inference workloads, which will make AI reasoning models both more capable and more economical.

Jensen Huang emphasized that increasing compute is the most direct and effective way to address both inference performance and cost. He expects the cost of running AI models to keep falling as computing technology advances, even though some models, such as OpenAI's, are currently expensive to run.

Jensen Huang said that today's AI chips are 1,000 times faster than those of a decade ago, progress that far exceeds Moore's Law, and he believes this trend will not stop anytime soon.

FAQ

Who is the AI course suitable for?

AI courses suit anyone interested in artificial intelligence, including but not limited to students, engineers, data scientists, developers, and other professionals working with AI technology.

How difficult is the AI course to learn?

Course content ranges from basic to advanced: beginners can start with foundational courses and progress gradually to more complex algorithms and applications.

What foundations are needed to learn AI?

Learning AI requires some mathematical foundation (such as linear algebra, probability theory, and calculus) as well as programming skills (Python is the most commonly used language).

What can I learn from the AI course?

You will learn the core concepts and techniques of natural language processing, computer vision, and data analysis, and learn to use AI tools and frameworks for hands-on development.

What kind of work can I do after completing the AI course?

You can work as a data scientist, machine learning engineer, or AI researcher, or apply AI technology to drive innovation across industries.