NVIDIA AI Chip Sales Prospects in China Not Looking Bright
  • Jan 20, 2025, 07:05 am
  • Semiconductor

On the evening of January 19, 2025, at an appreciation and Spring Festival reception in Beijing, NVIDIA founder and CEO Jensen Huang revealed some eye-catching figures: Currently, NVIDIA has 1.5 million developers in mainland China and is collaborating with nearly 3,000 startup companies.

In January of last year, Huang made a similar visit, touring NVIDIA's offices in Beijing, Shanghai, and Shenzhen and attending the annual China region conference. That trip was Huang's first to mainland China in several years. Compared to 2024, this year's visit was much lower-profile, yet it still drew widespread attention.

To the outside world, this visit comes at a particularly sensitive time, as the U.S. government has introduced new regulations on AI chip exports. On January 13, the U.S. Department of Commerce's Bureau of Industry and Security announced export control measures for AI chips, expanding the restriction scope from China to a global level. According to the new rules, only 18 countries considered “U.S. allies” can import NVIDIA's advanced AI chips without restrictions, while all other regions face certain purchasing limitations. China has been listed as a high-risk country and cannot import NVIDIA's advanced AI chips through any channel.

AI Chip Restrictions

The AI chips targeted by the U.S. restrictions are high-performance GPUs, which are NVIDIA's core products and the foundational components of data centers. Their performance and stability directly determine a data center's computational power and operational reliability.

An NVIDIA distributor stated that this restriction effectively blocks Chinese companies from accessing NVIDIA's chips, closing off all potential channels for importing advanced AI chips into China. In Q3 2024, NVIDIA earned $5.42 billion in revenue from the Chinese market, accounting for approximately 15% of its total revenue.

As competition between China and the U.S. in the field of artificial intelligence intensifies, large tech companies' data centers are gradually expanding from kilowatt-scale to megawatt-scale operations. The distributor mentioned that their company plans to promote large-scale computing clusters (composed of over 10,000 GPUs) to Chinese data centers, involving upgrades in chips, switches, optical modules, and servers. However, it remains uncertain whether the chips required for this upgrade are within the scope of the new regulations. If they are, this plan will be blocked in China.

In fact, the U.S. government’s chip export controls have been continuously strengthening. Since October 7, 2022, the Biden administration has implemented a series of export control measures to prevent China from acquiring advanced process chips and chip manufacturing equipment. These measures have gradually escalated, and in 2024, the U.S. government further expanded the export restrictions to the consumer electronics sector, covering devices like AI chip-enabled laptops. In December of the same year, the controls were deepened, extending the scope of the restrictions to high-bandwidth memory (HBM) chips, aligning them with high-performance GPU controls, thus further limiting China's development in high-performance computing hardware.

Competition from Huawei's AI Chips

In October 2024, Huawei unveiled a new chip, the Ascend 910, in Munich, Germany. The chip is a self-developed AI processor in Huawei's "Ascend" series.

Huawei's rotating chairman Xu Zhijun stated, "The Ascend 910 can efficiently solve AI application problems such as image recognition and video analysis."

He also mentioned, "The Ascend 910 will be widely used in fields like smartphones, smart driving, and smart cities."

How does this chip perform?

According to Huawei, when handling AI tasks, its training speed is twice as fast as NVIDIA's H200, and its inference speed is also twice as fast.

In other words, if NVIDIA's H200 chip completes a task in 30 minutes, Huawei's new chip can accomplish the same task in 15 minutes.

Of course, this is a figurative statement because different AI tasks require different computing power, and different chips have different design architectures and optimization levels, making direct comparisons of their performance difficult.
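To make the arithmetic behind that illustration explicit, here is a minimal sketch; the 30-minute baseline and the 2x factor are the article's hypothetical figures, not benchmark results:

```python
def task_time(baseline_minutes: float, speedup: float) -> float:
    """Time to finish the same task on a chip `speedup` times faster."""
    return baseline_minutes / speedup

# The article's hypothetical: if the H200 finishes a task in 30 minutes,
# a chip claimed to be twice as fast would finish in 15.
print(task_time(30, 2.0))  # 15.0
```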

However, we can get a general idea of the computational power of these two chips.

The NVIDIA H200 chip is designed specifically for AI computation, and its performance depends chiefly on raw computing power: it delivers 35.2 TFLOPS of computational throughput at an energy efficiency of 14.9 TOPS/W.

Huawei's new chip, by comparison, delivers 256 TFLOPS at an energy efficiency of 6.9 TOPS/W.

From these figures, it's clear that Huawei's new chip exceeds NVIDIA's H200 in raw computational power, though the H200 retains the advantage in energy efficiency.
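Restating the quoted figures side by side makes the trade-off visible. Note the earlier caveat still applies: the two vendors may be reporting different precisions and workloads, so this is only a restatement of the article's numbers, not an apples-to-apples benchmark:

```python
# Specs as quoted in the article; precisions and test conditions may differ
# between vendors, so treat the ratios as rough illustrations only.
chips = {
    "NVIDIA H200":       {"compute_tflops": 35.2,  "efficiency_tops_per_w": 14.9},
    "Huawei Ascend 910": {"compute_tflops": 256.0, "efficiency_tops_per_w": 6.9},
}

# Ratio of raw compute throughput (Huawei's favor).
compute_ratio = (chips["Huawei Ascend 910"]["compute_tflops"]
                 / chips["NVIDIA H200"]["compute_tflops"])

# Ratio of energy efficiency (NVIDIA's favor).
efficiency_ratio = (chips["NVIDIA H200"]["efficiency_tops_per_w"]
                    / chips["Huawei Ascend 910"]["efficiency_tops_per_w"])

print(f"Ascend 910 raw compute advantage: {compute_ratio:.1f}x")   # ~7.3x
print(f"H200 energy-efficiency advantage: {efficiency_ratio:.1f}x")  # ~2.2x
```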
