Nvidia says its next-generation Rubin data centre processors are now in production, with customer deployments planned for the second half of the year. CEO Jensen Huang unveiled performance gains, cloud adoption plans and new autonomous vehicle tools at CES.

Nvidia Corp. has confirmed that its next-generation Rubin data centre processors have entered production, with customers expected to begin testing the technology later this year. Chief Executive Officer Jensen Huang made the announcement during his keynote appearance at the CES trade show in Las Vegas on Monday.
Named after astronomer Vera Rubin, the new platform represents Nvidia’s latest attempt to stay ahead in the fast-growing artificial intelligence hardware market.
Customer deployments planned for second half of the year
Huang said all six chips that make up the Rubin computing platform have returned from manufacturing partners and are on schedule for customer deployment in the second half of the year.
According to Huang, demand for advanced computing power continues to surge as artificial intelligence software becomes more complex and widely adopted. He said existing data centre infrastructure is struggling to keep pace with the scale of modern AI workloads.
Significant performance gains over Blackwell
Nvidia claims Rubin offers major improvements over its predecessor, Blackwell. The company said the new accelerator delivers 3.5 times better performance for AI training and up to five times better performance for inference, the work of running trained AI models.
The accompanying central processing unit features 88 cores and is said to provide double the performance of the CPU it replaces. Nvidia added that Rubin-based systems will also be cheaper to operate, as they can achieve the same results using fewer components.
Early disclosures aim to sustain momentum
Nvidia has shared details of Rubin earlier than usual, a shift from its traditional practice of unveiling major hardware updates at its annual GTC conference in the spring. The move appears aimed at keeping customers and partners focused on Nvidia’s technology roadmap as competition intensifies.
Despite highlighting its upcoming products, Nvidia stressed that demand for its existing platforms remains strong.
China demand and licensing uncertainty
Nvidia also said it continues to see robust interest from Chinese customers for its H200 chip. The Trump administration is currently reviewing licence applications that would allow the company to ship the product to China.
Chief Financial Officer Colette Kress told analysts that Nvidia has sufficient supply to meet Chinese demand without affecting deliveries to other regions, regardless of the final licensing outcome. However, approval from Chinese authorities would also be required for local companies to deploy the US-made chips.
Major cloud providers among first adopters
Rubin hardware will be offered both as part of Nvidia’s DGX SuperPod supercomputer systems and as standalone products for customers seeking more modular deployments. Microsoft and other major cloud providers are expected to be among the first to roll out the new technology later this year.
At present, a large share of spending on Nvidia-powered systems comes from a small group of customers, including Microsoft, Google Cloud and Amazon Web Services.
Expansion into autonomous vehicles and robotics
Alongside its data centre announcements, Nvidia unveiled new tools aimed at accelerating the development of autonomous vehicles and robots. The company introduced a platform called Alpamayo, designed to help vehicles reason through real-world scenarios.
The model can be retrained by users and is intended to help autonomous systems respond to unexpected situations, such as infrastructure failures. Nvidia said the work builds on existing partnerships, including with Mercedes-Benz, and that the first car running the new technology is expected to hit US roads in the first quarter of the year.
(With inputs from Bloomberg)
