GTC 2024: NVIDIA Unveils New Blackwell System, Showcases Partner Lineup
GPU maker looks to data centers to expand gains in AI
March 22, 2024
This week, the 17,000-seat SAP Center in San Jose, California, was filled to the brim, not for a hockey game or a concert but for a tech event. On Monday, NVIDIA CEO Jensen Huang stepped up to the podium to deliver his keynote and was met with applause from the audience. The highlight of the talk was the big reveal: the unveiling of the new NVIDIA Blackwell chip for AI workloads.
The Blackwell Era
It's misleading to call Blackwell a “chip,” because it's meant to function as the heart of a supercomputing system—“a platform,” in Huang's words. “People think we make GPUs, and we do, but GPUs don't look the way they used to,” he said. “Generative AI is the defining technology of our time. Blackwell is the engine to power this new industrial revolution. Working with the most dynamic companies in the world, we will realize the promise of AI for every industry.”
The new Blackwell GPU comprises “208 billion transistors,” Huang proudly revealed. “Two dies are abutted together in such a way so they think of themselves as one chip. Ten Terabytes of data per second flows between them. There's no memory issue, no cache issue. They function as one giant chip.” NVLink switches further streamline the data flow within the Blackwell systems.
NVIDIA has designed the new Blackwell to fit easily into systems built with the previous-generation Hopper architecture chips. “You slide off Hopper, and you slide in Blackwell,” Huang said. This ensures firms with installed Hopper-powered systems can easily upgrade to Blackwell, should they want to harness the new GPUs' processing power.
The Blackwell dies are connected to an NVIDIA Grace CPU, built on the Arm architecture. Developing its own data-center CPU allows NVIDIA to rely less on CPU giant Intel, which competes with NVIDIA for AI computing market share.
AWS is planning to build one of the first Blackwell systems, capable of 222 exaFLOPS, Huang revealed. Others looking to build Blackwell systems include Microsoft and Oracle, according to Huang.
In his keynote, Huang celebrated a number of partnerships with leading simulation vendors: Cadence, Synopsys, and Ansys, to name but a few. Finite element analysis (FEA) has long been the established computer-simulation method, but with generative AI, many of these vendors are looking to accelerate their solutions with surrogate models and machine learning. The new approach speeds up simulation but demands GPU-powered machine learning to develop the necessary models.
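Huang did not detail how the vendors build these surrogates, but the basic idea can be sketched in a few lines: fit a regression model to input-output pairs produced by a conventional solver, then use the fitted model to evaluate new designs almost instantly. The sketch below uses scikit-learn and synthetic data purely for illustration; it is not any vendor's actual pipeline.

```python
# Illustrative surrogate-model sketch (not any vendor's actual pipeline).
# A small neural network learns the mapping from design parameters to a
# solver result (here, a synthetic stand-in for peak stress), so later
# design candidates can be scored without rerunning the full FEA solver.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend these came from a batch of FEA runs: 3 design parameters in,
# one scalar response out (peak stress, compliance, etc.).
X = rng.uniform(0.0, 1.0, size=(2000, 3))
y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1]) + 0.5 * X[:, 2]  # synthetic "solver" output

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)

# Once trained, evaluating a new design takes microseconds instead of a solver run.
print("R^2 on held-out designs:", surrogate.score(X_test, y_test))
```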
Omniverse Speaks English
NVIDIA's immersive 3D simulation and visualization platform Omniverse uses USD (Universal Scene Description) as its core visual language, but NVIDIA is also bolstering Omniverse's natural language processing to remove technical barriers from potential users. “You can speak to [Omniverse] in English, and it would directly generate USD and talk back in USD,” Huang said.
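The keynote did not show the generated USD itself. For readers unfamiliar with the format, here is a minimal sketch of a USD scene built with Pixar's open-source Python bindings (the pxr module, installable as usd-core); the scene contents are invented for illustration, showing only the kind of output a "speak English, get USD back" workflow would hand to Omniverse.

```python
# Minimal USD example using Pixar's open-source Python bindings (pip install usd-core).
# The scene itself is illustrative: a single crate placed in a world transform.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("factory_floor.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

world = UsdGeom.Xform.Define(stage, "/World")
crate = UsdGeom.Cube.Define(stage, "/World/Crate")
crate.GetSizeAttr().Set(1.0)
UsdGeom.XformCommonAPI(crate.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 0.0, 0.5))

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # the .usda text other tools would consume
```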
The ability to use natural language as prompts to generate 3D objects could significantly improve the user experience in CAD programs and quicken adoption. NVIDIA announced it's adding cloud APIs to Omniverse. “With these APIs, you're going to have magical digital-twin capability,” Huang said.
During GTC, Siemens announced, “In the next phase of our collaboration with NVIDIA, the company will release a new product later this year—powered by NVIDIA Omniverse Cloud APIs—for Teamcenter X, our industry-leading cloud-based product lifecycle management (PLM) software, part of the Siemens Xcelerator platform.”
NVIDIA Inferencing Microservices
During GTC, NVIDIA launched NVIDIA Inferencing Microservices (NIMs) for developers to create generative AI applications. The company describes them as “Cloud endpoints for pretrained AI models optimized to run on hundreds of millions of CUDA-enabled GPUs across clouds, data centers, workstations, and PCs.” Adobe, Cadence, Getty Images, and SAP are listed among the earliest firms to access these services, which are included in the NVIDIA AI Enterprise 5.0 portfolio.
“These NIMs are going to help you create new types of applications for the future—not one that you write completely from scratch, but you're going to integrate them like teams to create these applications,” Huang said.
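NVIDIA's announcement describes NIMs as cloud endpoints, so from a developer's perspective they are consumed like any other hosted inference service: send an HTTP request, receive the model's output. The sketch below shows that generic pattern with the Python requests library; the URL, model name, and credential are placeholders, not NVIDIA's actual interface.

```python
# Generic pattern for calling a hosted inference microservice over HTTP.
# The endpoint, model identifier, and key below are placeholders, not NVIDIA's actual API.
import os
import requests

ENDPOINT = "https://example-nim-host/v1/chat/completions"  # placeholder endpoint
API_KEY = os.environ["EXAMPLE_API_KEY"]                    # placeholder credential

payload = {
    "model": "example/pretrained-llm",  # placeholder model identifier
    "messages": [{"role": "user", "content": "Summarize the Blackwell announcement."}],
    "max_tokens": 200,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```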
Apple Vision Pro
Huang also revealed Omniverse can now stream to the Apple Vision Pro (priced beginning at $3,500), a mixed reality device from Apple. During the show, NVIDIA presented an interactive, physically accurate digital twin of a car streaming in full fidelity to Apple Vision Pro XR displays.
“Vision Pro acts as a portal into Omniverse. And because all of these CAD tools and different design tools are now integrated and connected to Omniverse, you can have this type of workflow,” said Huang.
Humanoid Robotics
During the keynote's closing minutes, Huang was flanked by a row of robots, from dwarfish ones to life-size ones. Huang said, “In the future, everything that moves will be robotics. And these robotic systems, whether they are humanoid, AMRs (autonomous mobile robots), self-driving cars, forklifts, or manipulating arms, they will all need one thing—a digital twin platform. We call it Omniverse.”
NVIDIA offers Isaac Sim, a virtual robot training system. The system uses NVIDIA Omniverse's immersive 3D environment to replicate the robot's real-world operations. The rise of robotics, Huang expects, will boost demand for Omniverse.
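Isaac Sim's own APIs are beyond the scope of this article, but the underlying idea of training in simulation can be illustrated with a toy example: a controller parameter is tuned against a simulated model of the task rather than on physical hardware. The simulator and the random-search loop below are deliberately simplistic stand-ins, not Isaac Sim code.

```python
# Toy illustration of "train in simulation, deploy later": the robot's behavior
# is tuned against a simulated model of the task instead of on physical hardware.
# This is a generic sketch, not Isaac Sim's API.
import random

def simulate(gain: float) -> float:
    """Simulated trial: how close a one-parameter controller gets the
    robot arm to a target position (higher return = better)."""
    position, target = 0.0, 1.0
    for _ in range(50):
        position += gain * (target - position)  # simple proportional step
    return -abs(target - position)              # negative error as reward

best_gain, best_reward = 0.0, simulate(0.0)
for _ in range(200):                            # random-search "training" in the simulator
    candidate = best_gain + random.uniform(-0.1, 0.1)
    reward = simulate(candidate)
    if reward > best_reward:
        best_gain, best_reward = candidate, reward

print(f"tuned gain: {best_gain:.3f}, final error: {-best_reward:.5f}")
```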
About the Author
Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at kennethwong@digitaleng.news or share your thoughts on this article at digitaleng.news/facebook.