NVIDIA announced a variety of new hardware, computing platforms, and simulation engines at its GTC event, designed to accelerate the development of generative AI and robotics.

Yesterday, NVIDIA founder and CEO Jensen Huang unveiled the latest products the company will offer to developers of tomorrow's AI solutions.

Huang said: "Accelerated computing has reached the tipping point. General-purpose computing has run out of steam. We need another way of processing data so that we can continue to scale, so that we can continue to reduce the cost of processing data, so that we can continue to consume more and more data processing while being sustainable."

To enable this massive modernization of the world's AI infrastructure, Huang introduced the following new hardware and software solutions:

  • The Blackwell GPU and computing platform
  • NVIDIA DGX SuperPOD supercomputer
  • NIM Microservices – a new way to build AI software
  • Omniverse – a real-world simulator for training robots
  • Isaac Perceptor software and Project GR00T – a general-purpose foundation model for humanoid robots, plus robotics software

Here's a closer look at these exciting new releases.

The Blackwell GPU and DGX SuperPOD

Huang said the industry needs much larger GPUs to enable multimodal training of ever-larger AI models. NVIDIA claims its new Blackwell chip is "the physically largest chip possible," with each of its two dies containing 104 billion transistors.

Connecting two of these dies into a single GPU results in a significant increase in processing performance. Blackwell delivers 2.5x the performance of NVIDIA's Hopper architecture in FP8 for training per chip, and 5x in FP4 for inference.

The NVLink interconnect linking these GPUs is twice as fast as its predecessor and allows up to 576 Blackwell GPUs to be connected.

Connecting two Blackwell GPUs and a Grace CPU creates the Grace Blackwell superchip, which forms the basis of NVIDIA's GB200 NVL72 racks. These deliver exaflop-scale computing in a single rack.

NVIDIA has connected several of these to create its new AI supercomputer, the NVIDIA DGX SuperPOD, which delivers 11.5 exaflops of AI supercomputing at FP4 precision.

The media release listed AWS, Google Cloud, Microsoft Azure and Oracle as early adopters of the new computing hardware, which NVIDIA says can "build and run real-time generative AI on trillion-parameter large language models at up to 25 times less cost and energy consumption than its predecessor."

This is what NVIDIA's computing progress looks like over the last eight years.

The Grace Blackwell superchip represents a 1,000x increase in NVIDIA AI computing power over the last eight years. Source: NVIDIA
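That 1,000x figure implies an average improvement of roughly 2.4x per year; a quick back-of-the-envelope check:

```python
# A 1,000x gain over 8 years implies an average annual growth factor r
# satisfying r ** 8 == 1000.
annual_factor = 1000 ** (1 / 8)
print(f"~{annual_factor:.2f}x per year")  # prints "~2.37x per year"
```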

NIM Microservices

"How will we build software in the future? It's unlikely that you're going to write it from scratch or write a whole bunch of Python code or anything like that," Huang said. "It's very likely that you would put together a team of AIs."

Huang said that instead of writing software, companies will "assemble AI models, give them tasks, give examples of work products, review plans and provide intermediate results."

NVIDIA has launched a collection of pre-built containers, or microservices, called NIM (NVIDIA Inference Microservice).

NIMs are like little boxes of AI software, packaging a pre-trained model, APIs, and other software components. Companies can deploy these in a similar way to how we use GPTs or Zapier, rather than having to build the functionality from scratch.
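As an illustration of the idea, a deployed inference microservice typically exposes an HTTP endpoint that an application calls with a JSON request. The sketch below assembles such a request; the endpoint URL, model name, and payload shape are illustrative placeholders, not documented NIM values:

```python
import json

# Hypothetical endpoint for a locally deployed inference container
# (placeholder URL, not a documented NIM address).
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "example-llm") -> dict:
    """Assemble the JSON body an application would POST to the microservice."""
    return {
        "model": model,  # placeholder model identifier
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

body = build_chat_request("Summarize the GTC keynote in one sentence.")
print(json.dumps(body, indent=2))
# An application would then POST `body` to NIM_URL (e.g. with
# `requests.post(NIM_URL, json=body)`) and read the generated text
# from the response JSON.
```

The point of the packaging is that the application only deals with this request/response surface; the model weights and serving stack live inside the container.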

Omniverse

There is a lot of development happening in embodied AI, or physical AI. Training robots in the physical world is expensive and inefficient, and NVIDIA says it has the solution.

Omniverse is a virtual world simulation engine that acts like a virtual "gym" where a robot can learn the articulation and physics of interacting with the real world.

NVIDIA provides developers with API access to train their robots in Omniverse. Developers can create a digital twin of a physical space, such as a warehouse, and optimize automated devices and robots before deploying them in the physical space.
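The train-in-simulation, deploy-to-reality workflow described above is commonly structured as a reset/step/reward loop. The toy sketch below illustrates that loop only; `WarehouseSim` and the trivial policy are hypothetical stand-ins, not Omniverse APIs:

```python
# Generic sim-to-real training loop (illustrative only; `WarehouseSim`
# is a hypothetical stand-in, not an NVIDIA Omniverse API).
class WarehouseSim:
    """Toy digital twin: a robot moves along a 1-D aisle toward a goal."""

    def __init__(self, aisle_length: int = 10):
        self.aisle_length = aisle_length
        self.position = 0

    def reset(self) -> int:
        self.position = 0
        return self.position

    def step(self, action: int) -> tuple[int, float, bool]:
        # action: +1 move forward, -1 move back
        self.position = max(0, min(self.aisle_length, self.position + action))
        done = self.position == self.aisle_length
        reward = 1.0 if done else -0.01  # small cost per step taken
        return self.position, reward, done

def run_episode(env: WarehouseSim, max_steps: int = 100) -> float:
    """Roll out one episode with a trivial 'always move forward' policy."""
    env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        _, reward, done = env.step(+1)
        total_reward += reward
        if done:
            break
    return total_reward

print(run_episode(WarehouseSim()))
```

A real pipeline would run many such episodes against a physics-accurate twin of the warehouse, improve the policy from the rewards, and only then move the trained controller onto physical hardware.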

Isaac Software and Project GR00T

Huang announced new software to support robotics developers. The Isaac Perceptor software and the Isaac Manipulator library will help robots see, navigate, and manipulate their environment.

NVIDIA also introduced Project GR00T (Generalist Robot 00 Technology), a general-purpose foundation model for humanoid robots. This model will run on a new computer, Jetson Thor, together with the Isaac Perceptor software, to help train robots in Omniverse and then deploy them directly into the real world.

The first day of GTC featured some major new technology announcements that will likely keep NVIDIA's stock price rising. It will be interesting to see what other surprises Huang has in store over the next few days.

You can watch Huang’s keynote here.

This article was originally published at dailyai.com