The race for more powerful and efficient AI hardware surged ahead this week, with Intel and Google announcing new chips to help them become less reliant on NVIDIA tech.

It seems like new AI models are released every week. Behind each release are weeks of training in cloud computing data centers, most of which are powered by NVIDIA GPUs.

Intel and Google each announced new in-house AI chips that can train and deploy large AI models faster while using less power.

Intel’s Gaudi 3 AI accelerator chip

Intel may be better known for the chips that power your PC, but on Tuesday the company announced its new AI chip, called Gaudi 3.

NVIDIA’s H100 GPUs make up the bulk of AI data center hardware, but Intel says Gaudi 3 delivers “50% on average better inference and 40% on average better power efficiency than Nvidia H100 – at a fraction of the cost.”

A big contributor to the power efficiency of Gaudi 3 is Intel’s use of Taiwan Semiconductor Manufacturing Co’s 5nm process to make the chips.

Intel didn’t give any pricing information, but when asked how it compares with NVIDIA’s products, Das Kamhout, VP of Xeon software at Intel, said, “We do expect it to be highly competitive.”

Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro will be the first to deploy Gaudi 3 in their AI data centers.

Intel CEO Pat Gelsinger summarized the company’s AI ambitions, saying, “Intel is bringing AI everywhere across the enterprise, from the PC to the data center to the edge.”

Google’s Arm and TPU upgrades

On Tuesday, Google announced its first custom Arm-based CPUs, which it plans to use to power its data centers. The new chip, dubbed Axion, is a direct competitor to Intel and AMD’s CPUs.

Google claims Axion delivers “30% better performance than the fastest general-purpose Arm-based instances available in the cloud today, up to 50% better performance and up to 60% better energy-efficiency than comparable current-generation x86-based instances.”

Google’s new Arm-based Axion CPU. Source: Google

Google has been moving several of its services, like YouTube and Google Earth, to its current generation of Arm-based servers, which will soon be upgraded with the Axion chips.

Having a strong Arm-based option makes it easier for customers to migrate their CPU-based AI training, inferencing, and other applications to Google’s cloud platform without having to redesign them.

For large-scale model training, Google has largely relied on its TPU chips as an alternative to NVIDIA’s hardware. These will also be upgraded, with a single new TPU v5p pod now containing more than double the number of chips in the current TPU v4 pod.

Google isn’t looking to sell either its new Arm chips or its TPUs. The company is looking to drive its cloud computing services rather than become a direct hardware competitor to NVIDIA.

The upgraded TPUs will provide a boost to Google’s AI Hypercomputer, which enables large-scale AI model training. The AI Hypercomputer also uses NVIDIA H100 GPUs, which Google says will soon be replaced with NVIDIA’s new Blackwell GPUs.

The demand for AI chips isn’t likely to slow down anytime soon, and the race is looking less like a one-horse NVIDIA show than it did before.


This article was originally published at dailyai.com