Nvidia Corporation, commonly known as Nvidia, is an American technology company based in Santa Clara, California. It designs graphics processing units (GPUs) for the gaming and professional markets, as well as system-on-a-chip units for the automotive and mobile computing markets.
Since 2014, Nvidia has shifted toward becoming a platform company focused on four markets: gaming, data centers, professional visualization, and automotive. It is now also heavily focused on artificial intelligence.
Nvidia has just launched a monster box called the HGX-2, and it's exactly the kind of hardware technology enthusiasts drool over. It's a cloud server platform that combines high-performance computing and artificial intelligence workloads in a single, extremely powerful package.
Now for the specifications, since I'm sure you're all waiting for them. The Nvidia HGX-2 starts with 16 Nvidia Tesla V100 GPUs. That works out to 2 petaFLOPS for artificial intelligence at low (mixed) precision, 250 teraFLOPS at medium (single) precision, and 125 teraFLOPS for those times when you need the highest (double) precision. It also comes with half a terabyte of GPU memory and 12 Nvidia NVSwitches that enable GPU-to-GPU communication at 300 GB per second. That is double the capacity of the HGX-1, which was released last year.
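Those headline numbers are just the per-GPU Tesla V100 figures multiplied by sixteen. Here's a quick back-of-the-envelope check in Python; the per-GPU values are assumptions taken from Nvidia's public V100 (SXM) datasheet, not from the HGX-2 announcement itself:

```python
# Back-of-the-envelope check of the HGX-2's aggregate throughput.
# Assumed per-GPU Tesla V100 figures (from Nvidia's public datasheet):
#   ~125 TFLOPS Tensor Core (mixed precision), ~15.7 TFLOPS FP32,
#   ~7.8 TFLOPS FP64, 32 GB HBM2 memory per GPU.

NUM_GPUS = 16

per_gpu = {
    "tensor_tflops": 125.0,  # mixed-precision Tensor Core throughput
    "fp32_tflops": 15.7,     # single precision
    "fp64_tflops": 7.8,      # double precision
    "hbm2_gb": 32,           # on-package HBM2 memory
}

# Aggregate across all 16 GPUs in the box
totals = {key: value * NUM_GPUS for key, value in per_gpu.items()}

print(f"AI (Tensor Core): {totals['tensor_tflops'] / 1000:.1f} PFLOPS")  # 2.0 PFLOPS
print(f"FP32:             {totals['fp32_tflops']:.0f} TFLOPS")           # ~250 TFLOPS
print(f"FP64:             {totals['fp64_tflops']:.0f} TFLOPS")           # ~125 TFLOPS
print(f"GPU memory:       {totals['hbm2_gb']} GB")                       # 512 GB
```

The totals land on the advertised 2 petaFLOPS, roughly 250 and 125 teraFLOPS, and 512 GB (half a terabyte) of GPU memory.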
The sad part is that you won't be able to buy one of these boxes yourself. Nvidia is distributing them strictly to resellers, who will most likely package them up and sell them to data centers and cloud providers. “The beauty of this approach for cloud resellers is that when you buy it, they have the entire range of precision in a single box,” said Paresh Kharya, Group Product Marketing Manager for Nvidia's Tesla data center products.
“The benefit of a unified platform is that while companies and cloud providers are building their infrastructure, they can standardize on a single unified architecture that supports the whole range of high-performance workloads. So, whether it’s an artificial intelligence or high-performance simulations, the whole range of workloads is now possible in just a single platform,” Kharya further explained.
“In hyper-scale companies or cloud providers, the main benefit that the Nvidia HGX-2 provides is the economies of scale. If they can standardize on the fewest possible architectures, they can ultimately maximize their operational efficiency. What HGX-2 allows them to do is to standardize on that single unified platform,” he added.
Jensen Huang, Nvidia's founder and chief executive, added: “The world of computing has changed. CPU scaling has slowed at a time when computing demand is skyrocketing. Nvidia’s HGX-2 with Tensor Core GPUs gives the industry a powerful, versatile computing platform that fuses high-performance computing and artificial intelligence to solve the world’s grand challenges.”
But this GPU-driven powerhouse is just the beginning. Capable as it is in its own right, the HGX-2 can also serve as the building block for even larger and more powerful systems.
Nvidia HGX-2-powered servers will be available on the market later this year.