Nvidia has announced that its DGX H100 systems, advanced AI supercomputers packed into the footprint of a standard server, are now shipping.

The product features eight H100 Tensor Core GPUs connected via NVLink, alongside dual Intel Xeon Platinum 8480C processors, 2TB of system memory, and 30TB of NVMe SSD storage, the company said in a recent blog post.

Additionally, the H100 features a Transformer Engine that enables FP8 training: it monitors the accuracy of a training or inference job as it runs and dynamically lowers precision to FP8 where it can do so without degrading results.
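
To make that concrete, here is a minimal sketch of what FP8 training looks like from the developer's side, using Nvidia's open-source Transformer Engine library for PyTorch. The layer sizes, tensor shapes, and recipe settings are illustrative assumptions rather than details from the announcement, and the code assumes an H100-class GPU with the transformer_engine package installed.

```python
# Minimal FP8 training sketch using Nvidia's Transformer Engine for PyTorch.
# Assumes an H100-class GPU and the transformer_engine package; the sizes
# and recipe settings below are illustrative, not taken from Nvidia's post.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# A "delayed scaling" recipe tracks recent tensor maxima so the engine can
# pick per-tensor FP8 scaling factors that preserve accuracy as training runs.
fp8_recipe = recipe.DelayedScaling(
    fp8_format=recipe.Format.HYBRID,  # E4M3 for the forward pass, E5M2 for gradients
    amax_history_len=16,
    amax_compute_algo="max",
)

layer = te.Linear(768, 768, bias=True).cuda()
optimizer = torch.optim.SGD(layer.parameters(), lr=1e-3)
inp = torch.randn(32, 768, device="cuda")

# Inside fp8_autocast, supported layers run their matrix multiplies in FP8
# while the engine monitors value ranges and rescales tensors dynamically.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)

loss = out.float().sum()
loss.backward()
optimizer.step()
```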

The GPU giant leads the AI chip market, which has made it the go-to supplier of GPUs for developers in areas where performance and speed of deployment are at a premium.

As established companies seek new AI architectures and optimized deployments on cheaper hardware, Nvidia has set its sights on the future of AI supercomputers.

Top AI and Robotics Centers Among First Customers of Nvidia’s DGX H100

Early customers of the DGX H100 system include the Boston Dynamics AI Institute, KTH Royal Institute of Technology in Sweden, Ecuadorian telco Telconet, and DeepL.

The system will help these organizations pursue goals ranging from smart digital ads and celebrity avatars to intelligent video analytics and advanced language models.

In addition to these organizations, the DGX H100 will be used by the startup Scissero, which plans a GPT-powered chatbot for legal services, and by the Johns Hopkins University Applied Physics Laboratory, which will use the system to train large language models.

The announcement marks a significant milestone for Nvidia as it seeks to dominate the AI and data center market.

According to SemiAnalysis’ chief analyst, Dylan Patel, as everyone tries to race ahead in building larger and more complex models, they’re going to use Nvidia’s GPUs because they’re better, easier to use, and generally cheaper.

Nvidia Now Spends 80% of Time Working on Software

Notably, Nvidia has recently shifted its attention to its software stack to improve performance.

Building the hardware is essential, but driving that hardware well matters even more, which is why Nvidia has been developing its software stack to get the most out of its GPUs.

According to Manuvir Das, Nvidia’s Head of Enterprise Computing, the tech giant now spends 80% of its time on software and 20% of its time on hardware.

Nvidia’s software stack includes Base Command, AI Enterprise, and the entire CUDA framework, building blocks that are available to its customers and partners.
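
CUDA sits at the bottom of that stack. As a rough, hypothetical illustration of the kind of GPU primitive the higher layers are built on (not an example from Nvidia's announcement), here is a minimal CUDA kernel written in Python with the Numba library; it assumes a CUDA-capable GPU and the numba package.

```python
# A minimal CUDA kernel via Numba, illustrating the low-level layer that
# higher pieces of Nvidia's software stack build on. Illustrative example
# only; requires a CUDA-capable GPU and the numba package.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)       # global thread index across all blocks
    if i < out.size:       # guard against threads past the array end
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba copies arrays to/from the GPU

assert np.allclose(out, a + b)
```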

Moreover, Nvidia is determined to leverage its own AI prowess to develop next-generation GPUs. With the help of machine learning, the company aims to produce chip designs that outperform human-crafted ones, and its cuLitho library accelerates computational lithography, a key step in chip manufacturing, by up to 40x compared with traditional methods.

Recently published MLPerf benchmark results showcased the strong performance of Nvidia’s GPUs, including the Hopper H100, in AI workloads such as inference and deep-learning training.