NVIDIA GPUs Lead in New AI Inference Benchmarks

This week NVIDIA released the results of new benchmarks that measure AI inference performance, focused on data centers and the edge. In the results, Turing GPUs and the Jetson Xavier SoC led across the five MLPerf inference benchmarks.

The benchmarks were run across various form factors and four inference scenarios. In both data-center scenarios, server and offline, Turing cards delivered the best performance per processor.

Among commercially available mobile and edge SoCs, Jetson Xavier achieved the highest performance in both edge scenarios: single-stream and multi-stream.

NVIDIA highlights the performance of its programming and computing platforms in this area, noting it was the only company to submit results across all five benchmarks. The results were achieved with TensorRT 6, the company's software for optimizing deep learning inference.

“AI is at a turning point, rapidly moving from research to large-scale deployment in real-world applications. AI inference is a major computational challenge. Combining the industry's most advanced programmable accelerator, the CUDA-X AI algorithms suite, and our deep AI computing experience, NVIDIA can help data centers deploy their ever-increasing number of complex AI models.”
Ian Buck
NVIDIA Accelerated Computing General Manager and Vice President

Recall that the company also introduced the Jetson Xavier NX, an AI supercomputer module for embedded computing and robotics.

What did you think of the NVIDIA GPU results in the AI benchmarks? Leave your opinion in the comments below.
