Nvidia announces Ampere technology

Written by Jennifer Allen

May 18, 2020 | 11:00

Tags: #a100-gpu #ampere #gtc

Companies: #nvidia

Nvidia has officially announced its Ampere architecture and the A100 GPU in a series of kitchen-based keynote videos and, as you'd expect, it promises a massive generational leap in GPU performance.

As part of the GTC digital keynote, Nvidia CEO Jensen Huang offered plenty of interesting bits and pieces about the technology while avoiding any big news regarding the GeForce line of GPUs. Oh, and he did all this from his kitchen. The full line-up is available as a playlist on YouTube known as the Kitchen Keynote, but here are the main details.

The big one is that the new A100 GPU - built on the Ampere architecture - will offer a 20x AI performance leap, according to Huang. It's the powerhouse behind the new DGX A100 AI system, which can deliver 5 petaflops of AI performance thanks to its eight A100 GPUs. Huang explained, "For the first time, scale-up and scale-out workloads can be accelerated on one platform. Nvidia A100 will simultaneously boost throughput and drive down the cost of data centres."
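For a rough sense of where that 5 petaflops figure comes from, the sketch below simply multiplies up the commonly quoted peak per-GPU Tensor throughput. The ~624 TFLOPS FP16 figure (with sparsity enabled) isn't given in the keynote summary above, so treat it as an assumption for illustration.

    # Back-of-the-envelope check on the DGX A100's quoted 5 petaflops.
    # The ~624 TFLOPS peak FP16 Tensor throughput per A100 (with structured
    # sparsity enabled) is an assumed figure, not one stated in this article.
    per_gpu_tflops = 624        # assumed peak FP16 Tensor TFLOPS per A100
    gpus_per_system = 8         # the DGX A100 packs eight A100s
    total_tflops = per_gpu_tflops * gpus_per_system
    print(f"{total_tflops:,} TFLOPS ~ {total_tflops / 1000:.0f} petaflops")  # ~5 petaflops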

The A100 accelerator is built on the TSMC 7nm process node with 3D stacking, contains 54 billion transistors, and offers third-generation Tensor cores, Multi-Instance GPU (MIG), third-generation NVLink and NVSwitch, and HBM2 memory with 1.6TB/s of bandwidth.

How does that translate? It means 6,912 FP32 CUDA cores, a boost clock of roughly 1.41GHz, a 2.4Gbps HBM2 memory clock, and 40GB of VRAM, amongst many other advantages.
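Those two memory figures hang together: the per-pin data rate multiplied by the bus width gives the overall bandwidth quoted above. The 5,120-bit HBM2 bus width used below isn't mentioned in the article, so take it as an assumption for the sake of the arithmetic.

    # How a ~2.4Gbps HBM2 memory clock yields roughly 1.6TB/s of bandwidth.
    # The 5,120-bit bus width (five 1,024-bit HBM2 stacks) is an assumption
    # for illustration rather than a figure quoted in the article.
    data_rate_gbps = 2.43       # per-pin data rate; the ~2.4Gbps quoted above
    bus_width_bits = 5120       # assumed HBM2 bus width
    bandwidth_tbs = data_rate_gbps * bus_width_bits / 8 / 1000
    print(f"{bandwidth_tbs:.2f} TB/s")  # ~1.56 TB/s, i.e. the quoted ~1.6TB/s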

In comparison, the predecessor, the Volta-based GV100 GPU found in the V100 accelerator, offered 5,120 FP32 CUDA cores, a boost clock of 1.53GHz, a 1.75Gbps HBM2 memory clock, and 16GB or 32GB of VRAM.

While the specs might be a little less revolutionary than one would expect, Huang explained that tech like the third-generation Tensor cores means single-precision AI training is far faster than anything seen before, with new efficiency techniques such as structural sparsity acceleration also on hand to improve AI maths performance. Multi-Instance GPU, otherwise known as MIG, allows a single A100 to be partitioned into as many as seven independent GPUs, each with its own resources. A100 is smarter and far more efficient, basically.
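As an illustration of what that structural sparsity actually looks like, the minimal NumPy sketch below prunes a weight matrix to a 2:4 pattern - at most two non-zero values in every group of four - which is the fine-grained structured sparsity Ampere's Tensor cores can accelerate. The keep-the-two-largest-magnitudes policy is a simplification for illustration, not Nvidia's actual tooling.

    import numpy as np

    def prune_2_of_4(weights):
        """Zero the two smallest-magnitude values in every group of four,
        producing a 2:4 structured-sparse weight matrix."""
        w = weights.reshape(-1, 4).copy()
        # indices of the two smallest |values| in each group of four
        drop = np.argsort(np.abs(w), axis=1)[:, :2]
        np.put_along_axis(w, drop, 0.0, axis=1)
        return w.reshape(weights.shape)

    dense = np.random.randn(2, 8).astype(np.float32)
    sparse = prune_2_of_4(dense)
    print(sparse)   # exactly two non-zero weights remain in every group of four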

Huang also used the time to demonstrate how the firm has worked with researchers and scientists to use GPUs and AI computing to help with COVID-19 efforts. For instance, Oxford Nanopore Technologies has sequenced the virus genome in only 7 hours, while Plotly is conducting real-time infection rate tracing, all thanks to Nvidia's AI efforts. 

As is often the way at GTC briefings, there was no word on consumer products, i.e. the GeForce Ampere range. Instead, we're left with rumours that there will be a gaming-focused event in September, placing it nearer to the Cyberpunk 2077 launch. That seems logical to us given the tie-ins that have already occurred between GeForce products and Cyberpunk 2077.

Whenever it happens, expect some predictably impressive specs and performance if the A100 is anything to go by.

