DeepBench Training: GEMM and RNN

Opening up our DeepBench results are the GEMM tests, which we've already seen as pure synthetic operations. Because these use kernel and GEMM operations drawn from actual deep learning applications (DeepSpeech, Speaker ID, and Language Modelling), performance here is a little more representative than running pure matrix-matrix multiplications through cuBLAS.

To preface, the NVIDIA Titan Xp has crippled half precision, while the GeForce GTX Titan X (Maxwell) only supports single precision. According to Baidu, they test an FP32-with-tensor-cores mode for Volta, where 32-bit inputs undergo 16-bit multiplication and 32-bit accumulation. The specifics of this are somewhat unclear, but we've gone ahead and included those results. Otherwise, FP16 with tensor cores is the 'standard' Volta mixed precision mode.
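
One plausible way to get that behaviour through cuBLAS is its tensor-op math mode, which lets an otherwise ordinary FP32 GEMM down-convert its inputs to FP16 for the multiplies while accumulating in FP32. Below is a minimal sketch of that path, assuming CUDA 9/10-era cuBLAS on Volta; the function name and matrix dimensions are placeholders, and we can't confirm this is exactly what Baidu's benchmark does.

```cpp
// Minimal sketch of an FP32 GEMM that is allowed to use tensor cores
// (FP16 multiply, FP32 accumulate) via cuBLAS's math-mode switch.
// Assumes CUDA 9/10-era cuBLAS on Volta; matrices are column-major and
// already resident on the GPU. Not necessarily identical to Baidu's setup.
#include <cublas_v2.h>

void sgemm_maybe_tensor_ops(const float* dA, const float* dB, float* dC,
                            int m, int n, int k)
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    // Opt in to tensor core use for FP32 routines. Without this call,
    // cublasSgemm runs on the regular FP32 CUDA cores.
    cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C  (A is m x k, B is k x n, C is m x n)
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k,
                &alpha, dA, m,
                        dB, k,
                &beta,  dC, m);

    cublasDestroy(handle);
}
```

The math-mode enums were later renamed, so treat the exact flag as version-dependent.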

The average results of all sub-tests seem unsurprising: enabling tensor cores results in large performance increases all around. Digging into the details reveals just how specific tensor core acceleration is to certain types of matrix-matrix multiplications.

DL Training: DeepBench - GEMM Average Performance

Splitting up the GEMM tests by DL application, we can start to see how tensor cores fare in ideal (and non-ideal) circumstances.

DL Training: DeepBench - GEMM (Speaker ID)

The Speaker ID GEMM workloads actually consist of only two kernels, where a difference of 10 microseconds means a swing of around 1 TFLOPS; the Titan Xp's nominally higher result here falls within normal run-to-run variance.

DL Training: DeepBench - GEMM (Language Modelling)

Looking into the Language Modelling kernels explains the poor performance of tensor cores. The sizes of those kernel matrices are m = 512 or 1024, n = 8 or 16, and k = 500000, and the small size of n relative to the very large k is notable. While each dimension is technically divisible by 8 – one of the basic requirements to qualify for tensor core acceleration – the shape of these matrices isn't a neat fit for the supported basic WMMA shapes: 16x16x16, 32x8x16, and 8x32x16. Nor does it fit well with 8x8x8, if we assume that tensor cores truly operate at an independent 8x8x8 level.
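
To make the shape constraint concrete, here is a minimal sketch of the CUDA WMMA intrinsics that expose those fragment shapes: one warp computes a single 16x16x16 tile, and a library like cuBLAS has to tile or pad every GEMM into fragments like this before the tensor cores can be used. The kernel and pointer names are ours, not DeepBench's.

```cpp
// Minimal WMMA sketch (compile with nvcc, sm_70+): one warp computes a
// single 16x16x16 tile of D = A*B. The only supported FP16 fragment
// shapes are 16x16x16, 32x8x16 and 8x32x16, which is why an n of 8 or 16
// against k = 500000 tiles so awkwardly.
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

__global__ void wmma_single_tile(const half* A, const half* B, float* D)
{
    // Declare fragments for one 16x16x16 multiply-accumulate.
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);          // start from zero
    wmma::load_matrix_sync(a_frag, A, 16);        // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // FP16 mul, FP32 acc
    wmma::store_matrix_sync(D, acc_frag, 16, wmma::mem_row_major);
}

// Launched with a single warp: wmma_single_tile<<<1, 32>>>(dA, dB, dD);
```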

So the tensor cores are being pulled into action on very lopsided matrices that, with n at only 8 or 16, can't easily be broken up into well-shaped tiles; at least, not without performance penalties.

Meanwhile, the tensor cores have runaway performance on DeepSpeech kernels:

DL Training: DeepBench - GEMM (DeepSpeech)

On average, the result is an impressive number of TFLOPS. Kernel-by-kernel, though, the same impact of awkwardly-proportioned matrices is occurring. When matrices fit the tensor core proportions, performance can jump to more than 90 TFLOPS. When they don't, and the right transpositions are not in play, performance can drop to below 1 TFLOPS.
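
For the FP16-input path these kernels exercise, the rough cuBLAS equivalent is cublasGemmEx with half-precision A and B, an FP32 accumulator, and a tensor-op algorithm requested explicitly. The sketch below uses CUDA 9/10-era enum names; the dimensions and transpose flags are placeholders, not the actual DeepSpeech kernel shapes.

```cpp
// Hedged sketch of the 'standard' Volta mixed precision GEMM: FP16 inputs,
// FP32 accumulation, tensor-op algorithm requested explicitly. Enum names
// follow CUDA 9/10-era cuBLAS; m, n, k and the transpose flags are
// placeholders rather than the real DeepBench kernel shapes.
#include <cublas_v2.h>
#include <cuda_fp16.h>

void hgemm_tensor_op(cublasHandle_t handle,
                     const __half* dA, const __half* dB, float* dC,
                     int m, int n, int k)
{
    const float alpha = 1.0f, beta = 0.0f;

    // Whether a fast tensor core kernel exists depends heavily on the
    // m/n/k proportions and on which operands are transposed.
    cublasGemmEx(handle,
                 CUBLAS_OP_N, CUBLAS_OP_T,
                 m, n, k,
                 &alpha,
                 dA, CUDA_R_16F, m,     // A: m x k, FP16
                 dB, CUDA_R_16F, n,     // B: n x k, transposed to k x n
                 &beta,
                 dC, CUDA_R_32F, m,     // C: m x n, FP32 accumulator
                 CUDA_R_32F,            // compute in FP32
                 CUBLAS_GEMM_DEFAULT_TENSOR_OP);
}
```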

For the DeepBench RNN kernels, there is no drastic divergence between RNN types. But within each RNN type, the same patterns can be seen when judging kernel-by-kernel.

DL Training: DeepBench - RNN (Vanilla)

DL Training: DeepBench - RNN (LSTM)

DL Training: DeepBench - RNN (GRU)

What is interesting is how close the Titan Xp can come to non-tensor-core Titan V performance. We know that the Titan Xp's superior clockspeeds are doing it a favor here, but one of the Titan V's big advantages – HBM2 – is also partially blunted by its cut-down three-stack (3072-bit) configuration. Bandwidth-to-bandwidth, the Titan V theoretically offers only around 100GB/s more, with the same 12GB of VRAM; and while Volta's HBM controller efficiency has improved over the Pascal-based P100, the Titan Xp presumably counters with NVIDIA's 2nd generation GDDR5X controller.
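
As a back-of-the-envelope check on that bandwidth claim, the published memory specs (assumed here, not measured: roughly 1.7 Gbps per pin over a 3072-bit bus for the Titan V's three HBM2 stacks, and 11.4 Gbps over a 384-bit bus for the Titan Xp's GDDR5X) work out as follows.

```cpp
// Rough theoretical peak bandwidth from published specs (assumptions, not
// measurements): Titan V ~1.7 Gbps/pin x 3072-bit HBM2, Titan Xp
// ~11.4 Gbps/pin x 384-bit GDDR5X. Both cards carry 12 GB of VRAM.
#include <cstdio>

int main()
{
    const double titan_v  = 1.7e9  * 3072 / 8 / 1e9;   // ~652.8 GB/s
    const double titan_xp = 11.4e9 * 384  / 8 / 1e9;   // ~547.2 GB/s
    std::printf("Titan V:  %.1f GB/s\n", titan_v);
    std::printf("Titan Xp: %.1f GB/s\n", titan_xp);
    std::printf("Delta:    %.1f GB/s\n", titan_v - titan_xp);  // ~105.6 GB/s
}
```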

Comments

  • mode_13h - Wednesday, July 4, 2018 - link

    It's not that hard, really. They're just saying Nvidia made a library (cuDNN), so that different deep learning frameworks don't each have to hand-optimize code for things like its exotic tensor cores.

    For their part, AMD has a similar library they call MIOpen.
  • philehidiot - Wednesday, July 4, 2018 - link

    Why thank you. That now does help it make a little more sense. The maths does make sense but the computer science is generally beyond me.
  • aelizo - Wednesday, July 4, 2018 - link

    At that price point, I would have liked to see some comparison to 2x Titan Xp, or even some comparison to 3x 1080 Ti's.
    Last year I saw a comparison between these setups on PyTorch:
    https://medium.com/@u39kun/titan-v-vs-1080-ti-head...
  • mode_13h - Wednesday, July 4, 2018 - link

    I'm suspicious that he's not actually using the tensor cores. The V100/GV100 also has double-rate fp16, like the P100/GP100 before it. So, a < 2x improvement from going to 16-bit suggests it might only be using the packed half-precision instructions, rather than the tensor cores.

    Either that or he's not using batching and is completely limited by memory bottlenecks.
  • aelizo - Wednesday, July 4, 2018 - link

    I suspect something similar, which is why Nate could have done a great job with a similar comparison.
  • Nate Oh - Monday, July 9, 2018 - link

    Unfortunately, we only have one Titan Xp, which is actually on loan from TH. This class of device is (usually) not sampled by NVIDIA, so we could not pursue what you suggest. We split custody of the Titan V, and that alone was not an insignificant investment.

    Additionally, mGPU DL analysis introduces a whole new can of worms. As some may have noticed, I have not mentioned NCCL/MPI, NVLink, Volta mGPU enhancements, All Reduce, etc. It's definitely a topic for further investigation if the demand and resources match.
  • mode_13h - Tuesday, July 10, 2018 - link

    Multi-GPU scaling is getting somewhat esoteric, but perhaps a good topic for future articles.

    Would be cool to see the effect of NVLink, if you can get access to such a system in the cloud. Maybe Nvidia will give you some sort of "press" access to their cloud?
  • ballsystemlord - Saturday, July 7, 2018 - link

    Here are some spelling/grammar corrections. You make far fewer errors than most of the other authors at AnandTech (if Ian had written this, I would have needed 2 pages for all the corrections :) ). Good job!

    "And Volta does has those separate INT32 units."
    You mean "have".
    And Volta does have those separate INT32 units.

    "For our purposes, the tiny image dataset of CIFAR10 works fine as running a single-node on a dataset like ImageNet with non-professional hardware that could be old as Kepler"...
    Missing "as".
    For our purposes, the tiny image dataset of CIFAR10 works fine as running a single-node on a dataset like ImageNet with non-professional hardware that could be as old as Kepler...

    "Moving forward, we're hoping that MLPerf and similar efforts make good headway, so that we can tease out a bit more secrets from GPUs."
    Grammar error.
    Moving forward, we're hoping that MLPerf and similar efforts make good headway, so that we can tease out a bit more of the secrets from GPUs.
  • mode_13h - Saturday, July 7, 2018 - link

    Yeah, if that's the worst you found, no one would even *suspect* him for being a lolcat.
  • Vanguarde - Monday, July 9, 2018 - link

    I purchased this card to get better frames in Witcher 3 at 4K everything maxed out, heavily modded. Never dips below 60fps and usually near 80-100fps
