DeepBench Inference: GEMM

Shifting gears to inference, DeepBench uses graphics cards to simulate inference deployment servers rather than edge devices. Accordingly, none of the 'device' type inference kernels are run.

Precision-wise, DeepBench inference tasks support INT8 on Volta and Pascal. More specifically, Baidu qualifies DeepBench GEMM and convolutions as INT8 multiply with 32-bit accumulate.
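As a point of reference, the underlying arithmetic is simple to state in code. Below is a minimal scalar sketch of an INT8-multiply, INT32-accumulate GEMM; the function name and row-major layout are our own illustration, not DeepBench's actual code.

```cuda
#include <cstdint>

// Reference semantics of DeepBench's INT8 GEMM: 8-bit operands are
// multiplied and the products accumulated into a 32-bit integer, so
// the running sum cannot overflow the narrow input type.
void igemm_ref(int M, int N, int K,
               const int8_t* A,   // M x K, row-major
               const int8_t* B,   // K x N, row-major
               int32_t* C)        // M x N, row-major
{
    for (int m = 0; m < M; ++m)
        for (int n = 0; n < N; ++n) {
            int32_t acc = 0;  // 32-bit accumulator
            for (int k = 0; k < K; ++k)
                acc += int32_t(A[m * K + k]) * int32_t(B[k * N + n]);
            C[m * N + n] = acc;
        }
}
```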

[Chart: DL Inference: DeepBench - GEMM]

Both the Titan V and Titan Xp feature quad-rate INT8 performance, and DeepBench's INT8 inference implementation maps neatly onto the DP4A vector dot-product capability introduced with Pascal. The instruction is still supported under Volta, where it shows up once again as IDP and IDP4A in the instruction set.
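For the curious, a minimal sketch of the operation follows, using CUDA's __dp4a intrinsic (available on compute capability 6.1 and up); the kernel and its names are ours, purely for illustration.

```cuda
// Each __dp4a call multiplies four packed 8-bit lanes from `a` and `b`
// and adds all four products into a 32-bit accumulator in a single
// instruction -- the source of the "quad rate" INT8 throughput.
__global__ void dot_dp4a(const int* a, const int* b, int* out, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4) {                        // n4 = element count / 4
        // __dp4a(a, b, c) == c + sum of a.s8[k] * b.s8[k], k = 0..3
        atomicAdd(out, __dp4a(a[i], b[i], 0));
    }
}
```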

For IGEMM, as CUTLASS shows, DP4A is the bespoke operation. So we see 8-bit integer performance at a high level throughout, except in language modelling, where once again the irregular dimensions don't lend themselves to easy acceleration, if at all.
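The packing requirement also hints at why irregular shapes resist acceleration: DP4A consumes the reduction dimension four 8-bit values at a time, so leftover elements fall back to scalar arithmetic. A rough sketch of one row-times-column dot product (our own illustration, not CUTLASS's actual implementation):

```cuda
#include <cstdint>

// Dot product over K int8 pairs: DP4A covers K in chunks of 4, and a
// K that is not a multiple of 4 leaves a slow scalar tail.
__device__ int row_dot(const int8_t* a, const int8_t* b, int K)
{
    int acc = 0;
    int K4 = K & ~3;                   // largest multiple of 4 <= K
    for (int k = 0; k < K4; k += 4) {
        int pa, pb;
        memcpy(&pa, a + k, 4);         // pack four int8 lanes into an int
        memcpy(&pb, b + k, 4);
        acc = __dp4a(pa, pb, acc);
    }
    for (int k = K4; k < K; ++k)       // scalar tail
        acc += int(a[k]) * int(b[k]);
    return acc;
}
```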

[Chart: DL Inference: DeepBench - GEMM (DeepSpeech)]

[Chart: DL Inference: DeepBench - GEMM (DeepSpeech, GRU Affine)]

[Chart: DL Inference: DeepBench - GEMM (DeepSpeech, Output Layer)]

In fully-connected (or 'affine') layers, every node in the layer is connected to every node in the previous layer. For a typical CNN, this means the fully-connected layer combines all the extracted features to make a final prediction and classify the image. By the numbers, it also means large, regularly-proportioned matrices, which can translate into large speedups, as we see here.
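To make the mapping concrete: batching inputs turns an affine layer into a single GEMM plus a bias add, with the batch size, input width, and output width as the three matrix dimensions. A minimal reference sketch, with illustrative names and float math rather than DeepBench's INT8 path:

```cuda
// Y[batch x out] = X[batch x in] * W[in x out] + bias -- one dense,
// regularly-shaped GEMM per layer, which is easy to accelerate.
void affine_forward(int batch, int in_dim, int out_dim,
                    const float* X, const float* W,
                    const float* bias, float* Y)
{
    for (int m = 0; m < batch; ++m)
        for (int n = 0; n < out_dim; ++n) {
            float acc = bias[n];
            for (int k = 0; k < in_dim; ++k)
                acc += X[m * in_dim + k] * W[k * out_dim + n];
            Y[m * out_dim + n] = acc;  // every input feeds every output
        }
}
```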

[Chart: DL Inference: DeepBench - GEMM (Language Modelling)]

[Chart: DL Inference: DeepBench - GEMM (Speaker ID)]

Comments

  • krazyfrog - Saturday, July 7, 2018 - link

    I don't think so.

    https://www.anandtech.com/show/12170/nvidia-titan-...
  • mode_13h - Saturday, July 7, 2018 - link

    Yeah, I mean why else do you think they built the DGX Station?

    https://www.nvidia.com/en-us/data-center/dgx-stati...

    They claim "AI", but I'm sure it was just an excuse they told their investors.
  • keg504 - Tuesday, July 3, 2018 - link

    "With Volta, there has little detail of anything other than GV100 exists..." (First page)
    What is this sentence supposed to be saying?
  • Nate Oh - Tuesday, July 3, 2018 - link

    Apologies, was a brain fart :)

    I've reworked the sentence, but the gist is: GV100 is the only Volta silicon that we know of (outside of an upcoming Drive iGPU)
  • junky77 - Tuesday, July 3, 2018 - link

    Thanks

    Any thoughts about Google TPUv2 in comparison?
  • mode_13h - Tuesday, July 3, 2018 - link

    TPUv2 is only 45 TFLOPS/chip. They initially grabbed a lot of attention with a 180 TFLOPS figure, but that turned out to be per-board.

    I'm not sure if they said how many TFLOPS/w.
  • SirPerro - Thursday, July 5, 2018 - link

TPUv3 was announced in May with 8x the performance of TPUv2, for a total of 1 PF per pod.
  • tuxRoller - Tuesday, July 3, 2018 - link

Since utilization is, apparently, an issue with these workloads, I'm interested in seeing how radically different architectures perform, such as tpu2+ and the just-announced ibm ai accelerator (https://spectrum.ieee.org/tech-talk/semiconductors...), which looks like a monster.
  • MDD1963 - Wednesday, July 4, 2018 - link

    4 ordinary people will buy this....by mistake, thinking it is a gamer. :)
  • philehidiot - Wednesday, July 4, 2018 - link

    "With DL researchers and academics successfully using CUDA to train neural network models faster, it was only a matter of time before NVIDIA released their cuDNN library of optimized deep learning primitives, of which there was ample precedent with the HPC-focused BLAS (Basic Linear Algebra Subroutines) and corresponding cuBLAS. So cuDNN abstracted away the need for researchers to create and optimize CUDA code for DL performance. As for AMD’s equivalent to cuDNN, MIOpen was only released last year under the ROCm umbrella, though currently is only publicly enabled in Caffe."

    Whatever drugs you're on that allow this to make any sense, I need some. Being a layman, I was hoping maybe 1/5th of this might make sense. I'm going back to the porn. </headache>
