The Test

For our purposes, we have utilized the full Baidu DeepBench suite on a single GPU, a reference benchmark from NVIDIA's Caffe2 Docker image, submissions for Stanford DAWNBench, and benchmarks from HPE DLBS. Altogether, this offers a low-level look into the Titan V, a view of real-world performance, and a glance at NVIDIA's TensorRT inference optimizer.
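DeepBench's low-level tests boil down to timing individual kernels and reporting an effective flop rate. As a rough illustration of the measurement pattern only (DeepBench itself is a C++/CUDA harness; this pure-Python sketch is ours), a dense matrix multiply timed and converted to GFLOPS:

```python
import time

def gemm(a, b):
    """Naive matrix multiply: (m x k) @ (k x n) -> (m x n)."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

def benchmark_gemm(m, n, k, repeats=3):
    """Time a GEMM and report effective GFLOPS.

    A GEMM performs 2*m*n*k floating-point operations
    (one multiply and one add per inner-product term).
    """
    a = [[1.0] * k for _ in range(m)]
    b = [[1.0] * n for _ in range(k)]
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        gemm(a, b)
        best = min(best, time.perf_counter() - start)
    return (2 * m * n * k) / best / 1e9

print(f"{benchmark_gemm(64, 64, 64):.4f} GFLOPS")
```

The same structure applies to DeepBench's convolution and recurrent-layer kernels, with the flop count formula changed to match the operation.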

Outside of DeepBench, all tests were run inside Docker containers. Configuring and troubleshooting ROCm/HIP/MIOpen beyond DeepBench was beyond the scope of this article, so the Radeon RX Vega 64 only features in the DeepBench tests.

Overview of Conducted Deep Learning Tests
Parent Suite/Test                          | Type                 | Dataset               | Model                             | Framework | Tensor Core Aware
DeepBench Dense Matrix Multiplies          | Training & Inference | N/A                   | N/A                               | N/A       | Yes
DeepBench Convolutions                     | Training & Inference | N/A                   | N/A                               | N/A       | Yes
DeepBench Recurrent Layers                 | Training & Inference | N/A                   | N/A                               | N/A       | Yes
DeepBench Sparse Ops                       | Inference            | N/A                   | N/A                               | N/A       | N/A
NVIDIA Caffe2 Docker ImageNet Training     | Training             | ILSVRC2012 (ImageNet) | ResNet-50 (CNN)                   | Caffe2    | Yes
HPE DLBS Caffe2                            | Training & Inference | ILSVRC2012 (ImageNet) | ResNet-50                         | Caffe2    | Yes
HPE DLBS TensorRT                          | Inference            | ILSVRC2012 (ImageNet) | ResNet-50                         | TensorRT  | Yes
DAWNBench CIFAR10 Image Classification     | Training             | CIFAR10               | Custom ResNet34 / Custom ResNet18 | PyTorch   | No

For one, we are limited by our single-node, single-GPU configuration, as well as the need for regression testing. In that sense, multi-day training runtimes are not ideal, particularly as on older hardware they might translate into multi-week runtimes or outright non-convergence.

As our first foray into deep learning performance on GPUs, we do not expect this to be an optimal test lineup, and we welcome constructive criticism on our ongoing deep learning investigations.

Software Configurations

The testbed was put in non-graphical mode when running benchmarks, so that the GPU was not additionally rendering a desktop environment. For the implementations of the two DAWNBench CIFAR10 submissions, we utilized later versions and lightly modified them for easier logging/use (models, optimizers, parameters, etc., were untouched). Docker images were pulled from NVIDIA GPU Cloud (NGC).
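The logging changes were minimal in spirit. As a hypothetical sketch of the sort of modification involved (the function and file names here are our own invention, not the submissions' actual code), an epoch timer that appends wall time and a result to a CSV without touching the training logic:

```python
import csv
import time

def log_epoch(path, epoch, train_fn):
    """Run one training epoch via train_fn and append its wall time to a CSV.

    train_fn is the untouched training routine; this wrapper only measures
    and records, leaving models, optimizers, and parameters alone.
    """
    start = time.perf_counter()
    result = train_fn()  # e.g. returns the epoch's training loss
    elapsed = time.perf_counter() - start
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([epoch, f"{elapsed:.3f}", result])
    return result

# Usage with a stand-in "epoch" that just returns a fixed loss:
log_epoch("epochs.csv", 0, lambda: round(0.9, 3))
```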

Deep Learning Tests Comparison
Test                                        | Software Versions
DeepBench                                   | NVIDIA: CUDA 9.1.85, cuDNN 7.1.3, Driver 390.30; AMD: ROCm 1.8.118, MIOpen-HIP 1.3.0, rocBLAS 0.13.2.1
NVIDIA Caffe2 Docker ImageNet Training      | NGC Docker Image: Caffe2 18.04-py2
DAWNBench Image Classification Submissions  | NGC Docker Image: PyTorch 18.04-py3
HPE DLBS                                    | NGC Docker Images: Caffe2 18.04-py2, PyTorch 18.04-py3

Citations

Baidu DeepBench

Baidu Research. DeepBench: Benchmarking Deep Learning operations on different hardware. https://github.com/baidu-research/DeepBench

ImageNet (ILSVRC2012)

Olga Russakovsky and Jia Deng (equal contribution), Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV). 2014, 115, 211-252. https://arxiv.org/abs/1409.0575

Stanford DAWNBench

Cody A. Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. DAWNBench: An End-to-End Deep Learning Benchmark and Competition. NIPS ML Systems Workshop 2017. https://dawn.cs.stanford.edu/benchmark/papers/nips17-dawnbench.pdf

CIFAR10

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. University of Toronto, 2009.

KervResNet

Chen Wang. https://github.com/wang-chen/KervNets

Basenet (ResNet18 with Modifications)

Ben Johnson. https://github.com/bkj/basenet/
