The Test

For our purposes, we have utilized the full Baidu DeepBench suite on a single GPU, a reference benchmark from NVIDIA's Caffe2 Docker image, submissions to Stanford DAWNBench, and benchmarks from HPE DLBS. Altogether, this offers a low-level look at the Titan V and a measure of real-world performance, along with a glance at NVIDIA's TensorRT inference optimizer.

Outside of DeepBench, all tests were run in Docker containers. Configuring and troubleshooting ROCm/HIP/MIOpen beyond DeepBench was outside the scope of this article, so the Radeon RX Vega 64 only features in the DeepBench tests.
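DeepBench itself is a set of C++ microbenchmarks that call cuBLAS/cuDNN (or rocBLAS/MIOpen on ROCm) directly, but the flavor of its dense matrix multiply tests can be conveyed with a short PyTorch sketch. The problem size below is an arbitrary example for illustration, not one of DeepBench's actual kernel shapes.

```python
# Illustrative sketch only: times an FP16 GEMM the way a DeepBench-style
# microbenchmark would, but through PyTorch rather than cuBLAS directly.
import time
import torch

def time_gemm(m, n, k, dtype=torch.float16, iters=50):
    a = torch.randn(m, k, dtype=dtype, device="cuda")
    b = torch.randn(k, n, dtype=dtype, device="cuda")
    for _ in range(5):          # warm up: exclude allocator/startup costs
        torch.mm(a, b)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        torch.mm(a, b)
    torch.cuda.synchronize()    # wait for all queued kernels to finish
    secs = (time.time() - start) / iters
    tflops = 2 * m * n * k / secs / 1e12  # 2 FLOPs per multiply-accumulate
    return secs, tflops

if __name__ == "__main__":
    # Hypothetical problem size, not taken from DeepBench's kernel list.
    secs, tflops = time_gemm(4096, 4096, 4096)
    print(f"GEMM 4096x4096x4096: {secs * 1e3:.2f} ms/iter, {tflops:.1f} TFLOPS")
```

On Volta, FP16 GEMMs like this are the case where cuBLAS can route the math through the tensor cores, which is why DeepBench flags its GEMM and convolution kernels as tensor core aware.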

Overview of Conducted Deep Learning Tests
Parent Suite / Test | Type | Dataset | Model | Framework | Tensor Core Aware
DeepBench: Dense Matrix Multiplies | Training, Inference | N/A | N/A | N/A | Yes
DeepBench: Convolutions | Training, Inference | N/A | N/A | N/A | Yes
DeepBench: Recurrent Layers | Training, Inference | N/A | N/A | N/A | Yes
DeepBench: Sparse Ops | Inference | N/A | N/A | N/A | N/A
NVIDIA Caffe2 Docker: ImageNet Training | Training | ILSVRC2012 (ImageNet) | ResNet-50 (CNN) | Caffe2 | Yes
HPE DLBS: Caffe2 | Training, Inference | ILSVRC2012 (ImageNet) | ResNet-50 | Caffe2 | Yes
HPE DLBS: TensorRT | Inference | ILSVRC2012 (ImageNet) | ResNet-50 | TensorRT | Yes
DAWNBench: CIFAR10 Image Classification | Training | CIFAR10 | Custom ResNet34, Custom ResNet18 | PyTorch | No
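The DAWNBench entries above measure time-to-accuracy rather than raw throughput: the clock runs until the model reaches the benchmark's target test accuracy on CIFAR10 (94%). The sketch below shows that measurement pattern in simplified form; it substitutes a stock torchvision ResNet18 for the custom submission models and omits the augmentation and learning-rate schedules the real submissions rely on, so treat it as an outline of the methodology rather than a reproduction of either entry.

```python
# Simplified outline of a DAWNBench-style time-to-accuracy run on CIFAR10.
# Not the actual submission code: stock ResNet18, no augmentation/LR schedule.
import time
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def evaluate(model, loader, device):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            correct += (model(images).argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
    return correct / total

def main(target_acc=0.94, max_epochs=50):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    transform = T.Compose([T.ToTensor()])
    train_set = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
    test_set = torchvision.datasets.CIFAR10("./data", train=False, download=True, transform=transform)
    train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
    test_loader = torch.utils.data.DataLoader(test_set, batch_size=256, num_workers=4)

    model = torchvision.models.resnet18(num_classes=10).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
    criterion = nn.CrossEntropyLoss()

    start = time.time()
    for epoch in range(max_epochs):
        model.train()
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()
        acc = evaluate(model, test_loader, device)
        print(f"epoch {epoch}: test acc {acc:.4f}, elapsed {time.time() - start:.0f}s")
        if acc >= target_acc:
            break  # DAWNBench reports wall-clock time to reach the target

if __name__ == "__main__":
    main()
```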

As for the shortcomings of this lineup: for one, we are constrained by our single-node, single-GPU configuration, as well as by the need for regression testing. That rules out multi-day training runs, particularly as on older hardware they could stretch into multi-week runs or never converge at all.

As our first foray into deep learning performance on GPUs, we do not expect this to be an optimal test lineup, and we welcome constructive criticism of our ongoing deep learning investigations.

Software Configurations

The testbed was put in non-graphical mode when running benchmarks, so that the GPU was not additionally rendering a desktop environment. For the implementations of the two DAWNBench CIFAR10 submissions, we utilized later versions and lightly modified them for easier logging/use (models, optimizers, parameters, etc., were untouched). Docker images were pulled from NVIDIA GPU Cloud (NGC).
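The logging modifications were along the lines of the hypothetical wrapper below: recording per-epoch wall-clock time and accuracy to a CSV, with the models, optimizers, and parameters left untouched. This is an illustrative example rather than the exact changes we made.

```python
# Hypothetical example of a lightweight logging wrapper; the actual edits to
# the DAWNBench implementations differ in detail but follow the same idea.
import csv
import time

class EpochLogger:
    """Appends per-epoch wall-clock time and test accuracy to a CSV file."""
    def __init__(self, path):
        self.path = path
        self.start = time.time()
        with open(self.path, "w", newline="") as f:
            csv.writer(f).writerow(["epoch", "elapsed_sec", "test_acc"])

    def log(self, epoch, test_acc):
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([epoch, round(time.time() - self.start, 2), test_acc])

# Usage inside an existing training loop (training code itself unchanged):
#   logger = EpochLogger("titanv_cifar10.csv")
#   for epoch in range(num_epochs):
#       train_one_epoch(...)
#       logger.log(epoch, evaluate(...))
```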
 

Deep Learning Tests Comparison
Test | Software Versions
DeepBench | NVIDIA CUDA 9.1.85, cuDNN 7.1.3, NVIDIA Driver 390.30, AMD ROCm 1.8.118, MIOpen-HIP 1.3.0, rocBLAS 0.13.2.1
NVIDIA Caffe2 Docker ImageNet Training | NGC Docker Image: Caffe2 18.04-py2
DAWNBench Image Classification Submissions | NGC Docker Image: PyTorch 18.04-py3
HPE DLBS | NGC Docker Images: Caffe2 18.04-py2, PyTorch 18.04-py3
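Inside each NGC container, the library versions in play can be sanity-checked against the table above; from the PyTorch image, for instance, a quick check might look like the following (the exact values printed depend on the image).

```python
# Quick sanity check of the GPU software stack inside an NGC PyTorch container.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (as built):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0))
print("Compute capability:", torch.cuda.get_device_capability(0))
```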

Citations

Baidu DeepBench

Baidu Research. DeepBench: Benchmarking Deep Learning operations on different hardware. https://github.com/baidu-research/DeepBench

ImageNet (ILSVRC2012)

Olga Russakovsky and Jia Deng (equal contribution), Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 2015, 115(3), 211-252. https://arxiv.org/abs/1409.0575

Stanford DAWNBench

Cody A. Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. DAWNBench: An End-to-End Deep Learning Benchmark and Competition. NIPS ML Systems Workshop 2017. https://dawn.cs.stanford.edu/benchmark/papers/nips17-dawnbench.pdf

CIFAR10

Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. University of Toronto, 2009.

KervResNet

Chen Wang. https://github.com/wang-chen/KervNets

Basenet (ResNet18 with Modifications)

Ben Johnson. https://github.com/bkj/basenet/

Comments

  • Ryan Smith - Tuesday, July 3, 2018 - link

    To clarify: SXM3 is the name of the socket used for the mezzanine form factor cards for servers. All Titan Vs are PCIe.
  • Drumsticks - Tuesday, July 3, 2018 - link

    Nice review. Will AnandTech be putting forth an effort to cover the ML hardware space in the future? AMD and Intel both seem to have plans here.

    The V100 and Titan V should have well over 100TF according to Nvidia in training and inference, if I remember correctly, but nothing I saw here got close in actuality. Were these benches not designed to hit those numbers, or are those numbers just too optimistic in most scenarios to occur?
  • Ryan Smith - Tuesday, July 3, 2018 - link

    "The V100 and Titan V should have well over 100TF according to Nvidia in training and inference"

    The Titan V only has 75% of the memory bandwidth of the V100. So it's really hard to hit 100TF. Even in our Titan V preview where we ran a pure CUDA-based GEMM benchmark, we only hit 97 TFLOPS. Meanwhile real-world use cases are going to be lower still, as you can only achieve those kinds of high numbers in pure tensor core compute workloads.

    https://www.anandtech.com/show/12170/nvidia-titan-...
  • Nate Oh - Tuesday, July 3, 2018 - link

    To add on to Ryan's comment, 100+ TF is best-case (i.e. synthetic) performance based on peak FMA ops on individual matrix elements, which only comes about when everything perfectly qualifies for tensor core acceleration, no memory bottleneck by reusing tons of register data, etc.
  • remedo - Tuesday, July 3, 2018 - link

    Nate, I wish you had included more TensorFlow/Keras-specific benchmarks, given that the majority of deep learning researchers/developers are now using TensorFlow. Just compare the GitHub stats of TensorFlow vs. other frameworks. Therefore, I feel that this article missed some critical benchmarks in that regard. Still, this is a fascinating article, and thank you for your work. I understand that AnandTech is still new to deep learning benchmarks compared to your decades of experience in CPU/gaming benchmarks. If possible, please do a future update!
  • Nate Oh - Tuesday, July 3, 2018 - link

    Several TensorFlow benchmarks did not make the cut for today :) We were very much interested in using it, because amongst other things it offers global environmental variables to govern tensor core math, and integrates somewhat directly with TensorRT. However, we've been having issues finding and using one that does all the things we need it to do (and also offers different results than just pure throughput), and I've gone so far as trying to rebuild various models/implementations directly in Python (obviously to no avail, as I am ultimately not an ML developer).

    According to people smarter than me (i.e. Chintala, and I'm sure many others), if it's only utilizing standard cuDNN operations then frameworks should perform about the same; if there are significant differences, à la the inaugural version of Deep Learning Frameworks Comparison, it is because the implementation is poorly optimized for TensorFlow or whatever the given framework is. From a purely GPU performance perspective, usage of different frameworks often comes down to framework-specific optimization, and not all reference implementations or benchmark suite tests do what we need them to do out-of-the-box (not to mention third-party implementations). Analyzing the level of TF optimization is developer-level work, and that's beyond the scope of the article. But once benchmark suites hit their stride, that will resolve the issue for us.

    For Keras, I wasn't able to find anything that was reasonably usable by a non-developer, though I could've easily missed something (I'm aware of how it relates to TF, Theano, MXNet, etc). I'm sure that if we replaced PyTorch with Tensorflow implementations, we would get questions on 'Where's PyTorch?' :)

    Not to say your point isn't valid, it is :) We're going to keep on looking into it, rest assured.
  • SirPerro - Thursday, July 5, 2018 - link

    Keras has some nice examples in its GitHub repo to be run with the TensorFlow backend, but for the sake of benchmarking it doesn't offer anything that isn't covered by the pure TensorFlow examples, I guess.
  • BurntMyBacon - Tuesday, July 3, 2018 - link

    I believe the GTX Titan, with a 6Gbps memory clock and a 384-bit memory bus, should have a memory bandwidth of 288GB/sec rather than the listed 228GB/sec. Putting that aside, this is a nice review.
  • Nate Oh - Tuesday, July 3, 2018 - link

    Thanks, fixed
  • Jon Tseng - Tuesday, July 3, 2018 - link

    Don't be silly. All we care about is whether it can run Crysis at 8K.
