The NVIDIA Titan V Deep Learning Deep Dive: It's All About The Tensor Cores
by Nate Oh on July 3, 2018 10:15 AM EST

The Test
For our purposes, we have utilized the full Baidu DeepBench suite on a single GPU, a reference benchmark from NVIDIA's Caffe2 Docker image, submissions for Stanford DAWNBench, and benchmarks from HPE DLBS. Altogether, this offers a low-level look at the Titan V and a measure of real-world performance, as well as a glance at NVIDIA's TensorRT inference optimizer.
Outside of DeepBench, all tests were done in Docker images. Configuring and troubleshooting ROCm/HIP/MIOpen beyond DeepBench was beyond the scope of this article, and so the Radeon RX Vega 64 only features in the DeepBench tests.
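At its core, DeepBench's dense matrix multiply tests simply time GEMM calls at sizes drawn from real networks. The idea can be sketched in a few lines of NumPy (this is only an illustration of the methodology, not DeepBench's actual CUDA/cuDNN code, and the problem size below is arbitrary):

```python
import time
import numpy as np

def time_gemm(m, n, k, dtype=np.float32, repeats=10):
    """Time a dense (m x k) @ (k x n) multiply, DeepBench-style."""
    a = np.random.rand(m, k).astype(dtype)
    b = np.random.rand(k, n).astype(dtype)
    a @ b  # warm-up pass so setup costs don't pollute the timing
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    gflops = 2 * m * n * k / elapsed / 1e9  # 2 FLOPs per multiply-accumulate
    return elapsed, gflops

# Illustrative size; DeepBench draws its m/n/k from real workloads.
secs, gflops = time_gemm(1024, 1024, 1024)
print(f"{secs * 1e3:.2f} ms, {gflops:.1f} GFLOPS")
```

The GPU versions do the same thing against cuBLAS/cuDNN (or rocBLAS/MIOpen), which is what makes DeepBench a library-level rather than framework-level benchmark.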
Overview of Conducted Deep Learning Tests

| Parent Suite/Test | Type | Dataset | Model | Framework | Tensor Core Aware |
|---|---|---|---|---|---|
| DeepBench Dense Matrix Multiplies | Training, Inference | N/A | N/A | N/A | Yes |
| DeepBench Convolutions | Training, Inference | N/A | N/A | N/A | Yes |
| DeepBench Recurrent Layers | Training, Inference | N/A | N/A | N/A | Yes |
| DeepBench Sparse Ops | Inference | N/A | N/A | N/A | N/A |
| NVIDIA Caffe2 Docker ImageNet Training | Training | ILSVRC2012 (ImageNet) | ResNet-50 (CNN) | Caffe2 | Yes |
| HPE DLBS Caffe2 | Training, Inference | ILSVRC2012 (ImageNet) | ResNet-50 | Caffe2 | Yes |
| HPE DLBS TensorRT | Inference | ILSVRC2012 (ImageNet) | ResNet-50 | TensorRT | Yes |
| DAWNBench CIFAR10 Image Classification | Training | CIFAR10 | Custom ResNet34, Custom ResNet18 | PyTorch | No |
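The "Tensor Core Aware" distinction matters because Volta's tensor cores only engage under specific conditions. As a rough rule of thumb for the cuBLAS/cuDNN of this era, a GEMM needed FP16 inputs and dimensions that are multiples of 8 to hit the tensor core path; a toy checker of that simplified rule (the real dispatch logic also depends on layouts, algorithms, and math modes):

```python
def tensor_core_eligible(m, n, k, dtype):
    """Rough eligibility test for Volta tensor core GEMM paths.

    Simplified rule of thumb only: FP16 inputs with m, n, k all
    multiples of 8. Actual library dispatch is more involved.
    """
    return dtype == "fp16" and all(d % 8 == 0 for d in (m, n, k))

print(tensor_core_eligible(4096, 4096, 4096, "fp16"))  # True
print(tensor_core_eligible(4096, 4096, 4096, "fp32"))  # False: wrong precision
print(tensor_core_eligible(100, 100, 100, "fp16"))     # False: dims not multiples of 8
```

A "tensor core aware" benchmark is one whose kernels and data types are set up so these conditions can actually be met.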
For one, we are limited by our single-node, single-GPU configuration, as well as the need for regression testing. In that sense, multi-day training runtimes are not ideal, particularly as on older hardware this might translate into multi-week runtimes and non-convergence.
As our first foray into deep learning performance on GPUs, we do not expect this to be the most optimal test lineup, and we welcome constructive criticism on our ongoing deep learning investigations.
Software Configurations
The testbed was put in non-graphical mode when running benchmarks, so that the GPU was not additionally rendering a desktop environment. For the implementations of the two DAWNBench CIFAR10 submissions, we utilized later versions and lightly modified them for easier logging/use (models, optimizers, parameters, etc., were untouched). Docker images were pulled from NVIDIA GPU Cloud (NGC).
Deep Learning Tests Comparison

| Test | Software Versions |
|---|---|
| DeepBench (NVIDIA) | CUDA 9.1.85, cuDNN 7.1.3, NVIDIA Driver 390.30 |
| DeepBench (AMD) | ROCm 1.8.118, MIOpen-HIP 1.3.0, rocBLAS 0.13.2.1 |
| NVIDIA Caffe2 Docker ImageNet Training | NGC Docker Image: Caffe2 18.04-py2 |
| DAWNBench Image Classification Submissions | NGC Docker Image: PyTorch 18.04-py3 |
| HPE DLBS | NGC Docker Images: Caffe2 18.04-py2, PyTorch 18.04-py3 |
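The NGC images above are launched via `docker run` with the GPU exposed to the container. A hypothetical sketch of how such an invocation is assembled (our actual harness and entry points differ; the `train_cifar10.py` script name here is purely illustrative):

```python
def build_docker_cmd(image, workload, gpus="all"):
    """Assemble a `docker run` invocation for an NGC container.

    Hypothetical helper for illustration; real benchmark harnesses
    add volume mounts, shared-memory flags, and so on.
    """
    return [
        "docker", "run", "--rm",
        "--runtime=nvidia",                      # expose the GPU inside the container
        "-e", f"NVIDIA_VISIBLE_DEVICES={gpus}",  # which GPUs the container may see
        image,
    ] + workload

cmd = build_docker_cmd(
    "nvcr.io/nvidia/pytorch:18.04-py3",
    ["python", "train_cifar10.py"],  # illustrative entry point
)
print(" ".join(cmd))
# To actually launch: subprocess.run(cmd, check=True)
```

Running everything from the same container images is what keeps the framework, CUDA, and cuDNN versions pinned across regression runs.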
Citations
Baidu DeepBench
Baidu Research. DeepBench: Benchmarking Deep Learning operations on different hardware. https://github.com/baidu-research/DeepBench
ImageNet (ILSVRC2012)
Olga Russakovsky and Jia Deng (equal contribution), Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV). 2015, 115(3), 211-252. https://arxiv.org/abs/1409.0575
Stanford DAWNBench
Cody A. Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. DAWNBench: An End-to-End Deep Learning Benchmark and Competition. NIPS ML Systems Workshop 2017. https://dawn.cs.stanford.edu/benchmark/papers/nips17-dawnbench.pdf
CIFAR10
Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. University of Toronto, 2009.
KervResNet
Chen Wang. https://github.com/wang-chen/KervNets
Basenet (ResNet18 with Modifications)
Ben Johnson. https://github.com/bkj/basenet/
mode_13h - Wednesday, July 4, 2018 - link
It's not that hard, really. They're just saying Nvidia made a library (cuDNN) so that the various deep learning frameworks don't each have to hand-optimize code for things like its exotic tensor cores. For their part, AMD has a similar library called MIOpen.
philehidiot - Wednesday, July 4, 2018 - link
Why thank you. That now does help it make a little more sense. The maths does make sense, but the computer science is generally beyond me.

aelizo - Wednesday, July 4, 2018 - link
At that price point, I would have liked to see some comparison to 2x Titan Xp, or even to 3x 1080 Ti's. Last year I saw a comparison between these setups on PyTorch:
https://medium.com/@u39kun/titan-v-vs-1080-ti-head...
mode_13h - Wednesday, July 4, 2018 - link
I'm suspicious that he's not actually using the tensor cores. The V100/GV100 also has double-rate fp16, like the P100/GP100 before it. So, a < 2x improvement from going to 16-bit suggests it might only be using the packed half-precision instructions, rather than the tensor cores. Either that or he's not using batching and is completely limited by memory bottlenecks.
aelizo - Wednesday, July 4, 2018 - link
I suspect something similar; that is why Nate could have done a great job with a similar comparison.

Nate Oh - Monday, July 9, 2018 - link
Unfortunately, we only have 1 Titan Xp, which is actually on loan from TH. This class of devices is (usually) not sampled by NVIDIA, so we could not have pursued what you suggest. We split custody of the Titan V, and that alone was not an insignificant investment.

Additionally, mGPU DL analysis introduces a whole new can of worms. As some may have noticed, I have not mentioned NCCL/MPI, NVLink, Volta mGPU enhancements, All Reduce, etc. It's definitely a topic for further investigation if the demand and resources match.
mode_13h - Tuesday, July 10, 2018 - link
Multi-GPU scaling is getting somewhat esoteric, but perhaps a good topic for future articles. Would be cool to see the effect of NVLink, if you can get access to such a system in the cloud. Maybe Nvidia will give you some sort of "press" access to their cloud?
ballsystemlord - Saturday, July 7, 2018 - link
Here are some spelling/grammar corrections. You make far fewer errors than most of the other authors at AnandTech (if Ian had written this, I would have needed 2 pages for all the corrections :) ). Good job!

"And Volta does has those separate INT32 units."
You mean "have".
And Volta does have those separate INT32 units.
"For our purposes, the tiny image dataset of CIFAR10 works fine as running a single-node on a dataset like ImageNet with non-professional hardware that could be old as Kepler"...
Missing "as".
For our purposes, the tiny image dataset of CIFAR10 works fine as running a single-node on a dataset like ImageNet with non-professional hardware that could be as old as Kepler...
"Moving forward, we're hoping that MLPerf and similar efforts make good headway, so that we can tease out a bit more secrets from GPUs."
Grammar error.
Moving forward, we're hoping that MLPerf and similar efforts make good headway, so that we can tease out a bit more of the secrets from GPUs.
mode_13h - Saturday, July 7, 2018 - link
Yeah, if that's the worst you found, no one would even *suspect* him of being a lolcat.

Vanguarde - Monday, July 9, 2018 - link
I purchased this card to get better frames in Witcher 3 at 4K everything maxed out, heavily modded. Never dips below 60fps and usually near 80-100fps