The NVIDIA Titan V Deep Learning Deep Dive: It's All About The Tensor Cores
by Nate Oh on July 3, 2018 10:15 AM EST
Benchmarking Testbed Setup
Our hardware has been modified for deep learning workloads with a larger SSD and more RAM.
CPU: Intel Core i7-7820X @ 4.3GHz
Motherboard: Gigabyte X299 AORUS Gaming 7
Power Supply: Corsair AX860i
Hard Disk: Intel 1.1TB
Memory: G.Skill TridentZ RGB DDR4-3200, 4 x 16GB (15-15-15-35)
Case: NZXT Phantom 630 Windowed Edition
Monitor: LG 27UD68P-B
Video Cards: NVIDIA Titan V; NVIDIA Titan Xp; NVIDIA GeForce GTX Titan X (Maxwell); AMD Radeon RX Vega 64
Video Drivers: NVIDIA: Release 390.30 for Linux x64; AMD:
OS: Ubuntu 16.04.4 LTS
With deep learning benchmarking requiring some extra hardware, we must give thanks to the following parties, who made this all happen.
Many Thanks To...
Many thanks to our patient colleagues over at Tom's Hardware, for both splitting custody of the Titan V and lending us their Titan Xp and Quadro P6000. None of this would have been possible without their support.
And thank you to G.Skill for providing us with a 64GB set of DDR4 memory suitable for deep learning workloads, no small feat in these DDR4 price-inflated times. G.Skill has been a long-time supporter of AnandTech over the years, supplying memory for testing beyond our CPU and motherboard reviews. We've reported on their high-capacity and high-frequency kits, and every year at Computex G.Skill holds a world overclocking tournament with liquid nitrogen right on the show floor.
Further Reading: AnandTech's Memory Scaling on Haswell Review, with G.Skill DDR3-3000
65 Comments
krazyfrog - Saturday, July 7, 2018 - link
I don't think so. https://www.anandtech.com/show/12170/nvidia-titan-...
mode_13h - Saturday, July 7, 2018 - link
Yeah, I mean why else do you think they built the DGX Station? https://www.nvidia.com/en-us/data-center/dgx-stati...
They claim "AI", but I'm sure it was just an excuse they told their investors.
keg504 - Tuesday, July 3, 2018 - link
"With Volta, there has little detail of anything other than GV100 exists..." (First page)What is this sentence supposed to be saying?
Nate Oh - Tuesday, July 3, 2018 - link
Apologies, was a brain fart :) I've reworked the sentence, but the gist is: GV100 is the only Volta silicon that we know of (outside of an upcoming Drive iGPU).
junky77 - Tuesday, July 3, 2018 - link
Thanks. Any thoughts about Google TPUv2 in comparison?
mode_13h - Tuesday, July 3, 2018 - link
TPUv2 is only 45 TFLOPS/chip. They initially grabbed a lot of attention with a 180 TFLOPS figure, but that turned out to be per-board. I'm not sure if they said how many TFLOPS/W.
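(For scale: a TPUv2 board carries four chips, so the headline 180 TFLOPS per board works out to 180 / 4 = 45 TFLOPS per chip, consistent with the figure above.)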
SirPerro - Thursday, July 5, 2018 - link
TPUv3 was announced in May with 8x the performance of TPUv2, for a total of 1 PF per pod.
tuxRoller - Tuesday, July 3, 2018 - link
Since utilization is, apparently, an issue with these workloads, I'm interested in seeing how radically different architectures fare, such as TPUv2+ and the just-announced IBM AI accelerator (https://spectrum.ieee.org/tech-talk/semiconductors...), which looks like a monster.
MDD1963 - Wednesday, July 4, 2018 - link
4 ordinary people will buy this... by mistake, thinking it is a gamer. :)
philehidiot - Wednesday, July 4, 2018 - link
"With DL researchers and academics successfully using CUDA to train neural network models faster, it was only a matter of time before NVIDIA released their cuDNN library of optimized deep learning primitives, of which there was ample precedent with the HPC-focused BLAS (Basic Linear Algebra Subroutines) and corresponding cuBLAS. So cuDNN abstracted away the need for researchers to create and optimize CUDA code for DL performance. As for AMD’s equivalent to cuDNN, MIOpen was only released last year under the ROCm umbrella, though currently is only publicly enabled in Caffe."Whatever drugs you're on that allow this to make any sense, I need some. Being a layman, I was hoping maybe 1/5th of this might make sense. I'm going back to the porn. </headache>