Compute Performance: Geekbench 4

In the most recent version of its cross-platform Geekbench benchmark suite, Primate Labs added CUDA and OpenCL GPU benchmarks. This isn’t normally a test we turn to for GPUs, but for the Titan V launch it offers us another perspective on performance.

Compute: Geekbench 4 - GPU Compute - Total Score

The results here are interesting. We’re not the only site to run Geekbench 4, and I’ve seen other sites report markedly different scores. But as we haven’t used this benchmark in great depth before, I’m hesitant to read too much into it. What it does show us, at any rate, is that the Titan V is well ahead of the Titan Xp here, more than doubling the latter’s score.

NVIDIA Titan Cards Geekbench 4 Subscores

                                                   Titan V   Titan Xp   GTX Titan X   GTX Titan
  Sobel (GigaPixels per second)                       35.1       24.9          16.5        9.4
  Histogram Equalization (GigaPixels per second)      21.2       9.43          5.58        4.27
  SFFT (GFLOPS)                                        180      136.5            83        60.3
  Gaussian Blur (GigaPixels per second)               23.9       2.67          1.57        1.45
  Face Detection (Msubwindows per second)             21.7       12.4          8.66        4.92
  RAW (GigaPixels per second)                         18.2       10.8          5.63        4.12
  Depth of Field (GigaPixels per second)              3.31       2.74          1.35        0.72
  Particle Physics (FPS)                            83,885     30,344        18,725      18,178

Looking at the subscores, the Titan V handily outperforms the Titan Xp on all of the subtests. However, one test in particular stands out, and is likely responsible for the huge jump in the overall score: Gaussian Blur, where the Titan V is 9x (!) faster than the Titan Xp. I am honestly not convinced that this isn’t a driver or benchmark bug of some sort, but it may very well be that Primate Labs has hit on a specific workload that sees some rather extreme benefits from the Volta architecture.
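For reference, a Gaussian blur itself is a straightforward per-pixel convolution, the sort of embarrassingly parallel workload GPUs excel at. Below is a minimal CUDA sketch of a horizontal 5-tap blur pass; the kernel name, image layout, and fixed filter weights are our own illustrative choices, since Geekbench’s actual implementation isn’t public.

    // Illustrative 5-tap horizontal Gaussian blur pass. Treat this as a
    // sketch of the workload class, not Geekbench's actual kernel.
    __global__ void gaussianBlurRow(const float* in, float* out,
                                    int width, int height)
    {
        // Precomputed 5-tap Gaussian weights (sigma ~ 1.0), summing to ~1
        const float w[5] = { 0.0614f, 0.2448f, 0.3877f, 0.2448f, 0.0614f };

        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height)
            return;

        float sum = 0.0f;
        for (int t = -2; t <= 2; ++t) {
            int xs = min(max(x + t, 0), width - 1);  // clamp at image edges
            sum += w[t + 2] * in[y * width + xs];
        }
        out[y * width + x] = sum;
    }

Every output pixel is independent, so the work maps cleanly onto thousands of concurrent threads, which is why the benchmark reports throughput in gigapixels per second.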

Folding @ Home

Up next we have the official Folding @ Home benchmark, FAHBench. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, giving us a good opportunity to let the Titan V flex its FP64 muscles.

Compute: Folding @ Home, Double and Single Precision

As a CUDA-backed benchmark, this is the first sign that the Titan V’s performance lead over the Titan Xp won’t be consistent, and more specifically, that existing software and possibly even NVIDIA’s drivers aren’t yet well-tuned to take advantage of the Volta architecture.

In this case the Titan V actually loses to the Titan Xp, if only ever so slightly. The scores are close enough that the difference falls within our usual 3% margin of error, which is to say that it’s a wash overall. But it goes to show that the Titan V isn’t going to be an immediate win everywhere for existing software.
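To put the single versus double precision split in more concrete terms, the toy kernels below run the same arithmetic chain in FP32 and FP64. These are our own illustrative kernels rather than FAHBench code; the point is that on a card with a 1:32 FP64 rate like the Titan Xp, the double precision variant has only a fraction of the ALU throughput available to it, while the Titan V’s 1:2 rate keeps the two variants much closer together.

    // The same fused multiply-add chain in FP32 and FP64. On a 1:32 FP64
    // card the double precision version is starved for ALU throughput.
    __global__ void fmaChainF32(const float* x, float* y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float v = x[i];
        for (int k = 0; k < 256; ++k)
            v = fmaf(v, 1.000001f, 0.5f);  // FP32 fused multiply-add
        y[i] = v;
    }

    __global__ void fmaChainF64(const double* x, double* y, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        double v = x[i];
        for (int k = 0; k < 256; ++k)
            v = fma(v, 1.000001, 0.5);     // FP64 fused multiply-add
        y[i] = v;
    }

Timing the two kernels over the same element count exposes a card’s FP64 rate almost directly, since the loop is long enough to keep the math units, rather than memory, as the bottleneck.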

CompuBench

Our final set of compute benchmarks comes from CompuBench 2.0, the latest iteration of Kishonti’s GPU compute benchmark suite and another member of our standard compute lineup. CompuBench offers a wide array of practical compute workloads, and we’ve decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulation.

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow

It’s interesting how the results here are all over the place. The Titan V shows a massive performance improvement in both the N-Body simulation and Optical Flow tests, once again punching well above its weight, yet in the Level Set Segmentation benchmark it’s practically tied with the Titan Xp. The former results put the Titan V in a great light, and conversely make one wonder how the Titan Xp was (apparently) so inefficient. The flip side is that it’s going to be a while before we fully understand why certain workloads benefit so much more from Volta than others.
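As a point of reference for the N-Body result, the heart of an all-pairs N-body simulation is a force accumulation loop built almost entirely out of fused multiply-adds and reciprocal square roots. A stripped-down CUDA sketch, with our own naming and without the usual shared-memory tiling, looks like this:

    // All-pairs gravity: each thread accumulates the acceleration on one
    // body. pos[] packs x, y, z, mass into a float4; eps2 is a softening
    // term that avoids the singularity when two bodies nearly coincide.
    __global__ void nbodyForces(const float4* pos, float3* accel,
                                int n, float eps2)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float4 pi = pos[i];
        float3 a = make_float3(0.0f, 0.0f, 0.0f);

        for (int j = 0; j < n; ++j) {
            float4 pj = pos[j];
            float dx = pj.x - pi.x;
            float dy = pj.y - pi.y;
            float dz = pj.z - pi.z;
            float r2 = dx * dx + dy * dy + dz * dz + eps2;
            float invR = rsqrtf(r2);
            float s = pj.w * invR * invR * invR;  // m_j / r^3
            a.x += dx * s;
            a.y += dy * s;
            a.z += dz * s;
        }
        accel[i] = a;
    }

Workloads like this are nearly pure compute, which may go some way toward explaining why Volta’s architectural improvements show up so strongly in this particular test.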
