Compute Performance: Geekbench 4

In the most recent version of its cross-platform Geekbench benchmark suite, Primate Labs added CUDA and OpenCL GPU benchmarks. This isn’t normally a test we turn to for GPUs, but for the Titan V launch it offers us another perspective on performance.

Compute: Geekbench 4 - GPU Compute - Total Score

The results here are interesting. We’re not the only site to run Geekbench 4, and I’ve seen other sites report very different scores. But as we haven’t used this benchmark in great depth before, I’m hesitant to read too much into it. What it does show, at any rate, is that the Titan V is well ahead of the Titan Xp here, more than doubling the latter’s overall score.

NVIDIA Titan Cards GeekBench 4 Subscores

                                           Titan V   Titan Xp   GTX Titan X   GTX Titan
Sobel (GigaPixels/s)                         35.1      24.9        16.5          9.4
Histogram Equalization (GigaPixels/s)        21.2      9.43        5.58          4.27
SFFT (GFLOPS)                                180       136.5       83            60.3
Gaussian Blur (GigaPixels/s)                 23.9      2.67        1.57          1.45
Face Detection (Msubwindows/s)               21.7      12.4        8.66          4.92
RAW (GigaPixels/s)                           18.2      10.8        5.63          4.12
Depth of Field (GigaPixels/s)                3.31      2.74        1.35          0.72
Particle Physics (FPS)                       83,885    30,344      18,725        18,178

Looking at the subscores, the Titan V handily outperforms the Titan Xp on all of the subtests. However, one test in particular stands out here, and is likely responsible for the huge jump in the overall score: Gaussian Blur, where the Titan V is 9x (!) faster than the Titan Xp. I am honestly not convinced that this isn’t a driver or benchmark bug of some sort, but it may very well be that Primate Labs has hit on a specific workload or scenario that sees some rather extreme benefits from the Volta architecture.
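For context on what this subtest is doing: a Gaussian blur is a separable convolution, so the usual approach is to run one 1-D pass per axis rather than a full 2-D filter, turning an O(r²)-per-pixel stencil into two O(r) passes. The NumPy sketch below is a minimal illustration of that pattern, not Geekbench’s actual implementation; the sigma and radius choices are arbitrary.

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Build a normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x * x) / (2.0 * sigma * sigma))
    return k / k.sum()

def gaussian_blur(image, sigma=1.5):
    """Separable Gaussian blur: one horizontal pass, then one vertical pass.

    Separability is what makes this a bandwidth-bound streaming stencil,
    the kind of workload GPUs accelerate well.
    """
    radius = int(3 * sigma)
    k = gaussian_kernel_1d(sigma, radius)
    padded = np.pad(image.astype(np.float64), radius, mode='edge')
    # Horizontal pass: weighted sum of shifted copies along axis 1
    tmp = np.zeros_like(padded)
    for i, w in enumerate(k):
        tmp += w * np.roll(padded, i - radius, axis=1)
    # Vertical pass: same thing along axis 0
    out = np.zeros_like(padded)
    for i, w in enumerate(k):
        out += w * np.roll(tmp, i - radius, axis=0)
    # Crop the padding back off
    return out[radius:-radius, radius:-radius]
```

On a GPU each output pixel is independent, so both passes map directly onto one thread per pixel, which is why a well-tuned implementation is limited mostly by memory bandwidth rather than arithmetic.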

Folding @ Home

Up next we have the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative, which distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, giving us a good opportunity to let Titan V flex its FP64 muscles.
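As a rough sanity check on why FP64 is where the Titan V should shine on paper, theoretical peak throughput can be estimated from core count, boost clock, and the FP64:FP32 execution ratio. The figures below are the commonly cited specifications for these cards and should be treated as assumptions rather than measured values:

```python
def peak_gflops(cuda_cores, boost_mhz, fp64_ratio=1.0):
    """Theoretical peak GFLOPS: cores * clock * 2 (FMA = 2 FLOPs) * precision ratio."""
    return cuda_cores * boost_mhz * 1e6 * 2 * fp64_ratio / 1e9

# Commonly cited specs (assumptions; check NVIDIA's datasheets):
# Titan V:  5120 cores, ~1455 MHz boost, FP64 at 1/2 rate
# Titan Xp: 3840 cores, ~1582 MHz boost, FP64 at 1/32 rate
titan_v_fp32  = peak_gflops(5120, 1455)                  # ~14.9 TFLOPS
titan_v_fp64  = peak_gflops(5120, 1455, fp64_ratio=1/2)  # ~7.4 TFLOPS
titan_xp_fp32 = peak_gflops(3840, 1582)                  # ~12.1 TFLOPS
titan_xp_fp64 = peak_gflops(3840, 1582, fp64_ratio=1/32) # ~0.38 TFLOPS
```

If those numbers hold, the Titan V’s theoretical FP64 rate is roughly 20x the Titan Xp’s, so double precision results well short of that gap would point at software or driver tuning rather than the hardware.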

Compute: Folding @ Home, Double and Single Precision

A CUDA-backed benchmark, this is the first sign that Titan V’s performance lead over the Titan Xp won’t be consistent, and more specifically that existing software, and possibly even NVIDIA’s drivers, isn’t yet well-tuned to take advantage of the Volta architecture.

In this case the Titan V actually loses to the Titan Xp ever so slightly. The scores are close enough that this is within the usual 3% margin of error, which is to say that it’s a wash overall. But it goes to show that Titan V isn’t going to be an immediate win everywhere for existing software.

CompuBench

Our final set of compute benchmarks is another member of our standard compute benchmark suite: CompuBench 2.0, the latest iteration of Kishonti's GPU compute benchmark suite. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulations.

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow

It’s interesting how the results here are all over the place. The Titan V shows a massive performance improvement in both the N-Body simulation and Optical Flow tests, once again punching well above its weight. The Level Set Segmentation benchmark, on the other hand, is practically a tie with the Titan Xp. Suffice it to say, the first two results put the Titan V in a great light, and conversely make one wonder how the Titan Xp was (apparently) so inefficient at them. The flip side is that it’s going to be a while until we fully understand why certain workloads benefit so much more from Volta than others.
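For reference, an all-pairs N-body step is the canonical O(N²) GPU workload: every body interacts with every other body, so arithmetic dominates memory traffic. Below is a minimal NumPy sketch of one integration step, not CompuBench’s implementation; the timestep and softening constants are arbitrary.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=1e-3, softening=1e-2):
    """One all-pairs gravitational N-body step (G = 1), O(N^2).

    pos, vel: (N, 3) arrays; mass: (N,) array.
    The softening term keeps the i == j self-interaction finite
    (its displacement is zero, so it contributes no force).
    """
    # Pairwise displacement vectors: r[i, j] = pos[j] - pos[i]
    r = pos[None, :, :] - pos[:, None, :]
    dist2 = (r * r).sum(axis=-1) + softening ** 2
    inv_d3 = dist2 ** -1.5
    # Acceleration on body i: sum over j of m_j * r_ij / |r_ij|^3
    acc = (r * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

The N² pairwise interaction matrix is why GPU implementations scale so well here: each body’s acceleration is an independent reduction, and the working set per thread is tiny.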

Comments

  • mode_13h - Wednesday, December 27, 2017 - link

    It's true. All they had to do was pay some grad students to optimize HPC and deep learning software for their GPUs. They could've done that for the price of only a couple marketing persons' salaries.
  • CiccioB - Monday, January 1, 2018 - link

    That would not be a surprise.
AMD’s strategy on software support has always been to let others (usually not professionals) do the job at their own cost. The result is that AMD hardware has never had decent software support outside of gaming (and even that is only because Sony and MS spend money improving gaming performance for their consoles).
  • tipoo - Friday, December 22, 2017 - link

    Sarcasm? There's no Vega built up to this scale.
  • mode_13h - Wednesday, December 27, 2017 - link

    It *is* pretty big and burns about as much power. Yet, it's nowhere near as fast at deep learning. Even with its lower purchase price, it's still not operationally cost-competitive with GV100.

    If you look at its feature set, it was really aimed at HPC and deep learning. In the face of Volta's tensor cores, it kinda fell flat, on the latter front.
  • Keermalec - Wednesday, December 20, 2017 - link

    What about mining benchmarks?
  • tipoo - Friday, December 22, 2017 - link

Would be in line with the CUDA improvements. I.e., two 1080s would be much better at mining. Most of the uplift is in tensor performance, which no algo uses.
  • Cryio - Wednesday, December 20, 2017 - link

    Wait wait wait.

    Crysis Warhead at 4K, Very High with 4 times Supersampling? I think you mean Multisampling.

    I don't think this could manage 4K60 at max settings with 4xSSAA, lol.
  • Ryan Smith - Thursday, December 21, 2017 - link

    "I think you mean Multisampling."

Nope, supersampling. =)
  • mode_13h - Wednesday, December 27, 2017 - link

    Tile rendering FTMFW.
  • Kevin G - Wednesday, December 20, 2017 - link

    "For our full review hopefully we can track down a Quadro GP100"

    YES. The oddity here is that the GP100 might end up being better than the Titan V at gaming due to having 128 ROPs vs. 96 ROPs and even higher memory bandwidth.

Outside of half precision matrix multiplication, the Titan V should be roughly ~43% faster than the GP100 in professional workloads, due mainly to the difference in ALU counts. Boost clocks are a meager 25 MHz difference. Major deviations beyond that 43% difference would be where the architectures differ. There is a chance benchmarks would come in below that 43% mark if memory bandwidth comes into play.
