Compute

Shifting gears, let’s take a look at compute performance on the GTX 1070 Ti.

As the GTX 1070 Ti is another GP104 SKU – and a fairly straightforward one at that – there shouldn't be any surprises here. Relative to the GTX 1070, all of NVIDIA's improvements to this card favor compute, so we should see a decent bump in performance. However that won't change the fact that the GTX 1070 Ti will still come in below the GTX 1080, which has more SMs and a higher average clockspeed (never mind the benefit of its greater memory bandwidth).

Starting us off for our look at compute is Blender, the popular open source 3D modeling and rendering package. To examine Blender performance, we're running BlenchMark, a script and workload set that measures how long it takes to render a scene. BlenchMark uses Blender's internal Cycles render engine, which is GPU accelerated on both NVIDIA (CUDA) and AMD (OpenCL) GPUs.
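
For readers curious what a BlenchMark run boils down to, below is a minimal sketch of the timing loop in Python. The scene filename is a placeholder and BlenchMark itself handles scene setup and GPU device selection, but the core measurement is simply how long a background Cycles render takes.

```python
import subprocess
import time

# Placeholder scene file; BlenchMark ships its own workload set
SCENE = "benchmark_scene.blend"

def time_cycles_render(scene_path: str) -> float:
    """Render one frame with Blender's Cycles engine in background
    mode and return the wall-clock render time in seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["blender", "--background", scene_path,
         "--engine", "CYCLES",    # force the Cycles render engine
         "--render-frame", "1"],  # render a single frame
        check=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"Render time: {time_cycles_render(SCENE):.1f} s")
```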

Compute: Blender 2.79 - BlenchMark

As you might expect, the GTX 1070 Ti's performance shoots ahead of the GTX 1070's due to the additional enabled SMs of this new video card SKU. In fact it technically outpaces the GTX 1080 by a single second, which, although eye-popping, is within our margin of error. What it can't do, however, is overcome AMD's lead here, with the NVIDIA cards trailing the Vega family by quite a bit.

For our second set of compute benchmarks we have CompuBench 2.0, the latest iteration of Kishonti's GPU compute benchmark suite. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulations.
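
To give a sense of what one of these workloads looks like under the hood, here is a minimal all-pairs N-body step in Python/NumPy. It is not CompuBench's actual kernel, but the same O(N²) gravity calculation is what makes the test such a natural fit for massively parallel hardware.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt=0.01, softening=1e-3):
    """One naive all-pairs gravitational N-body step (G = 1 units) --
    the class of workload CompuBench's N-Body sub-test maps onto the GPU."""
    # Pairwise displacement vectors: diff[i, j] = pos[j] - pos[i]
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist2 = np.sum(diff ** 2, axis=-1) + softening ** 2
    inv_dist3 = dist2 ** -1.5
    # Acceleration on body i: sum over j of m_j * diff[i, j] / |r|^3
    acc = np.einsum('ijk,ij,j->ik', diff, inv_dist3, mass)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Toy run with 1,024 bodies (the real sub-test is labeled 1024K)
n = 1024
rng = np.random.default_rng(0)
pos, vel = rng.standard_normal((n, 3)), np.zeros((n, 3))
pos, vel = nbody_step(pos, vel, mass=np.ones(n))
```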

Compute: CompuBench 2.0 - Level Set Segmentation 256
Compute: CompuBench 2.0 - N-Body Simulation 1024K
Compute: CompuBench 2.0 - Optical Flow

In all 3 sub-tests, the GTX 1070 Ti makes modest gains. Overall, performance is now quite close to the GTX 1080, which makes sense given the relatively small gap in on-paper compute performance between the two cards. This also means that at least in the case of these benchmarks, the lack of additional memory bandwidth isn't hurting the GTX 1070 Ti too much. However looking at the broader picture, all of the NVIDIA GP104 cards are trailing AMD's Vega family outside of the more equitable level set segmentation sub-test.

Moving on, our third compute benchmark is the next generation release of FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance.
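
To make the single versus double precision distinction concrete, the sketch below times the same dense math at both precisions. It is a CPU-side stand-in rather than FAHBench itself, but it captures the comparison: on GP104-class consumer GPUs the FP64 path runs at only a small fraction of the FP32 rate, which is why the single precision score is the headline number.

```python
import time
import numpy as np

def time_matmul(dtype, n=2048, iters=10):
    """Time repeated dense matrix multiplies at the given precision."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    start = time.perf_counter()
    for _ in range(iters):
        _ = a @ b
    return time.perf_counter() - start

# On a CPU the FP32/FP64 gap is modest; on consumer GPUs it is far
# larger, since the FP64 units are present in much smaller numbers.
fp32 = time_matmul(np.float32)
fp64 = time_matmul(np.float64)
print(f"FP32: {fp32:.2f} s   FP64: {fp64:.2f} s   slowdown: {fp64 / fp32:.1f}x")
```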

Compute: Folding @ Home Single Precision

The GTX 1080 and GTX 1070 were already fairly close on this benchmark, so there's not a lot of room for the GTX 1070 Ti to stand out. Interestingly this is another case where performance actually slightly exceeds the GTX 1080 – though again within the margin of error – which further affirms just how close the compute performance of the new card is to the GTX 1080.

Our final compute benchmark is Geekbench 4's GPU compute suite. A multi-faceted test suite, Geekbench 4 runs seven different GPU sub-tests, ranging from face detection to FFTs, and then averages out their scores via their geometric mean. As a result Geekbench 4 isn't testing any one workload, but rather is an average of many different basic workloads.
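
The geometric mean step is worth spelling out, since it is what keeps any single sub-test from dominating the composite score. A quick sketch with hypothetical sub-scores:

```python
import math

# Hypothetical sub-test scores; Geekbench 4's GPU compute suite runs
# seven sub-tests (face detection, FFTs, and so on)
subscores = [152000, 98000, 210000, 175000, 120000, 88000, 140000]

# Geometric mean: the nth root of the product, computed in log space
# so that no single outlier sub-test skews the composite.
composite = math.exp(sum(math.log(s) for s in subscores) / len(subscores))
print(f"Composite score: {composite:,.0f}")
```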

Compute: Geekbench 4 - GPU Compute - Total Score

As with our other benchmarks, the GTX 1070 Ti more or less bridges the gap between the GTX 1080 and GTX 1070, falling just a few percent short of the GTX 1080 in performance. This is a test where NVIDIA was already doing better than average, and now with its increased SM count, the GTX 1070 Ti has enough compute performance to surpass AMD's RX Vega 64, something the regular GTX 1070 could not do.

Comments

  • jrs77 - Friday, November 3, 2017 - link

    God those GPU-prices are extreme. All across the board they're some 30% too high imho.

    The last GPU I bought was a GTX660, which cost just a tad under $200. Now the 1060 costs $100 more than that.

    The GTX1070 should cost $350 and the GTX1080 $500. These are the price-brackets that existed just a couple of years ago.
  • Yojimbo - Friday, November 3, 2017 - link

    You're exaggerating the price differences a little. You can get a 1060 6 GB for $260, and the 1080 does start at $500. And if you had a 2 GB 660 that's a bit like the 3 GB 1060, which can be had for $205.

    DDR3 prices per bit are close to where they were 3 years ago, and significantly higher than 5 years ago. I'm not sure how GDDR5 prices compare for then and now, but it's a safe bet that demand outstripping supply in the memory market has affected GDDR5 as well. Then consider that the GTX 1070 has twice the VRAM as the GTX 970.

    I think cryptocurrency was responsible for the high prices of graphics cards earlier this year, but as of now perhaps it's memory prices that are keeping them high. The closeness in price of the 1080 and 1070 and the big difference in price of the 1060 3 GB and 1060 6 GB probably still have to do with cryptocurrency. The 1080 and 1070 are based on the same GPU, and the pricing is affected by the yield and the demand for each card. Cryptominers demand the 1070, but not the 1080. If the 1070 is in demand relative to the 1080 at a higher ratio than the yield ratio of the GPU, it makes economic sense for the price of the 1070 to move upwards. Perhaps this situation is one reason the 1070 Ti only has one SM disabled. A similar situation exists for the 1060 3 GB/6 GB pair, pushing the 6 GB version up in relation to the 3 GB (cryptominers demand the 6 GB, I believe).
  • damianrobertjones - Friday, November 3, 2017 - link

    I NEED to see the minimum frame rate on each of the games. It's pretty much silly to show me 565+ fps when it dips to 42fps.
  • CiccioB - Friday, November 3, 2017 - link

    Tons of graphs are presented for each game and they all measure the timings on average and minimum. Have a look at the small pics below the big graphs.
  • letmepicyou - Friday, November 3, 2017 - link

    Not a big fan of how nVidia is creating facial recognition technology to help usher in the police state...
  • Yojimbo - Saturday, November 4, 2017 - link

    Which world is freer, one where we spend our time thinking of which technologies to ban and how to ban them, then implementing bans and spending our efforts enforcing the ban? Or one where we let technology progress and then work to integrate the technology in a beneficial way (which may require changes in laws after a period of disruption)? I'd argue that the second way is freer. How we use the technology is up to us. One could argue, I guess, that the Amish face no dangers regarding facial recognition policing (although maybe that's one technology they would like, because I think the reason they reject technologies is because they don't want individuals to have the power to be free from the Amish group structure).
  • r13j13r13 - Tuesday, November 7, 2017 - link

    5 fps
  • Gastec - Tuesday, February 20, 2018 - link

    The link for EVGA GTX 1070 Ti FTW2 opens up an Amazon page that in turn leads to the Buying Option FULLFILLED BY AMAZON of only $1,099.99. That way we get to save 0.01 cents. Isn't it Amaz ing?
