Compute

Shifting gears, let’s take a look at compute performance on the GTX 1070 Ti.

As the GTX 1070 Ti is another GP104 SKU – and a fairly straightforward one at that – there shouldn't be any surprises here. Relative to the GTX 1070, all of NVIDIA's performance improvements directly benefit compute performance, so we should see a decent bump here. However, it won't change the fact that the GTX 1070 Ti will still come in below the GTX 1080, which has more SMs and a higher average clockspeed (never mind the benefit of its greater memory bandwidth).
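
As a quick sanity check on that on-paper positioning, the following back-of-the-envelope sketch computes theoretical FP32 throughput from NVIDIA's published CUDA core counts and reference boost clocks (FLOPS = cores × 2 × clock). Real cards boost above the reference clocks, so treat these as ballpark figures rather than measured results.

```python
# Back-of-the-envelope FP32 throughput from NVIDIA's reference specs:
# CUDA cores x 2 FLOPs per clock x reference boost clock. Real cards
# typically boost higher, so these are ballpark figures only.
specs = {
    "GTX 1070":    (1920, 1.683e9),  # CUDA cores, reference boost clock (Hz)
    "GTX 1070 Ti": (2432, 1.683e9),
    "GTX 1080":    (2560, 1.733e9),
}

for card, (cores, clock) in specs.items():
    tflops = cores * 2 * clock / 1e12
    print(f"{card:12s} ~{tflops:.1f} TFLOPS FP32")
```

That works out to roughly 6.5, 8.2, and 8.9 TFLOPS respectively, or about an 8% on-paper deficit for the GTX 1070 Ti versus the GTX 1080.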

Starting us off for our look at compute is Blender, the popular open source 3D modeling and rendering package. To examine Blender performance, we're running BlenchMark, a script and workload set that measures how long it takes to render a scene. BlenchMark uses Blender's internal Cycles render engine, which is GPU accelerated on both NVIDIA (CUDA) and AMD (OpenCL) GPUs.
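
For readers curious how a Cycles render ends up on the GPU in the first place, the snippet below is a minimal sketch using Blender 2.79's Python API – not the actual BlenchMark script – that switches Cycles to CUDA, enables the detected GPUs, and times a render of an assumed scene.blend file.

```python
# Minimal sketch of timing a GPU-accelerated Cycles render via Blender 2.79's
# Python API. This is NOT the BlenchMark script itself; scene.blend is an
# assumed placeholder. Run as:  blender -b scene.blend -P time_cycles.py
import time
import bpy

prefs = bpy.context.user_preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'   # 'OPENCL' for AMD cards
prefs.get_devices()                  # populate the device list
for device in prefs.devices:
    device.use = True                # enable every detected device

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

start = time.time()
bpy.ops.render.render(write_still=False)
print("Render time: %.1f s" % (time.time() - start))
```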

Compute: Blender 2.79 - BlenchMark

As you might expect, the GTX 1070 Ti's performance shoots ahead of the GTX 1070's thanks to the additional enabled SMs of this new video card SKU. In fact, it technically outpaces the GTX 1080 by a single second, which, although eye-catching, is within our margin of error. What it can't do, however, is overcome AMD's lead here, with the NVIDIA cards trailing the Vega family by quite a bit.

For our second set of compute benchmarks we have CompuBench 2.0, the latest iteration of Kishonti's GPU compute benchmark suite. CompuBench offers a wide array of different practical compute workloads, and we’ve decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulations.
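
As a point of reference for what the N-Body workload actually entails, here is a small, illustrative NumPy sketch of a single all-pairs gravity step – the same O(N²) interaction pattern the CompuBench kernel maps onto the GPU. It's a CPU-side toy and not Kishonti's code, but it shows where the arithmetic intensity comes from.

```python
# Toy all-pairs N-body step in NumPy, illustrating the O(N^2) interaction
# pattern behind CompuBench's N-Body test. The real benchmark runs an
# OpenCL/CUDA kernel on the GPU; this is only a CPU-side illustration.
import numpy as np

def nbody_step(pos, vel, mass, dt=1e-3, softening=1e-2):
    # Pairwise displacement vectors: r[i, j] = pos[j] - pos[i]
    r = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]
    dist3 = (np.sum(r * r, axis=-1) + softening**2) ** 1.5
    # Acceleration on each body from every other body (G = 1 units)
    acc = np.sum(
        r * (mass[np.newaxis, :, np.newaxis] / dist3[:, :, np.newaxis]),
        axis=1,
    )
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

n = 1024
rng = np.random.default_rng(0)
pos, vel = rng.standard_normal((n, 3)), np.zeros((n, 3))
pos, vel = nbody_step(pos, vel, mass=np.ones(n))
```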

Compute: CompuBench 2.0 - Level Set Segmentation 256
Compute: CompuBench 2.0 - N-Body Simulation 1024K
Compute: CompuBench 2.0 - Optical Flow

In all three sub-tests, the GTX 1070 Ti makes modest gains. Overall, performance is now quite close to the GTX 1080's, which makes sense given the relatively small gap in on-paper compute performance between the two cards. This also means that, at least in the case of these benchmarks, the lack of additional memory bandwidth isn't hurting the GTX 1070 Ti too much. However, looking at the broader picture, all of the NVIDIA GP104 cards trail AMD's Vega family outside of the more closely matched level set segmentation sub-test.

Moving on, our third compute benchmark is the next generation release of FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance.
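
To illustrate why single precision is the number that matters on consumer cards, the sketch below uses CuPy – an assumption on our part, as FAHBench itself runs OpenMM molecular dynamics kernels rather than matrix multiplies – to compare FP32 and FP64 GEMM throughput on a CUDA GPU. On GP104, FP64 runs at 1/32 the FP32 rate.

```python
# Rough FP32 vs FP64 GEMM throughput comparison with CuPy; illustrative only,
# not FAHBench (which benchmarks OpenMM molecular dynamics kernels).
# Assumes a CUDA-capable GPU and the cupy package.
import cupy as cp

def gemm_tflops(dtype, n=4096, iters=10):
    a = cp.random.random((n, n), dtype=dtype)
    b = cp.random.random((n, n), dtype=dtype)
    cp.matmul(a, b)                      # warm-up
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    for _ in range(iters):
        cp.matmul(a, b)
    end.record()
    end.synchronize()
    seconds = cp.cuda.get_elapsed_time(start, end) / 1000.0
    return (2 * n**3 * iters) / seconds / 1e12

print("FP32: %.2f TFLOPS" % gemm_tflops(cp.float32))
print("FP64: %.2f TFLOPS" % gemm_tflops(cp.float64))  # ~1/32 rate on GP104
```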

Compute: Folding @ Home Single Precision

The GTX 1080 and GTX 1070 were already fairly close in this benchmark, so there's not a lot of room for the GTX 1070 Ti to stand out. Interestingly, this is another case where performance actually slightly exceeds the GTX 1080's – though again within the margin of error – which further affirms just how close the new card's compute performance is to the GTX 1080's.

Our final compute benchmark is Geekbench 4's GPU compute suite. A multi-faceted test suite, Geekbench 4 runs seven different GPU sub-tests, ranging from face detection to FFTs, and then averages out their scores via their geometric mean. As a result Geekbench 4 isn't testing any one workload, but rather is an average of many different basic workloads.
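
For reference, that geometric-mean aggregation looks like the following; the sub-test scores here are made-up placeholders rather than actual Geekbench results.

```python
# Geometric mean of GPU sub-test scores, as Geekbench-style suites aggregate
# them. The scores below are made-up placeholders, not real results.
import math

scores = [52000, 61000, 48000, 75000, 39000, 58000, 66000]  # hypothetical
geo_mean = math.exp(sum(math.log(s) for s in scores) / len(scores))
print(f"Composite score: {geo_mean:.0f}")
```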

Compute: Geekbench 4 - GPU Compute - Total Score

As with our other benchmarks, the GTX 1070 Ti more or less bridges the gap between the GTX 1080 and GTX 1070, falling just a few percent short of the GTX 1080 in performance. This is a test where NVIDIA was already doing better than average, and now, with its increased SM count, the GTX 1070 Ti has enough compute performance to surpass AMD's RX Vega 64, something the regular GTX 1070 could not do.

Comments

  • Nfarce - Friday, November 3, 2017 - link

    "Vega 56 beats 1070 in almost all games."

    It beats the reference GTX 1070, but a factory-overclocked 1070 pulls ahead again, especially when overclocked further on top of the factory overclock. Vega 56 (or 64 for that matter) does not have that kind of overclocking headroom. This has long been an advantage for Nvidia; AMD GPUs have a history of being terrible overclockers. My old EVGA GTX 970 SSC ACX 2.0+ could be overclocked to 980 performance without even touching the voltage, for example.
  • LastQuark - Saturday, November 4, 2017 - link

    The Vega 56 BIOS can be flashed to the Vega 64 BIOS for another 20% speed boost; otherwise Vega can only do about 5% max. With Vega 56's price point over $50 lower, and cheaper FreeSync monitor options with twice the availability of G-Sync ones, Vega 56 still makes a lot of sense for new buyers.
  • B-Real - Thursday, November 2, 2017 - link

    According to this theory, you can't compare the 1070 to the Vega56, as the 1070 is 5 degrees Celsius hotter...
  • damonlynch - Thursday, November 2, 2017 - link

    It should be "nonetheless", not "none the less", in the introduction to the Final Words ;-)
  • jardows2 - Thursday, November 2, 2017 - link

    Good showing. At MSRP, a good argument (from these tests at least) can still be made for Vega 56. Not sure if 1070ti is worth $50.00 more, but you do get a little bit better performance, and most important to me, a lower noise profile at load. Keeping it interesting for sure!
  • CaedenV - Thursday, November 2, 2017 - link

    So.... it is essentially a direct Vega 56 competitor except that it will be available on store shelves for purchase?
    Really hoping that this will cause the normal 1070 prices to drop a bit *fingers crossed*. I picked up a 4k monitor last year and my gaming has been quite limited on it with my GTX 960. A 1070 will fill in quite nicely for now, and next year when the new cards come out I'll pick up a 2nd 1070 for SLi to really make 4K gaming smooth.
  • BrokenCrayons - Thursday, November 2, 2017 - link

    SLI doesn't seem like a good solution these days given tepid support from the GPU manufacturers and very few modern titles that are optimized to take advantage of a 2nd graphics card. You might have a better experience if you set aside the first 1070's cost until next year and then use the funding from both to purchase a 1080 or just hang on to see what happens with Volta since there'll likely be consumer GPUs available sometime in 2018.
  • vladx - Thursday, November 2, 2017 - link

    AMD has no chance at all, a RX Vega 56 in my country is $150 more expensive than the newly released GTX 1070 Ti.
  • Sorjal - Thursday, November 2, 2017 - link

    The easiest check to see the potential mining impact is to toss a Ti and a non-Ti in a PC and test with a miner program. NiceHash Legacy is probably the best for comparative stats. Run its benchmark on both cards on the Precise setting and compare the results. There will be some variance, but it should provide a decent reference. Nvidia cards are typically used for the currencies that are more GPU intensive, with Zcash probably the largest. AMD cards work off their bandwidth and are favored for corresponding currencies like Ethereum and Monero. NiceHash Legacy will test against most of the major ones, including Ethereum, Monero, and Zcash.
    Given the increased GPU performance, it is probably being tested for mining as we speak.
  • Sorjal - Thursday, November 2, 2017 - link

    Remember to overclock and undervolt; energy efficiency is a large factor. 1070s seem to be run at between a 65-70% power limit. A power meter at the outlet may be useful in this case, as you can look at the draw for each test on each card.
