Compute & Synthetics

Shifting gears, we'll take a look at the compute and synthetic benchmark performance of the GTX 1660 Ti.

We'll begin with CompuBench 2.0. The latest iteration of Kishonti's GPU compute benchmark suite offers a wide array of practical compute workloads, and we've decided to focus on level set segmentation, optical flow modeling, and N-Body physics simulations.

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow

On paper, the GTX 1660 Ti looks to provide around 85% of the RTX 2060's compute and shading throughput; in CompuBench, we see it achieving around 82% of the latter's performance.
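As a back-of-the-envelope check on that figure, peak FP32 throughput can be worked out from each card's CUDA core count and reference boost clock. The sketch below assumes the reference specifications; the clocks the cards actually sustain under load will differ somewhat.

```python
def fp32_tflops(cuda_cores, boost_clock_ghz):
    """Peak FP32 rate: 2 FLOPs (one FMA) per CUDA core per clock."""
    return cuda_cores * 2 * boost_clock_ghz / 1000  # GFLOPS -> TFLOPS

gtx_1660_ti = fp32_tflops(1536, 1.770)  # ~5.4 TFLOPS at reference boost
rtx_2060    = fp32_tflops(1920, 1.680)  # ~6.5 TFLOPS at reference boost
print(f"GTX 1660 Ti vs. RTX 2060: {gtx_1660_ti / rtx_2060:.0%}")  # ~84%
```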

Moving on, we'll also look at single precision floating point performance with FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that has work distributed to millions of volunteer computers over the internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the most useful metric for most consumer cards due to their low double precision performance.
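To illustrate why single precision is the headline number here: consumer Turing parts execute FP64 at 1/32 of their FP32 rate, so double precision throughput ends up a small fraction of the cards' peak figures. A quick sketch, reusing the assumed GTX 1660 Ti peak from above:

```python
# Consumer Turing executes FP64 at 1/32 the FP32 rate (illustrative figures)
fp32_peak_tflops = 5.4                    # assumed GTX 1660 Ti FP32 peak
fp64_peak_tflops = fp32_peak_tflops / 32  # ~0.17 TFLOPS
print(f"FP64 peak: ~{fp64_peak_tflops:.2f} TFLOPS")
```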

Compute: Folding @ Home Single Precision

Next is Geekbench 4's GPU compute suite. A multi-faceted test suite, Geekbench 4 runs seven different GPU sub-tests, ranging from face detection to FFTs, and then averages out their scores via the geometric mean. As a result, Geekbench 4 isn't testing any one workload, but rather is an average of many different basic workloads.
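For reference, the composite is a plain geometric mean of the sub-test scores, so no single workload can dominate the result. A minimal sketch of that aggregation, using made-up scores (the values below are purely illustrative, not measured results):

```python
import math

def geometric_mean(scores):
    """n-th root of the product of n scores, computed in log space."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

# Seven hypothetical sub-test scores, for illustration only
subtest_scores = [262_000, 214_000, 96_000, 241_000, 71_000, 233_000, 184_000]
print(f"Composite score: {geometric_mean(subtest_scores):,.0f}")
```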

Compute: Geekbench 4 - GPU Compute - Total Score

In lieu of Blender, which has yet to officially release a stable version with CUDA 10 support, we have the LuxRender-based LuxMark (OpenCL) and V-Ray (OpenCL and CUDA).

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: V-Ray Benchmark 1.0.8

We'll also take a quick look at tessellation performance.

Synthetic: TessMark, Image Set 4, 64x Tessellation

Finally, for looking at texel and pixel fillrate, we have the Beyond3D Test Suite. This suite offers a slew of additional tests – many of which we use behind the scenes or in our earlier architectural analysis – but for now we’ll stick to simple pixel and texel fillrates.

Synthetic: Beyond3D Suite - Pixel Fillrate

Synthetic: Beyond3D Suite - Integer Texture Fillrate (INT8)

Synthetic: Beyond3D Suite - Floating Point Texture Fillrate (FP32)

The practically identical pixel fillrates for the GTX 1660 Ti and RTX 2060 might seem odd at first blush, but it is an entirely expected result, as both GPUs have the same number of ROPs, similar clockspeeds, the same GPC/TPC setup, and similar memory configurations. And being the same generation/architecture, there aren't any changes or improvements to DCC. In the same vein, the RTX 2060 puts up a 25% higher texture fillrate over the GTX 1660 Ti as a consequence of having 25% more TMUs (120 vs. 96).
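A quick sketch of the theoretical rates makes the relationship plain. Both cards tend to sustain similar clocks under load; the 1.86GHz figure below is an assumption for illustration rather than a measured value, but with ROP counts equal, the 120-vs-96 TMU split falls straight out of the arithmetic:

```python
# Theoretical fillrates: units x clock (the clock value is an assumption)
def gpix_per_s(rops, clock_ghz):
    return rops * clock_ghz   # pixels/clock x GHz -> Gpixels/s

def gtex_per_s(tmus, clock_ghz):
    return tmus * clock_ghz   # texels/clock x GHz -> Gtexels/s

clock = 1.86  # GHz, assumed sustained boost for both cards
print(f"GTX 1660 Ti: {gpix_per_s(48, clock):.0f} GPix/s, {gtex_per_s(96, clock):.0f} GTex/s")
print(f"RTX 2060:    {gpix_per_s(48, clock):.0f} GPix/s, {gtex_per_s(120, clock):.0f} GTex/s")
print(f"Texel advantage: {120 / 96:.2f}x")  # 1.25x, i.e. 25% more
```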

 

Comments

  • GreenReaper - Friday, February 22, 2019

    Your point is a lie, though, as you clearly didn't buy it on his recommendation. How can we believe anything you say after that?
  • Questor - Wednesday, March 6, 2019

    Not criticizing, simply adding:
    Several times in the past, honest review sites did comparisons of electricity costs in several places around the States and a few other countries with regard to a brand A video card with a lower power draw than a brand B video card. The idea was to calculate a reasonable overall cost for the extra power draw and whether it was worth worrying about/worth specifically buying the lower-draw card. In each case it was negligible in terms of additional power cost by dollar (or whatever currency). A lot of these great sites have died out or been bought out and are gone now. It's a darned shame. We used to actually get real, useful information about products and what all these values actually mean to the user/customer/consumer. We used to see the same for power supplies too. I haven't seen anything like that in years now. Too bad. It proved how little a lot of the numbers mattered in real life to real bill-paying consumers.
  • Icehawk - Friday, February 22, 2019

    Man, this sucks; clearly this card isn't enough for 4K and I'm not willing to spend on an RTX 2070. Can I hope for a GTX 1170 at like $399? 8GB of RAM, please. I'm not buying a new card until it's $400 or less and has 8GB+. My 970 runs 1440p maxed or close to it in almost all AAA games and even 4K in some (like Overwatch), so I'm not going for a small improvement - after 2 gens I should be looking at close to double the performance, but it sure doesn't look like that's happening currently.
  • eva02langley - Friday, February 22, 2019

    Navi is your only hope.
  • CiccioB - Friday, February 22, 2019

    And I think he will be even more disappointed if he's looking for a 4K card that is able to play *modern* games.

    BTW: No 1170 will be made. This card is the top Turing part without RT+TC, and so it's the best performance you can get at the lowest price. Other Turing parts without RT+TC will be slower (though probably cheaper, but you are not looking for just a cheap card, you are looking for 2x the performance of your current one).
  • catavalon21 - Sunday, February 24, 2019

    I am curious, what are you basing "no 1170" on?
  • CiccioB - Monday, February 25, 2019

    Huh, let's see...
    designing a new chip costs a lot of money, especially when it is not that tiny.
    A chip bigger than this TU116 would just be faster than the 2060, which has a 445mm^2 die that has to be sold with some margin (unlike AMD, which sells Vega GPU+HBM at the price of sliced bread and at the end of the quarter reports gains that are a fraction of nvidia's, but that's good for AMD fans; it is good that the company loses money to make them happy with oversized HW that performs like the mainstream competition).
    So creating a 1170 simply means killing the 2060 (and probably the 2070), defeating the original purpose of those cards as the first lower-end (possibly mainstream) HW capable of RT.

    Unless you are supposing nvidia is going to completely scrap the idea that RT is the future and that its support will be expanded in future generations, there's no valid, rational reason for them to create a new GPU that would replace the cut version of TU106.

    All this without considering that AMD is probably not going to be competitive at 7nm: with that process they will probably only manage to reach Pascal performance, while at 7nm nvidia is going to blow any AMD solution away in terms of absolute performance, performance per W, and performance per mm^2 (despite the addition of the new computational units, which will find more and more use in the future... no one has yet thought of using tensor cores for advanced AI, for example).

    So, no, there will be no 1170 unless it is a further cut of TU106 that in the end performs just like TU116 and is merely a recycle of broken silicon.

    Now, let me hear what makes you believe that a 1170 will be created.
  • catavalon21 - Tuesday, February 26, 2019

    I do not know if they will create an 1170 or not; to be fair, I am surprised they even created the 1160. You have a very good point; upon reflection, it is quite likely such a product would impact RTX sales. I was just curious what had you thinking that way.

    Thank you for the response.
  • Oxford Guy - Saturday, February 23, 2019

    Our only hope is capitalism.

    That's not going to happen, though.

    Instead, we get duopoly/quasi-monopoly.
  • douglashowitzer - Friday, February 22, 2019

    Hey not sure if you're opposed to used GPUs... but you can get a used, overclocked, 3rd party GTX 1080 with 8GB vram on eBay for about $365-$400. In my opinion it's an amazing deal and I can tell you from experience that it would satisfy the performance jump that you're looking for. It's actually the exact situation I was in back in June of 2016 when I upgraded my 970 to a 1080. Being a proper geek, I maintained a spreadsheet of my benchmark performance improvements and the LOWEST improvement was an 80% gain. The highest was a 122% gain in Rise of the Tomb Raider (likely VRAM related but impressive nonetheless). Honestly I don't believe I've ever experienced a performance improvement that felt so "game changing" as when I went from my 970 to the 1080. Maybe waaay back when I upgraded my AMD 6950 to a GTX 670 :). If "used" doesn't turn you off, the upgrade of your dreams is waiting for you. Good luck to you!
