Compute & Synthetics

Shifting gears, we'll look at the compute and synthetic aspects of the GTX 1660 Ti.

We begin with CompuBench 2.0, the latest iteration of Kishonti's GPU compute benchmark suite. It offers a wide array of practical compute workloads, and we've opted to focus on level set segmentation, optical flow modeling, and N-Body physics simulations.

Compute: CompuBench 2.0 - Level Set Segmentation 256

Compute: CompuBench 2.0 - N-Body Simulation 1024K

Compute: CompuBench 2.0 - Optical Flow

On paper, the GTX 1660 Ti should provide around 85% of the RTX 2060's compute and shading throughput; in CompuBench, it achieves around 82% of the latter's performance.
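That on-paper figure falls straight out of shader counts and clockspeeds. A quick back-of-the-envelope sketch, using Nvidia's reference boost clocks (real boost behavior varies card to card, so treat the clocks as nominal):

```python
# Theoretical FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
# Core counts and boost clocks are Nvidia's reference specifications.
def fp32_gflops(cuda_cores, boost_mhz):
    return 2 * cuda_cores * boost_mhz / 1000

gtx_1660_ti = fp32_gflops(1536, 1770)  # ~5437 GFLOPS
rtx_2060 = fp32_gflops(1920, 1680)     # ~6451 GFLOPS
print(f"{gtx_1660_ti / rtx_2060:.0%}")  # ~84%
```

Which is to say, the narrower TU116 makes up most of the core-count deficit with a higher reference boost clock.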

Moving on, we'll also look at single precision floating point performance with FAHBench, the official Folding @ Home benchmark. Folding @ Home is the popular Stanford-backed research and distributed computing initiative that distributes work to millions of volunteer computers over the Internet, each of which is responsible for a tiny slice of a protein folding simulation. FAHBench can test both single precision and double precision floating point performance, with single precision being the more useful metric for most consumer cards, given their low double precision throughput.
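To illustrate why single precision is the metric that matters here: consumer GeForce parts execute FP64 at a small fraction of their FP32 rate (1/32 on Turing-based cards). A minimal sketch; the ~5437 GFLOPS FP32 figure is the GTX 1660 Ti's theoretical reference-spec throughput:

```python
# Consumer Turing cards run FP64 at 1/32 of their FP32 rate, so double
# precision throughput is a rounding error next to single precision.
def fp64_gflops(fp32_gflops, fp64_rate=1 / 32):
    """Theoretical FP64 throughput given FP32 throughput and the FP64:FP32 rate."""
    return fp32_gflops * fp64_rate

# GTX 1660 Ti: ~5437 GFLOPS FP32 -> only ~170 GFLOPS FP64
print(round(fp64_gflops(5437), 1))
```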

Compute: Folding @ Home Single Precision

Next is Geekbench 4's GPU compute suite. A multi-faceted test suite, Geekbench 4 runs seven different GPU sub-tests, ranging from face detection to FFTs, and then combines their scores via a geometric mean. As a result, Geekbench 4 isn't testing any one workload, but rather averages many different basic workloads.
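Because the composite is a geometric mean rather than an arithmetic one, no single outlier sub-test can dominate the total. A minimal sketch of the averaging step; the sub-scores below are made up for illustration, not real Geekbench results:

```python
import math

def geometric_mean(scores):
    """Nth root of the product of N scores."""
    return math.prod(scores) ** (1 / len(scores))

# Illustrative sub-test scores only -- not real Geekbench numbers.
subtests = [220_000, 180_000, 350_000, 90_000, 410_000, 150_000, 260_000]
print(round(geometric_mean(subtests)))
```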

Compute: Geekbench 4 - GPU Compute - Total Score

In lieu of Blender, which has yet to officially release a stable version with CUDA 10 support, we have the LuxRender-based LuxMark (OpenCL) and V-Ray (OpenCL and CUDA).

Compute/ProViz: LuxMark 3.1 - LuxBall and Hotel

Compute/ProViz: V-Ray Benchmark 1.0.8

We'll also take a quick look at tessellation performance.

Synthetic: TessMark, Image Set 4, 64x Tessellation

Finally, for looking at texel and pixel fillrate, we have the Beyond3D Test Suite. This test offers a slew of additional tests – many of which we use behind the scenes or in our earlier architectural analysis – but for now we’ll stick to simple pixel and texel fillrates.

Synthetic: Beyond3D Suite - Pixel Fillrate

Synthetic: Beyond3D Suite - Integer Texture Fillrate (INT8)

Synthetic: Beyond3D Suite - Floating Point Texture Fillrate (FP32)

The practically identical pixel fillrates for the GTX 1660 Ti and RTX 2060 might seem odd at first blush, but it's an entirely expected result: both GPUs have the same number of ROPs, similar clockspeeds, the same GPC/TPC setup, and similar memory configurations. And being the same generation/architecture, there are no changes or improvements to delta color compression (DCC). In the same vein, the RTX 2060 puts up a 25% higher texture fillrate than the GTX 1660 Ti as a consequence of having 25% more TMUs (120 vs. 96).
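The scaling described above falls straight out of the unit counts. As a sketch (the unit counts are the published specs; the shared clock is illustrative, since both cards boost to similar frequencies in practice):

```python
def fillrate(units, clock_mhz):
    """Theoretical fillrate in G-units/s: units retired per clock x clock."""
    return units * clock_mhz / 1000

clock = 1800  # illustrative shared boost clock (MHz)

# Same 48 ROPs at similar clocks -> practically identical pixel fillrate
assert fillrate(48, clock) == fillrate(48, clock)

# 120 vs. 96 TMUs -> a 25% texel fillrate advantage for the RTX 2060
ratio = fillrate(120, clock) / fillrate(96, clock)
print(f"{ratio - 1:.0%}")  # 25%
```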

 

157 Comments
  • C'DaleRider - Friday, February 22, 2019 - link

    Good read. Thx.
  • Opencg - Saturday, February 23, 2019 - link

    gtx at rtx prices. not really a fan of that graph at the end. I mean 1080 ti were about 500 about half a year ago. the perf/dollar is surely less than -7% more like -30%. as well due to the 36% perf gain quoted being inflated as hell. double the price and +20% perf is not -7% anand
  • eddman - Saturday, February 23, 2019 - link

    They are comparing them based on their launch MSRP, which is fair.

    Actually, it seems they used the cut price of $500 for 1080 instead of the $600 launch MSRP. The perf/$ increases by ~15% if we use the latter, although it's still a pathetic generational improvement, considering 1080's perf/$ was ~55% better than 980.
  • close - Saturday, February 23, 2019 - link

    In all fairness when comparing products from 2 different generations that are both still on the market you should compare on both launch price and current price. The purpose is to know which is the better choice these days. To know the historical launch prices and trends between generation is good for conformity but very few readers care about it for more than curiosity and theoretical comparisons.
  • jjj - Friday, February 22, 2019 - link

    The 1060 has been in retail for 2.5 years so the perf gains offered here a lot less than what both Nvidia and AMD need to offer.
    They are pushing prices up and up but that's not a long term strategy.

    Then again, Nvidia doesn't care much about this market, they are shifting to server, auto and cloud gaming. In 5 years from now, they can afford to sell nothing in PC, unlike both AMD and Intel.
  • jjj - Friday, February 22, 2019 - link

    A small correction here, there is no perf gain here at all, in terms of perf per dollar.
  • D. Lister - Friday, February 22, 2019 - link

    Did you actually read the article before commenting on it? It is right there, on the last page - 21% increase in performance/dollar, which added with the very decent gain in performance/watt would suggest the company is anything but just sitting on their laurels. Unlike another company, which has been brute-forcing an architecture that is more than a decade old, and squandering their intellectual resources to design budget chips for consoles. :P
  • shabby - Friday, February 22, 2019 - link

    We didn't wait 2.5 years for such a meager performance increase. Architecture performance increases were much higher before Turing, Nvidia is milking us, can't you see?
  • Smell This - Friday, February 22, 2019 - link

    DING !
    I know it's my own bias, but branding looks like a typical, on-going 'bait-and-switch' scam whereby nVidia moves their goal posts by whim -- and adds yet another $100 in retail price (for the last 2 generations?). For those fans who spent beeg-buckeroos on a GTX 1070 (or even a 1060 6GB), it's The Way You Meant to Be 'Ewed-Scrayed.
  • haukionkannel - Saturday, February 23, 2019 - link

    Do you remember how much cpus used to improve From generation to generation... 3-5%...
    That was when there was no competition. Now when there is competition we see 15% increase between generations or less. Well come to the future of GPUs. 3-5 % of increase between generations if there is not competition. Maybe 15 or less if there is competition. The good point is that you can keep the same gpu 6 year and you have no need to upgrade and lose money.
