Compute & Tessellation

Moving on from our look at gaming performance, we have our customary look at compute performance, bundled with a look at theoretical tessellation performance. Unlike our gaming benchmarks, where the architectural differences between GF114 and GF110 are largely irrelevant, those differences can become much more important in a compute-bound situation, depending on just how much ILP can be extracted from the workloads the GTX 560 Ti is given.
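
To make the ILP point concrete, here's a CPU-side sketch (plain Python, not shader code; function names are our own) of the difference between a serial dependency chain, which offers no ILP, and independent operations, which a superscalar design like GF114's can overlap:

```python
# Illustration only: the same kind of arithmetic expressed two ways.
# A superscalar scheduler can dual-issue independent operations,
# but a serial dependency chain forces one operation per step.

def dependent(x):
    # Each step needs the previous result: no ILP to extract.
    acc = x
    for _ in range(4):
        acc = acc * 2 + 1
    return acc

def independent(x):
    # Four independent products: a superscalar unit could overlap them
    # before the final sum.
    p0, p1, p2, p3 = x * 2, x * 3, x * 4, x * 5
    return p0 + p1 + p2 + p3

print(dependent(1), independent(1))
```

Whether code looks more like the first function or the second is exactly what determines how much of GF114's extra shader hardware actually gets used.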

Our first compute benchmark comes from Civilization V, which uses DirectCompute to decompress textures on the fly. Civ V includes a sub-benchmark that exclusively tests the speed of its texture decompression algorithm by repeatedly decompressing the textures required for one of the game’s leader scenes.

Under our Civilization V compute benchmark we have a couple of different things going on even when we just look at the NVIDIA cards. Compared to the GTX 460 1GB, the GTX 560 Ti enjoys a 31% performance advantage; this is less than the theoretical maximum of 39%, but not far off from the performance advantages we’ve seen in most games. Meanwhile the GTX 470 is practically tied with the GTX 560 Ti even though on paper the GTX 560 Ti has around a 15% theoretical performance advantage. This is a solid case of the limitations of ILP coming into play, as the GTX 560 Ti clearly isn’t keeping its superscalar shaders fully fed. Or to put it another way, it’s an example of why NVIDIA isn’t using a superscalar design in their Tesla products.
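
The theoretical figures above fall straight out of core counts and shader clocks. As a quick sanity check (reference specifications assumed; superscalar dispatch caveats ignored):

```python
# Back-of-the-envelope check of the theoretical advantages quoted above,
# using each card's CUDA core count and shader clock (reference specs).

cards = {
    "GTX 460 1GB": (336, 1350),   # cores, shader clock in MHz
    "GTX 470":     (448, 1215),
    "GTX 560 Ti":  (384, 1644),
}

def gflops(cores, mhz):
    # 2 FLOPs per core per cycle (FMA), single precision
    return 2 * cores * mhz / 1000

ti = gflops(*cards["GTX 560 Ti"])
for name in ("GTX 460 1GB", "GTX 470"):
    ratio = ti / gflops(*cards[name]) - 1
    print(f"GTX 560 Ti vs {name}: +{ratio:.0%}")
```

This works out to roughly +39% over the GTX 460 1GB and +16% over the GTX 470, in line with the figures in the text.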

Meanwhile this benchmark has always favored NVIDIA’s architectures, so in comparison to AMD’s cards there’s little to be surprised about. The GTX 560 Ti is well in the lead, with the only AMD card it can’t pass being the dual-GPU 5970.

Our second GPU compute benchmark is SmallLuxGPU, the GPU ray tracing branch of the open source LuxRender renderer. While it’s still in beta, SmallLuxGPU recently hit a milestone by implementing a complete ray tracing engine in OpenCL, allowing them to fully offload the process to the GPU. It’s this ray tracing engine we’re testing.
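
As a rough illustration of the work such an engine offloads, here's a minimal CPU-side sketch (our own code, not SmallLuxGPU's) of the ray-sphere intersection test at the heart of any ray tracer, which an OpenCL implementation runs millions of times per second across the GPU:

```python
import math

# Minimal ray-sphere intersection: does a ray hit a sphere,
# and if so, at what distance along the ray?

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t >= 0.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c          # direction assumed normalized (a = 1)
    if disc < 0:
        return None               # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2
    return t if t >= 0 else None

# A ray from the origin down +z toward a unit sphere centered at (0, 0, 5)
# hits at distance 4 (one radius short of the center):
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```

Each ray is independent of every other, which is why this workload maps so well to a GPU and why SLG reports performance in rays/second.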

SmallLuxGPU is the other test in our suite where NVIDIA’s drivers significantly revised our numbers. Where this test previously favored raw theoretical performance, giving the vector-based Radeons an advantage, NVIDIA has now shot well ahead. Given the rough state of both AMD’s and NVIDIA’s OpenCL drivers, we’re attributing this to bug fixes or possibly enhancements in NVIDIA’s OpenCL driver, with the former seeming particularly likely. NVIDIA is not alone when it comes to driver fixes, however: AMD has seen a similar uptick on the newly released 6900 series. It’s not nearly the leap NVIDIA saw, but it’s good for around 25%-30% more rays/second under SLG. This appears to be attributable to further refinement of AMD’s VLIW4 shader compiler, which as we have previously mentioned stands to gain a good deal of performance as AMD continues optimizing it.

So where does SLG stack up after the latest driver enhancements? Having rocketed to the top, NVIDIA now easily dominates this benchmark. The GTX 560 Ti is slightly ahead of the 6970, never mind the 6950 1GB, over which it has a 33% lead. Rather than a benchmark that showed the advantage of having lots of theoretical compute performance, this is now a benchmark that favors NVIDIA’s compute-oriented architecture.

Our final compute benchmark is a Folding @ Home benchmark. Given NVIDIA’s focus on compute for Fermi, cards such as the GTX 560 Ti can be particularly interesting for distributed computing enthusiasts, who are usually looking for a compute card first and a gaming card second.

Against the senior members of the GTX 500 series, and even the GTX 480, the GTX 560 Ti is still well behind, but at the same time Folding @ Home does not appear to significantly penalize the GTX 560 Ti’s superscalar architecture.

At the other end of the spectrum from GPU computing performance is GPU tessellation performance, used exclusively for graphical purposes. With Fermi NVIDIA bet heavily on tessellation, and as a result they do very well at very high tessellation factors. With 2 GPCs the GTX 560 Ti can retire 2 triangles/clock, the same rate as the Radeon HD 6900 series, so this should be a good opportunity to look at theoretical architectural performance versus actual performance.
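
Peak geometry throughput is simple arithmetic on those figures; assuming reference core clocks (822MHz for the GTX 560 Ti, 880MHz for the 6970), a quick sketch:

```python
# Peak geometry throughput is triangles-per-clock times core clock.
# Reference clocks assumed below.

def gtris_per_sec(tris_per_clock, core_mhz):
    return tris_per_clock * core_mhz / 1000  # billions of triangles/sec

print(f"GTX 560 Ti: {gtris_per_sec(2, 822):.2f} Gtri/s")  # 2 GPCs
print(f"HD 6970:    {gtris_per_sec(2, 880):.2f} Gtri/s")  # dual graphics engines
```

On paper the two are within a few percent of each other, which is what makes the actual results interesting.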

Against the AMD 5800 and 6800 series, the GTX 560 Ti enjoys a solid advantage, as it’s able to retire twice as many triangles per clock as either architecture. And while it falls behind both the GTX 480 and GTX 580, the otherwise faster Radeon HD 6970 stays close at times – at moderate tessellation the 6970 has quite the lead, but the two are neck-and-neck at extreme tessellation, where triangle throughput and the ability to efficiently handle high tessellation factors count for everything. Since Unigine’s Heaven is a synthetic benchmark at the moment (the DX11 engine isn’t currently used in any games), we’re less concerned with performance relative to AMD’s cards and more concerned with performance relative to the other NVIDIA cards.

Microsoft’s Detail Tessellation sample program showcases NVIDIA’s bet on tessellation performance even more clearly: NVIDIA needs very high tessellation factors to shine compared to AMD’s cards. Meanwhile, against the GTX 460 1GB the gains are a bit more muted; even though this is almost strictly a theoretical test, the GTX 560 Ti only gains 30% on the GTX 460. Ultimately, while the additional SM unlocks another tessellator on NVIDIA’s hardware, it does not unlock a higher triangle throughput rate, which is dictated by the GPCs.
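
A back-of-the-envelope calculation (SM counts and reference core clocks assumed) shows why the gain is capped:

```python
# Why the extra SM doesn't buy a proportional tessellation gain: triangle
# setup is per-GPC, so only the clock bump raises peak triangle throughput.
# SM counts and reference core clocks assumed below.

gtx460 = {"sms": 7, "gpcs": 2, "core_mhz": 675}
gtx560 = {"sms": 8, "gpcs": 2, "core_mhz": 822}

tess_gain = gtx560["sms"] * gtx560["core_mhz"] / (gtx460["sms"] * gtx460["core_mhz"]) - 1
tri_gain  = gtx560["gpcs"] * gtx560["core_mhz"] / (gtx460["gpcs"] * gtx460["core_mhz"]) - 1

print(f"tessellator throughput: +{tess_gain:.0%}")  # one more SM, higher clock
print(f"triangle throughput:    +{tri_gain:.0%}")   # clock bump only
```

The observed 30% gain lands between the roughly +39% tessellator figure and the roughly +22% setup figure, consistent with the test being partly setup-limited.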

87 Comments

  • MeanBruce - Wednesday, January 26, 2011

    Wonder if you can tune the fans separately in SmartDoctor? Damn cool Asus!
  • Burticus - Tuesday, January 25, 2011

    I picked up a GTX460 768mb for $150 last summer. I assume the GTX560 will be down to that price point by this coming summer. I am very happy with the GTX460 except in Civ 5 and I think I am CPU limited there (Phenom II x3).

    So when this thing hits $150 I will sell my GTX460 on fleabay for $100 and upgrade, I guess. I wish I could buy one and stick it in my 360....
  • JimmiG - Tuesday, January 25, 2011

    Looks like the video card market is picking up the pace again, which is both a good thing and not. I guess my GTX460 1GB from only 6 months ago now officially sucks and is only usable as a doorstop...a crippled, half-broken, semi-functional video card such as it is.

    On the other hand, it's great that technology is moving so fast. It just means that instead of buying a new video card and keeping it for 1.5 - 2 years, you once again have to upgrade every couple of months if you want to stay on top.

    Also, regardless of the marketing, anything below a 570 *sucks* for gaming above 1680x1050. Look at the results for Stalker, Metro 2033 and Warhead. You need to drop to 1680x1050 before the 560 Ti manages near 60 FPS, which is the minimum for smooth gameplay.
  • Soldier1969 - Tuesday, January 25, 2011

    Anything below $400 is a poor man's card, period. I wouldn't stoop to that level of card; running a 2560 x 1600 DisplayPort display at max settings, there is no substitute!
  • omelet - Wednesday, January 26, 2011

    Congratulations.
  • silverblue - Thursday, January 27, 2011

    I'm sorry to say, but knowing the 560 Ti is going to be a weaker and hence far cheaper part than the 580, why did you give it any thought? :)
  • otakuon - Tuesday, January 25, 2011

    The GTX 460 is still the best card in nVidia's lineup with regards to price for performance. The 560 is just nVidia's standard interim update to keep itself relevant. I see no need for current GTX 460 owners to rush out and buy this card (or anyone who wants to replace a Fermi card, for that matter) when the 600 series will be out this summer and will most likely have a new architecture.
  • DeerDance - Tuesday, January 25, 2011

    The 6850 beats them in price/performance; they start at $150 at Newegg.
  • DeerDance - Tuesday, January 25, 2011

    I was kinda surprised by the final thoughts.
    Out of 34 FPS charts in games, the 6950 won 17, the GTX 560 won 12, and 5 were within 1 frame of each other (4 of those going to the 6950), so I wonder why the final thoughts gave the edge to the GTX 560.
  • omelet - Wednesday, January 26, 2011

    He may have just done an average of the percentage differences between the two. So if, for instance, the 560 won by 50% in one test and lost by 10% in each of two tests, that method would call the 560 10% faster, even though it was slower in 2/3 of the tests.

    Don't get me wrong, I don't think the conclusion is accurate (I think 6950 looks more powerful overall from the benchmarks), I'm just saying how I think he might have come to his conclusion.
