Tessellation & PhysX

We’ll kick off our in-depth look at the performance of the GTX400 series with Tessellation and PhysX. These are two of the biggest features that NVIDIA is pushing with the GTX400 series, with tessellation in particular being the major beneficiary of NVIDIA’s PolyMorph Engine strategy.

As we covered in our GF100 Recap, NVIDIA seeks to separate themselves from AMD in spite of the rigid feature set imposed by DirectX 11. Tessellation is one of the ways they intend to do that, as the DirectX 11 standard leaves them plenty of freedom with respect to tessellation performance. To accomplish this goal, NVIDIA needs significantly better tessellation performance, which has led to their having 14/15/16 tessellators by virtue of having that many PolyMorph Engines. With enough tessellation performance, NVIDIA can create an obvious image quality improvement over AMD, all the while requiring very little effort on the part of developers to take advantage of it.

All things considered, NVIDIA’s claim of having superior tessellation performance is one of the easiest claims to buy, but all the same we’ve gone ahead and attempted to confirm it.

Our first tessellation test is the newly released Unigine Heaven 2.0 benchmark. Version 2.0 adds support for multiple levels of tessellation (1.0 having earned a reputation for using extreme amounts of tessellation), which allows us to look at tessellation performance by varying tessellation levels. If the GTX 480’s tessellation capabilities are several times faster than the Radeon 5870’s, as NVIDIA claims, then it should better handle the increased tessellation levels.

Since Heaven is largely a synthetic benchmark at the moment (the DX11 engine isn’t currently used in any games), we’ll be focusing on the relative performance of each card to itself, in keeping with our editorial policy of avoiding synthetic GPU tests when possible.


Heaven: Moderate & Extreme Tessellation

Heaven has 4 tessellation levels: off, moderate, normal, extreme. For our test we’re using the moderate and extreme modes, comparing the performance of extreme as a percentage of moderate performance.
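The comparison boils down to a simple retention ratio. A minimal sketch of that arithmetic (the framerate figures below are illustrative placeholders, not our measured results):

```python
def retention(moderate_fps: float, extreme_fps: float) -> float:
    """Percentage of moderate-tessellation performance kept at extreme."""
    return extreme_fps / moderate_fps * 100.0

# Illustrative figures only: a card dropping from 60 fps at moderate
# tessellation to 47.4 fps at extreme keeps ~79% of its performance.
kept = retention(60.0, 47.4)
```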

Starting with averages, the GTX 480 keeps 79% of its performance moving from moderate to extreme. On the Radeon 5870 however, the performance drop-off is much more severe, losing 42% of its performance to bring it down to 58%.

The minimum framerates are even more telling. The GTX 480 minimum framerates drop by 26% when switching to extreme tessellation. The Radeon 5870 is much worse off here, bringing in minimum framerates 69% lower when using extreme tessellation. From these numbers it’s readily apparent that the GTX 480 is much more capable of dealing with very high tessellation levels than the Radeon 5870 is.

Our second tessellation test is similar in nature, this time taken from one of Microsoft’s DX11 sample programs: Detail Tessellation. Detail Tessellation is a simple scene where tessellation plus displacement mapping is used to turn a flat rock texture into a simulated field of rocks, with tessellation creating the necessary geometry. Here we measure the average framerate at two different tessellation factors (7 and 11) and compare the framerate at the higher factor to that at the lower factor.
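To illustrate the idea, here is a toy CPU-side sketch of what the sample does on the GPU: subdivide a flat patch by a tessellation factor, then displace each new vertex by a height sampled from a height map. The height function below is a made-up stand-in for the sample's rock texture:

```python
def tessellate_edge(x0, x1, factor):
    """Split the edge [x0, x1] into `factor` segments, returning the vertices."""
    step = (x1 - x0) / factor
    return [x0 + i * step for i in range(factor + 1)]

def displace(xs, height_at):
    """Lift each tessellated vertex by a sampled height (displacement mapping)."""
    return [(x, height_at(x)) for x in xs]

def toy_height(x):
    """Stand-in for the rock height-map texture."""
    return 0.25 * abs(1.0 - (x * 4.0) % 2.0)

# Higher tessellation factors mean more vertices per edge, hence more
# geometric detail and more work for the tessellator.
rocks = displace(tessellate_edge(0.0, 1.0, 11), toy_height)
```

The point of the benchmark is that moving from factor 7 to factor 11 multiplies the amount of generated geometry, so the card with the stronger tessellator loses less performance.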

Looking at just the averages (the framerate is rather solid) we see that the GTX 480 retains 65% of its performance moving from factor 7 to factor 11. The Radeon 5870 on the other hand only retains 38% of its performance. Just as we saw in Unigine, the GTX 480 takes a much lighter performance hit from higher tessellation factors than the Radeon 5870 does, driving home the point that the GTX 480 has a much more powerful tessellator.

With the results of these tests, there’s no reason to doubt NVIDIA’s claims about GF100’s tessellation abilities. All the data we have points to GF100/GTX 480 being much more powerful than the Radeon 5000 series when it comes to tessellation.

But with that said, NVIDIA having a more powerful tessellator doesn’t mean much on its own. Tessellation is wholly dependent on game developers to make use of it and to empower users to adjust the tessellation levels. Currently every DX11 game using tessellation uses a fixed amount of it, so NVIDIA’s extra tessellation abilities are going unused. This doesn’t mean that tessellation will always be used like this, but it means things have to change, and counting on change is a risky thing.

NVIDIA’s superior tessellation abilities will require that developers offer a variable degree of tessellation in order to fully utilize their tessellation hardware, and that means NVIDIA needs to convince developers to do the extra work to implement this. At this point there’s no way for us to tell how things will go: NVIDIA’s superior tessellation abilities could be the next big thing that separates them from AMD like they’re shooting for, or it could go the way of DirectX 10.1, held back by weaker hardware. Without some better sense of direction on the future use of tessellation, we can’t make any recommendations based on NVIDIA’s greater tessellation performance.

Moving on we have PhysX, NVIDIA’s in-house physics simulation middleware. After picking up PhysX and its developer AGEIA in 2008, NVIDIA re-implemented PhysX hardware acceleration as a CUDA application, allowing their GPUs to run physics simulations in hardware. NVIDIA has been pushing it on developers and consumers alike with limited success, and PhysX finally had its breakthrough title last year with the critically acclaimed Batman: Arkham Asylum.

With Fermi’s greatly enhanced compute abilities, NVIDIA is now pushing the idea that PhysX performance will be much better on Fermi cards, allowing developers to use more numerous and more complex physics interactions than ever before. In particular, with the ability to run concurrent kernels and to do fast context switching, PhysX should incur less overhead on Fermi hardware than it did on GT200/G80 hardware.

To put this idea to the test, we will be using the Batman: Arkham Asylum benchmark to measure PhysX performance. If PhysX has less overhead on Fermi hardware, then the framerate hit on the GTX 480 from enabling PhysX effects should be smaller than the framerate hit on the GTX 285. For this test we are running at 2560x1600, comparing performance with PhysX disabled against performance with it set to High.
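The metric we are comparing is the percentage of performance lost when PhysX is turned on. A minimal sketch (the framerates below are illustrative, not our measured numbers):

```python
def physx_hit(fps_physx_off: float, fps_physx_high: float) -> float:
    """Percentage of performance lost by enabling PhysX effects."""
    return (1.0 - fps_physx_high / fps_physx_off) * 100.0

# Illustrative figures only: falling from 60 fps with PhysX off to about
# 32 fps with PhysX on High would be a roughly 47% hit.
hit = physx_hit(60.0, 31.8)
```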

If PhysX has less overhead on Fermi hardware, Batman is not the game to show it. On both the GTX 480 and the GTX 285, the performance hit on a percentage basis for enabling PhysX is roughly 47%. The GTX 480 may be faster overall, but it takes the same heavy performance hit for enabling PhysX. The SLI cards fare even worse here: the performance hit for enabling PhysX is 60% on both the GTX 480 SLI and the GTX 285 SLI.

PhysX unquestionably has the same amount of overhead on the GTX 480 as it does on the GTX 285. If PhysX is going to incur less overhead, then from what we can gather it will either be a benefit limited to PhysX 3, or will require future PhysX 2.x updates that have yet to be delivered.

Our second PhysX test is a more generalized look at PhysX performance, using NVIDIA’s Raging Rapids tech demo. Raging Rapids is a water simulation demonstration that uses PhysX to simulate waves, waterfalls, and more. Here we are measuring the framerate in the demo’s benchmark mode.

Overall the Raging Rapids benchmark gives us mixed results. Out of all of the benchmarks we have run on the GTX 480, this is one of the larger performance jumps over the GTX 285. On the other hand, once we compensate for the GTX 480’s additional shaders, we end up with a result only around 10% faster than a strict doubling in performance. This is a sign of good scaling, but it isn’t a sign that the GTX 480 is significantly faster than the GTX 285 due to more efficient use of compute resources. Just having all of this extra compute power is certainly going to make a difference overall, but on an architectural level the GTX 480 doesn’t look to be significantly faster at PhysX than the GTX 285 on a per-clock/per-shader basis.
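A back-of-the-envelope version of that "compensate for the extra shaders" step, using the cards' published specs (GTX 480: 480 shaders at a 1401MHz shader clock; GTX 285: 240 shaders at 1476MHz). The observed speedup below is an illustrative placeholder, not our measured figure:

```python
def spec_scaling(shaders_new, clock_new, shaders_old, clock_old):
    """Raw throughput ratio expected from shader count x shader clock alone."""
    return (shaders_new * clock_new) / (shaders_old * clock_old)

raw = spec_scaling(480, 1401, 240, 1476)  # ~1.9x expected from specs alone
observed_speedup = 2.09                   # illustrative measured speedup
efficiency_gain = observed_speedup / raw  # ~1.10: ~10% beyond strict scaling
```

Anything left over after dividing out the raw spec ratio is what can be credited to architectural efficiency rather than to simply having more hardware.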

Comments

  • deputc26 - Friday, March 26, 2010 - link

    "GTX 480 only has 11% more memory bandwidth than the GTX 285, and the 15% less than the GTX 285."

    and holy server lag batman.
  • 529th - Friday, March 26, 2010 - link

    Thanks for the review :)
  • ghost2code - Saturday, March 27, 2010 - link

    I'm really impressed by this article; the author did a great job ;) As for Fermi, it seems to be a really good product for scientific work, but for gamers I'm not so sure. The price tag, power consumption, and noise are all too much for only 10-15% more performance than the cheaper Radeon, which is more reasonable on all of those counts. I guess Fermi needs some final touches from Nvidia, and for now it's not a final, well-tested product. Temperatures around 100 aren't good for the PCB, GPU, and the rest of the electronics, and I don't believe they won't matter for the lifetime and stability of the card. I'm glad Fermi finally came, but I'm disappointed, at least for now.
  • LuxZg - Saturday, March 27, 2010 - link

    I just don't know why GTX480 is compared to HD5870, and same for GTX470 vs HD5850.. GTX470 is right in the middle between two single-GPU Radeons, and just the same can be said for GTX480 sitting right in between HD5970 & HD5870.

    Prices of these cards as presented by nVidia/ATI:
    HD5970 - 599$
    GTX480 - 499$
    HD5870 - 399$
    GTX470 - 349$
    HD5850 - 299$

    I know the GTX480 is a single GPU, so by this logic you'll compare it to the HD5870. But the GTX480 is nVidia's top-of-the-line graphics card, and the HD5970 is ATI's top-of-the-line card. Besides, ATI's strategy for the last 3 product cycles has been producing small(er) chips and going multi-GPU, while nVidia wants to go the single monolithic GPU way.. So following this logic, the GTX480 should indeed be compared to the HD5970 rather than the HD5870.

    Anyway, the conclusion of this article is fine, covering both the strengths and the weaknesses of the solutions from both camps, but I believe readers weren't told clearly enough that these cards don't cost the same... And the HD5970 was left out of most of the comparisons (the textual ones).

    If I personally look at these cards, they are all worth their money. nVidia cards are probably more future-proof with their commitment to future tech (tessellation, GPGPU), but AMD cards are better for older and current (and near-future) titles. And they are less hot and less noisy, which most gamers pay a lot of attention to. Not to mention, this is the first review of a new card in which no one mentioned GPU overclocking. I'm guessing that 90+C temperatures won't allow much better clocks in the near future ;)
  • Wwhat - Sunday, March 28, 2010 - link

    In regards to the temperature and noise: there's always watercooling to turn to; I mean, if you have so much money to throw at the latest card, you might as well throw in some watercooling too.
    It's too pricey for me though. I guess I'll wait for the 40nm process to be tweaked; spending so much money on a gfx card is silly if you know that a while later something new will come around that's way better, and it's just not worth committing so much money to it in my view.
    It's a good card though (when watercooled), with nice stuff in it, and faster on all fronts, but it also seems an early sample of the new roads nvidia has gone down, and I expect they will have much improved stuff later on (if still in business)
  • LuxZg - Tuesday, March 30, 2010 - link

    Like I've said before - if you want FASTEST (and that's usually what you want if you have money to throw away), you'll be buying HD5970. Or you'll be buying HD5970+water cooling as well..
  • ViRGE - Saturday, March 27, 2010 - link

    I'm not sure where you're getting that the HD5970 is a $600 card. In the US at least, that's a $700 card (or more) everywhere.
  • wicko - Sunday, March 28, 2010 - link

    Honestly I don't even know if it should be mentioned at all even if it is 600, because there is almost no stock anywhere.
  • LuxZg - Tuesday, March 30, 2010 - link

    Oh, don't make me laugh, please! :D In that case this review shouldn't be up at all, or it should be called a "PREview".. or have you actually seen any stock of GTX470/480 around?
  • LuxZg - Sunday, March 28, 2010 - link

    It's AMD's & nVidia's recommended prices, and you can see them all in Anandtech's own articles:
    http://www.anandtech.com/video/showdoc.aspx?i=3783 (nvidia prices)
    http://www.anandtech.com/video/showdoc.aspx?i=3746 (ATI single-gpu cards)
    http://www.anandtech.com/video/showdoc.aspx?i=3679 (ATI single/dual GPU cards)

    It is not my fault that your US shops bumped up the price in the complete absence of competition in the high-end market. But the US is not the only market in the world, either.

    You want to compare with real world prices? Here, prices from Croatia, Europe..

    HD5970 - 4290kn = 591€ (recommended is 599$, which is usually 599€ in EU)
    GTX480 - not listed, recommended is 499$/€
    HD5870 - 2530kn = 348€ (recommended is 399$/399€ in EU)
    GTX470 - not listed, recommended is 349$/€
    HD5850 - 1867kn = 257€ (recommended is 299$/299€ in EU)

    So let's say that European prices for GTX will be a bit lower than recommended ones, GTX480 would still be ~120-130€ pricier than HD5870, and HD5970 would be same ~120-130€ more expensive than GTX480.
    As for the lower priced nVidia card, it's again firmly in the middle between HD5850 & HD5870.

    Point is, there's no clear price comparison at the moment, and the article's conclusion should be clear on that.
    Person that wants the FASTEST CARD will stretch for another 100$/€ to buy HD5970. Especially since this means lower noise, lower consumption, and lower heat. This all combined means you can save a few $/€ on PSU, case, cooling, and earplugs, throwing HD5970 in the arm reach of the GTX480 (price-wise) while allowing for better speeds.

    As for the GTX470: again, lower consumption/heat/noise with the ATI cards means fewer expenses for PSU/cooling, and savings on electricity bills. For me that's well worth the 50€/$ difference in price; in fact, I'd rather spend 50$/€ more to buy the HD5870, which is faster, less noisy, doesn't require me to buy a new PSU (I own an HD4890, which was overclocked for a while, so an HD5870 would work just fine), and will draw about 50W less during every hour of gaming.. which will make it CHEAPER than the GTX470 in the long run.

    So let's talk again - why isn't the conclusion made a bit more straightforward for end users, and why is the HD5890 completely gone from the conclusion??
