Tessellation & PhysX

We’ll kick off our in-depth look at the performance of the GTX400 series with Tessellation and PhysX. These are two of the biggest features that NVIDIA is pushing with the GTX400 series, with tessellation in particular being the major beneficiary of NVIDIA’s PolyMorph Engine strategy.

As we covered in our GF100 Recap, NVIDIA seeks to separate themselves from AMD in spite of the rigid feature set imposed by DirectX 11. Tessellation is one of the ways they intend to do that, as the DirectX 11 standard leaves them plenty of freedom with respect to tessellation performance. To accomplish this goal NVIDIA needs significantly better tessellation performance, which has led them to implement 14/15/16 tessellators through having that many PolyMorph Engines. With enough tessellation performance, NVIDIA can create an obvious image quality improvement compared to AMD, all the while requiring very little work on the part of developers to take advantage of it.

All things considered, NVIDIA’s claim of having superior tessellation performance is one of the easiest claims to buy, but all the same we’ve gone ahead and attempted to confirm it.

Our first tessellation test is the newly released Unigine Heaven 2.0 benchmark. Version 2.0 adds support for multiple levels of tessellation (1.0 having earned a reputation for using extreme levels of tessellation), which allows us to look at tessellation performance by varying the tessellation level. If the GTX 480’s tessellation capabilities are several times faster than the Radeon 5870’s, as NVIDIA claims, then it should better handle the increased tessellation levels.

Since Heaven is largely a synthetic benchmark at the moment (the DX11 engine isn’t currently used in any games), we’ll be focusing on the performance of each card relative to itself, in keeping with our editorial policy of avoiding synthetic GPU tests when possible.


Heaven: Moderate & Extreme Tessellation

Heaven has 4 tessellation levels: off, moderate, normal, extreme. For our test we’re using the moderate and extreme modes, comparing the performance of extreme as a percentage of moderate performance.

Starting with the averages, the GTX 480 keeps 79% of its performance moving from moderate to extreme. On the Radeon 5870, however, the drop-off is much more severe: it loses 42% of its performance, leaving it at 58%.

The minimum framerates are even more telling. The GTX 480 minimum framerates drop by 26% when switching to extreme tessellation. The Radeon 5870 is much worse off here, bringing in minimum framerates 69% lower when using extreme tessellation. From these numbers it’s readily apparent that the GTX 480 is much more capable of dealing with very high tessellation levels than the Radeon 5870 is.
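The retention figures above are simply the ratio of the framerate at the higher tessellation level to the framerate at the lower one. A minimal sketch of the computation, using hypothetical framerates chosen only to mirror the ratios discussed above (the real numbers are in the charts):

```python
def retention(fps_moderate, fps_extreme):
    """Fraction of performance kept when moving from moderate to extreme tessellation."""
    return fps_extreme / fps_moderate

# Hypothetical framerates, picked to reproduce the ratios discussed above.
gtx480 = retention(60.0, 47.4)   # ~79% retained, i.e. a ~21% drop
hd5870 = retention(60.0, 34.8)   # ~58% retained, i.e. a ~42% drop
print(f"GTX 480 keeps {gtx480:.0%}, Radeon 5870 keeps {hd5870:.0%}")
```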

Our second tessellation test is similar in nature, this time taken from one of Microsoft’s DX11 sample programs: Detail Tessellation. Detail Tessellation is a simple scene where tessellation plus displacement mapping are used to turn a flat rock texture into a simulated field of rocks, with tessellation creating the geometry. Here we measure the average framerate at two different tessellation factors (7 and 11) and compare the framerate at the higher factor to the framerate at the lower one.

Looking at just the averages (the framerate is rather solid) we see that the GTX 480 retains 65% of its performance moving from factor 7 to factor 11. The Radeon 5870, on the other hand, only retains 38% of its performance. Just as we saw in Unigine, the GTX 480 takes a much lighter performance hit from higher tessellation factors than the Radeon 5870 does, driving home the point that the GTX 480 has a much more powerful tessellator.

With the results of these tests, there’s no reason to doubt NVIDIA’s claims about GF100’s tessellation abilities. All the data we have points to GF100/GTX 480 being much more powerful than the Radeon 5000 series when it comes to tessellation.

But with that said, NVIDIA having a more powerful tessellator doesn’t mean much on its own. Tessellation is wholly dependent on game developers to make use of it and to empower users to adjust the tessellation levels. Currently every DX11 game using tessellation uses a fixed amount of it, so NVIDIA’s extra tessellation abilities are going unused. This doesn’t mean that tessellation will always be used like this, but it means things have to change, and counting on change is a risky thing.

NVIDIA’s superior tessellation abilities will require that developers offer a variable degree of tessellation in order to fully utilize their tessellation hardware, and that means NVIDIA needs to convince developers to do the extra work to implement this. At this point there’s no way for us to tell how things will go: NVIDIA’s superior tessellation abilities could be the next big thing that separates them from AMD, like they’re shooting for, or it could be the next DirectX 10.1, held back by weaker hardware elsewhere. Without some better sense of direction on the future use of tessellation, we can’t make any recommendations based on NVIDIA’s greater tessellation performance.

Moving on we have PhysX, NVIDIA’s in-house physics simulation middleware. After picking up PhysX and its developer AGEIA in 2008, NVIDIA re-implemented PhysX hardware acceleration as a CUDA application, allowing their GPUs to run physics simulations in hardware. NVIDIA has been pushing it on developers and consumers alike with limited success, and PhysX finally had a breakthrough title last year with the critically acclaimed Batman: Arkham Asylum.

With Fermi’s greatly enhanced compute abilities, NVIDIA is now pushing the idea that PhysX performance will be much better on Fermi cards, allowing developers to use more numerous and more complex physics effects than ever before. In particular, with the ability to run concurrent kernels and to do fast context switching, PhysX should have less overhead on Fermi hardware than it did on GT200/G80 hardware.

To put this idea to the test, we will be using the Batman: Arkham Asylum benchmark to measure PhysX performance. If PhysX has less overhead on Fermi hardware, then the framerate hit on the GTX 480 from enabling PhysX effects should be lower than the framerate hit on the GTX 285. For this test we are running at 2560x1600, comparing performance with PhysX disabled and with PhysX set to High.

If PhysX has less overhead on Fermi hardware, Batman is not the game to show it. On both the GTX 480 and the GTX 285, the performance hit on a percentage basis for enabling PhysX is roughly 47%. The GTX 480 may be faster overall, but it takes the same heavy performance hit for enabling PhysX. The SLI cards fare even worse here: the performance hit for enabling PhysX is 60% on both the GTX 480 SLI and the GTX 285 SLI.

PhysX unquestionably has the same amount of overhead on the GTX 480 as it does on the GTX 285. If PhysX is going to carry less overhead on Fermi, then from what we can gather it will either be a benefit limited to PhysX 3, or will require future PhysX 2.x updates that have yet to be delivered.

Our second PhysX test is a more generalized look at PhysX performance. Here we’re using NVIDIA’s Raging Rapids tech demo, a water simulation demonstration that uses PhysX to simulate waves, waterfalls, and more. We are measuring the framerate in the demo’s benchmark mode.

Overall the Raging Rapids benchmark gives us mixed results. Out of all of the benchmarks we have run on the GTX 480, this is one of the larger performance jumps over the GTX 285. On the other hand, once we compensate for the GTX 480’s additional shaders, we end up with a result only around 10% faster than a strict doubling in performance. This is a sign of good scaling, but it isn’t a sign that the GTX 480 is significantly faster than the GTX 285 due to more efficient use of compute resources. Just having all of this extra compute power is certainly going to make a difference overall, but on an architectural level the GTX 480 doesn’t look to be significantly faster at PhysX than the GTX 285 on a per-clock/per-shader basis.
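One way to sanity-check that per-clock/per-shader conclusion is to normalize the observed speedup by each card’s raw shader throughput (shader count × shader clock). A rough sketch, assuming the published specifications of 480 CUDA cores at 1401MHz for the GTX 480 and 240 cores at 1476MHz for the GTX 285; the benchmark scores are hypothetical, chosen only to mirror the roughly-2.1x jump discussed above:

```python
def per_shader_speedup(score_new, score_old, shaders_new, clock_new, shaders_old, clock_old):
    """Observed speedup divided by the ratio of raw shader throughput (shaders x clock).
    A value near 1.0 means the newer card is no faster per shader, per clock."""
    observed = score_new / score_old
    throughput_ratio = (shaders_new * clock_new) / (shaders_old * clock_old)
    return observed / throughput_ratio

# Published specs: GTX 480 = 480 cores @ 1401MHz, GTX 285 = 240 cores @ 1476MHz.
# Hypothetical scores giving a ~2.1x observed speedup.
ratio = per_shader_speedup(105.0, 50.0, 480, 1401, 240, 1476)
print(f"Per-shader, per-clock advantage: {ratio:.2f}x")
```

A result of about 1.1x here corresponds to the "only around 10% beyond a strict scaling of resources" reading above.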


197 Comments


  • ol1bit - Thursday, April 01, 2010 - link

    I thought it was a fair review. They talked about the heat issues, etc.

    You can't compare a 2 GPU card to a single GPU card. If they ever make a 2 core GF100, I'm sure Anandtech will do a review.
  • IceDread - Tuesday, April 06, 2010 - link

    You are wrong. You can and you should compare single-GPU cards with multi-GPU cards. It does not matter if a card has one or 30 GPUs on it. It's the performance/price that matters.

    These nvidia cards are very expensive in performance/price compared to the ATI cards, simple as that. It's obvious that nvidia dropped the ball with their new flagship. You even need 2 cards to be able to use 3 screens.

    This is bad for us customers, we are not getting any price pressure at all. These nvidia cards do not improve the market since they cannot compete with the ATI cards; only nvidia fans will purchase these cards, or possibly some people working with graphics.

    I hope nvidia will do better with their next series of cards, and I hope that won't take too long, because ATI will most likely release a new series in half a year or so.
  • xxtypersxx - Sunday, March 28, 2010 - link

    I will be interested in seeing the performance gains that will likely come from revised Nvidia drivers in a month or two. In some of the tests the gtx470 is trading blows with the gtx285 despite having nearly double the compute power...I think there is a lot of room for optimization.

    I am no fanboy and even owned a 4850 for a while, but Nvidia's drivers have always been a big decision factor for me. I don't get any of the random issues that were common on catalyst and aside from the occasional hiccup (196.67 G92 fan bug) I don't worry about upgrades breaking things. I admit I don't know if all the 5xxx series driver issues have been fixed yet but I do look forward to driver parity, until then I think raw performance is only part of the equation.
  • GourdFreeMan - Sunday, March 28, 2010 - link

    Ryan, have you checked performance and/or clocks to see if any of the cards you are testing are throttling under FurMark? I recall you mentioning in your 58xx review that ATi cards can throttle under FurMark to prevent damage, and while most of the power numbers look normal, I notice a few of the cards are consuming less power under FurMark than Crysis, unlike the majority of the cards which consume considerably more power running FurMark than Crysis...
  • MojaMonkey - Sunday, March 28, 2010 - link

    I can turn off one light in my house and remove the power consumption difference between the GTX480 and the 5870.

    I thought this was an enthusiast site?

    I lol irl when people talk about saving 100 watts and buying a 5870. So saving 100 watts but building a 700 watt system? Are you saving the planet or something?

    I think nVidia is smart, if you fold or use cuda or need real time 3d performance from a quadro you will buy this card. That probably is a large enough market for a niche high end product like this.

    PS: 5870 is the best gaming card for the money!
  • Paladin1211 - Sunday, March 28, 2010 - link

    No, the 5850 is.

    p/s: I misclicked the Report instead of Reply button, so pls ignore it T_T
  • kallogan - Sunday, March 28, 2010 - link

    Seriously, I wonder who'd want GPUs that power hungry, noisy and hot... Nvidia is out both on the mobile and desktop market... The only pro for Nvidia I can see is the 3D support.
  • beginner99 - Sunday, March 28, 2010 - link

    This is kind of bad for consumers. Zero pressure on ATI to do anything, from lowering prices to anything else. They can just lay back and work on the next gen.
    Well, that at least made my decision easy: build now or wait for Sandy Bridge. I will wait. Hopefully the GPU market will be nicer then too (hard to be worse, actually).
  • C5Rftw - Sunday, March 28, 2010 - link

    I was waiting for the Fermi cards to come out before my next high-end build (looking for price drops), but I actually did not expect this card to be this fast. The GTX 480 is ~15% faster than the 5870, but for $100 more, and it is just gonna be a Nvidia-loyal card; the 5870 will probably drop just a little if at all. The 5850 and 5830 should drop $25-50, hopefully more (2x 5850 at ~$250 each would be FTW). Now, would I like to have a Fermi? Well yeah, for sure, but I would much rather have a 5870 and down the road add another. A GTX 480 uses the same, if not more, power than (2) 5870's.

    Now this reminds me of the last gen of the P4's, or as we know em, the Prescotts. Basically, Nvidia's huge-chip approach, with yes impressive performance, was just the wrong approach. I mean, their next gen, if based on the same "doubling" of SPs/CUDA cores, would draw 300W+ easily and almost require water cooling, because the next TSMC process is going to be 32nm and that will not allow them to "cut the chip in half." ATI's theory, started with the 4000 series, has proven to be a much better/more efficient design. I think they could make a 6870 using 40nm TSMC right now, but of course it would be a hot chip. Now when they get the 32nm TSMC fabs running, Nvidia has got to re-design their chips. And with how hot the GTX 480 is, I don't see how they could make a GTX 495. Also, the 5890 is right around the corner and that should give the final punch to KO Nvidia in this GPU generation.

    On a side note, thank " " that there is some healthy competition, or AMD might pull what Nvidia did and rebrand the 8800 5 or 6 times.
  • Belard - Sunday, March 28, 2010 - link

    Keep in mind, the GeForce 480 (GTX means nothing - see any GTX 210 or GT 285?) is already the most power hungry card on the market, at just under 300 watts under full load... if the GF480 had all 512 CUDA cores running and were clocked higher, the card would easily surpass 300 watts!

    This in turn means MORE heat, more power, more noise. There are videos on the 480/470s & ATI cards... the 480's fan is running very fast and loud to keep it under 100c, about 2~3 times hotter than a typical CPU.

    We will see the ATI 6000 series on 40nm, but it may not be with TSMC.

    If the upcoming 5890 is 15% faster and can sell for $400~450, that would put some hurt on the GF480.

    Not sure how/why ATI would do re-branding. The 4670 is almost like a 3870, but is easily a more advanced and cheaper GPU. The bottom end GPUs have all changed. 2400 / 3450, 4350, 5450 - all different.

    Nvidia has been doing re-branding for quite a long time. The GF2mx was re-branded as the GF2MX 400 (These were bottom end $150~190 cards in 2001) and then for some bone-head reason, during the GF6 era - they brought back the GF2MX but added DX8. Huh? Add a function to an OLD bottom end GPU?

    The GF2-TI came out when GF3-TI series was launched... they wanted "TI" branding. The GF2-TI was a rebranded GF2-Pro with a slight clock upgrade.

    Then came the first big-branding/feature fiasco with Nvidia. The GF3 was the first DX8 card. Then the GF 4 series came out. The GF4ti were the high end models. But the MX series were nothing more than GF2 (DX7) with optional DVI... to take care of the low end and shove the letter names to the front.

    GF4 mx420 = GF2mx, but a bit slower.
    GF4 mx440 = GF2 Pro/TI
    GF4 mx460 = ... faster DX7 card, but it was about $20~35 cheaper than the GF4-TI4200, a DX8 card. The Ti4200 was a #1 seller at about $200. Some of the 440se & 8x models may have 64 or 128bit RAM... ugh.

    Then they had fun with the TI series when AGP 8x came out... NEW models! Even though the cards couldn't max out the AGP 4x bus. Even the future ATI 9800Pro only ran 1~3% faster with AGP 8x.

    GF4 Ti 4200 > GF4 Ti 4200 8x
    GF4 Ti 4400 > GF4 Ti 4800 SE
    GF4 Ti 4600 > GF4 Ti 4800

    Yep, same GPUs... new names. Some people would upgrade to nothing or worse. Some even went from the 4600 to the 4800SE which was a downgrade!

    GF5 5500 = 5200

    Since the GF5... er "FX" series, Nvidia kept the DX# and feature set within the series. All GF5 cards are DX9.

    But the 5200s were a joke. By the time they hit the market at $120, the Ti4200s were also $120 and the 4mx were reduced to $30~60. But the 5200 was HALF the performance of a 4200. People actually thought they were upgrading... returns happened.

    Funny thing once. A person bought a "5200" at Walmart and was confused by the POST display of "4200". Luckily he had posted to us on the internet. We laughed our butts off...! What happened? Bait & switch... someone bought a 5200, took it home, switched cards, took it back to Walmart for a refund. Hey, it's usually a brick or a dead card, etc. He got a used card, but a much better product.

    Like the ATI 5450 is too slow for gaming today for DX11, the GF5200 was horrible back in 2003 for DX9! The 5200 is still sold today, the only thing left.

    Pretty much the entire GF5 series was utter garbage. 4 versions of the GF5600 ($150~200) were slower than the previous $100 Ti 4200. It was sick. This allowed ATI to gain respect and marketshare with their ATI 9600 & 9700 cards. The GF 5700 series (2 out of 5 types) were good Ti4200 replacements. The 5900 went up against the ATI 9800. I've owned both.

    Since then, ATI pretty much had the upper hand in performance throughout the GF6 & GF7 era. AMD buys out ATI, then the GF8 and core2 wipes out ATI/AMD with faster products.

    While ATI had the faster cards during DX9.0c (really MS? Couldn't make 6.1, 6.2?) era over the GF6/7... Nvidia *HAD* the lower end market. The GF6600 and 7600GT were $150~200 products... ATI products in that price range were either too slow or cost too much.

    With GF 8800 & 8600s, ATI had lost high & mid-range markets. The HD 2000 series = too expensive, too hot and not fast enough... (sound familiar). The ATI 3000 series brought ATI back to competitive position where it counted. Meanwhile, Nvidia milked the G92~96 for the past 2+ years. They are code-name & model number crazy happy.

    As long as ATI continues doing engineering and management this way, nVidia will continue to be in trouble for a long time unless they get their act together or count on the server market to stay in business.

    End of short history lesson :0
