AMD's RV770 vs. NVIDIA's GT200: Which one is More Efficient?

It is one thing to sustain high levels of performance and another thing altogether to do so efficiently. AMD's architecture is clearly more area efficient than NVIDIA's.

Alright now, don't start yelling that RV770 is manufactured at 55nm while GT200 is a 65nm part: we're taking that into account. The die size of GT200 is 576mm^2, but if we scale the core down to 55nm, we would end up with a 412mm^2 part, assuming perfect scaling. This is being incredibly generous though, as we understand that TSMC's 55nm half-node process scales down die size much less efficiently than one would expect. But let's go with this and give NVIDIA the benefit of the doubt.
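The optical-shrink estimate above is just the linear feature-size ratio squared, since die area scales with the square of linear dimensions. A minimal sketch of that arithmetic (the die size and process nodes come from the text; the `scale_area` helper name is our own):

```python
def scale_area(area_mm2: float, old_node_nm: float, new_node_nm: float) -> float:
    """Estimate die area after an optical shrink: area scales with the
    square of the linear feature-size ratio (perfect scaling assumed)."""
    return area_mm2 * (new_node_nm / old_node_nm) ** 2

# GT200: 576 mm^2 at 65nm, hypothetically shrunk to 55nm
gt200_at_55nm = scale_area(576.0, 65.0, 55.0)
print(round(gt200_at_55nm))  # ~412 mm^2 with perfect scaling
```

Real half-node shrinks fall short of this ideal, which is why we call the 412mm^2 figure generous.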

First we'll look at area efficiency in terms of peak theoretical performance using GFLOPS/mm^2 (performance per area). Remember, these are just ratios of design and performance aspects; please don't ask me what an (operation / (s * mm * mm)) really is :)

               Normalized Die Size   GFLOPS   GFLOPS/mm^2
AMD RV770      260 mm^2              1200     4.62
NVIDIA GT200   412 mm^2              933      2.26
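The GFLOPS/mm^2 column is simply peak throughput divided by normalized die area. A quick sketch using the numbers from the table (the dictionary layout is our own):

```python
chips = {
    "AMD RV770":    {"area_mm2": 260.0, "gflops": 1200.0},
    "NVIDIA GT200": {"area_mm2": 412.0, "gflops": 933.0},
}

# peak theoretical performance per unit die area
density = {name: c["gflops"] / c["area_mm2"] for name, c in chips.items()}
for name, d in density.items():
    print(f"{name}: {d:.2f} GFLOPS/mm^2")
```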

 

This shows us that NVIDIA's architecture requires more than 2x the die area of AMD's in order to achieve the same level of peak theoretical performance. Of course theoretical performance doesn't mean everything, especially in light of our previous discussion on extracting parallelism. So let's take a look at real performance per area in some of our benchmarks, specifically Bioshock, Crysis, and Oblivion. We chose these titles because the relative performance of RV770 versus GT200 is best in Bioshock and worst in Oblivion (RV770 actually leads GT200 in Bioshock performance, while GT200 crushes RV770 in Oblivion). We included Crysis because its engine is a popular and stressful benchmark that falls near the middle of the range of performance differences between RV770 and GT200 in the tests we looked at.

These numbers look at performance per cm^2 (because the numbers look prettier when multiplied by 100). Again, this isn't a physically meaningful quantity -- it's just a ratio we can use to compare the architectures.

Performance per Die Area   Normalized Die Size   Bioshock         Crysis           Oblivion
AMD RV770                  2.6 cm^2              27 fps/cm^2      11.42 fps/cm^2   10.23 fps/cm^2
NVIDIA GT200               4.12 cm^2             15.51 fps/cm^2   8.33 fps/cm^2    8.93 fps/cm^2
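The per-area figures follow the same pattern as before: measured average frame rate divided by normalized die area. A minimal sketch, where the frame-rate input is a made-up placeholder rather than one of our measurements (2.6 cm^2 is RV770's normalized die area from the table):

```python
def fps_per_cm2(avg_fps: float, die_area_cm2: float) -> float:
    """Average frame rate normalized by die area."""
    return avg_fps / die_area_cm2

# hypothetical benchmark average, not a measured result
print(round(fps_per_cm2(70.2, 2.6), 2))
```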

 

While it doesn't tell the whole story, it's clear that AMD does have higher area efficiency relative to the performance it is able to attain. Please note that comparing these numbers directly doesn't yield anything that can be easily explained (the percent difference in frames per second per square millimeter doesn't really make much sense as a concept), which is part of why these numbers are in a table rather than a graph. So while the higher numbers show that AMD is more area efficient, this data doesn't really show how much of an advantage AMD has, especially since we are normalizing die sizes and looking at game performance rather than microbenchmarks.

Some of this efficiency may come from architectural design, while some may stem from time spent optimizing the layout. AMD said that some time was spent on area optimization of its hardware, and that this is part of the reason it could fit more than double the SPs without more than doubling the transistor count or building a ridiculously huge die. We could try to look at transistor density, but transistor counts from AMD and NVIDIA are both just estimates that are likely derived very differently, so the comparison might not reflect anything useful.

We can talk about another kind of efficiency though: power efficiency. This is becoming more important as power costs rise, as computers become more power hungry, and as there is a global push toward conservation. The proper way to look at power efficiency is to look at the amount of energy it takes to render a frame. Unlike the previous monstrosities, this is a particularly easy concept to grasp, and it turns out it isn't tough to calculate either.

To get this data we recorded both frame rate and power draw over a benchmark run. We then look at average frame rate (frames per second) and average power (watts, or joules per second). Dividing average watts by average frame rate leaves us with average joules per frame, which is exactly what we need: energy per frame for a given benchmark. Here's a look at Bioshock, Crysis and Oblivion.
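In code form, the calculation is just a division: watts are joules per second, so dividing by frames per second leaves joules per frame. The wattage and frame-rate figures below are made-up placeholders, not our measured data:

```python
def energy_per_frame(avg_watts: float, avg_fps: float) -> float:
    """Average power (J/s) divided by average frame rate (frames/s)
    gives average energy per rendered frame (J/frame)."""
    return avg_watts / avg_fps

# hypothetical averages from a benchmark run (not real measurements)
print(round(energy_per_frame(120.0, 27.0), 2))  # joules per frame
```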

Average Energy per Frame   Bioshock       Crysis          Oblivion
AMD RV770                  4.45 J/frame   10.33 J/frame   11.07 J/frame
NVIDIA GT200               5.37 J/frame   9.99 J/frame    9.57 J/frame

 

This is where things get interesting. AMD and NVIDIA trade off on power efficiency across the tests we ran. Under Bioshock, RV770 requires less energy to render a frame on average. The opposite is true for Oblivion, and NVIDIA also leads in power efficiency under Crysis. Yes, RV770 uses less power to achieve its lower performance in Crysis and Oblivion, but for the power you spend, NVIDIA gives you more. RV770, however, leads GT200 in performance under Bioshock while drawing less power, which says a lot about the potential of RV770.

The fact that this small subset of tests shows each architecture with a performance-per-watt advantage under different circumstances means that, as time goes on and new games come out, optimizing for both architectures will be very important. Bioshock shows that we can achieve great performance per watt (and performance, for that matter) on both platforms. The fact that Crysis is forward looking in terms of graphics features and shows less divergent power efficiency than Bioshock and Oblivion is a good sign for (but not a guarantee of) consistent performance and power efficiency.


215 Comments


  • Amiga500 - Wednesday, June 25, 2008 - link

    Apple has passed control of OpenCL over to the Khronos Group, which manages open standards.

    To all intents and purposes, it is open source. :-)
  • emergancyexit - Wednesday, June 25, 2008 - link

    i hope you test what 3x crossfire can do. maybe a 4x 4850 vs 3x GTX 260 just to satisfy us readers for the moment would be lovely!
  • DerekWilson - Wednesday, June 25, 2008 - link

    i'm not sure if this is supported out of the box ... i'll have to check it out ...
  • emergancyexit - Wednesday, June 25, 2008 - link

    i would really like to know what kind of performance these cards could get in an MMO (and hopefully compare them to some cheaper cards). Games i'm interested in are some of the newer titles like Age of Conan (i hear its graphics are great and a workout for even an 8800 Ultra) and EVE Online (their new graphics engine works cards pretty hard too).

    MMO graphics usually get pretty intensive, with some 200+ characters flying around shooting fireballs everywhere, missiles sailing through the air, and hundreds of monsters as far as the eye can see. it can get pretty demanding on a gaming computer, just as much as (if not more than) a hit new title.

    for example, on my current rig i can get around 50FPS steady at 1440x900, but in EVE Online i get 35 at most at peaceful times and 20 or even 15 in a large fight with FEW graphics options selected.
  • MIP - Wednesday, June 25, 2008 - link

    Great review, the 4870 looks to be fantastic value. However, we're missing the 'heat and noise' part.
  • skiboysteve - Wednesday, June 25, 2008 - link

    Not only do these cards rock, but I wouldn't be surprised if AMD has an ace up its sleeve with the 4870x2... with that crossfire interconnect directly connected to the data hub that you showed on the chart. That and the fact that they have been looking forward to this crossfire strategy of attacking the high end for quite some time so they might have some tricky driver stuff coming with it.

    I have been disappointed with the heat and power consumption of these cards. But:
    1) Someone said powerplay is getting a driver tweak and, I can always clock them lower in 2D than 500/1000 (which is insane for 2d)
    2) That hardware site someone linked earlier showed a more than 50% reduction in temperatures with an aftermarket cooler! That's insane!!

    And finally, if I can get #1 and #2 fixed... I want to know how well these babies overclock. If I can get a 4850 running like a 4870 or better... yum. And in that case, how high will a 4870 OC? And I want to know this with a non-stock cooler, because apparently the stock ones suck. If, with a non-stock cooler, the 4850 clocks up to 4870 level but the 4870 clocks way up too... i'm gonna have to grab a 4870.

    So yeah, fix #1 and #2 and find me non-stock cooler OC #s and I'll go buy one (maybe two?) when nehalem comes out
  • Powered by AMD - Wednesday, June 25, 2008 - link

    Impressive review, Thanks :)
    A few glitches:
    It says "Power Consumption, Heat and Noise", but the graphs only show power consumption.
    On page 17 (The Witcher), in the second paragraph, it says 390X2 instead of 3870.

    Thanks again.
    Cheers from Argentina.
  • Conscript - Wednesday, June 25, 2008 - link

    at least that was the title of the second-to-last page... but i only see two power consumption graphs?
  • Proteusza - Wednesday, June 25, 2008 - link

    I quote one Kristopher Kubicki regarding whether the RV770 is inferior to the GT200:

    "It is. Even AMD isn't going to tell you otherwise. You can debate this all you want, but it's still a $200 video card."

    So, please tell me now why I should pay $650 for a GTX280. I'm struggling to see the logic here.

    Source: http://www.dailytech.com/Update+AMD+Preps+Radeon+4...">http://www.dailytech.com/Update+AMD+Pre...50+Launc...
    (near the bottom)
  • AbRASiON - Wednesday, June 25, 2008 - link

    I can live with a greedier card than my 8800GT but I refuse to put up with a noisy machine.

    Any comments on the heat and noise, please? Would be nice!
