AMD's RV770 vs. NVIDIA's GT200: Which one is More Efficient?

It is one thing to sustain high levels of performance and another thing entirely to do it efficiently. AMD's architecture is clearly the more area efficient of the two.

Alright now, don't start yelling that RV770 is manufactured at 55nm while GT200 is a 65nm part: we're taking that into account. The die size of GT200 is 576mm^2, but if we scale the core down to 55nm, we would end up with a 412mm^2 part assuming perfect scaling. This is being incredibly generous, as we understand that TSMC's 55nm half-node process scales down die size much less efficiently than one would expect. But let's go with this and give NVIDIA the benefit of the doubt.
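For the curious, the perfect-scaling estimate above is just the die area multiplied by the square of the feature-size ratio. A quick sketch of that arithmetic (the function name is ours, purely for illustration):

```python
def scale_die_area(area_mm2, from_nm, to_nm):
    """Ideal (perfect) area scaling: die area shrinks with the
    square of the process feature-size ratio."""
    return area_mm2 * (to_nm / from_nm) ** 2

# GT200 at 576 mm^2 on 65nm, hypothetically shrunk to 55nm
print(round(scale_die_area(576, 65, 55)))  # -> 412
```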

First we'll look at area efficiency in terms of peak theoretical performance using GFLOPS/mm^2 (performance per area). Remember, these are just ratios of design and performance aspects; please don't ask me what an (operation / (s * mm * mm)) really is :)

               Normalized Die Size    GFLOPS    GFLOPS/mm^2
AMD RV770      260 mm^2               1200      4.62
NVIDIA GT200   412 mm^2               933       2.26
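The ratios in the table are just straightforward division of peak throughput by normalized die area; spelled out (a trivial sketch, nothing vendor-specific):

```python
def gflops_per_mm2(gflops, die_mm2):
    # Peak theoretical throughput divided by normalized die area
    return gflops / die_mm2

print(round(gflops_per_mm2(1200, 260), 2))  # RV770 -> 4.62
print(round(gflops_per_mm2(933, 412), 2))   # GT200 -> 2.26
```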


This shows us that NVIDIA's architecture requires more than 2x the die area of AMD's to achieve the same level of peak theoretical performance. Of course, theoretical performance doesn't mean everything, especially in light of our previous discussion on extracting parallelism. So let's take a look at real performance per area in terms of some of our benchmarks, specifically Bioshock, Crysis, and Oblivion. We chose these titles because the relative performance of RV770 versus GT200 is best in Bioshock and worst in Oblivion (RV770 actually leads GT200 in Bioshock, while GT200 crushes RV770 in Oblivion). We included Crysis because its engine is a popular and stressful benchmark that falls near the middle of the range in performance difference between RV770 and GT200 in the tests we looked at.

These numbers look at performance per cm^2 (because the numbers look prettier when multiplied by 100). Again, this isn't a physically meaningful quantity -- it's just a ratio we can use to compare the architectures.

Performance per Die Area   Normalized Die Size    Bioshock          Crysis            Oblivion
AMD RV770                  2.6 cm^2               27 fps/cm^2       11.42 fps/cm^2    10.23 fps/cm^2
NVIDIA GT200               4.12 cm^2              15.51 fps/cm^2    8.33 fps/cm^2     8.93 fps/cm^2


While it doesn't tell the whole story, it's clear that AMD has higher area efficiency relative to the performance it is able to attain. Please note that comparing these numbers directly doesn't yield anything that can be easily explained (the percent difference in frames per second per square millimeter doesn't really make much sense as a concept), which is part of why these numbers are in a table rather than a graph. So while the higher numbers show that AMD is more area efficient, this data doesn't really show how large AMD's advantage is, especially since we are normalizing die sizes and looking at game performance rather than microbenchmarks.

Some of this efficiency may come from architectural design, while some may stem from time spent optimizing the layout. AMD said that some time was spent doing area optimization on their hardware, and that this is part of the reason they could get more than double the SPs in there without more than doubling the transistor count or building a ridiculously huge die. We could try to look at transistor density, but transistor counts from AMD and NVIDIA are both just estimates that are likely done very differently and it might not reflect anything useful.

We can talk about another kind of efficiency though: power efficiency. This is becoming more important as power costs rise, as computers become more power hungry, and as there is a global push toward conservation. The proper way to look at power efficiency is the amount of energy it takes to render a frame. Unlike the previous monstrosities, this is a particularly easy concept to grasp, and it isn't a tough thing to calculate either.

To get this data we recorded both frame rate and watts for a benchmark run. Then we look at average frame rate (frames per second) and average watts (joules per second). We can then divide average watts by average frame rate and we end up with: average joules / frames. This is exactly what we need to see energy per frame for a given benchmark. And here's a look at Bioshock, Crysis and Oblivion.
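The calculation described above is a single division, since a watt is a joule per second. A minimal sketch (the wattage and frame rate below are made-up numbers for illustration, not our measurements):

```python
def joules_per_frame(avg_watts, avg_fps):
    # (joules/second) / (frames/second) = joules/frame
    return avg_watts / avg_fps

# hypothetical: 300W average draw at an average of 60 fps
print(joules_per_frame(300.0, 60.0))  # -> 5.0 J/frame
```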

Average Energy per Frame   Bioshock         Crysis            Oblivion
AMD RV770                  4.45 J/frame     10.33 J/frame     11.07 J/frame
NVIDIA GT200               5.37 J/frame     9.99 J/frame      9.57 J/frame


This is where things get interesting: AMD and NVIDIA trade places on power efficiency across the tests we showed here. Under Bioshock, RV770 requires less energy to render a frame on average. The opposite is true for Oblivion, and NVIDIA also leads in power efficiency under Crysis. Yes, RV770 uses less power to achieve its lower performance in Crysis and Oblivion, but for the power you use, NVIDIA gives you more. RV770, however, leads GT200 in performance under Bioshock while drawing less power, which is quite telling about the potential of RV770.

The fact that this small subset of tests shows the potential of both architectures to have a performance per watt advantage under different circumstances means that as time goes on and games come out, optimizing for both architectures will be very important. Bioshock shows that we can achieve great performance per watt (and performance for that matter) on both platforms. The fact that Crysis is both forward looking in terms of graphics features and shows power efficiency less divergent than Bioshock and Oblivion is a good sign for (but not a guarantee of) consistent performance and power efficiency.

215 Comments

  • shadowteam - Wednesday, June 25, 2008 - link

    Did you know these chips can do up to 125C? 90C is so common for ATI cards; I haven't had one since 2005 that didn't blow my hair dry. Your NV card was just a bad chip, I suppose. Why do you think NV or ATI would spend a billion dollars on research work, then let their product burn away due to some crappy cooling? They won't give you more cooling than you actually need. These are the very same cards that go to places like Abu Dhabi, where room temps easily hit 50C+.
  • soloman02 - Wednesday, June 25, 2008 - link

    Sorry, but no human would survive a temp of 50C.
    http://en.wikipedia.org/wiki/Thermoregulation#Hot
    In fact the highest temp a human has survived was recorded by the Guinness book of world records as: 46.5C (115.7F). Keep in mind that was the internal temp of the guy. The temp on that day was 32.2C (90F).
    http://www.powells.com/biblio?show=0553587129&...
    http://www.time.com/time/magazine/article/0,9171,9...

    If it is 50C in those rooms, the people inside are dead or dying.

    The cards are probably fine. All it takes is to search google to back up your figures (or to disprove them like I just did).
  • shadowteam - Wednesday, June 25, 2008 - link

    You're just a dumb pissed off loser. There's a big difference between internal human temperature and its surroundings. In places like the Sahara, temperatures routinely hit 45C, and max out @ 55C. But does that mean people living there just die? No they don't, because they drink a lot of water, which helps their bodies get rid of excess heat so as to keep their internals at normal temperature (32C). You didn't have this knowledge to share so you decided to Google it instead, and make a fool out of yourself. Here, let me break it down for you,

    You said: "Keep in mind that was the internal temp of the guy"

    Exactly, the guy was sick, and when you're sick, your body temperature rises, in which case 46C is the limit of survival. I suggest you take Bio-chemistry in college to learn more about human body, which is another 4 years before you finish school.
  • Ilmarin - Wednesday, June 25, 2008 - link

    I'm not talking about chips failing altogether... just stability issues, similar to what you experience from over-zealous overclocking. Lots of people have encountered artifacting/crashes with stock-cooled cards over the years. If these are just 'bad chips' that are experiencing stability issues at high temps, then there are a lot of them getting through quality control. Of course NV and ATI do enough to make most people happy... but many of us have good reason to be nervous about temperature. I think they can and should do better. Dual slot exhaust coolers should be mandatory for the enthusiast/performance cards, with full fan control capability. Often it's up to the partners to get that right, and often it doesn't happen for at least a couple of months.
  • shadowteam - Wednesday, June 25, 2008 - link

    I think it's more profitable for board partners to just roll out a stock card rather than go through the trouble of investing time/money into performance cooling. What I've seen thus far, and it's quite apparent, that newer companies tend to go exotic cooling to get themselves heard. Once they're in the game, it's back to stock cooling. For example, Palit and ECS came up with nice coolers for its 9600s. Remember Leadtek from past years? They don't even do custom coolers any more. ASUS, Powercolor, Gigabyte, Sapphire etc just find it easier to throw in a 3rd party cooler from ZM, TT TR, and call it a day.
  • DerekWilson - Wednesday, June 25, 2008 - link

    you know we actually received an updated bios for a certain vendors 4850 that speeds the fan up a bit and should reduce heat ...

    i suspect a lot of vendors will start adjusting their fan tables actually ...
  • shadowteam - Wednesday, June 25, 2008 - link

    I think this reply was meant for the guy right above me. I'm all for stock cooling :).
  • ImmortalZ - Wednesday, June 25, 2008 - link

    "Quake Wars once again shows the 4870 outperforming the GTX 280, but this time it offers essentially the same performance as the GTX 280 - but at half the price. "

    You mean the 260 in the first instance?

    No text in The Witcher page. I assume this is intentional.

    Also, I've heard on the web that the 48xx series has dual-link on only one of its DVI ports. Is this true?

    Oh and another thing - why is the post comment page titled "Untitled Page"? :P
  • rahat5810 - Wednesday, June 25, 2008 - link

    Nice cards and nice article. But I would like to point out that there are some mistakes in the article, nothing fatal though. Like, not mentioning 4870 in the list of cards, writing 280 instead of 260, clicking on the picture to enlarge not working for some of the figures.
  • feelingshorter - Wednesday, June 25, 2008 - link

    AMD almost has a perfect card but the fact that the 4870 idles at 46.1 more watts than the 260 means the card will heat up people's room. At load, the difference of 16.1 watts more for the 4870 is forgivable.

    If it's possible to overclock a card using software (without going into the BIOS screen), then why isn't it possible to underclock a card using software when the card's full potential isn't being used? I'd really be interested in knowing the answer, or maybe someone just hasn't asked the question?

    I hardly care about Crysis, its more a matter of will it run Starcraft II with 600 units on the map without overheating. Why doesn't anandtech also test how hot the 4870 runs? Although the 4850 numbers aren't pretty at all, the 4870 is a dual slot cooler and might give better numbers right? I only want to know because, like a lot of readers, i have doubts as to whether a card like the 4850 can run super hot and not die within 1+ years of hardcore gaming.
