Wrapping Up the Architecture and Efficiency Discussion

Engineering is all about tradeoffs and balance. The choice to increase capability in one area may decrease capability in another. The addition of a feature may not be worth the cost of including it. In the worst case, as Intel found with NetBurst, an architecture may be inherently flawed, and starting over down an entirely different path may be the best solution.

We are at a point where there are quite a number of similarities between NVIDIA and AMD hardware. Both require keeping a huge number of threads in flight to hide memory and instruction latency. Both manage threads in large blocks that share context. Caching, coalescing of memory reads and writes, and resource allocation must be carefully managed to keep the execution units fed. Both GT200 and RV770 execute branches via predication: if a thread in a warp or wavefront branches differently from the others, all threads in that group must execute both code paths, with the threads that did not take a given path masked off. Both share instruction and constant caches across hardware that is SIMD in nature, servicing multiple threads in one context, in order to implement hardware that fits the SPMD (single program multiple data) programming model.
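To make the branching behavior concrete, here is a toy Python cost model (our own illustration, not vendor code or real cycle counts) of why divergence within a warp or wavefront hurts: the hardware must run both sides of the branch, masking off inactive threads.

```python
# Toy model of SIMD branch handling: when threads in a warp/wavefront
# diverge at a branch, the hardware runs BOTH paths, masking off the
# threads that did not take each one, so divergent cost = sum of paths.

def branch_cost(taken_mask, cost_if, cost_else):
    """Cycles a thread group spends on an if/else, given which threads take it."""
    any_taken = any(taken_mask)          # at least one thread takes the branch
    any_not_taken = not all(taken_mask)  # at least one thread falls through
    cost = 0
    if any_taken:
        cost += cost_if     # masked execution of the 'if' path
    if any_not_taken:
        cost += cost_else   # masked execution of the 'else' path
    return cost

# Uniform warp: all 32 threads branch the same way -> one path only.
uniform = branch_cost([True] * 32, cost_if=10, cost_else=10)
# Divergent warp: even one stray thread forces both paths.
divergent = branch_cost([True] * 31 + [False], cost_if=10, cost_else=10)
print(uniform, divergent)  # 10 20
```

The same model applies to both architectures; only the group width differs (32-thread warps on GT200, 64-thread wavefronts on RV770), which changes how likely divergence is, not what it costs once it happens.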

But the hearts of GT200 and RV770, the SPA (Streaming Processor Array) and the DPP (Data Parallel Processing) Array, respectively, are quite different. The explicitly scalar, one-operation-per-thread-at-a-time approach NVIDIA has taken is quite different from the 5-wide VLIW approach AMD has packed into its architecture. Both are SIMD in nature, but NVIDIA is more like S(operation)MD while AMD is S(VLIW)MD.
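A small sketch (with hypothetical instruction counts, not real shader code) shows why the per-thread issue rate of the two designs diverges: the scalar approach issues one op per slot regardless of dependencies, while a 5-wide VLIW bundle can only be filled with mutually independent operations.

```python
# Sketch of scalar vs. 5-wide VLIW issue (illustrative numbers only).
# Scalar (GT200-style): one op per thread per issue slot -> N ops, N slots.
# VLIW (RV770-style): the compiler packs up to 5 INDEPENDENT ops per slot;
# dependent ops cannot share a bundle, so utilization tracks extractable ILP.

import math

def scalar_slots(num_ops):
    """Issue slots needed when exactly one op issues per slot."""
    return num_ops

def vliw_slots(independent_groups):
    """Issue slots needed for a 5-wide VLIW machine.
    independent_groups: run lengths of mutually independent operations;
    ops in different groups depend on each other and cannot be co-issued."""
    return sum(math.ceil(g / 5) for g in independent_groups)

# 20 fully independent ops: VLIW packs them into 4 bundles vs. 20 scalar slots.
print(scalar_slots(20), vliw_slots([20]))  # 20 4
# 20 ops in one serial dependency chain (ILP = 1): no packing possible.
print(vliw_slots([1] * 20))                # 20
```

This is the crux of the comparison that follows: RV770's peak depends on the compiler finding those independent groups, while GT200's scalar units don't care.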


AMD's RV770, all built up and pretty

Filling the execution units of each to capacity is a challenge, but doing so looks to be more consistent on NVIDIA hardware, while in cases where AMD hardware is used effectively (like BioShock) we see that RV770 surpasses GTX 280 not only in performance but in power efficiency as well. Area efficiency is completely owned by AMD, which means that its cost per unit of performance delivered is lower than NVIDIA's (in terms of manufacturing -- R&D is a whole other story), since smaller ICs are cheaper to produce.


NVIDIA's GT200, in all its daunting glory

While shader/kernel length isn't as important on GT200 (except that the ratio of FP, and especially multiply-add, operations to other code needs to be high to extract high levels of performance), longer programs are easier for AMD's compiler to extract ILP from. Both RV770 and GT200 must balance thread issue against resource usage, but RV770 can deliver higher performance in situations where ILP can be extracted from shader/kernel code, which could also help in situations where GT200 would not be able to hide latency well.

Based on information found on the CUDA forums and from some of our readers, we believe G80's SPs have roughly a 22-stage pipeline and that GT200 is likely just as deeply pipelined; AMD has told us its pipeline is significantly shorter than this, but wouldn't say how long it actually is. Regardless, a shorter pipeline and the ability to execute one wavefront over multiple scheduling cycles mean massive amounts of TLP aren't needed just to cover instruction latency. Massive amounts of TLP are still needed to cover memory latency, but shader programs with lots of internal compute can also help do this on RV770.
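The relationship between pipeline depth and required TLP can be put on the back of an envelope (all figures here are illustrative assumptions, not vendor specifications): to keep issuing, roughly latency divided by issue interval independent thread groups must be resident.

```python
# Back-of-the-envelope latency hiding (assumed numbers, not vendor specs):
# a new instruction for a thread group cannot issue until its previous one
# clears the pipeline, so roughly latency / issue_interval independent
# groups must be resident to keep the unit busy every cycle.

def groups_to_hide(latency_cycles, issue_interval_cycles):
    """Independent warps/wavefronts needed to cover instruction latency."""
    return -(-latency_cycles // issue_interval_cycles)  # ceiling division

# A deep ~22-stage ALU pipeline, issuing one group every 4 cycles,
# needs about 6 independent groups just to cover instruction latency...
deep = groups_to_hide(22, 4)
# ...while a shorter pipeline needs far fewer, leaving the machine's
# thread capacity free to cover memory latency instead.
shallow = groups_to_hide(8, 4)
print(deep, shallow)  # 6 2
```

The 8-cycle figure for the shorter pipeline is a placeholder, since AMD would not disclose the real number; the point is only that the required TLP scales with depth.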

All of this adds up to the fact that, despite the advent of DX10 and the fact that both of these architectures are very good at executing large numbers of independent threads very quickly, getting the most out of GT200 and RV770 requires vastly different approaches in some cases. Long shaders can benefit RV770 thanks to the increased ILP that can be extracted, while the increased resource use of long shaders may mean fewer threads can be issued on GT200, lowering performance. Going the other direction, of course, has the opposite effect. Caches and resource availability/management differ, meaning tradeoffs and choices must be made in when and how data is fetched and used. Fixed-function resources differ as well, and optimizing the use of things like texture filters, along with the impact of the different setup engines, can have a large (and architecture-dependent) impact on performance.
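The "long shaders may mean fewer threads" tradeoff is just resource arithmetic, which a tiny sketch makes explicit (register file size, per-thread register counts, and the hardware thread cap below are made-up figures for illustration):

```python
# Illustrative occupancy tradeoff (made-up resource figures): longer
# shaders tend to use more registers per thread, and a fixed register
# file then limits how many threads can be resident to hide latency.

def resident_threads(regfile_size, regs_per_thread, hw_max_threads):
    """Threads that fit: capped by register budget or by the hardware limit."""
    return min(regfile_size // regs_per_thread, hw_max_threads)

# Short shader, light register use: thread count capped by hardware.
short = resident_threads(16384, 10, 1024)
# Long shader, heavy register use: far fewer threads can be issued,
# which can starve a latency-hiding design even as per-thread work grows.
long_ = resident_threads(16384, 64, 1024)
print(short, long_)  # 1024 256
```

On RV770 the extra ILP in a long shader can buy back some of that lost latency hiding; on GT200 it largely cannot, which is the asymmetry described above.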

We still haven't gotten to the point where we can write simple shader code that just does what we want it to do and expect it to perform well everywhere. Right now it seems like typical usage models favor GT200, while relative performance can vary wildly on RV770 depending on how well the code fits the hardware. G80 (and thus NVIDIA's architecture) did have a lead in the industry for months before R600 hit the scene, and it wasn't until RV670 that AMD had a real competitor in the marketplace. This could be part of the reason we are seeing fewer titles benefiting from the massive amount of compute available on AMD hardware. But with this launch, AMD has solidified its place in the market (as we will see, the 4800 series offers a lot of value), and it will be very interesting to see what happens going forward.

Comments

  • Amiga500 - Wednesday, June 25, 2008 - link

    Apple has passed over control of OpenCL to the Khronos Group, which manages open-sourced coding.

    To all intents and purposes, it is open source. :-)
  • emergancyexit - Wednesday, June 25, 2008 - link

    I hope you test what 3x CrossFire can do. Maybe a 4x 4850 vs. 3x GTX 260, just to satisfy us readers for the moment, would be lovely!
  • DerekWilson - Wednesday, June 25, 2008 - link

    I'm not sure if this is supported out of the box... I'll have to check it out...
  • emergancyexit - Wednesday, June 25, 2008 - link

    I would really like to know what kind of performance these cards could get in an MMO (and hopefully compare them to some cheaper cards). Games I'm interested in are some of the newer titles like Age of Conan (I hear its graphics are great and are a workout for even an 8800 Ultra) and EVE Online (their new graphics engine works cards pretty hard too).

    MMO graphics usually get pretty intensive, with some 200+ characters flying around shooting fireballs everywhere, missiles sailing through the air, in a land with hundreds of monsters as far as the eye can see. It can get pretty demanding on a gaming computer, just as much as (if not more than) a hit new title.

    For example, on my current rig I can get around 50 FPS steady at 1440x900, but in EVE Online I get 35 at most at peaceful times, and 20 or even 15 in a large fight with FEW graphics options selected.
  • MIP - Wednesday, June 25, 2008 - link

    Great review, the 4870 looks to be fantastic value. However, we're missing the 'heat and noise' part.
  • skiboysteve - Wednesday, June 25, 2008 - link

    Not only do these cards rock, but I wouldn't be surprised if AMD has an ace up its sleeve with the 4870x2... with that crossfire interconnect directly connected to the data hub that you showed on the chart. That and the fact that they have been looking forward to this crossfire strategy of attacking the high end for quite some time so they might have some tricky driver stuff coming with it.

    I have been disappointed with the heat and power consumption of these cards. But:
    1) Someone said PowerPlay is getting a driver tweak, and I can always clock them lower in 2D than 500/1000 (which is insane for 2D).
    2) That hardware site someone linked earlier showed a more than 50% reduction in temperatures with an aftermarket cooler! That's insane!!

    And finally, if I can get #1 and #2 fixed... I want to know how well these babies overclock. If I can get a 4850 running like a 4870 or better... yum. And in that case, how high will a 4870 OC? And I want to know this with a non-stock cooler, because apparently the stock ones suck. With a non-stock cooler, if the 4850 clocks up to 4870 level but the 4870 clocks way up too... I'm gonna have to grab a 4870.

    So yeah, fix #1 and #2 and find me non-stock cooler OC numbers and I'll go buy one (maybe two?) when Nehalem comes out.
  • Powered by AMD - Wednesday, June 25, 2008 - link

    Impressive review, thanks :)
    A few glitches:
    It says "Power Consumption, Heat and Noise", but the graphs only show power consumption.
    On page 17 (The Witcher), in the second paragraph, it says 390X2 instead of 3870.

    Thanks again.
    Cheers from Argentina.
  • Conscript - Wednesday, June 25, 2008 - link

    At least that was the title of the second-to-last page... but I only see two power consumption graphs?
  • Proteusza - Wednesday, June 25, 2008 - link

    I quote one Kristopher Kubricki regarding whether the RV770 is inferior to the GT200:

    "It is. Even AMD isn't going to tell you otherwise. You can debate this all you want, but it's still a $200 video card."

    So, please tell me now why I should pay $650 for a GTX280. I'm struggling to see the logic here.

    Source: http://www.dailytech.com/Update+AMD+Preps+Radeon+4... (near the bottom)
  • AbRASiON - Wednesday, June 25, 2008 - link

    I can live with a greedier card than my 8800GT, but I refuse to put up with a noisy machine.

    Any comments on the heat and noise, please? Would be nice!
