Final Words

All but two.

That's how many benchmarks our 1GHz/1.2GHz (core/mem) Radeon HD 4890 led the stock NVIDIA GeForce GTX 285 in. That's nothing to sneeze at. It certainly doesn't mean the 4890 is faster or better than the GTX 285, especially since the GTX 285 can be overclocked as well to improve performance. What it does mean is that, for about $100 less, we have the potential to match the stock performance of NVIDIA's flagship single-GPU part with a highly overclocked AMD GPU. From an end-user value perspective, that extra $100 buys a guarantee of at least GTX 285 performance, along with whatever overclocking headroom the higher end part has of its own. There is still reason to buy the GTX 285 if you need even more power. But this is quite intriguing from an architectural perspective.

These tests show the potential for a 959-million-transistor AMD GPU to consistently outperform a 1.4-billion-transistor NVIDIA GPU in the same power envelope, at the same 55nm process node, and with similar memory bandwidth.

Yields and business realities being what they are, it doesn't make sense for AMD to ship a part at the extreme clock speeds we tested. But from an engineering standpoint, even with its smaller-die, less-is-more, multi-GPU-at-the-top-end strategy, AMD has built a part that can (when overclocked) best the stock performance of top-of-the-line NVIDIA hardware designed to pack as much power into a single GPU as possible.

And that seems pretty significant.

At the same time, while we don't yet have any solid standardized OpenCL tests to run, limited applications like Folding@home suggest that NVIDIA's approach may be better suited to GPU computing and to more general purpose or flexible applications beyond gaming. We can't really confirm this theory yet, as there isn't a wide enough range of GPU computing applications, but it may be that NVIDIA has been pushing CUDA so hard because they know it to be an advantage, not just in terms of software support and a feature checkbox, but in terms of a fundamental performance or architectural edge for these algorithms. The architectural path NVIDIA has chosen may well prove useful when DX11 hits and we see a further push away from DX9 toward really deep programmability and flexibility. Only time will tell on that front, though.

In the meantime, NVIDIA's margins are much tighter on their larger GPUs and now their single GPU performance advantage has started to erode. It seems the wonders of the RV7xx series have yet to exhaust themselves. Competition is indeed a wonderful thing, and we can't wait to see what comes out of the upcoming DX11 hardware battle.

For now, at resolutions below 2560x1600, the Radeon HD 4890 has the advantage. At 2560x1600, the lines become a little more blurry. For stock hardware the GTX 285 is still the fastest thing around in most cases. But if you want to take your chances with overclocking, 30" gaming on a single AMD GPU just got a lot more potentially attractive.


  • walp - Thursday, April 30, 2009 - link

    I have a gallery showing how to fit the Accelero S1 to the 4890
    (in Swedish though):
    http://www.sweclockers.com/album/?id=3916

    Ah, here's the translated version: =)
    http://translate.google.se/translate?js=n&prev...

    You can change the voltage on every 4890 card without BIOS modding, since they are all the same piece of hardware:
    http://vr-zone.com/articles/increasing-voltages--e...

    It's fortunate that it's so easy, because ASUS SmartDoctor sucks ass; it doesn't work on my computer anymore.
    (CCCP:Crappy-Christmas-Chinese-Programmers...no pun intended ;)

    \walp


  • kmmatney - Thursday, April 30, 2009 - link

    Cool - thanks for the guide. I ordered the Accelero S1 yesterday. Nice how you got heatsinks on all the power circuitry.
  • balancedthinking - Wednesday, April 29, 2009 - link

    Nice, Derek is still able to write decent articles. Too bad about the somewhat stripped-down 4770 review, but good to see it didn't stay that way.
  • DerekWilson - Wednesday, April 29, 2009 - link

    Thanks :-)

    I suppose I just thought the 4770 article was straightforward enough to be stripped down -- I said the 4770 was the part to buy, and the numbers backed that up well enough that I didn't need to dwell on it.

    But I do appreciate all the feedback I've been getting and I'll certainly keep it in mind. Being more in-depth, and more enthusiastic when something is a clear leader, is on my agenda for similar situations in the future.
  • JanO - Wednesday, April 29, 2009 - link

    Hello there,

    I really like the fact that you only present us with one graph at a time and let us choose the resolution we want to see in this article...

    Now, if we could only specify once which resolution matters to us and have AnandTech remember it and present it by default every time we come back, wouldn't that be great?

    Thanks & keep up that great work!
  • greylica - Wednesday, April 29, 2009 - link

    Sorry for AMD, but even with a super powerful card in DirectX, their OpenGL implementation is still bad, and NVIDIA rocks in professional applications running on Linux. We saw the truth when we put a Radeon 4870 up against a GTX 280. The GTX 280 rocks in redraw mode, in interactive rendering, and in OpenGL composition. NVIDIA is a clear winner in OpenGL apps. Maybe it's the extra transistor count that allows the hardware to outperform any Radeon in OpenGL, whereas AMD still has driver problems (bunches of them) on both Linux and Mac.
    But Windows gamers are the market niche AMD cards are targeting...
  • RagingDragon - Wednesday, May 13, 2009 - link

    WTF? Windows gamers aren't a niche market, they're the majority market for high end graphics cards.

    Professional OpenGL users are buying Quadro and FireGL cards, not GeForces and Radeons. Hobbyists and students using professional GL applications on non-certified GeForce and Radeon cards are a tiny niche, and it's doubtful anyone targets that market. Nvidia's advantage in that niche is probably an extension of their advantage in professional GL cards (Quadro vs. FireGL), essentially a side effect of Nvidia putting more money and effort into their professional GL cards than AMD does.
  • ltcommanderdata - Wednesday, April 29, 2009 - link

    I don't think nVidia having a better OpenGL implementation is necessarily true anymore, at least on the Mac.

    http://www.barefeats.com/harper22.html

    For example, in Call of Duty 4, the 8800GT performs significantly worse in OS X than in Windows. And you can tell the problem is specific to nVidia's OS X drivers rather than the Mac port since ATI's HD3870 performs similarly whether in OS X or Windows.

    http://www.barefeats.com/harper21.html

    Another example is Core Image GPU acceleration. The HD3870 is still noticeably faster than the 8800GT even with the latest 10.5.6 drivers even though the 8800GT is theoretically more powerful. The situation was even worse when the 8800GT was first introduced with the drivers in 10.5.4 where even the HD2600XT outperformed the 8800GT in Core Image apps.

    Supposedly, nVidia has been doing a lot of work on new Mac drivers coming in 10.5.7, now that nVidia GPUs are standard on the iMac and Mac Pro too. So perhaps the situation will change. But right now, nVidia's OpenGL drivers on OS X aren't all they're made out to be.
  • CrystalBay - Wednesday, April 29, 2009 - link

    I'd like to see some benches of highly clocked 4770s in CrossFire.
  • entrecote - Wednesday, April 29, 2009 - link

    I can't read graphs where multiple GPU solutions are included. Since this article mostly talks about single GPU solutions I actually processed the images and still remember what I just read.

    I have an X58/core i7 system and I looked at the crossfire/SLI support as negative features (cost without benefit).
