When we first heard about the overclocking potential of the 4890 from AMD, we were a bit skeptical. At the same time, the numbers we were hearing were impressive and AMD doesn't have a history of talking up that sort of thing to us. There have already been some investigations around the web that do point to the 4890 as having some healthy overclocking potential, so we decided to try our hand at it and see what we could come up with.

We are testing review samples, which means that our parts may have more overclocking headroom than off-the-shelf cards, but we can't attest to that at this point. What we do want to explore are the overclocking characteristics of the 4890 and how different adjustments may or may not affect performance. From what we are seeing around the web, many people are getting fairly close to the speeds we tested. Every part is different, but while clock speeds may vary, the general performance you can expect at any given clock speed will not.

So what's so special about this AMD part that we are singling it out for overclocking analysis? Well, the GPU has been massaged to allow for more headroom, some of which hasn't been exploited at stock clock speeds. This is the first time in a long time (or is it ever?) that we are seeing multiple manufacturers bring out overclocked parts based on an AMD GPU at launch. With this as the flagship AMD GPU, we also want to see what kind of potential it has to compete with NVIDIA's top of the line GPU.

But it's more than just the chip. We are also interested in how well the resources on the board are balanced. Core voltages and clock speeds must be selected along with framebuffer size and memory clock, and these considerations must account for targets on power, heat, noise, and price. For high end parts, we see the emphasis on performance over other factors, but there will still be hard limits to work within.

Because of all this, balancing hardware specifications is very important. Memory bandwidth needs to be paired well with core speed in order to maximize performance. An infinitely fast core does us little good if slow memory limits performance, and we aren't well served by ridiculously fast memory if the core can't consume data quickly enough. Using resources appropriately is key, and AMD did a good job balancing resources with the 4890.
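To put a number on the memory side of that balance, here's a minimal sketch of the theoretical bandwidth calculation for a GDDR5 card. The stock figures used (975MHz memory clock, 256-bit bus) are the 4890's published specs; GDDR5 transfers 4 bits per pin per memory clock.

```python
# Sketch: theoretical memory bandwidth for a GDDR5 card.
# GDDR5 is quad-pumped, so effective data rate is 4x the memory clock.

def gddr5_bandwidth_gbps(mem_clock_mhz, bus_width_bits):
    """Theoretical bandwidth in GB/s (1 GB = 1e9 bytes)."""
    transfers_per_sec = mem_clock_mhz * 1e6 * 4  # quad-pumped GDDR5
    return transfers_per_sec * bus_width_bits / 8 / 1e9

# Stock 4890: 975MHz GDDR5 on a 256-bit bus
print(gddr5_bandwidth_gbps(975, 256))  # 124.8 GB/s
```

Pushing the memory clock raises this number linearly, but as our results show, extra bandwidth only helps if the core can actually consume it.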

Rather than just test the semi-official overclock (which is just a 50MHz core clock boost to 900MHz), we decided to test multiple core and memory overclocks (and one core + memory overclock) to better understand the performance characteristics of this beast. As expected, overclocking both core and memory saw the best results followed by only overclocking the core. Just boosting memory speed on its own didn't seem to have a significant impact on performance despite the large overclock that was possible.
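The scaling math behind our percent-increase charts is straightforward. In this sketch, the stock clocks (850MHz core, 975MHz memory) are the 4890's real specs, but the overclocked values are hypothetical example numbers, not our exact test results.

```python
# Illustration of percent clock increase over stock.
# Overclocked values below are examples only.

def pct_increase(stock_mhz, oc_mhz):
    """Percent increase of an overclocked value over stock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100.0

stock = {"core": 850, "memory": 975}
tested = {"core": 900, "memory": 1100}  # hypothetical OC values

for part in stock:
    print(f"{part}: +{pct_increase(stock[part], tested[part]):.1f}%")
```

The same formula applied to frame rates, rather than clocks, gives the performance scaling numbers in the first half of our analysis.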

So why not sell every chip at the "overclocked" speed? Well, it's all about yield. Our guess is that while the changes AMD made were certainly good enough to boost clock speed over the 4870 by a healthy margin, a good number of parts couldn't be pushed up to 900MHz, and AMD really didn't want to sell them as cheaper hardware. We haven't heard that endorsing the idea of overclocked parts is really a policy change for AMD, so it might just be that previous layout, routing, and design choices provided for a narrower range of overclockability around the target clock frequency.

Whatever the reason for it, we now have overclockable hardware from AMD. Our analysis starts with an in-depth look at percent increase in performance, but if all you care about is raw performance data, we've got plenty of that in the second half. And with it comes a surprise in our conclusion we never expected.

Cranking GDDR5 All the Way Up


Comments

  • walp - Thursday, April 30, 2009 - link

    I have a gallery showing how to fit the Accelero S1 to the 4890
    (in Swedish, though):

    Ah, here's the translated version: =)

    You can change the voltage on every 4890 card without BIOS-modding, since they are all the same piece of hardware:

    It's very fortunate that it's so easy, because ASUS SmartDoctor sucks since it doesn't work on my computer anymore.
    (CCCP: Crappy-Christmas-Chinese-Programmers... no pun intended ;)


  • kmmatney - Thursday, April 30, 2009 - link

    Cool - thanks for the guide. I ordered the Accelero S1 yesterday. Nice how you got heatsinks on all the power circuitry.
  • balancedthinking - Wednesday, April 29, 2009 - link

    Nice, Derek is still able to write decent articles. Bad for the somewhat stripped-down 4770 review, but good to see it does not stay that way.
  • DerekWilson - Wednesday, April 29, 2009 - link

    Thanks :-)

    I suppose I just thought the 4770 article was straightforward enough to be stripped down -- that saying the 4770 was the part to buy, with numbers that backed that up, meant I didn't need to dwell on it.

    But I do appreciate all the feedback I've been getting and I'll certainly keep that in mind in the future. Going more in depth, and being more enthusiastic when something is a clear leader, are on my agenda for similar situations in the future.
  • JanO - Wednesday, April 29, 2009 - link

    Hello there,

    I really like the fact that you only present us with one graph at a time and let us choose the resolution we want to see in this article...

    Now if we only could specify what resolution matters to us once and have Anandtech remember so it presents it to us by default every time we come back, now wouldn't that be great?

    Thanks & keep up that great work!
  • greylica - Wednesday, April 29, 2009 - link

    Sorry for AMD, but even with a super powerful card in DirectX, their OpenGL implementation is still bad, and Nvidia rocks in professional applications running on Linux. We saw the truth when we put a Radeon 4870 in front of a GTX 280. The GTX 280 rocks in redraw mode, in interactive rendering, and in OpenGL composition. Nvidia is a clear winner in OpenGL apps. Maybe it's because of the extra transistor count, which allows the hardware to outperform any Radeon in OpenGL, whereas AMD still has driver problems (a bunch of them) in both Linux and Mac.
    But Windows gamers are the market niche AMD cards are targeting...
  • RagingDragon - Wednesday, May 13, 2009 - link

    WTF? Windows gamers aren't a niche market, they're the majority market for high end graphics cards.

    Professional OpenGL users are buying Quadro and FireGL cards, not Geforces and Radeons. Hobbyists and students using professional GL applications on non-certified Geforce and Radeon cards are a tiny niche, and it's doubtful anyone targets that market. Nvidia's advantage in that niche is probably an extension of their advantage in professional GL cards (Quadro vs. FireGL), essentially a side effect of Nvidia putting more money/effort into their professional GL cards than AMD does.
  • ltcommanderdata - Wednesday, April 29, 2009 - link

    I don't think nVidia having a better OpenGL implementation is necessarily true anymore, at least on Mac.


    For example, in Call of Duty 4, the 8800GT performs significantly worse in OS X than in Windows. And you can tell the problem is specific to nVidia's OS X drivers rather than the Mac port since ATI's HD3870 performs similarly whether in OS X or Windows.


    Another example is Core Image GPU acceleration. The HD3870 is still noticeably faster than the 8800GT even with the latest 10.5.6 drivers even though the 8800GT is theoretically more powerful. The situation was even worse when the 8800GT was first introduced with the drivers in 10.5.4 where even the HD2600XT outperformed the 8800GT in Core Image apps.

    Supposedly, nVidia has been doing a lot of work on new Mac drivers coming in 10.5.7, now that nVidia GPUs are standard on the iMac and Mac Pro too. So perhaps the situation will change. But right now, nVidia's OpenGL drivers on OS X aren't all they are made out to be.
  • CrystalBay - Wednesday, April 29, 2009 - link

    I'd like to see some benches of highly clocked 4770's XFired.
  • entrecote - Wednesday, April 29, 2009 - link

    I can't read graphs where multiple GPU solutions are included. Since this article mostly talks about single GPU solutions I actually processed the images and still remember what I just read.

    I have an X58/core i7 system and I looked at the crossfire/SLI support as negative features (cost without benefit).
