Cranking GDDR5 All the Way Up

The first stop on our overclocking tour is the memory subsystem. Increasing the memory clock frequency reduces latency slightly and increases bandwidth significantly. The stock clock speed is 975MHz with 1ns devices (meaning they are rated for 1GHz operation). AMD mentioned that signaling and interference (caused by the graphics hardware) are a bigger problem with 1GHz GDDR5 than the memory itself actually running at that speed, which is why they went with a clock 25MHz lower.

Even at the 975MHz default clock speed, we already have a data rate of 3.9GHz, which is pretty intense. Playing with ATI's built-in overclocking tool (Overdrive), we were able to achieve stable performance at the maximum clock speed the driver allowed: 1200MHz. Doing the math gives us a massive 4.8GHz data rate. With a 256-bit wide bus, that works out to almost 154 GB/s of bandwidth. This is more memory bandwidth than the NVIDIA GeForce GTX 280 and just a little less than the GTX 285 (both of which use GDDR3, but on 512-bit buses).
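
Just to make the arithmetic explicit, here is a quick sketch of the bandwidth math in Python. GDDR5 moves four data words per memory clock, so the effective data rate is 4x the clock; multiply by the bus width in bytes to get bandwidth. The function name and structure are our own illustration, not anything from ATI's tools.

```python
# Back-of-the-envelope GDDR5 bandwidth math (illustrative sketch).
# GDDR5 transfers four data words per memory clock cycle, so the
# effective data rate is 4x the clock frequency.

def gddr5_bandwidth(mem_clock_mhz: float, bus_width_bits: int):
    """Return (effective data rate in GHz, bandwidth in GB/s)."""
    data_rate_ghz = mem_clock_mhz * 4 / 1000            # quad data rate
    bandwidth_gbs = data_rate_ghz * bus_width_bits / 8  # bits -> bytes
    return data_rate_ghz, bandwidth_gbs

for clock in (975, 1200):  # stock and overclocked memory speeds
    rate, bw = gddr5_bandwidth(clock, 256)
    print(f"{clock}MHz -> {rate:.1f}GHz data rate, {bw:.1f} GB/s")

# Output:
# 975MHz -> 3.9GHz data rate, 124.8 GB/s
# 1200MHz -> 4.8GHz data rate, 153.6 GB/s
```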

So armed with 1.2GHz GDDR5, what can the 850MHz core of the Radeon HD 4890 accomplish? Let's take a look at the percent increase in performance per game when increasing only the memory clock.

[Interactive chart: percent increase in performance per game from the memory overclock, viewable at 1680x1050, 1920x1200, and 2560x1600]

Apparently not that much more, even at 2560x1600.

Because our tests are not 100% deterministic, there is some variability in our results. Generally it is very low, though it does vary from game to game and benchmark to benchmark. We have a hard time calling anything less than a 3% difference significant, as it could be due to fluctuations in the tests. These numbers may indicate some positive change in performance, but not one that matters. At 2560x1600, only Call of Duty showed a performance improvement that mattered. And this is from a 225MHz overclock (about a 23.1% increase in clock speed), which is substantial.
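
To make that reasoning concrete, here is a small sketch of the significance check we apply. The ~3% noise floor and the clock speeds come from the article; the frame rate pair below is purely hypothetical, just to show the comparison.

```python
# Illustrative significance check (the FPS numbers are hypothetical).
NOISE_FLOOR = 0.03  # deltas under ~3% may be run-to-run variance

def percent_increase(new: float, old: float) -> float:
    return (new - old) / old

clock_gain = percent_increase(1200, 975)
print(f"Memory overclock: {clock_gain:.1%}")  # -> 23.1%

fps_gain = percent_increase(61.2, 60.0)  # hypothetical before/after FPS
verdict = "significant" if fps_gain >= NOISE_FLOOR else "within noise"
print(f"Observed gain: {fps_gain:.1%} -> {verdict}")  # -> 2.0% -> within noise
```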

There really isn't a huge need to delve into the raw numbers here, as they are just not that different. We'll hold off on that until it matters. Next up, we're going to look at increasing only the core clock speed.

Comments

  • walp - Thursday, April 30, 2009 - link

    I have a gallery showing how to fit the Accelero S1 to the 4890
    (in Swedish though):
    http://www.sweclockers.com/album/?id=3916

    Ah, here's the translated version: =)
    http://translate.google.se/translate?js=n&prev...

    You can change the voltage on every 4890 card without BIOS modding, since they are all the same piece of hardware:
    http://vr-zone.com/articles/increasing-voltages--e...

    It's fortunate that it's so easy, because ASUS SmartDoctor sucks ass since it doesn't work on my computer anymore.
    (CCCP: Crappy-Christmas-Chinese-Programmers... no pun intended ;)

    \walp


  • kmmatney - Thursday, April 30, 2009 - link

    Cool - thanks for the guide. I ordered the Accelero S1 yesterday. Nice how you got heatsinks on all the power circuitry.
  • balancedthinking - Wednesday, April 29, 2009 - link

    Nice, Derek is still able to write decent articles. Too bad about the somewhat stripped-down 4770 review, but good to see it doesn't stay that way.
  • DerekWilson - Wednesday, April 29, 2009 - link

    Thanks :-)

    I suppose I just thought the 4770 article was straightforward enough to be stripped down -- that I said the 4770 was the part to buy, and that the numbers backed that up well enough that I didn't need to dwell on it.

    But I do appreciate all the feedback I've been getting, and I'll certainly keep that in mind in the future. Going more in depth, and being more enthusiastic when something is a clear leader, are on my agenda for similar situations in the future.
  • JanO - Wednesday, April 29, 2009 - link

    Hello there,

    I really like the fact that you only present us with one graph at a time and let us choose the resolution we want to see in this article...

    Now if we could only specify once what resolution matters to us and have AnandTech remember it, so it's presented by default every time we come back, wouldn't that be great?

    Thanks & keep up that great work!
  • greylica - Wednesday, April 29, 2009 - link

    Sorry for AMD, but even with a super powerful card in DirectX, their OpenGL implementation is still bad, and Nvidia rocks in professional applications running on Linux. We saw the truth when we put a Radeon 4870 in front of a GTX 280. The GTX 280 rocks in redraw mode, in interactive rendering, and in OpenGL composition. Nvidia is a clear winner in OpenGL apps. Maybe it's because of the extra transistor count that allows the hardware to outperform any Radeon in OpenGL, whereas AMD still has driver problems (a bunch of them) on both Linux and Mac.
    But Windows gamers are the market niche AMD cards are targeting...
  • RagingDragon - Wednesday, May 13, 2009 - link

    WTF? Windows gamers aren't a niche market, they're the majority market for high end graphics cards.

    Professional OpenGL users are buying Quadro and FireGL cards, not GeForces and Radeons. Hobbyists and students using professional GL applications on non-certified GeForce and Radeon cards are a tiny niche, and it's doubtful anyone targets that market. Nvidia's advantage in that niche is probably an extension of their advantage in professional GL cards (Quadro vs. FireGL), essentially a side effect of Nvidia putting more money/effort into their professional GL cards than AMD does.
  • ltcommanderdata - Wednesday, April 29, 2009 - link

    I don't think nVidia having a better OpenGL implementation is necessarily true anymore, at least on the Mac.

    http://www.barefeats.com/harper22.html

    For example, in Call of Duty 4, the 8800GT performs significantly worse in OS X than in Windows. And you can tell the problem is specific to nVidia's OS X drivers rather than the Mac port since ATI's HD3870 performs similarly whether in OS X or Windows.

    http://www.barefeats.com/harper21.html

    Another example is Core Image GPU acceleration. The HD3870 is still noticeably faster than the 8800GT even with the latest 10.5.6 drivers, even though the 8800GT is theoretically more powerful. The situation was even worse when the 8800GT was first introduced with the drivers in 10.5.4, where even the HD2600XT outperformed the 8800GT in Core Image apps.

    Supposedly, nVidia has been doing a lot of work on new Mac drivers coming in 10.5.7, now that nVidia GPUs are standard on the iMac and Mac Pro too. So perhaps the situation will change. But right now, nVidia's OpenGL drivers on OS X aren't all they're made out to be.
  • CrystalBay - Wednesday, April 29, 2009 - link

    I'd like to see some benches of highly clocked 4770s in CrossFire.
  • entrecote - Wednesday, April 29, 2009 - link

    I can't read graphs where multiple GPU solutions are included. Since this article mostly talks about single GPU solutions, I actually processed the images and still remember what I just read.

    I have an X58/core i7 system and I looked at the crossfire/SLI support as negative features (cost without benefit).
