Pulling it Back Apart: Performance Interactions

Rather than test every combination of clock speeds and look at scaling as we did in our Radeon HD 4890 overclocking article, we wanted a streamlined way to get a better idea of how combining clock domain overclocks could help. Our solution was to add only one test configuration and use multiple comparison points to get a better idea of the overall impact of changing multiple clocks at a time.

Testing our hardware while overclocking both the core clock and the shader clock gives us four more key comparisons that fill in the gaps between what we've already seen and how the different aspects of the hardware interact with each other. First, and most obviously, we can see how much performance improvement we get beyond stock when overclocking both core and shader clocks.
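
As a purely illustrative sketch of that first comparison (the frame rates below are placeholder numbers, not our measured data), the headline figure is just the ratio of the fully overclocked result to the stock result:

```python
# Hypothetical average frame rates (FPS) for one game at one resolution.
# Placeholder values for illustration only, not measured results.
fps_stock = 52.0           # core, shader, and memory at stock
fps_core_shader_oc = 57.5  # core and shader overclocked, memory at stock

# Improvement of the combined core + shader overclock over stock
gain = (fps_core_shader_oc / fps_stock - 1.0) * 100.0
print(f"Core + shader overclock vs. stock: +{gain:.1f}%")
```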




[Graph: performance improvement of the core + shader overclock over stock, at 1680x1050, 1920x1200, and 2560x1600]


We see a larger performance improvement from overclocking both of these at the same time than we do from overclocking either one alone. And we can break this down into two components in order to answer two different questions: how much faster does overclocking the shaders make the GPU when the core is already overclocked, and how much faster does overclocking the core make the GPU when the shaders are already overclocked? These two graphs are very closely related, but they can further help in deciding how to balance your overclock on NVIDIA hardware.
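
Those two derived comparisons fall out of the same four measurements. Here is a minimal sketch of the arithmetic, again with hypothetical frame rates standing in for real data:

```python
# Hypothetical FPS for the four test configurations at one resolution.
# Placeholder numbers for illustration only.
fps = {
    "stock":          52.0,  # everything at stock
    "core_oc":        55.0,  # core overclocked, shaders at stock
    "shader_oc":      54.0,  # shaders overclocked, core at stock
    "core_shader_oc": 57.5,  # both core and shaders overclocked
}

def pct_gain(after: float, before: float) -> float:
    """Percentage improvement of one configuration over another."""
    return (after / before - 1.0) * 100.0

# How much does the shader overclock add once the core is already overclocked?
shader_gain_given_core = pct_gain(fps["core_shader_oc"], fps["core_oc"])

# How much does the core overclock add once the shaders are already overclocked?
core_gain_given_shader = pct_gain(fps["core_shader_oc"], fps["shader_oc"])

print(f"Shader OC on top of core OC: +{shader_gain_given_core:.1f}%")
print(f"Core OC on top of shader OC: +{core_gain_given_shader:.1f}%")
```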




[Graph: additional improvement from the shader overclock with the core already overclocked, at 1680x1050, 1920x1200, and 2560x1600]





[Graph: additional improvement from the core overclock with the shaders already overclocked, at 1680x1050, 1920x1200, and 2560x1600]


If we look back and compare the additional performance improvement from increasing either the core or shader clock while the other is at its maximum, we can get a good idea of how scaling translates in a different landscape. In fact, we see that increasing the shader clock generally has a larger impact when the core is already overclocked than when the core is at stock speed. This could indicate that an increased core clock alleviates a bottleneck on the shader hardware, giving it more breathing room.

We see a similar relationship between core scaling with everything else stock and core scaling with overclocked shaders. This could indicate a reciprocal relationship between the core and shader clocks, meaning that users may typically get a larger benefit from overclocking both at the same time rather than pushing one higher at the expense of the other.

The last question we want to answer with this test is about memory. We saw that overclocking the GTX 275's RAM didn't return much on our investment. But would memory speed have a larger impact on performance when both the core and shader clocks are overclocked? If the bottleneck for performance scaling with memory overclocking is how fast the GPU can consume data, then we might see a better performance improvement from memory overclocking when the core and shaders are running faster.
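
Checking that hypothesis is the same kind of delta as before: compare the benefit of the memory overclock with the rest of the chip at stock against its benefit with core and shaders already at their maximum overclocks. A minimal sketch, again using hypothetical frame rates rather than our measured data:

```python
# Hypothetical FPS at one resolution; placeholder values for illustration only.
fps = {
    "stock":          52.0,  # everything at stock
    "mem_oc":         52.8,  # memory overclocked, core/shaders at stock
    "core_shader_oc": 57.5,  # core and shaders overclocked, memory at stock
    "all_oc":         59.0,  # core, shaders, and memory all overclocked
}

def pct_gain(after: float, before: float) -> float:
    return (after / before - 1.0) * 100.0

# Benefit of the memory overclock with the GPU otherwise at stock
mem_gain_stock_gpu = pct_gain(fps["mem_oc"], fps["stock"])

# Benefit of the memory overclock once core and shaders are already overclocked
mem_gain_oc_gpu = pct_gain(fps["all_oc"], fps["core_shader_oc"])

# If memory overclocking matters more when the GPU can consume data faster,
# the second number should come out noticeably larger than the first.
print(f"Memory OC benefit, stock GPU clocks:       +{mem_gain_stock_gpu:.1f}%")
print(f"Memory OC benefit, overclocked GPU clocks: +{mem_gain_oc_gpu:.1f}%")
```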




[Graph: improvement from the memory overclock with core and shaders overclocked, at 1680x1050, 1920x1200, and 2560x1600]


These results are certainly interesting, showing, in general, less benefit from the memory overclock at 2560x1600 when the GPU is overclocked. We also see less improvement at the lowest resolution, where memory performance isn't as large an issue in the first place (it seems to become even less important). But at 1920x1200, overclocking memory has a higher impact when the GPU is fully overclocked. So at lower resolutions, where memory speed isn't as important anyway, the GPU overclock provides the prevailing benefit to overall speed. This makes sense. So does the increased benefit at 1920x1200. But the fact that the performance improvement we can attribute to faster memory at 2560x1600 is lower with faster core and shader clocks is a bit of an enigma.

While we can get a better feel for the effects of tweaking different aspects of the chip through these glimpses into scaling, it's still not possible from this data to definitively pin down the interactions between core, shader and memory clock speed. The benefit to different games is dependent on their demand for resources, and there's no real formula for knowing what you will get out.

But the thing to take away is that overclocking the GTX 275 should be done with balance between the three clocks in mind. No single aspect is a magic bullet, and NVIDIA has balanced things pretty well already. Maintaining the balance is the key to extracting good performance improvement when overclocking the GTX 275.

That sums up our analysis of overclocking the GTX 275. The following pages are our raw data for those more interested in direct/absolute comparisons.

43 Comments

  • Shadowmage - Thursday, June 4, 2009 - link

    Evenly matched? The 4890 OCed beats the GTX275 OCed in almost all benchmarks and wins considerably in every game at the resolution that I play at: 1680x1050. It also uses substantially less power and costs less than $200 (e.g. ewiz deal at $160, newegg deal at $180), whereas the GTX275 still costs upwards of $220.
  • walp - Thursday, June 4, 2009 - link

    I just wanted to be polite. :

    4890 @ 1\1.2 is a really nice overclock. They do mention that the GTX275 didn't overclock that well.
    So I prefer (to be aside of the fanboyism-spectra) to call them evenly matched when talking about performance.

    Good for you that 4890 is so cheap over there, here they cost about the same as the GTX275. (280$) :/

    Power draw from an electrical cost point of view is unimportant for me, since I have free electricity. (Long live the Swedish King! lol..) ;)

    But it is better from a heat point of view to have less power draw of course, yeah, so 4890 is (again: slightly) better than GTX275 at load. It's the other way around for idle though. (I would sincerely call this evenly matched in power draw).

    I have no clue whatsoever how they compare when it comes to noise, but 4890 is really loud at load, that's for sure. ('But not anymore its not!')

    \walp
  • Carfax - Thursday, June 4, 2009 - link

    Except that the GTX 275 OC had a very moderate overclock compared to the greater overclock on the HD 4890.

    I don't see how Anandtech only got 700mhz out of the core.
  • li3k - Thursday, June 4, 2009 - link

    well...

    A cursory search on Google yielded the highest core overclocks obtainable on GTX 275 boards to be between 700 and 745MHz. If you can show us otherwise, please do.

    As for myself, and other hardware enthusiasts I'd imagine, the maximum potential of a card comes from its maximum overclocked performance. The fact that the gtx 275 had a "moderate" maximum overclock compared to the 4890 should not come at the cost of the 4890 in a potential comparison.

    I stand by my point.
  • Carfax - Thursday, June 4, 2009 - link

    I just googled "GTX 275 overclock" and the first article that pops up is from Guru3d which shows the GTX 275 overclocking to 743mhz.

    Tweaktown did another one and got 715mhz, but they had no clue what they were doing and left the shaders linked.

    Anyway, the point is though, if you're going to do an article on overclocking the GTX 275, why bother with a card that has such poor overclocking capability?

    Anandtech's HD 4890 OC article specifically used an HD 4890 that was capable of hitting 1ghz on the core, because not all HD 4890s are capable of attaining such a high core speed.

    Why couldn't they do the same for the GTX 275?

    This article is B.S.
  • SiliconDoc - Saturday, June 6, 2009 - link

    LOL
    You can buy a GTX275 retail at 713 core - and they got theirs all the way up to 703 here ! roflmao
    Worse yet they use their 4890 numbers from their specially delivered non retail "golden ATI secret channel" - as Derek the red rooster says here in their 4890 oc extrava article ! - LOL
    " We absolutely must caution our readers once again that these are not off-the-shelf retail parts. These are parts sent directly to us from manufacturers and could very likely have a higher overclocking potential than retail parts. "
    http://www.anandtech.com/video/showdoc.aspx?i=3555...
    ----
    SO THE BIAS IS BLEEDING OUT LIKE A BIG FAT STUCK PIG... IF YOU HAVE ANY SENSE WHATSOEVER THAT IS...
    ---
    a red rooster fanboy like Derek and all his little raging red roosters here love it.
  • li3k - Thursday, June 4, 2009 - link

    I have to agree...

    vote me down if you like, but the way this article is worded just reinforces the commonly held assumption that anandtech is biased towards intel/nvidia.

  • DerekWilson - Thursday, June 4, 2009 - link

    thanks for the feedback ... but I'm not touching overclocked SLI and CF ... ugh!

    I didn't include the 4890 1/1.2 in idle power because it is redundant: the overclock doesn't affect idle power, and it came in at the same idle power as the other two. I wanted to save on graph space where I could because there was so much data -- plus we already covered that in the 4890 overclocking article. Sorry if I dropped too much out.


  • walp - Thursday, June 4, 2009 - link

    Hmm, mkay.
    Was just confused by the fact that the slightly overclocked 4890 wanted less juice than the original version in idle.
    Maybe due to better VRM\mosfet underclocking or whatever. :)

    At least do GTX275 SLI vs. 4890 CF, (and while doing that, just overclock them slightly, plz ;)
    I have my finger on the 'ordering-another-4890-button', but won't buy another until anandtech.com reviews 4890 CF!

    \walp
  • SiliconDoc - Monday, June 22, 2009 - link

    Yes the gtx 275 wins even in overclocking... i wonder what went wrong with dereks tests...( no i don't !)
    ...
    http://www.techspot.com/review/164-radeon-4890-vs-...
