For our single discrete GPU testing, rather than the 7970s that normally adorn my test beds (and which were being used for other testing), I plumped for one of the HD 6950 cards I have.  This ASUS DirectCU II card was purchased pre-flashed to 6970 specifications, giving it a little more oomph.  Discrete GPU setups are not often cited as beneficiaries of faster memory, but we will let the results speak for themselves.

Dirt 3: Average FPS

Dirt 3 commonly benefits from boosts in both CPU and GPU power, showing near-perfect scaling in multi-GPU configurations.  When using our HD 6950, however, there seems to be little difference between memory settings, with no clear trend.

Dirt 3: Minimum FPS

Minimum frame rates show a different story – Dirt 3 seems to prefer setups with a lower CL, while memory frequency does not seem to have any effect.

Bioshock Infinite: Average FPS

Single GPU average frame rates for Bioshock Infinite show no direct response to memory changes, with less than 2% separating our entire range of tests.

Bioshock Infinite: Minimum FPS

One big sink in frame rates occurs at 1333 C7, although given that C8 and C9 do not show this effect, I would presume this is more a statistical outlier than an obvious trend.

Tomb Raider: Average FPS

Again, we see no obvious trend in average frame rates for a discrete GPU.

Tomb Raider: Minimum FPS

While minimum frame rates for Tomb Raider seem to have a peak (1600 C8) and a sink (2400 C12), this looks to be an exception rather than the norm, with minimum frame rates typically showing 35.8 – 36.0 FPS.

Sleeping Dogs: Average FPS

Frame rates for Sleeping Dogs vary between 49.3 FPS and 49.6 FPS, showing no distinct improvement for certain memory timings.

Sleeping Dogs: Minimum FPS

The final discrete GPU test shows a small 5% difference from 1600 C11 to 2400 C11, although other kits perform roughly in the middle.


  • MrSpadge - Thursday, September 26, 2013 - link

    Is your HDD scratching because you're running out of RAM? Then an upgrade is worth it, otherwise not.
  • nevertell - Thursday, September 26, 2013 - link

    Why does going from 2933 to 3000, with the same latencies, automatically make the system run slower on almost all of the benchmarks? Is it because of the ratio between CPU, base and memory clock frequencies?
  • IanCutress - Thursday, September 26, 2013 - link

    Moving to the 3000 MHz setting doesn't actually move to the 3000 MHz strap - it puts it on 2933 and adds a drop of BCLK, meaning we had to drop the CPU multiplier to keep the final CPU speed (BCLK * multi) constant. At 3000 MHz though, all the subtimings in the XMP profile are set by the SPD. For the other MHz settings, we set the primaries, but we left the motherboard system on auto for secondary/tertiary timings, and it may have resulted in tighter timings under 2933. There are a few instances where the 3000 kit has a 2-3% advantage, a couple where it's at a disadvantage, but the rest are roughly the same (within some statistical variance).

    Ian
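    The BCLK/multiplier trade-off Ian describes can be sketched with some simple arithmetic. The numbers below (a 29.33x memory strap and a 4.2 GHz target CPU clock) are illustrative assumptions, not the review's actual settings:

    ```python
    # Sketch of reaching "3000 MHz" memory without a native 3000 strap:
    # keep the 2933 strap and raise BCLK, then lower the CPU multiplier
    # so the final CPU speed (BCLK * multiplier) stays constant.

    def memory_clock(bclk_mhz, strap_ratio):
        """Effective memory speed = BCLK * memory strap ratio."""
        return bclk_mhz * strap_ratio

    def cpu_clock(bclk_mhz, multiplier):
        """Final CPU speed = BCLK * CPU multiplier."""
        return bclk_mhz * multiplier

    STRAP = 29.33                     # 2933 MHz strap at the stock 100 MHz BCLK
    base_mem = memory_clock(100.0, STRAP)   # 2933 MHz

    # "3000 MHz" setting: same strap, BCLK raised by ~2.3%
    bclk = 3000.0 / STRAP             # ~102.3 MHz
    mem = memory_clock(bclk, STRAP)   # ~3000 MHz

    # Hold a hypothetical 4.2 GHz CPU clock constant by dropping
    # the multiplier to compensate for the higher BCLK:
    target_cpu = 4200.0
    multi = target_cpu / bclk         # ~41.1x instead of 42x at 100 MHz BCLK
    ```

    In practice the multiplier is an integer, so the final CPU clock only lands approximately on target, which is another small source of variance between the 2933 and 3000 results.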
  • mikk - Thursday, September 26, 2013 - link

    What stupid nonsense these iGPU benchmarks are. Under 10 fps, are you serious? Do it at some usable fps, not in a slide show.
  • MrSpadge - Thursday, September 26, 2013 - link

    Well, that's the reality of gaming on these iGPUs in low "HD" resolution. But I actually agree with you: running at 10 fps is just not realistic and hence not worth much.

    The problem I see with these benchmarks is that at maximum detail settings you're putting an emphasis on shaders. By turning details down you'd push more pixels and shift the balance towards needing more bandwidth to achieve just that. And since in any real world situation you'd see >30 fps, you ARE pushing more pixels in these cases.
  • RYF - Saturday, September 28, 2013 - link

    The purpose was to put the iGPU into strain and explore the impacts of having faster memory in improving the performance.

    You seriously have no idea...
  • MrSpadge - Thursday, September 26, 2013 - link

    Your benchmark choices are nice, but I've seen quite a few "real world" applications which benefit far more from high-performance memory:
    - matrix inversion in Matlab (Intel MKL), probably in other languages / libs too
    - crunching Einstein@Home (BOINC) on all 8 threads
    - crunching Einstein@Home on 7 threads and 2 Einstein@Home tasks on the iGPU
    - crunching 5+ POEM@Home (BOINC) tasks on a high end GPU

    It obviously depends on the user how real the "real world" applications are. For me they are far more relevant than my occasional game, which is usually fast enough anyway.
  • MrSpadge - Thursday, September 26, 2013 - link

    Edit: in fact, I have set a maximum of 31 fps in PrecisionX for my nVidia, so that the games don't eat up too much crunching time ;)
  • Oscarcharliezulu - Thursday, September 26, 2013 - link

    Yep it'd be interesting to understand where extra speed does help, eg database, j2ee servers, cad, transactional systems of any kind, etc. otherwise great read and a great story idea, thanks.
  • willis936 - Thursday, September 26, 2013 - link

    SystemCompute - 2D Ex CPU 1600CL10. Nice.
