Our final set of tests is a little more on the esoteric side, using a tri-GPU setup with an HD 5970 (dual GPU) and an HD 5870 in tandem.  While these cards are not necessarily the newest, they do provide some interesting results, particularly when memory accesses are being divided across multiple GPUs (or even across two GPUs on the same PCB).  The 5970 GPUs are clocked at 800/1000, with the 5870 at 1000/1250.

Dirt 3: Average FPS

It is pretty clear that memory has an effect here: a 13% gain moving from 1333 C9 to 2133 C9/2400 C10.  In fact, 1333 C9 seems to be more of a sink than anything else; above 2133 MHz the performance benefits are minor at best.  It all depends on whether 186.53 FPS is too low for you and you need 200+.
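As a quick sanity check on those numbers (taking the 186.53 FPS figure quoted above as the 1333 C9 baseline, with the faster figure derived from the ~13% gain rather than read off a chart), the uplift is just (fast − slow) / slow:

```python
# Percentage uplift between two benchmark results.
# 186.53 FPS is the 1333 C9 number quoted in the text; the faster
# figure is illustrative of a ~13% gain, not an exact chart result.
def uplift_pct(slow_fps: float, fast_fps: float) -> float:
    """Return the percentage gain of fast_fps over slow_fps."""
    return (fast_fps - slow_fps) / slow_fps * 100.0

baseline = 186.53            # 1333 C9
faster = baseline * 1.13     # a ~13% uplift lands comfortably over 200 FPS

print(round(faster, 1))                         # 210.8
print(round(uplift_pct(baseline, faster), 1))   # 13.0
```

The same one-liner applies to the minimum-FPS deltas quoted elsewhere on this page.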

Dirt 3: Minimum FPS

We see a similar trend in minimum FPS for Dirt 3: 1333 C9 is a sink, but moving to 2133 C9/2400 C10 gives at least a 20% jump in minimum frame rates.

Bioshock Infinite: Average FPS

While the differences in Bioshock Infinite average FPS are minor at best, 1333 MHz and 1600 C10/C11 are certainly at the lower end.  Anything at 1866 MHz or 2133 MHz seems to be the best bet here, especially if, as in our case, we wanted to push for 120 FPS gaming.

Bioshock Infinite: Minimum FPS

As with Bioshock on the IGP, minimum frame rates are very low across the board, so small absolute differences translate into large percentage gains.

Tomb Raider: Average FPS

Tomb Raider remains resilient to change across our benchmarks, with only a 1 FPS difference between the top and bottom average FPS results in our tri-GPU setup.

Tomb Raider: Minimum FPS

With our tri-GPU setup being a little odd (two GPUs on one PCB), Tomb Raider cannot find much consistency in minimum frame rates, showing up to a 15% spread; the 1600 C10 result in particular sits well below the rest.

Sleeping Dogs: Average FPS

In line with our other results, the 1333 MHz and 1600 MHz settings give lower frame rates, along with the slower 1866 MHz C10/C11 options.  Anything at 2133 MHz and above gives up to 8% more performance than 1333 C9.

Sleeping Dogs: Minimum FPS

Minimum frame rates are a little random in our setup, except for one constant: 1333 MHz memory does not perform well.  Everything beyond that seems to be at the whim of statistical variance.


89 Comments


  • MrSpadge - Thursday, September 26, 2013 - link

    Is your HDD scratching because you're running out of RAM? Then an upgrade is worth it, otherwise not.
  • nevertell - Thursday, September 26, 2013 - link

    Why does going from 2933 to 3000, with the same latencies, automatically make the system run slower on almost all of the benchmarks? Is it because of the ratio between the CPU, base and memory clock frequencies?
  • IanCutress - Thursday, September 26, 2013 - link

    Moving to the 3000 MHz setting doesn't actually move to the 3000 MHz strap - it puts it on 2933 and adds a drop of BCLK, meaning we had to drop the CPU multiplier to keep the final CPU speed (BCLK * multi) constant. At 3000 MHz though, all the subtimings in the XMP profile are set by the SPD. For the other MHz settings we set the primaries, but we left the motherboard on auto for secondary/tertiary timings, which may have resulted in tighter timings below 3000. There are a few instances where the 3000 kit has a 2-3% advantage, a couple where it's at a disadvantage, but the rest are about the same (within some statistical variance).

    Ian
  • mikk - Thursday, September 26, 2013 - link

    What stupid nonsense these iGPU benchmarks are. Under 10 fps, are you serious? Do it with some usable fps, not in a slide show.
  • MrSpadge - Thursday, September 26, 2013 - link

    Well, that's the reality of gaming on these iGPUs in low "HD" resolution. But I actually agree with you: running at 10 fps is just not realistic and hence not worth much.

    The problem I see with these benchmarks is that at maximum detail settings you're putting an emphasis on shaders. By turning details down you'd push more pixels and shift the balance towards needing more bandwidth to achieve just that. And since in any real world situation you'd see >30 fps, you ARE pushing more pixels in these cases.
  • RYF - Saturday, September 28, 2013 - link

    The purpose was to put the iGPU into strain and explore the impacts of having faster memory in improving the performance.

    You seriously have no idea...
  • MrSpadge - Thursday, September 26, 2013 - link

    Your benchmark choices are nice, but I've seen quite a few "real world" applications which benefit far more from high-performance memory:
    - matrix inversion in Matlab (Intel MKL), probably in other languages / libs too
    - crunching Einstein@Home (BOINC) on all 8 threads
    - crunching Einstein@Home on 7 threads and 2 Einstein@Home tasks on the iGPU
    - crunching 5+ POEM@Home (BOINC) tasks on a high end GPU

    It obviously depends on the user how real the "real world" applications are. For me they are far more relevant than my occasional game, which is usually fast enough anyway.
  • MrSpadge - Thursday, September 26, 2013 - link

    Edit: in fact, I have set a maximum of 31 fps in PrecisionX for my nVidia, so that the games don't eat up too much crunching time ;)
  • Oscarcharliezulu - Thursday, September 26, 2013 - link

    Yep, it'd be interesting to understand where extra speed does help, e.g. databases, J2EE servers, CAD, transactional systems of any kind, etc. Otherwise great read and a great story idea, thanks.
  • willis936 - Thursday, September 26, 2013 - link

    SystemCompute - 2D Ex CPU 1600CL10. Nice.
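Ian's BCLK explanation above can be sketched numerically. The clocks below are made-up illustrative values, not the review's actual settings: stretching the 2933 strap to an effective 3000 MHz needs a BCLK bump of a couple of percent, so the CPU multiplier has to come down to keep BCLK × multiplier roughly constant:

```python
# Illustrative sketch of the BCLK adjustment described in the comments.
# All numbers here are hypothetical examples, not the review's settings.
target_memory = 3000.0   # desired effective memory speed, MHz
strap = 2933.0           # nearest memory strap, MHz (at BCLK = 100)

bclk = 100.0 * target_memory / strap   # BCLK needed to stretch the strap
print(round(bclk, 2))                  # 102.28

target_cpu = 4200.0                    # keep the CPU at ~4.2 GHz (hypothetical)
multiplier = round(target_cpu / bclk)  # nearest whole CPU multiplier
print(multiplier)                      # 41
print(round(bclk * multiplier))        # 4194, close to the 4200 MHz target
```

Because the multiplier only moves in whole steps, the final CPU clock lands a few MHz off the target rather than exactly on it.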
