Gaming Benchmarks: Integrated Graphics Overclocked

Given the disappointing results on the Intel HD 530 graphics when the processor was overclocked, the tables were turned and we designed a matrix of both CPU and IGP overclocks to test in our graphics suite. For this we again take the i7-6700K from 4.2 GHz to 4.8 GHz, but then also adjust the integrated graphics from 'Auto' to 1250, 1300, 1350 and 1400 MHz, as per the automatic overclock options found on the ASRock Z170 Extreme7+.

Technically, Auto should default to 1150 MHz in line with what Intel has published as the maximum speed; however, the results on the previous page show that this is more of a see-saw operation when it comes to the power distribution of the processor. With any luck, actually setting the integrated graphics frequency should maintain that frequency throughout the benchmarks. With the CPU overclocks as well, we can see how performance scales with added CPU frequency.
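To make the combinations explicit, here is a minimal sketch of the test matrix in Python. The CPU and IGP frequencies come from the text above, but the 100 MHz CPU steps and the loop structure are assumptions for illustration rather than the actual test harness.

```python
# Minimal sketch of the CPU + IGP overclock matrix described above.
# The 100 MHz CPU steps are assumed for illustration; the endpoints and IGP
# settings are taken from the text.

cpu_clocks_ghz = [4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8]   # i7-6700K CPU overclocks
igp_clocks_mhz = ["Auto", 1250, 1300, 1350, 1400]       # HD 530 settings ('Auto' should be ~1150 MHz)

# Pair every CPU frequency with every IGP setting (7 x 5 = 35 runs of the gaming suite)
test_matrix = [(cpu, igp) for cpu in cpu_clocks_ghz for igp in igp_clocks_mhz]

for cpu, igp in test_matrix:
    print(f"Run gaming suite at CPU {cpu:.1f} GHz, IGP {igp}")
```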

Results for these benchmarks are provided in matrix form, both as absolute numbers and as a percentage deviation from the 4.2 GHz CPU + Auto IGP reference value. Test settings are the same as in the previous set of integrated graphics data.
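For reference, the percentage matrix is simply each cell expressed as a deviation from the 4.2 GHz + Auto cell. A minimal sketch of that calculation, using made-up FPS values rather than the review's data:

```python
# Minimal sketch of how the percentage matrix is derived from the absolute numbers.
# The FPS values are placeholders, not results from the review.

absolute_fps = {
    (4.2, "Auto"): 30.0,   # reference cell: 4.2 GHz CPU + Auto IGP
    (4.2, 1250):   33.0,   # hypothetical gain from fixing the IGP at 1250 MHz
    (4.8, 1400):   37.0,   # hypothetical gain at the top of the matrix
}

reference = absolute_fps[(4.2, "Auto")]

# Each cell becomes a percentage deviation from the 4.2 GHz / Auto reference value
deviation_pct = {key: (fps / reference - 1.0) * 100.0 for key, fps in absolute_fps.items()}

for (cpu, igp), pct in sorted(deviation_pct.items(), key=str):
    print(f"CPU {cpu} GHz, IGP {igp}: {pct:+.1f}% vs reference")
```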

Absolute numbers

Percentage Deviation from 4.2 GHz / Auto

Conclusions on Overclocking the IGP

It becomes pretty clear that by fixing the frequency of the integrated graphics, the detrimental effect of overclocking the processor is for the most part removed, or at least reduced to a degree that falls within standard error. On three of the games, fixing the integrated graphics at 1250 MHz nets a ~10% boost in performance, which for titles like Attila extends to 23% at 1400 MHz. By contrast, GTA V shows only a small gain, indicating that we are perhaps limited in other ways.

Comments

  • bill.rookard - Friday, August 28, 2015 - link

    I wonder if not having the FIVR on-die has to do with the difference between the Haswell voltage limits and the Skylake limits?
  • Communism - Friday, August 28, 2015 - link

    Highly doubtful, as Ivy Bridge has relatively the same voltage limits.
  • Morawka - Saturday, August 29, 2015 - link

    yea thats a crazy high voltage.. that was even high for 65nm i7 920's
  • kuttan - Sunday, August 30, 2015 - link

    i7 920 is 45nm not 65nm
  • Cellar Door - Friday, August 28, 2015 - link

    Ian, so it seems like the memory controller - even though capable of driving DDR4 to some insane frequencies - errors out with large data sets?

    It would be interesting to see this behavior with Skylake and DDR3L.

    Also, it would be interesting to see if the i5-6600K, lacking hyperthreading, would run into the same issues.
  • Communism - Friday, August 28, 2015 - link

    So your sample definitively wasn't stable above 4.5 GHz after all then...

    Haswell/Broadwell/Skylake dud confirmed. Waiting for Skylake-E where the "reverse hyperthreading" will be best leveraged with the 6/8 core variant with proper quad channel memory bandwidth.
  • V900 - Friday, August 28, 2015 - link

    Nope, it was stable above 4.5 Ghz...

    And no dud confirmed in Broadwell/Skylake.

    There is just one specific scenario (4K/60 encoding) where the combination of the software and the design of the processor makes overclocking unfeasible.

    Not really a failure on Intel's part, since it's not realistic to expect them to design a mass-market CPU according to the whims of the 0.5% of their customers who overclock.
  • Gigaplex - Saturday, August 29, 2015 - link

    If you can find a single software load that reliably works at stock settings, but fails at OC, then the OC by definition is not 100% stable. You might not care and are happy to risk using a system configured like that, but I sure as hell wouldn't.
  • Oxford Guy - Saturday, August 29, 2015 - link

    Exactly. Not stable is not stable.
  • HollyDOL - Sunday, August 30, 2015 - link

    I have to agree... While we are not talking about server stable with ECC and things, either you are rock stable for desktop use or not stable at all. Already failing on one of the test scenarios is not good at all. I wouldn't be happy if there were some hidden issues occurring during compilations, or after a few hours of rendering a scene... or, let's be honest, in the middle of a gaming session with my online guild. As such I am running my 2500K half a GHz lower than what stability testing showed as errorless. Maybe that's excessive, but I like to be on the safe side with my OC, especially since the machine is used for a wide variety of purposes.
