Conclusions

The how, what and why questions that surround overclocking often produce answers that either confuse or dazzle, depending on the mindset of the listener. At its heart, the practice originated in trying to get extra performance for nothing: buying a cheaper, low-end processor and changing a few settings (or an onboard timing crystal) would yield the same performance as a more expensive model. With single core systems, the speed increase was immediate. Dual core platforms showed a noticeable difference as well, and overclocking could match the performance of a high end component. This was particularly noticeable in games that hit CPU bottlenecks due to single/dual core designs. In recent years, however, this has changed.

Intel sells mainstream processors in both dual and quad core flavors, each with subsets that enable hyperthreading and other distinctions. This affords five platforms – Celeron, Pentium, i3, i5 and i7, going from weakest to strongest. Overclocking is now solely reserved for the most extreme i5 and i7 processors. Overclocking in this sense now means pushing the highest performance parts even further, and there is no route from low end to high end – extra money has to be spent in order to get there.

As an aside, in 2014, Intel released the Pentium G3258, an overclockable dual core processor without hyperthreading. When we tested it, it overclocked to a nice high frequency and performed in single threaded workloads as expected. However, a dual core processor is not a quad core, and even with a +50% increase in frequency, it cannot overcome the +100% or +200% advantage in threads held by the high end i5 and i7 processors. With software and games now taking advantage of multiple cores, having too few cores is the bottleneck, not frequency. Unfortunately you cannot graft on extra silicon as easily as pressing a few buttons.
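The frequency-versus-cores argument can be sketched with a toy throughput model. This is purely illustrative, assuming a perfectly parallel workload where throughput scales with cores times frequency; the clock speeds below are hypothetical round numbers, not measured results from this review.

```python
# Toy model (illustrative only): ideal throughput for a perfectly
# parallel workload scales with cores * frequency.

def throughput(cores: int, ghz: float) -> float:
    """Idealized relative throughput: cores multiplied by clock speed."""
    return cores * ghz

# Hypothetical figures: a dual core overclocked by ~+50% vs a stock quad core.
pentium_oc = throughput(2, 4.5)  # 2 cores at 4.5 GHz -> 9.0
i5_stock = throughput(4, 3.5)    # 4 cores at 3.5 GHz -> 14.0

print(pentium_oc, i5_stock)
```

Even with the large frequency advantage, the dual core's ideal throughput (9.0) falls well short of the stock quad core (14.0) once a workload scales across all cores.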

One potential avenue is to launch an overclockable i3 processor, using a dual core with hyperthreading, which might perform on par with an i5 despite relying on hyperthreads rather than physical cores. But if it performed well, it might draw sales away from the high end overclocking processors, and since Intel faces no competition in this space, I doubt we will see it any time soon.

But what does overclocking the highest performing processor actually achieve? Our results, including all the ones in Bench not specifically listed in this piece, show improvements across the board in all our processor tests.

Here we see three distinct categories of results. The move of +200 MHz works out to roughly a 5% jump; in our CPU tests each step up is nearer 4%, and slightly less in our Linux Bench. In both, certain benchmarks brought the average down due to other bottlenecks in the system: Photoscan Stage 2 (the complex multithreaded stage) was variable, and in Linux Bench both NPB and Redis-1 gave results that were more DRAM limited. Remove these and the results get closer to the true percentage gain.
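The arithmetic behind the "+200 MHz is roughly 5%" figure can be checked quickly. The 4.2 GHz starting point below is an assumption for illustration (a plausible base for an i7-6700K overclocking ladder), not a value quoted from the test setup.

```python
# Relative gain of each +200 MHz step, assuming steps from 4.2 GHz upward.
# The starting frequency is an illustrative assumption.

steps_ghz = [4.2, 4.4, 4.6, 4.8]

for lo, hi in zip(steps_ghz, steps_ghz[1:]):
    gain_pct = (hi - lo) / lo * 100
    print(f"{lo} -> {hi} GHz: +{gain_pct:.1f}%")
```

Each step yields a gain of between roughly 4% and 5%, and the percentage shrinks as the base frequency rises, which matches the observation that measured scaling sits nearer 4% than 5%.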

Meanwhile, all of our i7-6700K overclocked testing is now also available in Bench, allowing direct comparison to other processors. Other CPUs, when overclocked, will be updated in due course.

Moving on to our discrete testing with a GTX 980, increased frequency had little impact on our series of games at 1080p, or even on Shadow of Mordor at 4K. Some might argue that this is to be expected, because at high settings the onus falls more on the graphics card – but ultimately, with a GTX 980 you would be running at 1080p or better at maximum settings where possible.

Finally, the integrated graphics results are a significantly different ball game. When we left the IGP at default frequencies and just overclocked the processor, average frame rates declined despite the higher CPU frequency, which is perhaps counterintuitive. The explanation lies in power delivery budgets: when overclocked, the majority of the power is pushed through to the CPU and items are processed more quickly. This leaves less of the silicon's power budget for the integrated graphics, either forcing lower IGP frequencies to maintain the status quo, or causing the increased graphical traffic over the DRAM-to-CPU bus to hit a memory latency bottleneck. Think of it like a see-saw: push harder on the CPU side and the IGP side drops. Normally this would be mitigated by raising the power limit on the processor as a whole in the BIOS; however, in this case that had no effect.
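The see-saw can be pictured with a toy shared-budget model. This is not Intel's actual power management algorithm, and the wattage figures are hypothetical; it only illustrates how a fixed package limit means more CPU power leaves less for the IGP.

```python
# Toy shared-power-budget model (illustrative, not Intel's algorithm):
# the CPU cores and the IGP draw from one package power limit, so
# raising CPU power squeezes what is left for the graphics.

PACKAGE_LIMIT_W = 91.0  # Skylake-K rated TDP; real turbo limits differ

def igp_budget(cpu_power_w: float, package_limit_w: float = PACKAGE_LIMIT_W) -> float:
    """Power left over for the integrated graphics, floored at zero."""
    return max(0.0, package_limit_w - cpu_power_w)

# Hypothetical CPU draws: stock -> mild overclock -> heavy overclock.
for cpu_w in (55.0, 70.0, 85.0):
    print(f"CPU {cpu_w:.0f} W -> IGP budget {igp_budget(cpu_w):.0f} W")
```

As the hypothetical CPU draw rises from 55 W to 85 W, the leftover IGP budget collapses from 36 W to 6 W, mirroring the frame rate drop we saw when only the CPU was overclocked.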

When we fixed the integrated graphics frequency, however, this issue disappeared.

Taking Shadow of Mordor as the example, raising the graphics frequency using the presets provided on the ASRock motherboard not only gave a boost in performance, but the issue of balancing power between the processor and the graphics also disappeared, and our results were within expected variance.

Comments
  • MrSpadge - Thursday, September 3, 2015 - link

    If HB exposes errors which the other programs do not find, it is a stress test. Just a different one. It's not about the highest power draw & temperature, but about a code path which apparently takes a bit longer to complete than others and hence can't be pushed to such high frequencies.
  • Dr_Orgo - Sunday, August 30, 2015 - link

    The conditionally stable overclocking results were pretty interesting. When I overclocked my GTX 970, I primarily used Unigine Heaven to stress test. Got to 1500 MHz stable with voltage maxed in Precision X. Used it in a number of games with zero crashing even with sustained 100% usage, seemed completely stable. Running the unit preloader (loads all units/animations) in Starcraft 2 would make the game crash every time. Dropping the overclock to 1460 MHz made it stable. I'm not sure what specifically makes that unit preloader less overclock friendly.
  • LemmingOverlord - Monday, August 31, 2015 - link

    I think the premise behind the Discrete Graphics tests are incorrect. If you max out the settings you are capping the performance of the system by the graphics card. If you lower the settings just a bit, you'll definitely see how the CPU influences overall game performance. I know this is a mini-test, but these discrete tests prove absolutely nothing on how the overclock impacted the game performance.

    Either lower the detail on these tests, or test with a game that is non-GPU Intensive. Civ V is an excellent benchmark for CPU tests, because it really is CPU-intensive...
  • dimonakid - Sunday, September 6, 2015 - link

    In the past couple of months, we've seen a lot of BSODs, freezes and whatnot from media encoding software.
    Some of our friends mentioned (while they were testing) that this may be a global XMP issue.
    The same results regarding Handbrake were showing on Z77 and Z68.
    Just to comment.
  • SeanJ76 - Monday, September 14, 2015 - link

    Not impressed, my 4690k does 4.8ghz with a $29 Hyper Evo....
  • InstinctXIV - Friday, November 20, 2015 - link

    I would love to see your 4690K do this http://imgur.com/U6ZZ1Ll (It is my PC)
  • gravy1958 - Monday, October 19, 2015 - link

    I have a 6700k with an Asus Maximus VIII Ranger MB, and an hour's gaming produces regular clock_watchdog_timeout errors if I use the overclocking... and frequent boot fails with the random "overclocking failed, press F1 to enter setup" 8^(
  • gravy1958 - Monday, October 19, 2015 - link

    Should add it is set at 4.6 GHz, and all advice points to the voltage being too low.
  • jonainpdx - Thursday, May 5, 2016 - link

    It's pretty obvious that overclocking a new, state of the art CPU is nothing more than a waste of money for a gamer.
