Conclusions

The how, what and why questions that surround overclocking often result in answers that either confuse or dazzle, depending on the mind-set of the user listening. At its heart, overclocking originated from trying to get extra performance for nothing: buying a low-end, cheaper processor and changing a few settings (or an onboard timing crystal) would give the same performance as a more expensive model. When we were dealing with single core systems, the speed increase was immediate. With dual core platforms there was a noticeable difference as well, and overclocking gave the same performance as a high end component. This was particularly noticeable in games that had CPU bottlenecks due to their single/dual core design. In recent years, however, this has changed.

Intel sells mainstream processors in both dual and quad core flavors, each with a subset that enables hyperthreading and some other distinctions. This affords five tiers – Celeron, Pentium, i3, i5 and i7, going from weakest to strongest. Overclocking is now solely reserved for the most extreme i5 and i7 processors. Overclocking in this sense means taking the highest performance parts even further, and there is no longer a path from low end to high end – extra money has to be spent in order to do so.

As an aside, in 2014 Intel released the Pentium G3258, an overclockable dual core processor without hyperthreading. When we tested it, it overclocked to a nice high frequency and performed as expected in single threaded workloads. However, a dual core processor is not a quad core, and even a +50% increase in frequency cannot offset the i5 and i7 high end processors having two to four times as many threads. With software and games now taking advantage of multiple cores, having too few cores is the bottleneck, not frequency. Unfortunately, you cannot graft on extra silicon as easily as pressing a few buttons.
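
As a rough illustration of why frequency cannot close the core deficit, consider a naive cores-times-frequency throughput model. This is only a sketch: perfect scaling never happens in practice, and the i5 figures below are assumed for illustration.

```python
# Naive throughput model: cores x GHz, assuming perfect multithreaded scaling.
# Purely illustrative; real workloads scale sub-linearly with cores and frequency.

def relative_throughput(cores: int, ghz: float) -> float:
    return cores * ghz

print(relative_throughput(2, 3.2))  # Pentium G3258 at stock (3.2 GHz): 6.4
print(relative_throughput(2, 4.8))  # G3258 with a +50% overclock: 9.6
print(relative_throughput(4, 3.5))  # hypothetical stock quad core i5: 14.0
```

Even with the overclock, the dual core falls well short of a stock quad core once a workload can actually use all of the threads.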

One potential avenue would be an overclockable i3 processor – a dual core with hyperthreading – which might play on par with an i5 despite relying on hyperthreads rather than a full complement of physical cores. But if it performed well, it might draw sales away from the high end overclocking processors, and since Intel has no competition in this space, I doubt we will see it any time soon.

But what exactly does overclocking the highest performing processor actually achieve? Our results, including all of those in Bench not specifically listed in this piece, show improvements across the board in all of our processor tests.

Here we get three very distinct categories of results. Each +200 MHz step amounts to roughly a 5% jump in frequency, yet in our CPU tests the measured gain was nearer 4% per step, and slightly less in our Linux Bench. In both suites there were benchmarks that brought the average down due to other bottlenecks in the system: Photoscan Stage 2 (the complex multithreaded stage) was variable, and in Linux Bench both NPB and Redis-1 gave results that were more DRAM limited. Remove these and the results get closer to the true percentage gain.
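
For reference, the theoretical gain per step is simple arithmetic. A minimal sketch, assuming a 4.2 GHz baseline (our assumption for illustration) and perfectly linear scaling:

```python
# Theoretical % gain per +200 MHz step over an assumed 4200 MHz baseline.
base_mhz = 4200

for oc_mhz in (4400, 4600, 4800):
    gain = (oc_mhz - base_mhz) / base_mhz * 100
    print(f"{oc_mhz} MHz: +{gain:.1f}% theoretical")
# 4400 MHz: +4.8%, 4600 MHz: +9.5%, 4800 MHz: +14.3%
```

Measured results landing nearer 4% per step than the theoretical ~5% is what points to the DRAM and other bottlenecks mentioned above.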

Meanwhile, all of our i7-6700K overclocked testing is now also available in Bench, allowing direct comparison to other processors. Other overclocked CPUs will be added in due course.

Moving on to our discrete testing with a GTX 980, increased frequency had little impact on our series of games at 1080p, or even on Shadow of Mordor at 4K. Some might argue that this is to be expected, because at high settings the onus is more on the graphics card – but ultimately, with a GTX 980 you would be running at 1080p or better at maximum settings where possible.

Finally, the integrated graphics results are a significantly different ball game. When we left the IGP at default frequencies and just overclocked the processor, average frame rates actually declined despite the higher CPU frequency, which is perhaps counterintuitive. The explanation comes down to power delivery budgets: when overclocked, the majority of the power is funneled to the CPU cores so that items are processed more quickly. This leaves less of the silicon's power budget for the integrated graphics, either forcing lower graphics frequencies to maintain the status quo, or causing a memory latency bottleneck as more graphical data moves over the DRAM-to-CPU bus. Think of it like a see-saw: push harder on the CPU side and the IGP side drops. Normally this would be mitigated by raising the power limit on the processor as a whole in the BIOS, but in this case that had no effect.
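
To make the see-saw concrete, here is a deliberately simplified sketch of a fixed package power budget split between the cores and the graphics. The wattages are invented for illustration, and this is not Intel's actual power-management algorithm:

```python
# Toy model: a fixed package power budget shared by CPU cores and the IGP.
# Illustrative only; real firmware behaviour is far more sophisticated.

PACKAGE_BUDGET_W = 91.0  # i7-6700K TDP, treated here as a hard cap (a simplification)

def igp_budget(cpu_draw_w: float) -> float:
    """Power left over for the integrated graphics after the cores take their share."""
    return max(PACKAGE_BUDGET_W - cpu_draw_w, 0.0)

# Made-up core power draws at stock and at two overclocked settings:
for label, cpu_w in (("stock", 55.0), ("mild OC", 70.0), ("heavy OC", 85.0)):
    print(f"{label:>8}: CPU {cpu_w:.0f} W -> IGP budget {igp_budget(cpu_w):.0f} W")
```

As the overclock pushes core power up, the graphics budget shrinks, which is one way average frame rates can fall even though the CPU is faster.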

When we fixed the integrated graphics frequency, however, this issue disappeared.

Taking Shadow of Mordor as the example, raising the graphics frequency not only gave a boost in performance when we used the presets provided on the ASRock motherboard, but the issue of balancing power between the processor and the graphics also disappeared, and our results were within expected variance.

Comments

  • MikeMurphy - Saturday, January 30, 2016 - link

    Tragically, few stress testing programs cycle power states during testing, which is important. Most just place the CPU under continuous load.
  • 0razor1 - Friday, August 28, 2015 - link

    Didn't mean memcheck - it's the error check on regular OCCT:GPU.
  • Xenonite - Saturday, August 29, 2015 - link

    Different parts of the CPU are fabricated to withstand different voltages. The DRAM controller is only optimised for the lowest possible power/performance, so the silicon is designed with low leakage in mind.

    As another example, the integrated PLL runs at 1.8V.

    Electron migration is indeed the main failure mechanism in most modern processors, however, the metal interconnect fabric has not been shrinking by the same factors as the CMOS logic has. That means these 14nm processors can take more voltage than you would expect from a simple linear geometric relationship.

    What exactly the maximum safe limits are will probably never be known to those outside of Intel, but just as with Haswell, I've been running at a 24/7 1.4V core voltage, which I don't believe will significantly shorten the life of the CPU (especially if you have the cooling capacity to up the voltage to around 1.55V as the CPU degrades over the following decade).

    In any case, NOT running the CPU at at least 4.6GHz would mean that it wasn't a worthwhile upgrade from my 5960X, so the safety question is pretty much moot in my case.
  • Oxford Guy - Saturday, August 29, 2015 - link

    Unless that worthwhile upgrade burns up in which case it's a non-worthwhile downgrade.
  • 0razor1 - Sunday, August 30, 2015 - link

    Hey @Xenonite, sorry to quote you directly..
    'Electron migration is indeed the main failure mechanism in most modern processors, however, the metal interconnect fabric has not been shrinking by the same factors as the CMOS logic has'
    -> I'd say spot on. But I thought that's what Intel 14nm was all about - they had the metal shrunk down to 14nm as well, as opposed to what Samsung has as a pseudo 14nm (20nm metal interconnect).

    just as with Haswell, I've been running at a 24/7 1.4V core voltage
    -> Intel has specified 1.3VCore as being the max safe voltage. I'd pay heed :)

    NOT running the CPU at at least 4.6GHz would mean that it wasn't a worthwhile upgrade from my 5960X, so the safety question is pretty much moot in my case.
    -> You're right. But then everyone can't afford upgrades. I've just come from a Phenom 2 @ 4GHz (1.41VCore) to a 4670k @ 4.6GHz (1.29/1.9 core/VDDIN). What I did do was, once the Ph2 was out of warranty, lap it and OC it as far as it would go. Tried my hand at an FX-6300 for a week and was disappointed, to say the least.

    Long story short and back to my point, P95.

    If it doesn't P95, it corrupts. /motto
  • varg14 - Friday, September 4, 2015 - link

    I agree. If you have good cooling like my Cooler Master Nepton 140XL on my 4.6-5.1GHz 2600K, which uses motherboard VRMs, and keep temps under 70C at all times, I see no reason to expect any chip degradation or failures. I really only heard of chip degradation and failures when they started putting the VRMs on the CPU die itself with Haswell, adding more heat to the CPU. Now with Skylake using motherboard VRMs again, everything should be peachy keen.
  • StevoLincolnite - Saturday, August 29, 2015 - link

    Electromigration.

    I was doing those volts at 40nm...
  • 0razor1 - Sunday, August 30, 2015 - link

    Lol, I topped out @ 1.41Vcore on my 45nm Phenom 2 (with max temps on the core of 60C @ 4GHz).

    Earlier 24x7 for three years was 1.38Vcore for 3.8GHz.
  • JKflipflop98 - Sunday, September 6, 2015 - link

    "If it's OK, then can someone illustrate why one should not go over say 1.6V on the DRAM in 22nm, why stick to 1.35V for 14nm? Might as well use standard previous generation voltages and call it a day?"

    Because the DRAM's line voltage goes straight into the integrated memory controller within the CPU. While the chunky circuits in your RAM modules can handle 1.6V, the tiny little logical transistors on the CPU can only handle 1.35V before vaporizing.
  • Zoeff - Friday, August 28, 2015 - link

    Yeah, that's what I thought as well, but apparently the voltage in the silicon is lower than the input voltage, which is what you can control as the user. At least, this is what I read on overclock.net. Right now CPU-Z reports ~1.379V (fluctuating +/- 0.01V), and that's with EIST, C-States and SVID Support disabled. Different monitoring software sometimes reports different voltages too, so I find it hard to check what my CPU is actually doing.
