Conclusions

The how, what and why questions that surround overclocking often result in answers that either confuse or dazzle, depending on the mind-set of the user listening. At the end of the day, it originated from trying to get extra performance for nothing. Buying the cheaper, low-end processors and changing a few settings (or swapping an onboard timing crystal) would result in the same performance as a more expensive model. When we were dealing with single core systems, the speed increase was immediate. With dual core platforms there was a noticeable difference as well, and overclocking gave the same performance as a high end component. This was particularly true in games that had CPU bottlenecks due to their single/dual core design. However, in recent years this has changed.

Intel sells mainstream processors in both dual and quad core flavors, each with a subset that enables hyperthreading and some other distinctions. This affords five platforms – Celeron, Pentium, i3, i5 and i7, going from weakest to strongest. Overclocking is now reserved solely for the most extreme i5 and i7 processors. Overclocking in this sense now means taking the highest performance parts even further, and there is no recourse to go from low end to high end – extra money has to be spent in order to do so.

As an aside, in 2014 Intel released the Pentium G3258, an overclockable dual core processor without hyperthreading. When we tested it, it overclocked to a nice high frequency and performed as expected in single threaded workloads. However, a dual core processor is not a quad core, and even with a +50% increase in frequency it cannot escape the +100% or +200% increase in threads offered by the high end i5 or i7 processors. With software and games now taking advantage of multiple cores, having too few cores is the bottleneck, not frequency. Unfortunately, you cannot graft on extra silicon as easily as pressing a few buttons.
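
To put the core count argument in rough numbers, here is a minimal sketch assuming an ideally threaded workload where throughput scales with cores times frequency; the clock speeds used are illustrative assumptions, not measured values from this review.

```python
# A back-of-the-envelope model (not from the review): assume an
# embarrassingly parallel workload where throughput scales with
# cores * frequency. Clock speeds below are illustrative assumptions.

def relative_throughput(cores: int, ghz: float) -> float:
    """Ideal multithreaded throughput, in core-GHz."""
    return cores * ghz

g3258_oc = relative_throughput(cores=2, ghz=4.5)  # dual core at an assumed +50% overclock
i5_stock = relative_throughput(cores=4, ghz=3.5)  # quad core at assumed stock clocks

print(f"G3258 @ 4.5 GHz: {g3258_oc} core-GHz")  # 9.0
print(f"i5 @ 3.5 GHz:    {i5_stock} core-GHz")  # 14.0
# Even with a +50% clock advantage, the dual core trails the stock
# quad core whenever the software can actually use four threads.
```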

One potential avenue is to launch an overclockable i3 processor, using a dual core with hyperthreading, which might perform on a par with an i5 even though it trades hyperthreads against physical cores. But if it performed well, it might draw sales away from the high end overclocking processors, and Intel has no competition in this space, so I doubt we will see one any time soon.

But what does overclocking the highest performing processor actually achieve? Our results, including all of the ones in Bench not specifically listed in this piece, show improvements across the board in all of our processor tests.

Here we get three very distinct categories of results. The move of +200 MHz rounds to about a 5% jump, and in our CPU tests it is nearer 4% for each step up, and slightly less again in our Linux Bench. In both cases there were benchmarks that brought the average down due to other bottlenecks in the system: Photoscan Stage 2 (the complex multithreaded stage) was variable, and in Linux Bench both NPB and Redis-1 gave results that were more DRAM limited. Remove these and the results get closer to the true percentage gain.
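
As a sanity check on those percentages, the arithmetic below walks through the +200 MHz steps; the 4.0 GHz starting point is an assumption for illustration.

```python
# Why "+200 MHz" rounds to roughly a 5% jump: each step is compared to
# the previous frequency, so the relative gain shrinks as clocks climb.
freq_mhz = 4000  # assumed baseline for illustration
for _ in range(4):
    new_mhz = freq_mhz + 200
    gain = (new_mhz / freq_mhz - 1) * 100
    print(f"{freq_mhz} -> {new_mhz} MHz: +{gain:.1f}%")
    freq_mhz = new_mhz
# 4000 -> 4200 MHz: +5.0%
# 4200 -> 4400 MHz: +4.8%
# 4400 -> 4600 MHz: +4.5%
# 4600 -> 4800 MHz: +4.3%
# A measured ~4% per step in the CPU tests therefore sits just below
# the raw clock gain, pointing at the other bottlenecks noted above.
```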

Meanwhile, all of our overclocked i7-6700K results are now also available in Bench, allowing direct comparison to other processors. Results for other overclocked CPUs will be added in due course.

Moving on to our discrete testing with a GTX 980: increased frequency had little impact on our series of games at 1080p, or even on Shadow of Mordor at 4K. Some might argue that this is to be expected, because at high settings the onus is more on the graphics card – but ultimately, with a GTX 980 you would be running at 1080p or better at maximum settings where possible.

Finally, the integrated graphics results are a significantly different ball game. When we left the IGP at default frequencies and just overclocked the processor, the results showed a decline in average frame rates despite the higher frequency, which is perhaps counterintuitive. The explanation here comes down to power delivery budgets – when overclocked, the majority of the power is pushed through to the CPU and items are processed more quickly. This leaves less of the power budget within the silicon for the integrated graphics, either resulting in lower IGP frequencies to maintain the status quo, or in the increased graphical traffic over the DRAM-to-CPU bus causing a memory latency bottleneck. Think of it like a see-saw: when you push harder on the CPU side, the IGP side goes down. Normally this would be mitigated by increasing the power limit on the processor as a whole in the BIOS; however, in this case this had no effect.
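
As a rough illustration of that see-saw, the sketch below models a fixed package power budget split between the CPU cores and the IGP. The 91 W limit matches the i7-6700K's rated TDP, but the per-component draws are invented for illustration, not measured.

```python
# A toy model of the power see-saw: with a fixed package budget, any
# extra power the overclocked CPU cores draw comes straight out of the
# IGP's share. Wattages are illustrative assumptions, not measurements.

PACKAGE_LIMIT_W = 91.0  # i7-6700K TDP, used here as the assumed budget

def igp_headroom(cpu_draw_w: float) -> float:
    """Power left over for the integrated GPU after the CPU cores."""
    return max(PACKAGE_LIMIT_W - cpu_draw_w, 0.0)

for label, cpu_w in [("stock CPU", 55.0), ("overclocked CPU", 80.0)]:
    print(f"{label}: {cpu_w:.0f} W cores -> {igp_headroom(cpu_w):.0f} W for the IGP")
# stock CPU: 55 W cores -> 36 W for the IGP
# overclocked CPU: 80 W cores -> 11 W for the IGP
# With less headroom the IGP clocks down (or leans harder on DRAM),
# which lines up with the lower average frame rates described above.
```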

When we fixed the integrated graphics frequency, however, this issue disappeared.

Taking Shadow of Mordor as the example, raising the graphics frequency using the presets provided on the ASRock motherboard not only gave a boost in performance, but also removed the issue of balancing power between the processor and the graphics, and our results were within expected variance.

Comments

  • kmmatney - Friday, August 28, 2015 - link

    The G3258 is fun to overclock. The overclock on my Devil's Canyon i5 made a difference on my server, which runs 3 minecraft servers at the same time. I needed the overclock to make up for the lousy optimization of Java on the WHS 2011 OS.
  • StrangerGuy - Saturday, August 29, 2015 - link

    Yeah, spend $340 on a 6700K, $200 on a mobo, $100 on a cooler for a measly 15% CPU OC, all for next to zero real world benefit in single GPU gaming loads compared to a $250 locked i5 / budget mobo.

    Who cares about how easily you can perform the OC when the value for money is rubbish.
  • hasseb64 - Saturday, August 29, 2015 - link

    You nailed it stranger!
  • Beaver M. - Saturday, August 29, 2015 - link

    You should have known that beforehand: even without an overclock your CPU will be so fast that you won't see any difference in most games when overclocking.
  • Deders - Saturday, August 29, 2015 - link

    If you intend to keep the hardware for a long period of time it can help. My previous i5-750's 3800MHz overclock made it viable as a gaming processor for the 5 or so years I was using it.

    For example it allowed me to play Arkham City with full PhysX on a single 560TI, at stock speeds it wasn't playable with these settings. The most recent Batman game was no problem for it even though many people were having issues, same goes for Watchdogs.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Sure, and my 50% overclock on my i7 920 made it viable for my main gaming PC for a few years longer than it otherwise would have been, but a 10-15% overclock? With a <1% gaming performance increase? Meh.
  • Impulses - Saturday, August 29, 2015 - link

    You're exaggerating the basic requirements tho, I'm sure some do that, but I've never paid $200 or $100 for a cooler ($160/65 tops)... And if you spent more on the i7 it damn better had been for HT or you've no clue what you're doing...
  • Xenonite - Saturday, August 29, 2015 - link

    @V900: "Today, processors have gotten so fast, that even the cheap 200$ CPUs are "fast enough" for most tasks."

    For almost all non-gaming tasks (except realtime frame interpolation) this is most certainly true. The thing is, CPUs are NOT even nearly fast enough to game at 140+ fps with the 99% frame latency at a suitable <8 ms value.

    I realize that no one cares about >30fps gaming and that most people even condemn it as "looking too real" (in the same sentence as "your eyes can't even see a difference anyway"), therefore games aren't being optimised for low 99% frame latencies, and neither are modern CPUs.

    But for the few of us who are sensitive to 1ms frametime variances in sub-10ms average frame latency streams, overclocking to high clock speeds is the only way to approach relatively smooth frame delivery.
    On the same note, I would really have loved to see an FCAT or at least a FRAPS comparison of 99% frametimes between the different overclocked states, with a suitably overclocked GTX 980ti and some high-speed DDR4 memory along with the in-game settings being dialed back a bit (at least for the 1080p test).
  • EMM81 - Saturday, August 29, 2015 - link

    "CPUs are NOT even nearly fast enough to game at 140+ fps" "But for the few of us who are sensitive to 1ms frametime variances in sub-10ms average frame latency streams"

    1) People may care about 30-60+ fps, but where do you get 140 from???
    2) You are not sensitive to 1 ms frametime variations... going from 33.3 ms to 16.7 ms (30-60 fps) makes only a very subtle difference to most people, and that is a 16.6 ms (0.5x) delta. There is zero possible way you can perceive anywhere near that level of difference. Even if we were talking about running at 60 fps with a variation of 2 ms, and you could somehow stare at a side by side comparison until you maybe were able to pick out the difference, why does it matter??? You care more about what the numbers say and how cool you think it is...
  • Communism - Saturday, August 29, 2015 - link

    Take your pseudo-intellectualism elsewhere.
