Conclusions

The how, what and why questions that surround overclocking often result in answers that either confuse or dazzle, depending on the mind-set of the user listening. At the end of the day, it originated from trying to get extra performance for nothing: buying a cheaper, low-end processor and changing a few settings (or an onboard timing crystal) could yield the same performance as a more expensive model. When we were dealing with single core systems, the speed increase was immediate. With dual core platforms there was a noticeable difference as well, and overclocking gave the same performance as a high-end component. This was particularly noticeable in games, which tended to hit CPU bottlenecks due to their single/dual core design. However, in recent years this has changed.

Intel sells mainstream processors in both dual and quad core flavors, each with a subset that enables hyperthreading and some other distinctions. This affords five platforms – Celeron, Pentium, i3, i5 and i7, going from weakest to strongest. Overclocking is now reserved solely for the most extreme i5 and i7 processors. Overclocking in this sense now means taking the highest performance parts even further, and there is no recourse to go from low end to high end – extra money has to be spent in order to do so.

As an aside, in 2014 Intel released the Pentium G3258, an overclockable dual core processor without hyperthreading. When we tested it, it overclocked to a nice high frequency and performed in single threaded workloads as expected. However, a dual core processor is not a quad core, and even a +50% increase in frequency cannot overcome the +100% or +200% advantage in threads held by the high-end i5 or i7 processors. With software and games now taking advantage of multiple cores, having too few cores is the bottleneck, not frequency. Unfortunately you cannot graft on extra silicon as easily as pressing a few buttons, as the rough arithmetic below illustrates.
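To put rough numbers on that point, here is a minimal back-of-envelope sketch in Python. It is not based on our measured data: the clock speeds, the 1.25x hyperthreading uplift and the "cores x frequency" proxy are all illustrative assumptions, purely to show why a +50% clock bump on two cores cannot keep pace with four or eight threads once software scales.

    # Back-of-envelope throughput comparison; every figure here is an illustrative assumption.
    def relative_throughput(cores, ghz, smt_uplift=1.0):
        """Crude proxy: core count x frequency, with an optional hyperthreading uplift."""
        return cores * smt_uplift * ghz

    # Pentium G3258: 2 cores, no hyperthreading, overclocked roughly +50% (e.g. 3.2 -> 4.8 GHz)
    g3258_oc = relative_throughput(cores=2, ghz=4.8)

    # Core i5: 4 cores, no hyperthreading, at an assumed ~3.5 GHz stock
    i5_stock = relative_throughput(cores=4, ghz=3.5)

    # Core i7: 4 cores plus hyperthreading (assumed ~1.25x uplift) at an assumed ~4.0 GHz
    i7_stock = relative_throughput(cores=4, ghz=4.0, smt_uplift=1.25)

    print(f"G3258 OC: {g3258_oc:.1f}, i5 stock: {i5_stock:.1f}, i7 stock: {i7_stock:.1f}")
    # ~9.6 vs ~14.0 vs ~20.0: even at +50% clocks, two cores trail once a workload uses 4+ threads.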

One potential avenue would be to launch an overclockable i3 processor, a dual core with hyperthreading, which might perform on a par with an i5 even though it relies on hyperthreads rather than physical cores. But if it performed well, it might draw sales away from the high-end overclocking processors, and since Intel has no competition in this space, I doubt we will see one any time soon.

But what exactly does overclocking the highest performing processor actually achieve? Our results, including all the ones in Bench not specifically listed in this piece, show improvements across the board in all our processor tests.

Here we get three very distinct categories of results. Each +200 MHz step works out to roughly a 5% frequency jump, but in our CPU tests it translated into nearer a 4% performance gain per step, and slightly less in our Linux Bench. In both suites there were benchmarks that brought the average down due to other bottlenecks in the system: Photoscan Stage 2 (the complex multithreaded stage) was variable, and in Linux Bench both NPB and Redis-1 gave results that were more DRAM limited. Remove these and the results get closer to the true percentage gain, as the quick calculation below shows.
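For reference, a minimal sketch of that arithmetic; the 4.2 GHz starting point and the benchmark scores are placeholders for illustration rather than the exact Bench figures.

    # Expected gain from one +200 MHz step versus a measured-style gain.
    base_ghz, step_ghz = 4.2, 0.2
    expected_gain = step_ghz / base_ghz              # ~4.8% per +200 MHz step

    score_before, score_after = 100.0, 104.0         # placeholder benchmark scores
    measured_gain = score_after / score_before - 1   # ~4.0% once DRAM-limited tests drag the average

    print(f"expected per-step gain: {expected_gain:.1%}")
    print(f"measured per-step gain: {measured_gain:.1%}")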

Meanwhile, all of our i7-6700K overclocked results are now also available in Bench, allowing direct comparison to other processors. Other CPUs, when overclocked, will be added in due course.

Moving on to our discrete testing with a GTX 980, our series of games showed little benefit from the increased frequency at 1080p, or even in Shadow of Mordor at 4K. Some might argue that this is to be expected, because at high settings the onus is more on the graphics card – but ultimately with a GTX 980 you would be running at 1080p or better at maximum settings where possible.

Finally, the integrated graphics results are a significantly different ball game. When we left the IGP at default frequencies and just overclocked the processor, the results showed a decline in average frame rates despite the higher frequency, which is perhaps counterintuitive and not expected. The explanation lies in power delivery budgets – when overclocked, the majority of the power is pushed through to the CPU and items are processed more quickly. This leaves less of the power budget within the silicon for the integrated graphics, either resulting in lower graphics frequencies to maintain the status quo, or in the increased graphical data moving over the DRAM-to-CPU bus causing a memory latency bottleneck. Think of it like a see-saw: push harder on the CPU side and the IGP side drops. Normally this would be mitigated by raising the power limit on the processor as a whole in the BIOS, however in this case that had no effect.
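As an illustration of that see-saw, here is a toy sketch that assumes a fixed package power budget shared between the CPU cores and the IGP; the 91 W budget, the per-step CPU power draw and the purely linear trade-off are simplifying assumptions rather than measured values.

    # Toy model of a shared package power budget; every number here is an assumption.
    PACKAGE_BUDGET_W = 91.0          # nominal limit for the whole chip

    def igp_headroom(cpu_power_w, budget_w=PACKAGE_BUDGET_W):
        """Whatever the CPU cores draw under load comes out of the IGP's share."""
        return max(budget_w - cpu_power_w, 0.0)

    # Assumed CPU package draw at each overclock step (illustrative only)
    for cpu_ghz, cpu_power in [(4.2, 60.0), (4.6, 75.0), (4.8, 85.0)]:
        print(f"CPU at {cpu_ghz} GHz drawing {cpu_power:.0f} W "
              f"-> ~{igp_headroom(cpu_power):.0f} W left for the IGP")
    # The higher the CPU overclock, the less of the fixed budget remains for the IGP,
    # so graphics clocks (and average frame rates) can drop despite the faster CPU.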

When we fixed the integrated graphics frequency, however, this issue disappeared.

Taking Shadow of Mordor as the example, raising the graphics frequency via the presets provided on the ASRock motherboard not only gave a boost in performance, but the issue of balancing power between the processor and the graphics also disappeared, and our results fell within expected variance.

Comments

  • Oxford Guy - Tuesday, September 1, 2015 - link

    (says the guy with a name like Communism)
  • HollyDOL - Sunday, August 30, 2015 - link

    Well, you might have a point with something here. Even though the eye itself can take in information very fast and the optic nerve can transmit it very fast, the electro-chemical bridge (Na-K bridge) between them needs a "very long" time before it stabilises chemical levels to be able to transmit another impulse between two nerves. Afaik it takes about 100ms to get the levels back (though I currently have no literature to back that value up) so the next impulse can be transmitted. I suspect there are multitudes of lanes so they are being cycled to get a better "frame rate" and other goodies that make it up (tunnel effect for example - narrow field of vision to get more frames with the same bandwidth?)...
    Actually I would like to see a science-based article on that topic that would set things straight in this field. Maybe AT could make an article (together with some ophthalmologist/neurologist) to clear that up?
  • Communism - Monday, August 31, 2015 - link

    All latencies between the input and the output directly to the brain add up.

    Any deviation beyond that is an error rate added on top.

    Your argument might as well be "Light between the monitor and your retina is very fast traveling, so why would it matter?"

    One must take everything into consideration when talking about latency and temporal error.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Not to mention, even the best monitors have more than 2ms variance in response time depending on what colours they're showing.
  • Nfarce - Sunday, August 30, 2015 - link

    As one who has been building gaming rigs and overclocking since the Celeron 300->450MHz days of the late 1990s, I'd +1 that. Over the past 15+ years, every new build I did with a new chipset (about every 2-3 years) has shown a diminished return in overclock performance for gaming. And my resolutions have increased over that period as well, further putting more demand on the GPU than the CPU (going from 1280x1024 in 1998 to 1600x1200 in 2001 to 1920x1080 in 2007 to 2560x1440 in 2013). So here I am today with an i5 4690K which has successfully been gamed on at 4.7GHz, yet I'm only running it at stock speed because there is ZERO improvement on frames in my benchmarked games (Witcher III, BF4, Crysis 3, Alien Isolation, Project Cars, DiRT Rally). It's just a waste of power, heat, and wear and tear. I will overclock it, however, when running video editing software and other CPU-intensive apps, where it noticeably helps.
  • TallestJon96 - Friday, August 28, 2015 - link

    Scaling seems pretty good, I'd love to see analysis on the i5-6600K as well.
  • vegemeister - Friday, August 28, 2015 - link

    Not stable for all possible inputs == not stable. And *especially* not stable when problematic inputs are present in production software that actually does something useful.
  • Beaver M. - Saturday, August 29, 2015 - link

    Exactly. Fact of the matter is that proper overclocking takes a LONG LONG time to get stable, unless you get extremely lucky. I sometimes spend months to get it stable. Even when testing with Prime95 like there's no tomorrow, it still won't prove that the system is 100% stable. You also have to test games for hours over several days, and of course other applications. But you can't really play games 24/7, so it takes quite some time.
  • sonny73n - Sunday, August 30, 2015 - link

    If you have all power saving features disabled, you only have to worry about stability under load. Otherwise, as CPU voltage and frequency fluctuate depending on the application, it may be a pain. Also, most mobos have issues with RAM when the CPU is OCed past a certain point.
  • V900 - Saturday, August 29, 2015 - link

    That's an extremely theoretical definition of "production software".

    No professional or production company would ever overclock their machines to begin with.

    For the hobbyist overclocker who on a rare occasion needs to encode something in 4K60, the problem is solved by clicking a button in his settings and rebooting.

    I really don't see the big deal here.
