Frequency Scaling

Below is an example of our results from overclock testing, in the table format we publish with both processor and motherboard reviews. Our method involves setting a multiplier and a starting voltage, running a series of stress tests, and then either raising the multiplier if the tests pass or increasing the voltage at the point of failure/blue screen. This has worked well as a quick and dirty way to determine a peak frequency, though it lacks the subtlety that seasoned overclockers might turn to in order to improve performance.
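
For illustration only, the loop below sketches that escalation process in Python. The set_multiplier, set_vcore and run_stress_tests helpers are hypothetical stand-ins for whatever the board vendor's tooling exposes, not a real API, and the numbers are assumptions rather than recommendations.

```python
# Minimal sketch of the raise-multiplier-or-raise-voltage loop described above.
# set_multiplier(), set_vcore() and run_stress_tests() are hypothetical hooks.

BCLK_GHZ = 0.1          # 100 MHz base clock on Skylake
VCORE_STEP = 0.025      # volts added after a failure
VCORE_LIMIT = 1.40      # stop raising voltage past this point
MULT_LIMIT = 50         # sanity cap on the multiplier

def find_max_overclock(start_multiplier=42, start_vcore=1.20):
    multiplier, vcore = start_multiplier, start_vcore
    best = None
    while vcore <= VCORE_LIMIT and multiplier <= MULT_LIMIT:
        set_multiplier(multiplier)          # hypothetical platform hook
        set_vcore(vcore)                    # hypothetical platform hook
        if run_stress_tests():              # returns True if the stress run passes
            best = (multiplier * BCLK_GHZ, vcore)
            multiplier += 1                 # success: try the next bin up
        else:
            vcore += VCORE_STEP             # failure / blue screen: add voltage, retry
    return best                             # e.g. (4.6, 1.325)
```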

This was done on our ASUS Z170-A sample while it was being tested for review. When we applied ASUS's automatic overclocking tool, Auto-OC, it finalized an overclock at 4.8 GHz. This was higher than we had seen with the same processor previously (even with the same cooler), so in true fashion I was skeptical, as ASUS Auto-OC has been rather optimistic in the past. But it sailed through our standard stability tests easily, without dropping the overclock once, meaning it was not overheating by any means. As a result, I ran our short-form CPU tests, in a recently developed automated script, as an extra measure of stability.
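
As a rough sketch of what such an automated pass could look like, assuming each benchmark can be driven from the command line; the commands and arguments below are placeholders, not the actual AnandTech suite.

```python
# Sketch of an automated short-form stability pass: run each benchmark in turn
# and treat any non-zero exit code as a failure. Commands are placeholders.
import subprocess

SHORT_FORM_TESTS = [
    ["occt", "--cpu", "--duration", "300"],   # placeholder: mixed-AVX style load
    ["cinebench", "-cb_cpux"],                # placeholder: multi-threaded render
    ["povray", "benchmark.ini"],              # placeholder: FP-heavy ray tracing
]

def run_suite():
    """Run each test in turn; any non-zero exit code counts as a failure."""
    for cmd in SHORT_FORM_TESTS:
        if subprocess.run(cmd).returncode != 0:
            print("FAIL:", " ".join(cmd))
            return False
    print("All short-form tests passed")
    return True
```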

These tests run in order of time taken, so last up was HandBrake converting a low quality film followed by a high quality 4K60 film. In low quality mode, all was golden. At 4K60, the system blue screened. I triple-checked with the same settings to confirm it was not a one-off, and three blue screens makes a strikeout. But therein lies a funny thing – while this configuration was stable with our regular mixed-AVX test, the large-frame HandBrake conversion made it fall over.
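
A blue screen obviously cannot be caught in software, but an encoder crash or a non-zero exit code can. Something like the sketch below, driving HandBrake's command-line interface against an assumed 4K60 source file, is enough to reproduce the failing workload; the file names are assumptions.

```python
# Illustrative probe using HandBrakeCLI as the large-frame stress case.
# File names are placeholders; -i/-o/-e/-q are standard HandBrakeCLI flags.
import subprocess

def handbrake_4k60_probe(source="big_frame_4k60.mkv", output="probe_out.mkv"):
    cmd = [
        "HandBrakeCLI",
        "-i", source,
        "-o", output,
        "-e", "x265",   # HEVC encode: the code path that tripped up the overclock here
        "-q", "20",     # constant quality target; the value is arbitrary for a stress run
    ]
    return subprocess.run(cmd).returncode == 0
```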

So as part of this testing, from 4.2 GHz to 4.8 GHz, I ran our short-form CPU tests over and above the regular stability tests. These form the basis of the results in this mini-test. Lo and behold, it failed at 4.6 GHz as well in similar fashion – AVX in OCCT OK, but HandBrake large frame not so much. I looped back with ASUS about this, and they confirmed they had seen similar behavior specifically with HandBrake as well.

Users and CPU manufacturers tend to view stability in one of two ways. The basic way is as a pure binary yes/no. If the CPU ever fails in any circumstance, it is a no. When you buy a processor from Intel or AMD, that rated frequency is in the yes column (if it is cooled appropriately). This is why some processors seem to overclock like crazy from a low base frequency – because at that frequency, they are confirmed as working 100%. A number of users, particularly those who enjoy strangling a poor processor with Prime95 FFT torture tests for weeks on end, also take this view. A pure binary yes/no is also hard for us to test in a time-limited review cycle.

The other way of approaching stability is the sliding scale. At some point, the system is ‘stable enough’ for all intents and purposes. This is the situation we have here with Skylake – if you never go within 10 feet of HandBrake but enjoy gaming with a side of YouTube and/or streaming, or perhaps need to convert a few dozen images into a 3D model, then the system is stable.

To that end, ASUS is implementing a new feature in its automatic overclocking tool. Along with the list of stress-test and OC options, an additional checkbox for HandBrake-style data paths has been added. This will mean that a system either needs more voltage to cope or will top out at a lower frequency. But the sliding scale has spoken.

Incidentally, at IDF I spoke to Tom Vaughn, VP of MultiCoreWare (the company that develops the open source x265 HEVC video encoder and its accompanying GUI). We discussed video transcoding, and I brought up this issue on Skylake. He stated that the issue was well known by MultiCoreWare for overclocked systems. Despite the prevalence of AVX testing software, x265 encoding with the right algorithms will push parts of the CPU beyond all others, and with large frames it can require large amounts of data to be pushed around the caches at the same time, offering further permutations of stability. We also spoke about expanding our x265 tests, covering best case/worst case scenarios from a variety of file formats and sources, in an effort to pinpoint where stability can be a factor as well as overall performance. These might be integrated into future overclocking tests, so stay tuned.
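
As a sketch of how such a sweep might be automated: the clip names and preset list below are assumptions, with x265 driven via its --input/--output/--preset/--frames options and each run timed and checked for a clean exit.

```python
# Sketch of an x265 stress matrix across sources and presets.
# Clip names are placeholders; exit status and wall time are recorded per run.
import itertools, subprocess, time

SOURCES = ["city_4k60.y4m", "animation_1080p.y4m", "film_grain_4k24.y4m"]  # placeholder clips
PRESETS = ["medium", "slower", "placebo"]

def sweep():
    results = {}
    for src, preset in itertools.product(SOURCES, PRESETS):
        cmd = ["x265", "--input", src, "--output", "out.hevc",
               "--preset", preset, "--frames", "600"]
        start = time.time()
        ok = subprocess.run(cmd).returncode == 0
        results[(src, preset)] = ("ok" if ok else "failed", round(time.time() - start, 1))
    return results
```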

Comments

  • Oxford Guy - Tuesday, September 1, 2015 - link

    (says the guy with a name like Communism)
  • HollyDOL - Sunday, August 30, 2015 - link

    Well, you might have a point with something here. Even though the eye itself can take in information very fast and the optic nerve itself can transmit it very fast, the electro-chemical bridge (Na-K bridge) between them needs a "very long" time before it stabilises chemical levels to be able to transmit another impulse between two nerves. Afaik it takes about 100ms to get the levels back (though I currently have no literature to back that value up) so the next impulse can be transmitted. I suspect there are multitudes of lanes so they are being cycled to get better "frame rate" and other goodies that make it up (tunnel effect for example - narrow field of vision to get more frames with same bandwidth?)...
    Actually I would like to see a science based article on that topic that would set things straight on this playing field. Maybe AT could make an article (together with some ophthalmologist/neurologist) to clear that up?
  • Communism - Monday, August 31, 2015 - link

    All latencies between the input and the output directly to the brain add up.

    Any deviation is an error rate added on top of that.

    Your argument might as well be "Light between the monitor and your retina is very fast traveling, so why would it matter?"

    One must take everything into consideration when talking about latency and temporal error.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Not to mention, even the best monitors have more than 2ms variance in response time depending on what colours they're showing.
  • Nfarce - Sunday, August 30, 2015 - link

    As one who has been building gaming rigs and overclocking since the Celeron 300->450MHz days of the late 1990s, I'd +1 that. Over the past 15+ years, every new build I did with a new chipset (about every 2-3 years) has shown a diminished return in overclock performance for gaming. And my resolutions have increased over that period as well, further putting more demand on the GPU than the CPU (going from 1280x1024 in 1998 to 1600x1200 in 2001 to 1920x1080 in 2007 to 2560x1440 in 2013). So here I am today with an i5 4690K which has successfully been gamed on at 4.7GHz, yet I'm only running it at stock speed because there is ZERO improvement in frames in my benchmarked games (Witcher III, BF4, Crysis 3, Alien Isolation, Project Cars, DiRT Rally). It's just a waste of power, heat, and wear and tear. I will overclock it, however, when running video editing software and other CPU-intensive apps, where it noticeably helps.
  • TallestJon96 - Friday, August 28, 2015 - link

    Scaling seems pretty good, I'd love to see analysis on the i5-6600K as well.
  • vegemeister - Friday, August 28, 2015 - link

    Not stable for all possible inputs == not stable. And *especially* not stable when problematic inputs are present in production software that actually does something useful.
  • Beaver M. - Saturday, August 29, 2015 - link

    Exactly. Fact of the matter is that proper overclocking takes a LONG LONG time to get stable, unless you get extremely lucky. I sometimes spend months to get it stable. Even when testing with Prime95 like there's no tomorrow, it still won't prove that the system is 100% stable. You also have to test games for hours for several days and of course other applications. But you can't really play games 24/7, so it takes quite some time.
  • sonny73n - Sunday, August 30, 2015 - link

    If you have all power saving features disabled, you only have to worry about stability under load. Otherwise, as CPU voltage and frequency fluctuate depending on the application, it may be a pain. Also, most mobos have issues with RAM together with the CPU OCed past a certain point.
  • V900 - Saturday, August 29, 2015 - link

    That's an extremely theoretical definition of "production software".

    No professional or production company would ever overclock their machines to begin with.

    For the hobbyist overclocker who on a rare occasion needs to encode something in 4K60, the problem is solved by clicking a button in his settings and rebooting.

    I really don't see the big deal here.
