Frequency Scaling

Below is an example of our results from overclock testing, in the table format we publish with both processor and motherboard reviews. Our method involves setting a multiplier and a frequency, running a series of stress tests, and then either raising the multiplier on success or increasing the voltage at the point of failure or a blue screen. This has worked well as a quick-and-dirty way to determine a chip's frequency ceiling, though it lacks the subtlety that seasoned overclockers might apply in order to improve performance.
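The step-up methodology described above can be sketched as a simple loop. This is a minimal illustration, not our actual test harness: the stress test is a stand-in function, and the toy chip model, multiplier range, and voltage steps are invented for the example.

```python
# Sketch of the step-up overclock methodology: raise the multiplier while
# stable, and add voltage at each failure. stress_test() is a hypothetical
# stand-in for a real stability suite (OCCT, encoding runs, etc.).

def find_max_overclock(stress_test, start_mult=40, max_mult=50, bclk=100,
                       start_mv=1200, max_mv=1400, step_mv=25):
    """Return the highest (freq_mhz, millivolts) pair that passed."""
    mult, mv = start_mult, start_mv
    best = None
    while mult <= max_mult and mv <= max_mv:
        freq_mhz = mult * bclk
        if stress_test(freq_mhz, mv):
            best = (freq_mhz, mv)   # stable: record it and go higher
            mult += 1
        else:
            mv += step_mv           # failure/blue screen: feed it more volts
    return best

# Toy chip model: stable to 4.6 GHz at 1.200 V, needs 1.250 V up to 4.8 GHz,
# and no amount of voltage gets it past 4.8 GHz.
def toy_chip(freq_mhz, mv):
    return freq_mhz <= 4600 or (mv >= 1250 and freq_mhz <= 4800)

print(find_max_overclock(toy_chip))  # → (4800, 1250)
```

The loop gives up once the voltage cap is reached, which is roughly what "topping out" looks like in practice.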

This was done on our ASUS Z170-A sample while it was being tested for review. When we applied ASUS's automatic overclocking tool, Auto-OC, it settled on a final overclock of 4.8 GHz. This was higher than we had seen with the same processor previously (even with the same cooler), so in true fashion I was skeptical, as ASUS Auto-OC has been rather optimistic in the past. But it sailed through our standard stability tests easily, without throttling once, meaning that it was not overheating by any means. As a result, I applied our short-form CPU tests, in a recently developed automated script, as an extra measure of stability.

These tests run in order of time taken, so last up was HandBrake converting a low-quality film followed by a high-quality 4K60 film. In low-quality mode, all was golden. At 4K60, the system blue-screened. I retried three times with the same settings to confirm it wasn't getting through, and three blue screens makes a strikeout. But therein lies a funny thing: while this configuration was stable in our regular mixed-AVX test, the large-frame HandBrake conversion made it fall over.

So as part of this testing, from 4.2 GHz to 4.8 GHz, I ran our short-form CPU tests over and above the regular stability tests. These form the basis of the results in this mini-test. Lo and behold, it failed at 4.6 GHz as well in similar fashion: AVX in OCCT was fine, but HandBrake with large frames was not. I looped back with ASUS about this, and they confirmed they had seen similar behavior with HandBrake specifically.

Users and CPU manufacturers tend to view stability in one of two ways. The basic way is as a pure binary yes/no: if the CPU ever fails in any circumstance, it is a no. When you buy a processor from Intel or AMD, the rated frequency is in the yes column (if it is cooled appropriately). This is why some processors seem to overclock like crazy from a low base frequency – at that base frequency, they are confirmed as working 100%. A number of users, particularly those who enjoy strangling a poor processor with Prime95 FFT torture tests for weeks on end, also take this view. A pure binary yes/no is also hard for us to verify in a time-limited review cycle.

The other way of approaching stability is the sliding scale. At some point, the system is 'stable enough' for all intents and purposes. This is the situation we have here with Skylake – if you never go within 10 feet of HandBrake but enjoy gaming with a side of YouTube and/or streaming, or perhaps need to convert a few dozen images into a 3D model, then the system is stable.

To that end, ASUS is implementing a new feature in its automatic overclocking tool. Along with the list of stress-test and OC options, an additional checkbox for HandBrake-style data paths has been added. This will likely mean that a system needs more voltage to cope, or will top out at a lower frequency. But the sliding scale has spoken.

Incidentally, at IDF I spoke to Tom Vaughn, VP of MultiCoreWare (which develops the open source x265 HEVC video encoder and an accompanying GUI interface). We discussed video transcoding, and I brought up this issue on Skylake. He stated that the issue was well known by MultiCoreWare on overclocked systems. Despite the prevalence of AVX testing software, x265 encoding with the right algorithms will push parts of the CPU beyond all others, and with large frames it can require large amounts of memory to be pushed around the caches at the same time, adding further permutations to the stability question. We also spoke about expanding our x265 tests, covering best-case/worst-case scenarios from a variety of file formats and sources, in an effort to pinpoint where stability can be a factor as well as overall performance. These might be integrated into future overclocking tests, so stay tuned.
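The kind of best-case/worst-case matrix discussed above could be generated by permuting sources and encoder settings. Below is a minimal sketch under stated assumptions: the clip names are placeholders, the preset/CRF values are illustrative, and in practice x265 would usually be fed Y4M or raw YUV (e.g. piped from ffmpeg) rather than an MKV directly.

```python
# Hypothetical x265 stress matrix: every source/preset/CRF permutation,
# run in sequence, where any crash or non-zero exit on an overclocked
# system flags instability rather than an encoder bug.
import itertools
import subprocess

SOURCES = ["sample_1080p.y4m", "sample_4k60.y4m"]  # placeholder clips
PRESETS = ["medium", "slower"]
CRFS = [18, 28]

def build_jobs(sources=SOURCES, presets=PRESETS, crfs=CRFS):
    """Build one x265 command line per source/preset/CRF combination."""
    jobs = []
    for src, preset, crf in itertools.product(sources, presets, crfs):
        jobs.append(["x265", "--input", src, "--preset", preset,
                     "--crf", str(crf),
                     "--output", f"{src}.{preset}.crf{crf}.hevc"])
    return jobs

def run_stability_pass(jobs):
    """Run each encode; collect the commands that did not exit cleanly."""
    failures = []
    for cmd in jobs:
        if subprocess.run(cmd).returncode != 0:
            failures.append(cmd)
    return failures
```

With two sources, two presets, and two CRF values this yields eight encodes; widening any axis grows the matrix multiplicatively, which is the point of hunting for worst-case data paths.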

Comments

  • StrangerGuy - Sunday, August 30, 2015 - link

    If we keep dropping the OC multi on Skylake we are going into single-digit clock increases territory from 4GHz stock :)

    Yeah, I wonder why AT mentioned in their Skylake review about why people are losing interest in OCing despite Intel's claims of catering to it. From the looks of it, their 14nm process simply isn't tuned for 4GHz+ operation but towards the lower clocked but much more lucrative chips for the server and mobile segment.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Then you are deluded. There are edge cases and scenarios that will cause a hardware crash on a Xeon server with ECC RAM at stock speeds, so by your reckoning *nothing* is ever 100% stable.
  • danjw - Friday, August 28, 2015 - link

When can we expect a platform overview? You reviewed the i7-6700K, but you didn't have much in the way of details about the platform. You were expecting that from IDF. IDF is over, so is there an ETA?
  • MrBowmore - Friday, August 28, 2015 - link

    +1
  • hansmuff - Friday, August 28, 2015 - link

    I assume the POV-Ray score is the "Render averaged PPS"?
    My 2600K @4.4 gets 1497 PPS, so a 35% improvement compared to 6700k @4.4
  • hansmuff - Friday, August 28, 2015 - link

    And of course I mean the 6700k seems to be 35% faster in POV... sigh this needs an edit button
  • looncraz - Saturday, August 29, 2015 - link

    POV-Ray has been seeing outsized performance improvements on Intel.

Sandy Bridge to Haswell saw a 20% improvement, when the overall improvement is closer to 13%.

    HandBrake improved even more - a whopping 29% from Sandy Bridge to Haswell.

    And, of course, I'm talking core-for-core, clock-for-clock.

    I suspect much of this improvement is related to the AVX/SIMD improvements.

    Just hope AMD focused on optimizing for the big benchmark programs as well as their server target market with Zen (this is past tense since Zen is being taped out and currently being prototyped.. rumors and some speculation, of course, but probably pretty accurate).
  • zepi - Sunday, August 30, 2015 - link

    One has to remember that "HandBrake" doesn't actually use CPU resources itself. The process actually being benchmarked is the x264 codec running with certain settings, made easily accessible through a GUI called HandBrake.

    If the x264 or x265 programmers create new code paths inside the codecs that take advantage of a new architecture, it receives huge performance gains. But what this actually means is that Sandy Bridge and Skylake are effectively running different benchmarks, with different instructions fed to the processors.

    Do I care? No, because I just want my videos transcoded as quickly as possible, but one should still remember that these kinds of real-world benchmarks don't necessarily run the same workloads on different processors.
  • MrBowmore - Friday, August 28, 2015 - link

    When are you going to publish the run-through of the architecture?! Waiting impatiently! :)
  • NA1NSXR - Friday, August 28, 2015 - link

    Sigh, still no BCLK comparisons at same clocks. What would really answer some unanswered questions would be comparing 100 x 40 to 200 x 20 for example.
