At the time of our Skylake review of the i7-6700K and the i5-6600K, the infancy of the platform and other constraints meant we were unable to probe how performance scaled as the processors were overclocked. Our testing at the time showed that 4.6 GHz was a reasonable marker for our processors; fast forward two weeks, however, and the picture has changed as updates have been released. With a new motherboard and the same liquid cooler, the same processor that previously topped out at 4.6 GHz reached 4.8 GHz with relative ease. In this mini-test, we ran our short-form CPU workload as well as integrated and discrete graphics tests at several frequencies to see where the real gains are.

In the Skylake review we stated that 4.6 GHz still represents a good target for overclockers to aim for, with 4.8 GHz being indicative of a better-than-average sample. Both ASUS and MSI stated similar expectations in the press guides that accompanied our samples, although, as with any launch, the collective understanding of the platform evolves over time.

In this mini-test (performed initially in haste pre-IDF, then with extra testing after analysing the IGP data), I called on a pair of motherboards - ASUS's Z170-A and ASRock's Z170 Extreme7+ - to provide a four-point scale in our benchmarks. Starting at the 4.2 GHz turbo frequency of the i7-6700K, we tested every 200 MHz jump up to 4.8 GHz in our shortened CPU testing suite as well as in iGPU and GTX 980 gaming. Enough of the babble – time for fewer words and more results!


We actually got the CPU to 4.9 GHz, as shown on the right, but it was unstable for even basic tasks.
(The voltage is read incorrectly in that screenshot.)

OK, a few more words before the results – all of these numbers can be found in our overclocking database, Bench, alongside the stock results, where they can be compared to other processors.
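As a yardstick for the results that follow, here is a quick sketch (ours, not part of the original testing) of the theoretical upper bound at each frequency step, assuming performance scaled perfectly linearly with core clock from the 4.2 GHz stock turbo:

```python
# Hypothetical upper bound: percentage gain if performance scaled
# linearly with CPU frequency from the 4.2 GHz stock turbo.
base = 4.2  # GHz, i7-6700K stock turbo
for clock in (4.4, 4.6, 4.8):
    gain = (clock / base - 1) * 100
    print(f"{clock:.1f} GHz: +{gain:.1f}% over stock turbo")
```

Real-world gains will fall short of these figures wherever a benchmark is bound by memory, storage, or the GPU rather than the CPU, which is exactly the gap this mini-test is probing.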

Test Setup

Processor Intel Core i7-6700K (ES, Retail Stepping), 91W, $350
4 Cores, 8 Threads, 4.0 GHz (4.2 GHz Turbo)
Motherboards ASUS Z170-A
ASRock Z170 Extreme7+
Cooling Cooler Master Nepton 140XL
Power Supply OCZ 1250W Gold ZX Series
Corsair AX1200i Platinum PSU
Memory Corsair DDR4-2133 C15 2x8 GB 1.2V or
G.Skill Ripjaws 4 DDR4-2133 C15 2x8 GB 1.2V
Memory Settings JEDEC @ 2133
Video Cards ASUS GTX 980 Strix 4GB
ASUS R7 240 2GB
Hard Drive Crucial MX200 1TB
Optical Drive LG GH22NS50
Case Open Test Bed
Operating System Windows 7 64-bit SP1

The dynamics of CPU Turbo modes, on both Intel and AMD, can cause concern in environments with variably threaded workloads. There is also the issue of motherboard consistency, depending on how each manufacturer layers its own boosting technologies on top of the ones Intel would prefer it used. In order to remain consistent, we implement an OS-level high performance mode on all the CPUs we test, which should override any motherboard manufacturer's performance mode.

Many thanks to...

We must thank the following companies for kindly providing hardware for our test bed:

Thank you to AMD for providing us with the R9 290X 4GB GPUs.
Thank you to ASUS for providing us with GTX 980 Strix GPUs and the R7 240 DDR3 GPU.
Thank you to ASRock and ASUS for providing us with some IO testing kit.
Thank you to Cooler Master for providing us with Nepton 140XL CLCs.
Thank you to Corsair for providing us with an AX1200i PSU.
Thank you to Crucial for providing us with MX200 SSDs.
Thank you to G.Skill and Corsair for providing us with memory.
Thank you to MSI for providing us with the GTX 770 Lightning GPUs.
Thank you to OCZ for providing us with PSUs.
Thank you to Rosewill for providing us with PSUs and RK-9100 keyboards.

Frequency Scaling and the Handbrake Problem
Comments

  • Oxford Guy - Tuesday, September 1, 2015 - link

    (says the guy with a name like Communism)
  • HollyDOL - Sunday, August 30, 2015 - link

    Well, you might have a point with something here. Even though the eye itself can take in information very fast and the optic nerve can transmit it very fast, the electro-chemical bridge (Na-K bridge) between two nerves needs a "very long" time to stabilise its chemical levels before it can transmit another impulse. Afaik it takes about 100ms to get the levels back (though I currently have no literature to back that value up) before the next impulse can be transmitted. I suspect there are multitudes of lanes being cycled to get a better "frame rate", plus other goodies that make it up (tunnel effect, for example - a narrow field of vision to get more frames from the same bandwidth?)...
    Actually I would like to see a science-based article on that topic that would set things straight on this playing field. Maybe AT could put an article together (with an ophthalmologist/neurologist) to clear that up?
  • Communism - Monday, August 31, 2015 - link

    All latencies between the input and the output directly to the brain add up.

    Any deviation beyond that is an error rate added on top.

    Your argument might as well be "Light between the monitor and your retina is very fast traveling, so why would it matter?"

    One must take everything into consideration when talking about latency and temporal error.
  • qasdfdsaq - Wednesday, September 2, 2015 - link

    Not to mention, even the best monitors have more than 2ms variance in response time depending on what colours they're showing.
  • Nfarce - Sunday, August 30, 2015 - link

    As one who has been building gaming rigs and overclocking since the Celeron 300->450MHz days of the late 1990s, I'd +1 that. Over the past 15+ years, every new build I did with a new chipset (about every 2-3 years) has shown a diminished return in overclock performance for gaming. And my resolutions have increased over that period as well, further putting more demand on the GPU than CPU (going from 1280x1024 in 1998 to 1600x1200 in 2001 to 1920x1080 in 2007 to 2560x1440p in 2013). So here I am today with an i5 4690K which has successfully been gamed on at 4.7GHz, yet I'm only running it at stock speed because there is ZERO improvement on frames in my benchmarked games (Witcher III, BF4, Crysis 3, Alien Isolation, Project Cars, DiRT Rally). It's just a waste of power and heat and wear and tear. I will overclock it however when running video editing software and other CPU-intensive apps which noticeably helps.
  • TallestJon96 - Friday, August 28, 2015 - link

    Scaling seems pretty good, I'd love to see analysis on the i5-6600K as well.
  • vegemeister - Friday, August 28, 2015 - link

    Not stable for all possible inputs == not stable. And *especially* not stable when problematic inputs are present in production software that actually does something useful.
  • Beaver M. - Saturday, August 29, 2015 - link

    Exactly. Fact of the matter is that proper overclocking takes a LONG LONG time to get stable, unless you get extremely lucky. I sometimes spend months getting it stable. Even when testing with Prime95 like there's no tomorrow, it still won't prove that the system is 100% stable. You also have to test games for hours over several days, and of course other applications. But you can't really play games 24/7, so it takes quite some time.
  • sonny73n - Sunday, August 30, 2015 - link

    If you have all power saving features disabled, you only have to worry about stability under load. Otherwise, as CPU voltage and frequency fluctuate depending on the application, it may be a pain. Also, most mobos have issues with RAM once the CPU is OCed past a certain point.
  • V900 - Saturday, August 29, 2015 - link

    That's an extremely theoretical definition of "production software".

    No professional or production company would ever overclock their machines to begin with.

    For the hobbyist overclocker who on a rare occasion needs to encode something in 4K60, the problem is solved by clicking a button in his settings and rebooting.

    I really don't see the big deal here.
