Comparing IPC on Skylake: Memory Latency and CPU Benchmarks

The following explanation of IPC previously appeared in our Broadwell review.

Being able to do more with less, in the processor space, allows a task to be completed quicker and often for less power. While having devices with multiple cores allows many programs to run at once, and lets purely parallel compute such as graphics run faster, we are all still limited by the fact that a lot of software relies on executing one line of code after another. This is referred to as the serial part of the software, and is the basis for many early programming classes – getting the software to compile and complete is more important than speed. But the truth is that a few fast cores help more than several thousand super slow cores. This is where IPC comes into play.
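
The limit imposed by that serial part is captured by Amdahl's law. As a rough illustration (our addition, not part of the original review), the C sketch below shows how a program that is 20% serial tops out at a 5x speedup no matter how many cores are thrown at it:

    /* Amdahl's law, formalizing the point above: with a serial fraction s,
     * n cores can never speed a program up by more than 1 / (s + (1-s)/n). */
    #include <stdio.h>

    int main(void) {
        double serial = 0.20;              /* assume 20% of the work is serial */
        for (int cores = 1; cores <= 4096; cores *= 8) {
            double speedup = 1.0 / (serial + (1.0 - serial) / cores);
            printf("%4d cores -> %5.2fx speedup (ceiling: %.1fx)\n",
                   cores, speedup, 1.0 / serial);
        }
        return 0;
    }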

The principles behind extracting IPC are quite complex, as one might imagine. Ideally, every instruction a CPU receives would be read, executed and finished in one cycle, however that is never the case. The processor has to fetch the instruction, decode it, gather the data (which depends on where the data is), perform work on that data, then decide what to do with the result. Moving data around has never been more complicated, and a processor's ability to hide latency, pre-prepare data by predicting future events, or keep hold of previous results for potential future use, is all part of the plan. All the while, there is an external focus on keeping power consumption low and letting the frequency of the processor scale depending on what the target device actually is.
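
To make the latency-hiding point concrete, here is a minimal C sketch (our own illustration; build with low optimization, e.g. gcc -O1, so the compiler keeps the loops intact). The same number of additions runs much faster when split into independent chains that the out-of-order core can overlap than when every operation depends on the previous one:

    /* Same number of additions, two shapes: one long dependency chain vs.
     * four independent accumulators the core can retire in parallel. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>                     /* __rdtsc() */

    #define N 100000000ULL

    int main(void) {
        uint64_t s = 0, a = 1, b = 1, c = 1, d = 1;

        uint64_t t0 = __rdtsc();
        for (uint64_t i = 0; i < N; i++)
            s += s ^ i;                        /* each add depends on the last */
        uint64_t t1 = __rdtsc();

        for (uint64_t i = 0; i < N; i += 4) {  /* four independent chains */
            a += i; b += i + 1; c += i + 2; d += i + 3;
        }
        uint64_t t2 = __rdtsc();

        printf("dependent chain:    %llu cycles\n", (unsigned long long)(t1 - t0));
        printf("independent chains: %llu cycles\n", (unsigned long long)(t2 - t1));
        printf("(checksum: %llu)\n", (unsigned long long)(s + a + b + c + d));
        return 0;
    }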

For the most part, Intel has successfully increased IPC with every generation of processor: in most cases 5-10% with a node change, and 5-25% with an architecture change. The most recent large jumps came with the Core and Sandy Bridge architectures, each ushering in a new wave of super-fast computational power. As Broadwell to Skylake is an architecture change that should bring large updates, we should expect some good gains.

Intel Desktop Processor Cache Comparison

                                          L1-D       L1-I       L2          L3    L4
  Sandy Bridge i7                         4 x 32 KB  4 x 32 KB  4 x 256 KB  8 MB  -
  Ivy Bridge i7                           4 x 32 KB  4 x 32 KB  4 x 256 KB  8 MB  -
  Haswell i7                              4 x 32 KB  4 x 32 KB  4 x 256 KB  8 MB  -
  Broadwell i7 (Desktop / Iris Pro 6200)  4 x 32 KB  4 x 32 KB  4 x 256 KB  6 MB  128 MB eDRAM
  Skylake i7                              4 x 32 KB  4 x 32 KB  4 x 256 KB  8 MB  -

For this test we took Intel's most recent high-end i7 processors from the last five generations and set them all to 3.0 GHz with HyperThreading disabled. As each platform supports DDR3, we set the memory on each to DDR3-1866 with a CAS latency of 9. For Skylake we also ran DDR4-2133 C15 as its default speed. From a pure cache standpoint, here is how each of the processors performed:

If we ignore Broadwell and its eDRAM (the purple line, particularly from 16MB to 128MB), both of the Skylake lines stay at low latencies until 4MB. Between 4MB and 8MB, Skylake's cache latency still seems to be substantially lower than that of the previous generations.

Normally in this test, despite most of these CPUs having 8MB of L3 cache, the 8MB data point has to spill out to main memory because some of that cache is already occupied. A more efficient caching and prefetch algorithm will therefore show a lower latency 'at 8MB'. So one update for Skylake, as shown in both the DDR4 and DDR3 results, is that the L3 caching algorithms or hardware resources have been upgraded.
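
Latency tests of this kind typically use a pointer chase: walk a randomly shuffled circular linked list of a given footprint, so the prefetchers cannot guess the next address, and divide elapsed cycles by the number of hops. A minimal C sketch of the technique (our assumption of the method, not the actual tool used here):

    /* Every load depends on the previous one, so cycles / hops approximates
     * the load-to-use latency at the chosen footprint. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <x86intrin.h>

    int main(void) {
        const size_t bytes = 8u << 20;             /* 8 MB footprint: the L3 edge */
        const size_t n = bytes / sizeof(void *);
        void **buf = malloc(n * sizeof(void *));
        size_t *order = malloc(n * sizeof(size_t));
        if (!buf || !order) return 1;

        for (size_t i = 0; i < n; i++) order[i] = i;
        srand(1);
        for (size_t i = n - 1; i > 0; i--) {       /* Fisher-Yates shuffle */
            size_t j = (size_t)rand() % (i + 1);
            size_t t = order[i]; order[i] = order[j]; order[j] = t;
        }
        for (size_t i = 0; i < n; i++)             /* link into one random cycle */
            buf[order[i]] = &buf[order[(i + 1) % n]];

        void **p = buf;
        const uint64_t hops = 50000000ULL;
        uint64_t t0 = __rdtsc();
        for (uint64_t i = 0; i < hops; i++)
            p = (void **)*p;                       /* serialized chain of loads */
        uint64_t t1 = __rdtsc();

        printf("~%.1f cycles per load (end: %p)\n",
               (double)(t1 - t0) / (double)hops, (void *)p);
        free(buf); free(order);
        return 0;
    }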

At this point it is also worth comparing the DDR3 and DDR4 results on Skylake above 16MB. The latency in this region looks a lot higher than the others, showing nearly 100 clocks as we move up to 1GB. But remember that these results are measured against a memory clock of 2133 MHz, whereas the others are at 1866 MHz. As a result, the two lines are more or less equal in terms of absolute time, as we would expect.
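
As a quick sanity check on that claim (our arithmetic, assuming the chart's y-axis counts memory-bus clocks, not core clocks):

    /* The same absolute latency expressed against two different memory clocks. */
    #include <stdio.h>

    int main(void) {
        double ddr4_clocks = 100.0;                     /* read off the DDR4 line */
        double ns = ddr4_clocks / 2133.0 * 1000.0;      /* ~46.9 ns at 2133 MHz */
        printf("%.0f DDR4-2133 clocks = %.1f ns = %.1f DDR3-1866 clocks\n",
               ddr4_clocks, ns, ns * 1866.0 / 1000.0);  /* ~87.5 clocks */
        return 0;
    }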

Here are the generational CPU results at 3.0 GHz:

Dolphin Benchmark: link

Emulators are often bound by single threaded CPU performance, and general reports suggest that Haswell provided a significant boost to emulator performance. This benchmark runs a Wii program that ray traces a complex 3D scene inside the Dolphin Wii emulator. Performance on this benchmark is a good proxy for the speed of Dolphin's CPU emulation, which is an intensive single core task using most aspects of a CPU. Results are given in minutes, where the Wii itself scores 17.53 minutes.

Dolphin Emulation Benchmark

Cinebench R15

Cinebench is a benchmark based around Cinema 4D, and is fairly well known among enthusiasts for stressing the CPU for a provided workload. Results are given as a score, where higher is better.

Cinebench R15 - Single Threaded

Cinebench R15 - Multi-Threaded

Point Calculations – 3D Movement Algorithm Test: link

3DPM is a self-penned benchmark, taking basic 3D movement algorithms used in Brownian Motion simulations and testing them for speed. High floating point performance, MHz and IPC win in the single threaded version, whereas the multithreaded version has to handle the threads and benefits from more cores. For a brief explanation of the platform agnostic coding behind this benchmark, see my forum post here.
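
For flavor, a toy C sketch of the kind of kernel a Brownian-motion test like this exercises (an assumption based on the description above, not the benchmark's actual source; compile with -lm):

    /* Move each particle one unit step in a uniformly random direction and
     * accumulate positions: heavy on trig and floating point throughput. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double x, y, z; } particle;

    int main(void) {
        enum { NPART = 1024, NSTEPS = 1000 };
        static particle p[NPART];

        srand(42);
        for (int s = 0; s < NSTEPS; s++) {
            for (int i = 0; i < NPART; i++) {
                /* pick a uniform direction on the unit sphere */
                double theta = acos(2.0 * rand() / (double)RAND_MAX - 1.0);
                double phi   = 2.0 * M_PI * rand() / (double)RAND_MAX;
                p[i].x += sin(theta) * cos(phi);
                p[i].y += sin(theta) * sin(phi);
                p[i].z += cos(theta);
            }
        }
        printf("particle 0 ended at (%f, %f, %f)\n", p[0].x, p[0].y, p[0].z);
        return 0;
    }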

3D Particle Movement: Single Threaded

3D Particle Movement: MultiThreaded

Compression – WinRAR 5.0.1: link

Our WinRAR test from 2013 is updated to the latest version of WinRAR at the start of 2014. We compress a set of 2867 files across 320 folders totaling 1.52 GB in size – 95% of these files (by count) are small, typical website files, and the rest (90% of the total size) are 30-second 720p videos.

WinRAR 5.01, 2867 files, 1.52 GB

Image Manipulation – FastStone Image Viewer 4.9: link

Similarly to WinRAR, the FastStone test is updated for 2014 to the latest version. FastStone is the program I use to perform quick or bulk actions on images, such as resizing, adjusting color and cropping. In our test we take a series of 170 images in various sizes and formats and convert them all into 640x480 .gif files, maintaining the aspect ratio. FastStone does not use multithreading for this test, and thus single threaded performance often wins.

FastStone Image Viewer 4.9

Video Conversion – Handbrake v0.9.9: link

Handbrake is a media conversion tool initially designed to help convert DVD ISOs and Video CDs into more common video formats. The principle today is still the same, with the primary output being H.264 video plus AAC/MP3 audio within an MKV container. In our test we use the same videos as in the Xilisoft test, and results are given in frames per second.

HandBrake v0.9.9 LQ Film

HandBrake v0.9.9 2x4K

Rendering – PovRay 3.7: link

The Persistence of Vision Ray Tracer, or POV-Ray, is a freeware package for, as the name suggests, ray tracing. It is a pure renderer rather than modeling software, but the latest beta version contains a handy benchmark for stressing all the processing threads on a platform. We have been using this test in motherboard reviews to test memory stability at various CPU speeds to good effect – if it passes, the IMC in the CPU is stable for a given CPU speed. As a CPU test, it runs for approximately 2-3 minutes on high-end platforms.

POV-Ray 3.7 Beta RC4

Synthetic – 7-Zip 9.2: link

As an open source compression tool, 7-Zip is a popular utility for making sets of files easier to handle and transfer. The software offers its own benchmark, and we report the result it gives.

7-zip Benchmark

Overall: CPU IPC

Removing WinRAR as a benchmark, because it gets boosted by the eDRAM in Broadwell, we get an interesting look at how each generation has evolved over time. Taking Sandy Bridge (i7-2600K) as the base, we have the following:

From a pure upgrade perspective, the IPC gain here for Skylake does not look great. In fact, in two benchmarks the IPC seems to have decreased: 3DPM in single threaded mode and 7-Zip. What makes 3DPM interesting is that the multithreaded version still shows at least some improvement, if only minor. This difference between MT and ST is more nuanced than first appearances suggest. Throughout the testing, it was noticeable that multithreaded results seemed (on average) to benefit more from the IPC gain than single threaded ones. If this is true, it would suggest that Intel has somehow improved its thread scheduling or added internal hardware to deal with thread management. We'll probably find out more at IDF later in the year.
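
As a reference point for how a chart like this is built: divide each benchmark score by the Sandy Bridge result at the same clock, then average the ratios. A minimal C sketch under that assumption (the scores below are placeholders, not data from this review):

    /* Normalize each benchmark to the Sandy Bridge score at the same clock,
     * then average the per-benchmark ratios. */
    #include <stdio.h>

    #define NBENCH 3

    int main(void) {
        double sandy[NBENCH]   = {100.0, 50.0, 1.80};   /* hypothetical scores */
        double skylake[NBENCH] = {121.0, 58.0, 2.10};   /* higher = better */

        double sum = 0.0;
        for (int i = 0; i < NBENCH; i++)
            sum += skylake[i] / sandy[i];               /* per-benchmark ratio */

        printf("average relative IPC: %.1f%% of Sandy Bridge\n",
               100.0 * sum / NBENCH);
        return 0;
    }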

If we adjust this graph to show generation to generation improvement and include the DDR4 results:

This graph shows that:

Sandy Bridge to Ivy Bridge: Average ~5.8% Up
Ivy Bridge to Haswell: Average ~11.2% Up
Haswell to Broadwell: Average ~3.3% Up
Broadwell to Skylake (DDR3): Average ~2.4% Up
Broadwell to Skylake (DDR4): Average ~2.7% Up

Oh dear. Typically with an architecture update we see a bigger increase in performance than 2.7% IPC. Looking at matters purely from this perspective, Skylake does not come out well. These results suggest that Skylake is merely another minor upgrade in performance metrics, and that clock-for-clock it compares unfavorably with Broadwell. However, consider that very few people actually invested in Broadwell. If anything, Haswell was the last major mainstream processor generation that people actually purchased, which means that:

Haswell to Skylake (DDR3): Average ~5.7% Up.

This is more of a bearable increase, and it takes advantage of the fact that Broadwell on the desktop was a niche, focused launch. The other results in this review will be interesting to see.
