Final Words

Based on our testing of Skylake-K, performance per clock did not impress as much as we expected, but against platforms several generations older it delivers an overall boost worthy of an upgrade from 3-to-5 year old systems where more CPU performance is a good thing to have. The chipset side of the equation is also blown wide open, with the chipset now able to host any PCIe x4 (or smaller) device on its own lanes without having to share bandwidth.

The reasons for Skylake-K's clock-for-clock performance relate to the fact that Skylake is a mobile-first design, and Intel has focused a lot of resources in two areas. The first is power delivery: by adding more power gating and handing finer frequency control directly to the hardware through Speed Shift, the processor can power down unused parts as well as improve the power consumption of non-steady-state workloads. Speed Shift requires operating system support, and in return it allows the operating system to hand control of frequency adjustments back to the hardware within an OS-specified range. As the processor can adjust its own frequency settings almost 30x faster than the operating system, it can respond to dynamic workloads, such as those experienced in day-to-day use of a device, faster than the OS can. The second part of Speed Shift relates to the power states of the system: Skylake allows this hardware control to be substantially more granular than the high-level P-states, letting the onboard controller determine the most efficient mode in which to run the processor with respect to system power (which is also now tracked). When performance is not a priority, Skylake can also impose a duty cycle on the processor cores, switching them on and off, to remain at the most efficient power state.
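For the curious, the hardware side of Speed Shift is exposed to software as Intel's Hardware P-states (HWP). Below is a minimal sketch, assuming an x86 machine and GCC/Clang, of how software can ask the CPU whether HWP is present; the actual handover of frequency control is done by the kernel through MSRs (such as IA32_PM_ENABLE), which requires ring-0 access and is not shown here.

    /* Detect Speed Shift (Intel HWP) support from user space.
     * CPUID leaf 06H is the thermal/power management leaf;
     * EAX bit 7 advertises baseline HWP support. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        if (!__get_cpuid(6, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 06H not supported\n");
            return 1;
        }

        /* EAX bit 7: Hardware P-states (the mechanism behind Speed Shift). */
        printf("HWP (Speed Shift): %s\n", (eax & (1u << 7)) ? "yes" : "no");
        return 0;
    }

Once the OS enables HWP, the hardware manages frequency within the OS-specified range on its own, which is what allows the roughly 30x faster response described above.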

Also on the power side of things, the integrated graphics gets a boost here too. Intel has added a number of fixed-function units, such as those for HEVC decode, to reduce power, alongside more power gating and finer frequency domain separation. Generation 9 graphics also adds RAW processing functionality for camera setups, and Multi Plane Overlay performs stretch, rotation and translation of parts of the desktop directly on the display controller, rather than requiring power-wasting trips out to main memory to do so. Multi Plane Overlay is still in its infancy, but it is a part of the graphics platform that Intel seems keen to improve over the years. All together, from a power perspective, it will be interesting to see whether the claimed benefits of the new architecture can actually be felt.
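As an aside on how software discovers these fixed-function units: on Linux, hardware video decode is advertised through VA-API. The sketch below, which assumes libva is installed, a typical /dev/dri/renderD128 render node, and linking with -lva -lva-drm, simply asks the driver whether an HEVC decode profile is exposed; it reports the driver's advertised capabilities rather than probing the silicon directly.

    /* Query VA-API for an advertised HEVC Main decode profile. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <va/va.h>
    #include <va/va_drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/renderD128", O_RDWR);  /* typical render node */
        if (fd < 0) { perror("open"); return 1; }

        VADisplay dpy = vaGetDisplayDRM(fd);
        int major = 0, minor = 0;
        if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
            close(fd);
            return 1;
        }

        int num = vaMaxNumProfiles(dpy);
        VAProfile profiles[num];
        if (vaQueryConfigProfiles(dpy, profiles, &num) == VA_STATUS_SUCCESS) {
            for (int i = 0; i < num; i++)
                if (profiles[i] == VAProfileHEVCMain)
                    printf("HEVC Main decode advertised by the driver\n");
        }

        vaTerminate(dpy);
        close(fd);
        return 0;
    }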

The other avenue for Skylake's design choices stems from the corporate or business aspect of Intel's platform. At some point a form of the Skylake architecture will find its way into an extreme platform Xeon, which means the design must be able to scale from 4.5W in Skylake-Y up to a potential 140W in a Xeon E5 (which is what currently exists on Haswell-EP). The best way to do this is to make the design more modular, adding in performance where necessary. While Intel has not been explicit about which segments are more modular than others, we have been given indications that the L2 cache, and its move from an 8-way associative design to a 4-way design, is part of this. It is worth noting that this change also affords power benefits for Skylake, at a small expense in performance.

Also, as per our architecture analysis, despite the decrease in L2 associativity, CPU-limited environments benefit from wider and deeper execution buffers that attempt to extract as much instruction-level parallelism as possible. Skylake extends this by providing a larger out-of-order window for micro-ops to queue in, larger buffers for in-flight data (both load and store), a doubling of L2 cache miss bandwidth, general improvements to branch prediction to reduce unnecessary speculation, an increase in micro-op dispatch and retirement rates, as well as new instructions for better cache management. As far as we understand, these under-the-hood changes will make themselves felt as the Skylake platform develops.
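On the cache-management point, one concrete example is CLFLUSHOPT, a weakly-ordered cache line flush that Skylake introduces. The sketch below shows how it might be used; the GCC/Clang intrinsic requires compiling with -mclflushopt, and the 64-byte line size is the usual assumption on these parts.

    /* Flush a buffer from the caches with CLFLUSHOPT. Unlike CLFLUSH,
     * the flushes are weakly ordered and can overlap, so an SFENCE is
     * needed before relying on their completion. */
    #include <immintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    static void flush_buffer(const void *buf, size_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;

        for (size_t i = 0; i < len; i += 64)   /* one 64-byte line at a time */
            _mm_clflushopt(p + i);

        _mm_sfence();  /* order the flushes against later memory operations */
    }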

From the processor perspective, Skylake affords an interesting dynamic. For the most part, the TDP ratings in the stack of processors remain similar to those of Broadwell, meaning that a 15W Broadwell design should only need minor adjustments (a new motherboard, new ports for new features) to still provide sufficient cooling, but with better performance and/or more battery life. Despite not releasing them directly today, Intel is talking a lot about its eDRAM implementations, both as 4+3e (quad core, GT3e with 64MB) models and 4+4e (quad core, GT4e with 128MB) models.

The addition of Iris and Iris Pro graphics to Intel's 15W and 28W SKUs stands to significantly improve the state of graphics performance on ultrabooks and near-ultrabook form factors. Intel has offered processors with more powerful graphics since Haswell, although seemingly begrudgingly, withholding the larger GPUs and eDRAM for the 45W SKUs. While OEMs will be paying more for these higher performance processors, the new Iris and Iris Pro configurations are practically a love letter to Apple, which has long desired more powerful iGPUs for its products and was a major instigator of the Iris lineup in the first place. We wouldn't be the least bit surprised to see Apple jump on these new parts for the 13" MacBook Pro and the MacBook Air series, as they deliver just what Apple has been looking for.

This ends up being doubly beneficial for all concerned. The actual working nature of the eDRAM has changed, making it more like a DRAM buffer: lower latency and transparent to software. It is also clear that Intel is releasing substantially more eDRAM parts than it ever did with Haswell or Broadwell, with more to follow as time progresses. It is unclear whether an eDRAM part will make it to the desktop, however, though we might imagine a Skylake-H part with eDRAM in a future iMac in order to update that line.

Two new parts that will be constantly discussed over the next few months are the overclockable mobile processor, the i7-6820HK, and the mobile Xeons. The overclockable processor, a 45W part, is most likely destined for large desktop-replacement designs, ending up in gaming systems whose users want to add more heat and voltage; we might see some interesting laptop designs as a result. The mobile Xeon aspect of the release is Intel's first foray into Xeons specifically targeted at mobile, rather than a repurposed desktop processor in a bulky chassis. At this point Intel is only launching a couple of E3 v5 models at 45W, suggesting that they will still end up in large workstation chassis with professional graphics cards, but it is a step that Intel might take further down into the notebook segment, with ECC, vPro and other Xeon features that translate well into business environments.

As always, if previous analysis has proven anything, an architecture can promise much on paper, but what really matters is how it translates into the devices in which it is used. Features such as Speed Shift, the fixed-function units and the increase in power gating/frequency control should all benefit a substantial segment of users, assuming they have the correct operating system support. How earth-shattering or paradigm-shifting Intel's Skylake platform is remains up for discussion, but this week brings the IFA trade show in Berlin, where we expect a number of OEMs to start showing devices, as well as processors actually going on sale. We will update with product testing when we can get our hands on them.

Comments

  • Xenonite - Thursday, September 3, 2015

    Actually, it seems that power consumption is the only thing that matters to consumers, even on the desktop.
    All this talk about AMD's lack of competition being the reason why we aren't seeing meaningful generational performance improvements is just that: talk.

    The real thing that hampers performance progress is consumers' plain refusal to upgrade for performance reasons (even a doubling in performance is not economically viable to produce since no one, except for me it seems, will buy it).
    Consumers only buy the lowest power system that they can afford. It has nothing to do with AMD.
    Even if AMD released a CPU that is 4x faster than piledriver, it wouldn't change Intel's priority (nor would it help AMD's sales...).
  • IUU - Wednesday, September 2, 2015

    Sorry for my tone, but "I'm failing to see" how transistor count doesn't mean more to consumers than to anyone else.
    So, after 10 years of blissful carelessness (because duuude it's user experience dat matters, ugh..),
    you will have everyone deceiving you about what they offer at the price point they offer it. Very convenient, especially if they are unable to sustain an exponential increase in performance and pass to the next paradigm to achieve it.

    Because until very recently we had been seeing mostly healthy practices, despite the fact that you could always meet people pointing to big or small sins.
    Big example: what's the need for an iGPU on a processor that consumes 90 watts, especially a GPU that is tragically subpar? To hide the fact that they have nothing more to offer the consumer, CPU-wise, at 90 watts (in the current market situation), and to have an excuse for charging more for a
    theoretically higher-consuming and "higher-performing" CPU?
    Because what bugs me is: what if the 6700K lacked the iGPU? Would it perform better without a useless iGPU dragging it down? I really don't know, but I feel it wouldn't.
    Regarding mobile solutions and money- and energy-limited devices, the iGPU could really prove useful to a lot of people, without overloading their device with a clunky, lowly discrete GPU.
  • xenol - Wednesday, September 2, 2015

    If the 6700K lacked the iGPU with no other modifications, it would perform exactly the same.
  • MrSpadge - Wednesday, September 2, 2015

    Yes, it would perform exactly the same (if the iGPU is not used; otherwise it needs memory bandwidth). But the chip would run hotter since it would be a lot smaller. Si is not the best thermal conductor, but the presence of the iGPU spreads the other heat producers out a bit.
  • xenol - Wednesday, September 2, 2015

    I don't think that's how thermals in ICs work...
  • MrSpadge - Wednesday, September 2, 2015

    Thermodynamics "work" and don't care if they're being applied to an IC or a metal brick. Silicon is a far better heat conductor than air, so even if the GPU is not used, it will transfer some of the heat from the CPU + Uncore to the heat spreader.

    My comment was a bit stupid, though, in the way that given how tightly packed the CPU cores and the uncore are, the GPU spreads none of them further apart from each other. It could have been designed like that, but according to the picture on one of the first few pages, it's not.
  • Xenonite - Thursday, September 3, 2015

    No, it wouldn't. You could easily spread out the cores by padding them with much more cache and doubling their speculative and parallel execution capabilities. If you up the power available for such out of order execution, the additional die space could easily result in 50% more IPC throughput.
  • MrSpadge - Thursday, September 3, 2015

    50% IPC increase? Go ahead and save AMD, then! They've been trying that for years with probably billions of R&D budget (accumulated over the years), yet their FX CPUs with huge L3 don't perform significantly better than the APUs with similar CPU cores and no L3 at all.
  • Xenonite - Thursday, September 3, 2015

    Yes, but I specifically mentioned using that extra cache to feed the greater amount of speculative execution units made available by the removal of the iGPU.

    Sadly, AMD can't use this strategy because GlobalFoundries' and TSMC's manufacturing technology cannot fit the same number of transistors into a given area as Intel's can.
    Furthermore, their yields for large dies are also quite a bit lower, and AMD really doesn't have the monetary reserves to produce such a high-risk chip.

    Also, the largest fraction of that R&D budget went into developing smaller, cheaper and lower power processors to try and enter the mobile market, while almost all of the rest went into sacrificing single threaded design (such as improving and relying more on out of order execution, branch prediction and speculative execution) to design Bulldozer-like, multi-core CPUs (which sacrifice a large portion of die area, that could have been used to make a low amount of very fast cores, to implement a large number of slow cores).

    Lastly, I didn't just refer to L3 cache when I suggested using some of the free space left behind by the removal of the iGPU to increase the amount of cache. The L1 and L2 caches could have been made much larger, with more associativity to further reduce the number and duration of pipeline stalls due to not having a data dependency in the cache.
    Also, while it is true that the L3 cache did not make much of a difference in the example you posted, it's also equally true that cache performance becomes increasingly important as a CPU's data processing throughput increases.
    Modern CPU caches just seem to have stagnated (aside from some bandwidth improvements every now and then), because our CPU cores haven't seen that much of a performance upgrade since the last time the caches were improved.
    Once a CPU gets the required power and transistor budgets for improved out-of-order performance, the cache will need to be large enough to hold all the different datasets that a single core is working on at the same time (which is not a form of multi-threading, in case you were wondering), while also being fast enough to service all of those units at once, without adversely affecting any one set of calculations.
  • techguymaxc - Wednesday, September 2, 2015

    Your representation of Skylake's CPU/IPC performance is inaccurate and incomplete due to the use of the slowest DDR4 memory available. Given the nature of DDR4 (high bandwidth, high latency), it is an absolute necessity to pair the CPU with high-clockspeed memory to mitigate the latency impairment. Other sites have tested with faster memory and seen a much larger difference between Haswell and Skylake. See Hardocp's review (the gaming section specifically) as well as Techspot's review (page 13, memory speed comparison). Hardocp shows Haswell with 1866 RAM is actually faster than Skylake with 2133 RAM in Unigine Heaven and Bioshock Infinite @ lowest quality settings (to create a CPU bottleneck).

    I find Techspot's article particularly interesting in that they actually tested both platforms with fast RAM. In synthetic testing (Sandra 2015), Haswell with 2400 DDR3 has more memory bandwidth than Skylake with 2666 DDR4; it is not until you pair Skylake with 3000 DDR4 that it achieves more memory bandwidth than Haswell with 2400 DDR3. You can see here directly the impact that latency has, even on bandwidth and not just overall performance. Furthermore, in their testing, Haswell with 2400 RAM vs. Skylake with 3000 RAM shows Haswell being faster in the Cinebench R15 multi-threaded test (895 vs. 892). Their 7-zip testing has Haswell leading both Skylake configurations in a memory-bound workload (32MB dictionary) in terms of instructions per second. Finally, in a custom Photoshop workload, Haswell's performance is once again sandwiched between the two Skylake configurations.

    Clearly both Haswell and Skylake benefit from faster memory. In fact, Skylake should ideally be paired with > 3000 DDR4 as there are still scenarios in which it is slower than Haswell with 2400 DDR3 due to latency differences.

    Enthusiasts are also far more likely to buy faster memory than the literal slowest memory available for the platform, given the minimal price difference. Right now on Newegg one can purchase a 16GB DDR3 2400 kit (2x8) for $90, a mere $10 more than an 1866 16GB kit. With DDR4 the situation is only slightly worse. The cheapest 16GB (2x8) 2133 DDR4 kit is $110, and 3000 goes for $135. It is also important to note that these kits have the same (primary) timings with a CAS latency of 15.

    So now we come to your reasoning for pairing Skylake with such slow RAM, and that of other reviewers, as you are not the only one to have done this. Intel only qualified Skylake with DDR4 up to 2133 MT/s. Why did they do this? To save time and money during the qualification stage leading up to Skylake's release. It is not because Skylake will not work with faster RAM, there isn't an unlocked Skylake chip in existence that is incapable of operating with at least 3000 RAM speed, and some significantly higher. Hardocp was able to test their Skylake sample (with no reports of crashing or errors) with the fastest DDR4 currently available today, 3600 MT/s. I have also heard anecdotally from enthusiasts with multiple samples that DDR4 3400-3600 seems to be the sweet spot for memory performance on Skylake.

    In conclusion, your testing method is improperly formed, when considered from the perspective of an enthusiast whose desire is to obtain the most performance from Skylake without over-spending. Now, if you believe your target audience is not in fact the PC enthusiast but instead a wider "mainstream" audience, I think the technical content of your articles easily belies this notion.
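For readers who want to sanity-check memory bandwidth claims like the ones above, here is a minimal STREAM-style triad sketch in C; the array size and timing method are illustrative assumptions on our part, not Sandra's actual methodology, and it should be compiled with optimizations (e.g. -O2) on a POSIX system.

    /* STREAM-style triad: measures sustainable memory bandwidth by
     * streaming three large arrays (2 reads + 1 write per element). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)32 * 1024 * 1024)   /* 256 MB per array: dwarfs any cache */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];      /* triad: 2 reads + 1 write */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double bytes = 3.0 * N * sizeof(double);  /* nominal STREAM traffic */

        /* Print a result value so the loop is not optimized away. */
        printf("check %.1f, triad bandwidth: %.1f GB/s\n",
               a[N - 1], bytes / sec / 1e9);
        return 0;
    }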
