A small portion of this page was posted as part of our initial Skylake-K coverage.

The High Level Core

Taking a high level view, the application of the Skylake-S/H architecture features results in the layout shown in Intel's overview slide.

From most perspectives, the central image provided by Intel could have been taken from a Broadwell presentation if you scrubbed out the ‘GT4’ element of the design, but the move to Skylake brings improvements in each of these areas in terms of functionality, bandwidth, power consumption, or all of the above.

Out of the box, the Skylake IA core is designed to extract instruction level parallelism from code, and the design allows for more operations to be dispatched, queued and retired in a single clock. We cover this in more detail on the next page. To augment this, the data transfer ring between the cores, cache, graphics and the system agent also has its read bandwidth increased to 128b, allowing more data to be supplied to the cores. In the cores themselves, the big upgrades surround the in-flight management of data, especially with respect to cache misses. We have been told that a number of these upgrades, while they might not be readily usable by current software, should have an effect in certain vertical markets (finance, big data, etc.).

From a memory connectivity standpoint, most processors will support both DDR3L and DDR4 with a dual memory controller design (or a single memory controller that supports both). Skylake-Y (Core M) is the exception here, and will only work in DDR3L/LPDDR3 modes. As always with multi-DRAM support designs, only one standard can be in use at any time.

On the PCI-Express graphics allocation side, the Skylake-H and Skylake-K processors have sixteen PCIe 3.0 lanes for devices attached directly to the processor, similar to Intel's previous generation processors. These can be split into a single PCIe 3.0 x16, x8/x8 or x8/x4/x4 depending on the motherboard design. (Note that this is different from early reports of Skylake having 20 PCIe 3.0 lanes for GPUs. It does not.)

With this, SLI will work up to x8/x8 and CrossFire to x8/x8 or x8/x4/x4 in both desktop and mobile designs. For desktops, if a motherboard supports x8/x4/x4 and a PCIe card is placed into that bottom slot, SLI will not work, because only one GPU would have eight lanes: NVIDIA requires a minimum of PCIe x8 in order to enable SLI. CrossFire has no such limitation, which makes the possible configurations interesting. Below we discuss that the chipset has 20 (!) PCIe 3.0 lanes to use in five sets of four lanes, and these could be used for graphics cards as well. That means a motherboard can support x8/x8 from the CPU plus PCIe 3.0 x4 from the chipset and end up with either dual-SLI or tri-CFX enabled when all the slots are populated, as the sketch below illustrates.
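
To make those rules concrete, here is a minimal Python sketch of our own (the lane splits and per-slot minimums come from the rules above; this is purely an illustration, not an NVIDIA or AMD tool):

    # Valid splits of the CPU's sixteen PCIe 3.0 lanes on Skylake-S/H
    CPU_SPLITS = [(16,), (8, 8), (8, 4, 4)]

    def sli_ok(slots):
        # NVIDIA requires at least PCIe x8 on every populated GPU slot
        return len(slots) >= 2 and all(lanes >= 8 for lanes in slots)

    def crossfire_ok(slots):
        # CrossFire tolerates x4 slots, so chipset lanes can join in
        return len(slots) >= 2 and all(lanes >= 4 for lanes in slots)

    for split in CPU_SPLITS:
        print(split, 'SLI:', sli_ok(split), 'CrossFire:', crossfire_ok(split))
    # (8, 4, 4) fails the SLI check but passes CrossFire, as described above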

For Skylake-U/Y, these processors are not typically paired with discrete graphics and, as far as we can tell, the PCIe lanes have been removed from these lines. As a result, any PCIe-based storage (such as M.2) on devices with these processors will use the chipset PCIe lanes. As mentioned later, the chipsets on U/Y also differ from their more desktop-oriented counterparts.

DMI 3.0

The Skylake-S and H processors are connected to the chipset by the four-lane DMI 3.0 interface. DMI 3.0 is an upgrade over the DMI 2.0 used in the previous generation, boosting the speed from 5.0 GT/s (2GB/sec) to 8.0 GT/s (~3.93GB/sec) and essentially moving DMI from PCIe 2 to PCIe 3, but it requires the motherboard traces between the CPU and chipset to be shorter (7 inches rather than 8 inches) in order to maintain signal speed and integrity. This also enables one of the biggest upgrades to the system, chipset connectivity, as shown below in the HSIO section.
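
As a quick sanity check on those bandwidth figures, the arithmetic below assumes the standard PCIe encoding overheads (8b/10b for the PCIe 2-class DMI 2.0 link, 128b/130b for the PCIe 3-class DMI 3.0 link):

    # Effective DMI bandwidth: lanes x transfer rate x encoding efficiency / 8 bits
    def dmi_bandwidth_gbs(gt_per_s, efficiency, lanes=4):
        return lanes * gt_per_s * efficiency / 8

    print(dmi_bandwidth_gbs(5.0, 8 / 10))     # DMI 2.0: 2.0 GB/s
    print(dmi_bandwidth_gbs(8.0, 128 / 130))  # DMI 3.0: ~3.94 GB/s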

Skylake-Y/U, by virtue of having the chipset on the processor package, can keep the interface between the CPU and IO very simple, using what Intel calls an OPIO (on-package input-output). We have seen this as far back as Haswell; it delivers 4GB/s of bandwidth at 1pJ/bit and has been described as very power efficient as well as highly scalable.
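
For a sense of scale on that 1pJ/bit figure, a back-of-the-envelope calculation (our own, assuming the link runs flat out at its quoted 4GB/s) puts the link power in the tens of milliwatts:

    # OPIO link power at full load: 4 GB/s at 1 pJ per bit
    bits_per_second = 4e9 * 8   # 4 GB/s expressed in bits per second
    joules_per_bit = 1e-12      # 1 pJ/bit
    print(bits_per_second * joules_per_bit, 'W')  # 0.032 W, i.e. ~32 mW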

CPU Power Delivery – Moving the FIVR

Moving on to power arrangements, with Skylake the situation changes compared to Haswell. Prior to Haswell, voltage regulation was performed on the motherboard and the appropriate voltages were then fed into the processor. This was deemed inefficient for power consumption, and for the Haswell/Broadwell processors Intel created a fully integrated voltage regulator (FIVR) in order to reduce motherboard cost and power consumption. This had an unintended side effect: while it was more efficient (good for mobile platforms), it also acted as a source of heat inside the CPU at high frequencies. As a result, overclocking was limited by temperatures, and variation in FIVR quality led to a large spread in results. For Broadwell-Y, it also increased the z-height of the processor due to the power delivery components on the rear of the package, so in order to reduce the z-height of any corresponding product, a hole had to be cut in the motherboard beneath the package.

For Skylake, the voltage regulation is moved back into the hands of the motherboard manufacturers. This should allow for cooler processors depending on how the silicon works, but it will result in slightly more expensive motherboards.

One indication of this is that some motherboards will go back to having a large number of multiplexed power phases, and some manufacturers will use this as a differentiating point, although the usefulness of such a design is sometimes questionable.

Comments

  • JKflipflop98 - Sunday, September 6, 2015 - link

    Dude, if you're still running Arrandale, just about anything you buy at this point is going to be a *MAJOR* upgrade across the board.
  • tipoo - Tuesday, September 1, 2015 - link

    "Intel’s graphics topology consists of an ‘unslice’ (or slice common) that deals solely with command streaming, vertex fetching, tessellation, domain shading, geometry shaders, and a thread dispatcher. "

    This part of their architecture seemed like the weak spot which led to little scaling between 1/2/3 slices going from DX11 to DX12. So will that remain the same with Skylake, or are there other differences that will allow better scaling with DX12?
  • extide - Wednesday, September 2, 2015 - link

    That sentence is actually incorrect (the quote).

    The unslice is not the same as slice common.
    The unslice is what's not in the slices, obviously, but the 'slice common' is what IS in the slice, but which ISN'T the EUs themselves.
  • extide - Wednesday, September 2, 2015 - link

    So, for example, GT2 has 1 Unslice, 3 Slice Commons (1/slice) and 24 EUs (8/slice).
  • extide - Wednesday, September 2, 2015 - link

    Actually, GT2 has 1 Unslice, 1 Slice, 3 sub-slices, 3 Slice Commons (1/sub-slice) and 24 EUs (8/sub-slice).
  • ltcommanderdata - Tuesday, September 1, 2015 - link

    Apple appears to take the Core M and use it in a 7 W cTDP up configuration in the Retina MacBook. I wonder if the increase in performance would be worth the increased heat and power consumption to use U-class processors in a 7.5 W cTDP down configuration instead? Or even try to accommodate 9.5 W cTDP down U-class processors to take advantage of the GT3e to better drive the retina display?
  • Kutark - Tuesday, September 1, 2015 - link

    I feel like I should know this already, but what are they referring to with the whole 4+2, 2+2, etc.?
  • Kutark - Tuesday, September 1, 2015 - link

    Nm, I think I figured it out. 4 core + GT2, or 2 core + GT2, etc.
  • HideOut - Wednesday, September 2, 2015 - link

    Yep, nailed it. (the +2/3/4 is GT2/3/4)
  • Braincruser - Wednesday, September 2, 2015 - link

    First number is CPU cores/modules, second part is GPU modules.
