A small portion of this page was posted as part of our initial Skylake-K coverage.

The High Level Core

Taking a high-level view, the application of the Skylake-S/H architecture features results in the following:

From most perspectives, the central image provided by Intel could have been taken from a Broadwell presentation if you scrubbed out the ‘GT4’ element of the design, but the move to Skylake suggests improvements in each of these areas, whether in functionality, bandwidth, power consumption, or all of the above.

Out of the box, the Skylake IA core is designed to extract instruction-level parallelism from code, and the design allows for more operations to be dispatched, queued and retired in a single clock. We cover this in more detail on the next page. To augment this, the data transfer ring between the cores, cache, graphics and system agent is also increased in read bandwidth to 128b, allowing more data to be supplied to the cores. In the cores themselves, the big upgrades concern in-flight management of data, especially with respect to cache misses. We have been told that a number of these upgrades, while they might not be readily usable by current software, should have an effect in certain vertical markets (finance, big data, etc.).

From a memory connectivity standpoint, most processors will support both DDR3L and DDR4 with a dual memory controller design (or a single memory controller that supports both). Skylake-Y (Core M) is the exception here, and will only work in DDR3L/LPDDR3 modes. As always with multi-DRAM support designs, only one standard can be in use at any time.

On the PCI-Express graphics allocation side, the Skylake-H and Skylake-K processors have sixteen PCIe 3.0 lanes to use for devices attached directly to the processor, similar to Intel's previous generation processors. With a basic motherboard design these can be split into a single PCIe 3.0 x16, x8/x8 or x8/x4/x4. (Note that this differs from early reports of Skylake having 20 PCIe 3.0 lanes for GPUs. It does not.)
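
As a quick illustration of these bifurcation rules, here is a minimal sketch (Python, purely illustrative; the names are ours, not Intel's) that checks whether a requested slot layout matches one of the supported splits of the CPU's sixteen lanes:

    # Supported bifurcations of the 16 CPU PCIe 3.0 lanes on Skylake-H/K.
    VALID_SPLITS = {(16,), (8, 8), (8, 4, 4)}

    def is_supported_layout(widths):
        """True if the slot widths match a supported CPU lane split."""
        return tuple(widths) in VALID_SPLITS

    print(is_supported_layout([8, 8]))        # True
    print(is_supported_layout([4, 4, 4, 4]))  # False: not a supported split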

With this, SLI will work up to x8/x8, and CrossFire up to x8/x8 or x8/x4/x4, in both desktop and mobile designs. For desktops, if a motherboard supports x8/x4/x4 and a PCIe card is placed into that bottom slot, SLI will not work, because only one GPU would have eight lanes: NVIDIA requires a minimum of PCIe x8 in order to enable SLI. CrossFire has no such limitation, which makes the possible configurations interesting. Below we discuss that the chipset has 20 (!) PCIe 3.0 lanes to use in five sets of four lanes, and these could be used for graphics cards as well. That means a motherboard can support x8/x8 from the CPU and PCIe 3.0 x4 from the chipset, ending up with either dual-SLI or tri-CFX enabled when all the slots are populated.
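
To make those multi-GPU rules concrete, here is another illustrative sketch (the helper name is our own) encoding NVIDIA's x8-per-card minimum for SLI and CrossFire's tolerance of x4 links, including the chipset's PCIe 3.0 x4 slots. The widths passed in are the lanes of the cards participating in the array:

    # Which multi-GPU modes a set of populated cards permits, per the rules
    # above: SLI needs at least x8 per card, CrossFire accepts x4.
    def multi_gpu_modes(card_widths):
        enough_cards = len(card_widths) >= 2
        return {
            "SLI": enough_cards and all(w >= 8 for w in card_widths),
            "CrossFire": enough_cards and all(w >= 4 for w in card_widths),
        }

    print(multi_gpu_modes([8, 8]))     # dual-SLI or dual-CFX both possible
    print(multi_gpu_modes([8, 4, 4]))  # tri-CFX only: SLI needs x8 per card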

The Skylake-U/Y processors are not typically paired with discrete graphics, and as far as we can tell the PCIe lanes have been removed from these lines. As a result, any PCIe-based storage (such as M.2) on devices with these processors will use the chipset PCIe lanes. As mentioned later, the chipsets on U/Y also differ from their more desktop-oriented counterparts.

DMI 3.0

The Skylake-S and H processors are connected to the chipset by the four-lane DMI 3.0 interface. DMI 3.0 is an upgrade over the DMI 2.0 used in the previous generation, boosting the speed from 5.0 GT/s (2GB/sec) to 8.0 GT/s (~3.93GB/sec), essentially upgrading DMI from PCIe 2 to PCIe 3 signalling, but it requires the motherboard traces between the CPU and chipset to be shorter (7 inches rather than 8 inches) in order to maintain signal speed and integrity. This also enables one of the biggest upgrades to the system, chipset connectivity, as shown below in the HSIO section.
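
The quoted figures fall straight out of the underlying PCIe signalling. As a back-of-the-envelope check (assuming DMI 2.0 uses PCIe 2.0's 8b/10b encoding and DMI 3.0 uses PCIe 3.0's 128b/130b encoding):

    # Usable one-way bandwidth of a four-lane DMI link, in GB/s.
    def dmi_bandwidth_gbs(gt_per_sec, encoding_ratio, lanes=4):
        return gt_per_sec * encoding_ratio * lanes / 8  # bits -> bytes

    print(dmi_bandwidth_gbs(5.0, 8 / 10))     # DMI 2.0: 2.0 GB/s
    print(dmi_bandwidth_gbs(8.0, 128 / 130))  # DMI 3.0: ~3.94 GB/s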

Skylake-Y/U, by virtue of having the chipset on the processor package, can keep the interface between the CPU and IO very simple, using what Intel calls OPIO (on-package input-output). We have seen this as far back as Haswell, where it delivered 4GB/s of bandwidth at 1pJ/bit, and it has been described as very power efficient as well as highly scalable.
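
Those OPIO numbers imply a tiny power budget for the link. A quick sanity check (our arithmetic, not Intel's): 4 GB/s is 32 Gbit/s, and at 1 pJ per bit that works out to roughly 32 mW:

    # Link power is energy per bit times bit rate.
    bytes_per_sec = 4e9        # 4 GB/s
    joules_per_bit = 1e-12     # 1 pJ/bit
    watts = bytes_per_sec * 8 * joules_per_bit
    print(f"{watts * 1e3:.0f} mW")  # ~32 mW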

CPU Power Delivery – Moving the FIVR

Moving on to power arrangements, with Skylake the situation changes compared to Haswell. Prior to Haswell, voltage regulation was performed on the motherboard and the right voltages were then fed into the processor. This was deemed inefficient for power consumption, and for the Haswell/Broadwell processors Intel decided to create a fully integrated voltage regulator (FIVR) in order to reduce motherboard cost and reduce power consumption. This had an unintended side-effect: while it was more efficient (good for mobile platforms), it also acted as a source of heat inside the CPU at high frequencies. As a result, overclocking was limited by temperatures, and variation in FIVR quality led to a large spread in results. For Broadwell-Y, the FIVR also increased the z-height of the processor due to the regulator's inductors on the rear of the package. As a result, in order to decrease the z-height of any corresponding product, a hole had to be cut into the motherboard:

For Skylake, the voltage regulation is moved back into the hands of the motherboard manufacturers. This should allow for cooler processors depending on how the silicon works, but it will result in slightly more expensive motherboards.

One indication of this is that some motherboards will go back to having a large number of multiplexed power phases, which some manufacturers will use as a differentiating point, although the usefulness of such a design is sometimes questionable.

Comments

  • jimmy$mitty - Thursday, September 3, 2015 - link

    Is it your love of AMD that makes you say this? Think about it. The XB1 uses DDR3 for its GPU. This will use DDR4. The XB1 has a small eDRAM cache. Skylake has a small eDRAM cache. The XB1 has a very weak AMD Jaguar based CPU. This will have a much stronger Skylake based CPU.

    So why is it so far-fetched to think that Skylake could get close to matching the XB1? It won't outright beat it (not this one, maybe the next one), but it could get close with proper optimizations and DX12.

    http://www.anandtech.com/show/6993/intel-iris-pro-...

    http://www.anandtech.com/show/9320/intel-broadwell...

    Haswell beat the top-end AMD APU at the time, and Broadwell makes the current A10 look even worse.

    AMD is great if you are on a budget. But if you are looking simply for performance they are lagging behind in a lot of ways.
  • JKflipflop98 - Sunday, September 6, 2015 - link

    Ah, I wondered who would make an actually well-reasoned posting. I am not surprised to see it's you.
  • tipoo - Wednesday, September 2, 2015 - link

    I didn't say it was a good value. Just interesting how times have changed, that Intel integrated graphics are this close to a two year old console already.
  • eddman - Thursday, September 3, 2015 - link

    Yes, they "could" care less.
  • MobiusPizza - Friday, September 4, 2015 - link

    As ArsTechnica and TechReport (http://arstechnica.co.uk/gadgets/2015/09/intels-sk...) have noted, eDRAM has a performance advantage even for people with discrete GPUs.
  • anubis44 - Tuesday, September 8, 2015 - link

    "I guarantee it that anyone interested in PC gaming could care less about Intel's IGP as any serious gamer will be getting a Skylake laptop with a Maxwell and next year a Pascal GPU."

    I would argue that anyone interested in PC gaming will avoid laptops like the plague and buy/build a desktop PC so they can replace graphics/ram/CPU easily and pay a lot less for a DX12 card, and on that note, anyone wanting to build a DX12-ready gaming machine right now will be getting a Radeon 290/390(X) series card and skipping Maxwell altogether, as it doesn't support hardware asynchronous shaders.
  • ered - Sunday, February 14, 2016 - link

    Well, when the MacBook gets it, you can stream your screen to the Apple TV, connect an Xbox One/PS4 controller, and play like you're on a console, having similar graphics and at the same time a computer for school, etc. But of course these devices are not competitors to consoles; it's just interesting what is possible.
  • TallestJon96 - Wednesday, September 2, 2015 - link

    You actually make a great point. Despite the fact that on a desktop an i5 paired with a $200 GPU will crush integrated graphics, on a laptop a 72 EU CPU could do some serious work. This, paired with DDR4, could kick integrated graphics up a notch, which is good for everyone, as it raises the lowest common denominator.

    Like you say, it probably won't be long until integrated graphics catch up with the Xbone, especially as they have a CPU advantage in many cases, and with DDR4 they have VERY similar system memory. It'll be a few more years after that until the PS4 is caught up with. I would add that tablets will probably catch the Xbone before the end of this generation. It could be an interesting future, where games come to tablet, PC, and consoles simultaneously.
  • Stochastic - Wednesday, September 2, 2015 - link

    "... as it raises the lowest common denominator." That's the important bit. One reason there aren't more PC gamers is simply that there aren't that many people who have modern PCs powerful enough to run today's games. This limits the technical ambition of PC games as developers have to keep in mind the wider PC audience and not just us tech enthusiasts. If integrated graphics can continue improving generation to generation, in a few years time even $600 laptops will be capable of running games at comparable fidelity to the Xbox One. Adding substantive amounts of eDRAM to all integrated GPUs would go a long ways towards making that dream a reality.
  • flashpowered - Wednesday, September 2, 2015 - link

    I am hoping to replace my Arrandale laptop with an ultrabook, and really hope that the 15W or 28W Iris with eDRAM can give me something with a high-resolution display and a smoother-running UI than Retina Haswell/Broadwell.
