A small portion of this page was posted as part of our initial Skylake-K coverage.

Sockets and Chipsets

The new Skylake-S processors are assigned a new socket, LGA 1151, while the soldered-down models use a combination of BGA 1515 (Skylake-Y), BGA 1356 (Skylake-U) and BGA 1440 (Skylake-H). On the desktop, Intel’s policy for the last several generations has been to keep a socket for two processor generations, so a change was expected in the move from Broadwell to Skylake. This means that Skylake processors will not work in LGA1150 motherboards, i.e. those with Intel’s 8-series and 9-series chipsets. For Skylake we get the 100-series chipsets with additional functionality.

Both Skylake-S and Skylake-H will be subject to the chipset list above, and as mentioned on the previous page, while these are being announced today, the more business-oriented models (B150/Q1x0) will have a slower rollout over the next few months.

It is worth noting that we had heard rumors that the Z170 chipset was going to be expensive for motherboard manufacturers, resulting in some motherboards that might cost more than their X99 counterparts, although those fears were allayed when Intel provided final pricing. Nevertheless, there is still scope for the cheaper chipsets with fewer features, such as H170/H110, especially in high-volume markets.

If we set aside Q170 as essentially Z170 without overclocking but with SMB/vPro support, going down in number means fewer features. Perhaps surprisingly, the H170 chipset is the only other chipset to support Intel RST on PCIe storage, but aside from that the segmentation is as expected, with fewer chipset PCIe lanes available on the B150/H110 lines.

We’ve touched on the number of chipset lanes on Skylake-S in previous articles, but the 20 PCIe lanes afforded by Z170 are essentially split into five x4 controllers. Each controller can be configured as x1, x2 or x4 links totaling up to four lanes, with a few specific lanes set aside for various features. The idea here is that the user no longer has to decide between things like M.2, SATA Express or PCIe slots – with so many lanes on offer, the combinations are almost endless.
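
As a rough illustration of that flexibility (and only as an illustration: the device names and the greedy packing below are hypothetical, and real boards follow Intel’s fixed HSIO lane map), a short sketch can model the 20 lanes as five x4 controllers and check whether a given mix of devices fits:

```python
# Illustrative sketch only: models Z170's 20 chipset PCIe lanes as five x4
# controllers, each of which can be split into x1/x2/x4 links. The device
# list and the greedy packing are hypothetical; real boards follow Intel's
# fixed HSIO lane map, which reserves specific lanes for specific features.

CONTROLLERS = 5          # five x4 controllers (20 lanes total)
LANES_PER_CONTROLLER = 4

def fits(devices):
    """Greedily pack (name, lanes) devices into x4 controllers.

    Each device link is x1, x2 or x4 and cannot span two controllers.
    Returns a per-controller allocation, or None if the mix does not fit.
    """
    free = [LANES_PER_CONTROLLER] * CONTROLLERS
    alloc = [[] for _ in range(CONTROLLERS)]
    for name, lanes in sorted(devices, key=lambda d: -d[1]):
        assert lanes in (1, 2, 4), "links are x1, x2 or x4 only"
        for i in range(CONTROLLERS):
            if free[i] >= lanes:
                free[i] -= lanes
                alloc[i].append((name, lanes))
                break
        else:
            return None
    return alloc

# A hypothetical board layout using 18 of the 20 lanes, which packs successfully.
board = [("M.2 #1", 4), ("M.2 #2", 4), ("SATA Express", 2),
         ("GbE", 1), ("USB 3.1 controller", 2), ("WiFi", 1), ("PCIe x4 slot", 4)]
print(fits(board))
```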

In the Z170 and Skylake-H arrangements, three of these x4 controllers are specifically enabled for PCIe storage under RST; however, using all three would eat up the chipset’s SATA ports, requiring a separate SATA controller if that functionality is still wanted. There are also some limitations with gigabit Ethernet controllers. But apart from that, any feature that requires a PCIe controller can be added as required (USB 3.1, Thunderbolt, WiFi, non-RST PCIe storage, PCIe slots).
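
To make the storage trade-off concrete, here is a minimal sketch assuming, purely for illustration, that each of the three RST-capable x4 controllers shares its HSIO lanes with two of the chipset’s six SATA ports; the real sharing map varies by board and is not taken from Intel documentation here:

```python
# Hypothetical illustration of the RST PCIe storage vs. SATA trade-off.
# Assumption (for illustration only): each of the three RST-capable x4
# controllers shares its HSIO lanes with two of the chipset's six SATA ports.

SATA_PORTS_TOTAL = 6
RST_CAPABLE_CONTROLLERS = 3
SATA_PORTS_PER_RST_CONTROLLER = 2  # assumed sharing ratio, not an Intel figure

def remaining_sata(rst_pcie_drives):
    """Chipset SATA ports left after enabling N RST PCIe (e.g. M.2) drives."""
    used = min(rst_pcie_drives, RST_CAPABLE_CONTROLLERS)
    return SATA_PORTS_TOTAL - used * SATA_PORTS_PER_RST_CONTROLLER

for n in range(4):
    print(f"{n} RST PCIe drive(s) -> {remaining_sata(n)} chipset SATA ports left")
# With all three in use, zero chipset SATA ports remain, hence the extra controller.
```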

Because Skylake-U and Skylake-Y use on-package low-power chipsets, there is a need to be a little more frugal with functionality in order to save power. As a result, Skylake-U gets 12 chipset PCIe lanes, although these can still be split for Intel RST.

Skylake-Y takes out another pair of lanes and limits the onboard SATA ports to two. Interestingly enough, you could still have five native gigabit Ethernet controllers on there as well. I offer $10 and a drink to the first OEM to come up with a Skylake-Y + 5x GbE design.

We have a large piece on the desktop motherboards being released or talked about for Skylake, covering some 55+ products and the different variations within. The major motherboard manufacturers such as ASUS, GIGABYTE, ASRock, MSI, EVGA and a couple of others should all have a wide range ready to purchase on day one, although some models may be region specific.


The badly MSPaint’ed hybrid: MSI’s XPower Gaming Socket, GIGABYTE’s G1 Gaming IO panel, EVGA’s DRAM slots, ECS’s chipset, ASRock’s PCIe arrangement and ASUS’ Deluxe audio.

Here’s an amalgamation of some of the designs coming to end users, with almost all of the manufacturers investing heavily in gaming brands and specific components to aid the user experience while gaming. Aesthetics are also a focus this generation, with some of the manufacturers moving in a different direction with their designs and trying new color schemes. Some basic-looking models will also be available.

Prices for motherboards will range from $60 all the way past $400, depending on feature set and size. A number of motherboards above $150 will feature a couple of USB 3.1 Gen 2 (10 Gbps) ports, although you will have to check whether they are Type-A or Type-C. That being said, most motherboards with USB 3.1 will offer both, but there are a select few that are C-only or A-only. We will see a lot of Intel’s new network controller, the I219-V, although the gaming lines might invest in Rivet Networks’ Killer solution instead.

Thunderbolt obviously has more of a play in the laptop space here, and will be available through Intel’s Alpine Ridge controller. As far as we can tell, no devices with TB support are set to be announced this week (though we might be surprised), but the controller can also be used for USB 3.1 Gen 2, HDMI 2.0, DisplayPort, and DockPort. According to our sources, GIGABYTE currently has an exclusive on this technology in the desktop space, and it will be used for the USB 3.1 Gen 2 ports on most of their motherboard models. The other functionality from the Alpine Ridge controller (TB3, HDMI 2.0) will be enabled on a case-by-case basis, depending on how the controller works in its two different modes or whether extra components are used. We are told that Alpine Ridge costs about the same as the ASMedia ASM1142 controller, but it will enable two USB 3.1 Gen 2 ports at 10 Gbps simultaneously as it uses four PCIe lanes from the chipset.
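
Some back-of-the-envelope arithmetic shows why a four-lane uplink matters for two simultaneous 10 Gbps ports; the figures below are nominal interface rates rather than measured throughput, and the narrower uplinks are included only for comparison:

```python
# Nominal bandwidth arithmetic for two simultaneous USB 3.1 Gen 2 ports.
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
PCIE3_LANE_GBPS = 8 * 128 / 130      # ~7.88 Gb/s usable per lane

PORTS = 2
USB31_GEN2_GBPS = 10                 # per-port signalling rate (before encoding)
demand = PORTS * USB31_GEN2_GBPS     # 20 Gb/s if both ports run flat out

for lanes in (4, 2, 1):
    uplink = lanes * PCIE3_LANE_GBPS
    verdict = "enough" if uplink >= demand else "a bottleneck"
    print(f"x{lanes} PCIe 3.0 uplink: {uplink:.1f} Gb/s vs {demand} Gb/s -> {verdict}")
# Only the x4 uplink (~31.5 Gb/s) comfortably covers both ports at full rate.
```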

DRAM: The March to DDR4

In the world of DRAM for personal computers, DDR3 is currently king. It has been the main standard since 2007, and you would be hard-pressed to find a mainstream or low-end platform that did not require DDR3. That changed in the enthusiast segment last year with the launch of Haswell-E, which introduced DDR4 at a high premium. For Haswell-E there was no crossover – you had no choice but to use DDR4 (unless you happened to be a million-unit customer).

Because consumers and consumer-product OEMs are more price sensitive, DDR4 will see a slower transition on the desktop, while uptake in laptops will come down more to availability. For example, we have seen some DDR4 SO-DIMM engineering samples, but right now we are unsure how many of these are available on the B2B circuit, as very few (if any) have filtered down to consumers.

Note that there is precedent here: the move from DDR2 to DDR3 saw a generation of processors that supported both standards, and it was up to the motherboard manufacturer to design for one or the other. In this transition, most Skylake processors will support both DDR3L and DDR4 modules, with a few caveats.

On the desktop side, caveat number one is that initially only DDR4 motherboards will be on the market. So if you upgrade now, DDR4 needs to be on the shopping list as well. We have had word of some DDR3L-only motherboards coming, especially now that the B/H chipsets are being released. Combo boards with both DDR3L and DDR4 slots on board are also inbound, but caveat one-point-five: you can use either DDR3L or DDR4, but not both at the same time.

Caveat number two: DDR3L is different from DDR3 in that it operates at a lower voltage (1.35 V rather than 1.50 V). This means that the memory controllers on Skylake most likely have a combined voltage domain, and regular DDR3 might not work (in fact, early testing suggests it will not without reducing the voltage). Very few people currently own DDR3L DIMMs, so the likelihood of a user performing an upgrade while reusing their RAM is slim.
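
As a minimal sketch of the voltage caveat, using the nominal JEDEC module voltages (DDR3 at 1.50 V, DDR3L at 1.35 V, DDR4 at 1.20 V) and assuming the widely reported 1.35 V limit for Skylake’s DDR3 interface (an assumption, not an Intel specification quoted here):

```python
# Sketch of the DDR3 vs DDR3L caveat. Nominal JEDEC module voltages:
JEDEC_VDD = {"DDR3": 1.50, "DDR3L": 1.35, "DDR4": 1.20}

# Assumption: the Skylake memory controller is rated for 1.35 V on its
# DDR3 interface (the DDR3L level), as widely reported around launch.
SKYLAKE_DDR3_VDD_LIMIT = 1.35

def dimm_supported(dimm_type):
    volts = JEDEC_VDD[dimm_type]
    if dimm_type.startswith("DDR3"):
        return volts <= SKYLAKE_DDR3_VDD_LIMIT
    return dimm_type == "DDR4"

for t, v in JEDEC_VDD.items():
    status = "in spec" if dimm_supported(t) else "out of spec"
    print(f"{t} at {v:.2f} V: {status} for Skylake")
# Standard 1.50 V DDR3 falls out of spec unless it can run at reduced voltage.
```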

Caveat number three: prices of DDR4 have dropped significantly since last year, and there is only a small premium over DDR3. The benefits of DDR4 include a lower operating voltage, a more stable design, and the ability to purchase 16GB modules with ease. That means that a Skylake-S platform will happily take 64GB of memory. Should 16GB SO-DIMM modules surface, we assume Skylake-H designs should be able to use 64GB also.

With that last point, we should point out that Skylake is a dual-channel memory architecture, supporting two memory modules per channel. This gives a maximum of four DDR4 modules, and 4x16GB = 64GB maximum. We have been told that Skylake’s DDR4 memory controller, compared to previous generations, is absolutely golden for higher-speed memory support. By default Skylake supports the JEDEC standard for DDR4, 2133 MT/s at a latency of 15-15-15, but the overclocking guides we have received suggest that all processors should be able to reach DDR4-3200 relatively comfortably, with a few processors in the right motherboards going for DDR4-4000. While this should bode well for integrated graphics users, those high-end kits are typically very expensive.
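
For reference, the headline numbers in that paragraph fall out of some simple arithmetic using nominal JEDEC figures:

```python
# Nominal arithmetic behind the figures above (JEDEC DDR4-2133, CL15).
channels, dimms_per_channel, dimm_gb = 2, 2, 16
print(f"Max capacity: {channels * dimms_per_channel * dimm_gb} GB")   # 64 GB

mt_s = 2133                      # mega-transfers per second
bus_bytes = 8                    # 64-bit channel width
peak_bw = channels * mt_s * bus_bytes / 1000
print(f"Peak bandwidth: {peak_bw:.1f} GB/s")                          # ~34.1 GB/s

cl = 15
io_clock_mhz = mt_s / 2          # DDR transfers twice per I/O clock
print(f"CAS latency: {cl / io_clock_mhz * 1000:.1f} ns")              # ~14.1 ns
```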

Comments

  • Xenonite - Thursday, September 3, 2015 - link

    Actually, it seems that power consumption is the only thing that matters to consumers, even on the desktop.
    All this talk about AMD's lack of competition being the reason why we aren't seeing meaningful generational performance improvements is just that: talk.

    The real thing that hampers performance progress is consumers' plain refusal to upgrade for performance reasons (even a doubling in performance is not economically viable to produce since no one, except for me it seems, will buy it).
    Consumers only buy the lowest power system that they can afford. It has nothing to do with AMD.
    Even if AMD released a CPU that is 4x faster than piledriver, it wouldn't change Intel's priority (nor would it help AMD's sales...).
  • IUU - Wednesday, September 2, 2015 - link

    Sorry for my tone, but "I'm failing to see" how transistor count doesn't mean more to consumers than to anyone else.
    So, after 10 years of blissful carelessness (because duuude it's user experience dat matters, ugh..),
    you will have everyone deceiving you on what they offer on the price point they offer. Very convenient, especially if they are not able to sustain an exponential increase in performance and passing to the next paradigm to achieve it.

    Because until very recently we have been seeing mostly healthy practices, despite the fact that you could always meet people pointing to big or small sins.
    Big example: what's the need of an igp on a processor that consumes 90 watts, especially a gpu that is tragically subpar? To hide the fact they have nothing more to offer to the consumer, cpu dependent, at 90 watts (at the current market situation) and have an excuse for charging more on a
    theoretically higher consuming and "higher performing" cpu?
    Because, what bugs me is: what if the 6700k lacked the igp? Would it perform better without a useless igp dragging it down? I really don't know, but I feel it wouldn't.
    Regarding the mobile solutions and the money and energy limited devices, the igp could really prove to be useful to a lot of people, without overloading their device with a clunky, lowly, discrete gpu.
  • xenol - Wednesday, September 2, 2015 - link

    If the 6700K lacked the iGPU with no other modifications, it would perform exactly the same.
  • MrSpadge - Wednesday, September 2, 2015 - link

    Yes, it would perform exactly the same (if the iGPU is not used, otherwise it needs memory bandwidth). But the chip would run hotter since it would be a lot smaller. Si is not the best thermal conductor, but the presence of the iGPU spreads the other heat producers a bit.
  • xenol - Wednesday, September 2, 2015 - link

    I don't think that's how thermals in ICs work...
  • MrSpadge - Wednesday, September 2, 2015 - link

    Thermodynamics "work" and don't care if they're being applied to an IC or a metal brick. Silicon is a far better heat conductor than air, so even if the GPU is not used, it will transfer some of the heat from the CPU + Uncore to the heat spreader.

    My comment was a bit stupid, though, in the way that given how tightly packed the CPU cores and the uncore are, the GPU spreads none of them further apart from each other. It could have been designed like that, but according to the picture on one of the first few pages it's not.
  • Xenonite - Thursday, September 3, 2015 - link

    No, it wouldn't. You could easily spread out the cores by padding them with much more cache and doubling their speculative and parallel execution capabilities. If you up the power available for such out of order execution, the additional die space could easily result in 50% more IPC throughput.
  • MrSpadge - Thursday, September 3, 2015 - link

    50% IPC increase? Go ahead and save AMD, then! They've been trying that for years with probably billions of R&D budget (accumulated over the years), yet their FX CPUs with huge L3 don't perform significantly better than the APUs with similar CPU cores and no L3 at all.
  • Xenonite - Thursday, September 3, 2015 - link

    Yes, but I specifically mentioned using that extra cache to feed the greater amount of speculative execution units made available by the removal of the iGPU.

    Sadly, AMD can't use this strategy because GlobalFoundries' and TSMC's manufacturing technology cannot fit the same number of transistors into a given area as Intel's can.
    Furthermore, their yields for large dies are also quite a bit lower and AMD really doesn't have the monetary reserves to produce such a high-risk chip.

    Also, the largest fraction of that R&D budget went into developing smaller, cheaper and lower power processors to try and enter the mobile market, while almost all of the rest went into sacrificing single threaded design (such as improving and relying more on out of order execution, branch prediction and speculative execution) to design Bulldozer-like, multi-core CPUs (which sacrifice a large portion of die area, that could have been used to make a low amount of very fast cores, to implement a large number of slow cores).

    Lastly, I didn't just refer to L3 cache when I suggested using some of the free space left behind by the removal of the iGPU to increase the amount of cache. The L1 and L2 caches could have been made much larger, with more associativity to further reduce the amount and duration of pipeline stalls due to not having a data dependency in the cache.
    Also, while it is true that the L3 cache did not make much of a difference in the example you posted, it's also equally true that cache performance becomes increasingly important as a CPU's data processing throughput increases.
    Modern CPU caches just seem to have stagnated (aside from some bandwidth improvements every now and then), because our CPU cores haven't seen that much of a performance upgrade since the last time the caches were improved.
    Once a CPU gets the required power and transistor budgets for improved out of order performance, the cache will need to be large enough to hold all the different datasets that a single core is working on at the same time (which is not a form of multi-threading, in case you were wondering), while also being fast enough to service all of those units at once, without adversely affecting any one set of calculations.
  • techguymaxc - Wednesday, September 2, 2015 - link

    Your representation of Skylake's CPU/IPC performance is inaccurate and incomplete due to the use of the slowest DDR4 memory available. Given the nature of DDR4 (high bandwidth, high latency), it is an absolute necessity to pair the CPU with high clockspeed memory to mitigate the latency impairment. Other sites have tested with faster memory and seen a much larger difference between Haswell and Skylake. See Hardocp's review, (the gaming section specifically) as well as Techspot's review (page 13, memory speed comparison). Hardocp shows Haswell with 1866 RAM is actually faster than Skylake with 2133 RAM in Unigine Heaven and Bioshock Infinite @ lowest quality settings (to create a CPU bottleneck). I find Techspot's article particularly interesting in that they actually tested both platforms with fast RAM. In synthetic testing (Sandra 2015) Haswell with 2400 DDR3 has more memory bandwidth than Skylake with 2666 DDR4, it is not until you pair Skylake with 3000 DDR4 that it achieves more memory bandwidth than Haswell with 2400 DDR3. You can see here directly the impact that latency has, even on bandwidth and not just overall performance. Furthermore in their testing, Haswell with 2400 RAM vs. Skylake with 3000 RAM shows Haswell being faster in Cinebench R15 multi-threaded test (895 vs. 892). Their 7-zip testing has Haswell leading both Skylake configurations in a memory-bound workload (32MB dictionary) in terms of instructions per second. Finally, in a custom Photoshop workload Haswell's performance is once again sandwiched between the two Skylake configurations.

    Clearly both Haswell and Skylake benefit from faster memory. In fact, Skylake should ideally be paired with > 3000 DDR4 as there are still scenarios in which it is slower than Haswell with 2400 DDR3 due to latency differences.

    Enthusiasts are also far more likely to buy faster memory than the literal slowest memory available for the platform, given the minimal price difference. Right now on Newegg one can purchase a 16GB DDR3 2400 kit (2x8) for $90, a mere $10 more than an 1866 16GB kit. With DDR4 the situation is only slightly worse. The cheapest 16GB (2x8) 2133 DDR4 kit is $110, and 3000 goes for $135. It is also important to note that these kits have the same (primary) timings with a CAS latency of 15.

    So now we come to your reasoning for pairing Skylake with such slow RAM, and that of other reviewers, as you are not the only one to have done this. Intel only qualified Skylake with DDR4 up to 2133 MT/s. Why did they do this? To save time and money during the qualification stage leading up to Skylake's release. It is not because Skylake will not work with faster RAM, there isn't an unlocked Skylake chip in existence that is incapable of operating with at least 3000 RAM speed, and some significantly higher. Hardocp was able to test their Skylake sample (with no reports of crashing or errors) with the fastest DDR4 currently available today, 3600 MT/s. I have also heard anecdotally from enthusiasts with multiple samples that DDR4 3400-3600 seems to be the sweet spot for memory performance on Skylake.

    In conclusion, your testing method is improperly formed, when considered from the perspective of an enthusiast whose desire is to obtain the most performance from Skylake without over-spending. Now, if you believe your target audience is not in fact the PC enthusiast but instead a wider "mainstream" audience, I think the technical content of your articles easily belies this notion.
