Opinion: Why Counting ‘Platform’ PCIe Lanes (and using it in Marketing) Is Absurd

It’s at this point that I’d like to take a detour and discuss something I’m not particularly happy with: counting PCIe lanes.

The number of PCIe lanes on a processor, for as long as I can remember, has always been about which lanes come directly from the PCIe root, offering full bandwidth and with the lowest possible latency. In modern systems this is the processor itself, or in earlier, less integrated systems, the Northbridge. By this metric, a standard Intel mainstream processor has 16 lanes, an AMD Ryzen has 16 or 20, an Intel HEDT processor has 28 or 44 depending on the model, and an AMD Ryzen Threadripper has 60.

Intel’s documentation explicitly lists what is available from the processor via its PCIe root complexes: here, 44 lanes come from two x16 complexes and one x12 complex. The DMI3 link to the chipset is a PCIe 3.0 x4 link in all but name, but is not included in this total.
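The arithmetic above can be sketched in a few lines (lane counts as given in Intel's documentation; the script is purely illustrative):

```python
# Tally of Skylake-X CPU PCIe lanes: two x16 root complexes plus one x12.
# The DMI3 uplink to the chipset is electrically a PCIe 3.0 x4 link,
# but by convention it is excluded from the CPU lane count.

ROOT_COMPLEXES = [16, 16, 12]   # lanes per root complex
DMI3_LANES = 4                  # chipset uplink, not counted

cpu_lanes = sum(ROOT_COMPLEXES)
print(cpu_lanes)  # 44
```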

The number of PCIe lanes on a chipset is a little different. Chipsets are, for all practical purposes, PCIe switches: using a limited-bandwidth uplink, they are designed to carry traffic from low-bandwidth controllers such as SATA, Ethernet, and USB. AMD is limited in this regard, having spent the last few years re-entering the pure CPU performance race and outsourcing its chipset designs to ASMedia. Intel has been increasing PCIe 3.0 lane support on its chipsets for at least three generations, and now supports up to 24 PCIe 3.0 lanes. There are some caveats on which lanes can support which controllers, but in general we consider this 24.

Due to the shared uplink, PCIe lanes coming from the chipset (on both the AMD and Intel side) can be bottlenecked very easily, as well as being limited to PCIe 3.0 x4. The chipset introduces additional latency compared to having a controller directly attached to the processor, which is why we rarely see important hardware (GPUs, RAID controllers, FPGAs) connected to them.
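To illustrate how easily that shared uplink bottlenecks, a rough sketch (assuming roughly 985 MB/s of usable bandwidth per PCIe 3.0 lane after 128b/130b encoding; the figures are approximations, not vendor specifications):

```python
# How oversubscribed a chipset's PCIe 3.0 x4 uplink can be when all
# downstream lanes are busy. Numbers are illustrative approximations.

PCIE3_LANE_MBPS = 985            # approx. usable bandwidth per PCIe 3.0 lane
uplink_lanes = 4                 # DMI3 is a PCIe 3.0 x4 link in all but name
downstream_lanes = 24            # e.g. Intel's 24 chipset PCIe 3.0 lanes

uplink_bw = uplink_lanes * PCIE3_LANE_MBPS
downstream_bw = downstream_lanes * PCIE3_LANE_MBPS
print(f"oversubscription: {downstream_bw / uplink_bw:.1f}x")  # 6.0x
```

In other words, a fully loaded chipset could ask for six times more bandwidth than the uplink can carry, which is why latency- and bandwidth-sensitive hardware goes straight to the CPU.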

The combination of the two lends itself to a variety of platform functionality and configurations. For example, for AMD's X399 platform that has 60 lanes from the processor, the following combinations are 'recommended':

X399 Potential Configurations

Content Creator (52 lanes)
  2 x Pro GPUs               x16/x16 from CPU
  2 x M.2 Cache Drives       x4 + x4 from CPU
  10G Ethernet               x4 from CPU
  1 x U.2 Storage            x4 from CPU
  1 x M.2 OS/Apps            x4 from CPU
  6 x SATA Local Backup      From Chipset

Extreme PC (56 lanes)
  2 x Gaming GPUs            x16/x16 from CPU
  1 x HDMI Capture Card      x8 from CPU
  2 x M.2 for Games/Stream   x4 + x4 from CPU
  10G Ethernet               x4 from CPU
  1 x M.2 OS/Apps            x4 from CPU
  6 x SATA Local Backup      From Chipset

Streamer (40 lanes)
  1 x Gaming GPU             x16 from CPU
  1 x HDMI Capture Card      x4 from CPU
  2 x M.2 Stream/Transcode   x4 + x4 from CPU
  10G Ethernet               x4 from CPU
  1 x U.2 Storage            x4 from CPU
  1 x M.2 OS/Apps            x4 from CPU
  6 x SATA Local Backup      From Chipset

Render Farm (52 lanes)
  4 x Vega FE Pro GPUs       x16/x8/x8/x8 from CPU
  2 x M.2 Cache Drives       x4 + x4 from CPU
  1 x M.2 OS/Apps            x4 from CPU
  6 x SATA Local Backup      From Chipset
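As a sanity check, the lane totals above can be reproduced by summing the CPU-attached allocations for each configuration; chipset-attached devices (the SATA backup drives) do not count against the CPU's 60 lanes. A small sketch, with the splits transcribed from the table:

```python
# Sum the CPU-attached PCIe lane allocations for each X399 configuration.
# Chipset-attached devices add nothing to the CPU lane total.

configs = {
    "Content Creator": [16, 16, 4, 4, 4, 4, 4],   # GPUs, M.2 cache, 10G, U.2, OS M.2
    "Extreme PC":      [16, 16, 8, 4, 4, 4, 4],   # GPUs, capture, M.2s, 10G, OS M.2
    "Streamer":        [16, 4, 4, 4, 4, 4, 4],    # GPU, capture, M.2s, 10G, U.2, OS M.2
    "Render Farm":     [16, 8, 8, 8, 4, 4, 4],    # four GPUs, M.2 cache, OS M.2
}

for name, lanes in configs.items():
    print(f"{name}: {sum(lanes)} lanes")
```

Each configuration comes in at or under the 60 lanes Threadripper offers, which is the point of the exercise.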

What has started to happen is that these companies are combining the CPU and chipset PCIe lane counts in order to promote the biggest number possible. Not all PCIe lanes are equal, but they do not seem to care: Intel is cautiously promoting these new Skylake-X processors as having '68 Platform PCIe lanes', and has similar metrics in place for other upcoming hardware.

I want to nip this in the bud before it gets out of hand: this metric is misleading at best, and disingenuous at worst, especially given the history of how this metric has been provided in the past (and everyone will ignore the ‘Platform’ qualifier). Just because a number is bigger/smaller than a vendor expected does not give them the right to redefine it and mislead consumers.

To cite precedent: in the smartphone space, around 4-5 years ago, vendors were counting almost anything in the main processor as a core in order to promote a 'full core count'. GPU segments became 'cores', special IP blocks for signal and image processing became 'cores', security IP blocks became 'cores'. It was absurd to hear that a smartphone processor had fifteen cores when the main general-purpose cores were a quartet of ARM Cortex-A7 designs. Users who follow the smartphone industry will have noticed that this nonsense stopped fairly quickly, partly due to the backlash against anything being called a core, and partly due to hints that artificial 'cores' might be added to a design purely to inflate the count. If allowed to continue, it would have become a meaningless metric.

The same thing is going to happen if the notion of ‘Platform PCIe Lanes’ is allowed to continue.


  • ddriver - Monday, September 25, 2017 - link

    You are living in a world of mainstream TV functional BS.

    Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks which classical computers are notoriously bad at, and the same goes vice versa: QC sucks at regular computing tasks.

    Which is OK, because we already have enough single thread performance. And all the truly demanding tasks that require more performance due to their time-taking nature scale very well, often perfectly, with the addition of cores, or even nodes in a cluster.

    There might be some wiggle room in terms of process and material, but I am not overly optimistic seeing how we are already hitting the limits on silicon and there is no actual progress made on superior alternatives. Are they like gonna wait until they hit the wall to make something happen?

    At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
  • SharpEars - Monday, September 25, 2017 - link

    You are absolutely correct regarding quantum computing and it is photonic computing that we should be looking towards.
  • Notmyusualid - Monday, September 25, 2017 - link

    @ SharpEars

    Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
  • Gothmoth - Monday, September 25, 2017 - link

    and what have they done for the past 6 years?

    i am glad that i get more cores instead of 5-10% performance per generation.
  • Krysto - Monday, September 25, 2017 - link

    They would if they could. Improvements in IPC have been negligible since Ivy Bridge.
  • kuruk - Monday, September 25, 2017 - link

    Can you add Monero(Cryptonight) performance? Since Cryptonight requires at least 2MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
  • evilpaul666 - Monday, September 25, 2017 - link

    I'd really like it if Enthusiast ECC RAM was a thing.

    I used to always run ECC on Athlons back in the Pentium III/4 days. Now with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
  • someonesomewherelse - Saturday, October 14, 2017 - link

    It is. Buy AMD.
  • IGTrading - Monday, September 25, 2017 - link

    I think we're being too kind to Intel.

    Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize and make it acceptable that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.

    THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.

    I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.

    People could burn their systems if they design workstations and use cooling solutions for 165W TDP.

    If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on ...

    On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.

    It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.

    The i9-7960X fits perfectly in its TDP of 165W, so how come the i9-7980XE is allowed to run wild and consume 16% more?!

    This is similar to the way people accepted every crappy design and driver fail from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link: https://www.youtube.com/watch?v=dE-YM_3YBm0

    This is not cutting Intel "some slack", this is accepting shit, lies and mockery and paying 2000 USD for it.

    For $2000 I expect the CPU to run like a Bentley for life, not like a modded Mustang which will blow up if you expect it to work as reliably as a stock model.
  • whatevs - Monday, September 25, 2017 - link

    What a load of ignorance. Intel's TDP is *average* power at *base* clocks; it uses more power at all-core turbo clocks here. Disable turbo if that's too much power for you.
