Feeding the Beast

When frequency was all that mattered for CPUs, the main problems became efficiency, thermal performance, and yields: the higher the frequency was pushed, the more voltage was needed, the further outside its peak efficiency window the CPU operated, and the more power it consumed per unit of work. For the CPU that sat at the top of the product stack as the performance halo part, this didn’t particularly matter – until the chip started hitting 90°C+ on a regular basis.

Now, with the Core Wars, the challenges are different. When there was only one core, making data available to that core through the caches and DRAM was a relatively easy task. With 6, 8, 10, 12, and 16 cores, a major bottleneck suddenly becomes making sure each core has enough data to work on continuously, rather than sitting idle waiting for data to arrive. This is not an easy task: each core now needs a fast way of communicating with every other core, and with main memory. This is known within the industry as feeding the beast.

Top Trumps: 60 PCIe Lanes vs 44 PCIe lanes

After playing the underdog for so long, AMD has been pushing the specifications of its new processors as one of the big selling points (among others). Whereas Ryzen 7 only had 16 PCIe lanes, competing in part against Intel CPUs with 28 or 44 PCIe lanes, Threadripper will have access to 60 lanes for PCIe add-in cards. In some places this might be referred to as 64 lanes; however, four of those lanes are reserved for the X399 chipset. At $799 and $999, this competes against the 44 PCIe lanes on Intel’s Core i9-7900X at $999.

The goal of having so many PCIe lanes is to support the sort of market these processors are addressing: high-performance prosumers. These are users that run multiple GPUs, multiple PCIe storage devices, high-end networking, high-end storage, and as many other features as can be fit through PCIe. The end result is that we are likely to see motherboards earmark 32 or 48 of these lanes for PCIe slots (x16/x16, x8/x8/x8/x8, x16/x16/x16, x16/x8/x16/x8), followed by two or three PCIe 3.0 x4 links for storage via U.2 or M.2 drives, then faster Ethernet (5 Gbit, 10 Gbit). AMD allows each of the PCIe root complexes on the CPU, which are x16 each, to be bifurcated down to x1 as needed, for a maximum of 7 devices. The 4 PCIe lanes going to the chipset will also support several PCIe 3.0 and PCIe 2.0 lanes for SATA or USB controllers.
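As a rough sanity check, the lane budget above can be tallied in a few lines. The board allocation here is hypothetical, chosen only to illustrate one way 60 lanes might be spent; it is not any specific X399 product:

```python
# Threadripper exposes 64 PCIe 3.0 lanes from the CPU, 4 of which are
# reserved for the X399 chipset link, leaving 60 for add-in devices.
TOTAL_LANES = 64
CHIPSET_LANES = 4
usable = TOTAL_LANES - CHIPSET_LANES  # 60

# One plausible (hypothetical) allocation on a three-GPU workstation board:
allocation = {
    "GPU slots (x16/x16/x16)": 48,
    "M.2 NVMe drives (2x PCIe 3.0 x4)": 8,
    "10 Gbit Ethernet controller (x4)": 4,
}

spent = sum(allocation.values())
assert spent <= usable, "over budget"
print(f"Lanes used: {spent}/{usable}, spare: {usable - spent}")
# → Lanes used: 60/60, spare: 0
```

Swapping the third GPU slot for an x8 link would free eight lanes for extra U.2 drives, which is exactly the kind of trade-off board vendors make.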

Intel’s strategy is different, allowing the 44 lanes to be split into x16/x16/x8 (40 lanes), x16/x8/x16/x8 (40 lanes), or x16/x16 bifurcated to x8/x8/x8/x8 (32 lanes), with 4-12 lanes left over for PCIe storage, faster Ethernet controllers, or Thunderbolt 3. The Skylake-X chipset then has an additional 24 PCIe lanes for SATA controllers, gigabit Ethernet controllers, and USB controllers.
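The leftover-lane arithmetic for Intel’s slot configurations can be sketched the same way (the configuration names simply mirror the splits above):

```python
# Intel Skylake-X CPUs in this bracket expose 44 PCIe 3.0 lanes.
CPU_LANES = 44

# Lanes consumed by each slot configuration mentioned in the text:
slot_configs = {
    "x16/x16/x8": 40,
    "x16/x8/x16/x8": 40,
    "x8/x8/x8/x8 (x16/x16 bifurcated)": 32,
}

for name, used in slot_configs.items():
    leftover = CPU_LANES - used
    print(f"{name}: {leftover} CPU lanes left for storage/Ethernet/TB3")
```

That gives the 4-12 leftover lanes quoted above, before counting the chipset’s own 24 lanes.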

Top Trumps: DRAM and ECC

One of Intel’s common product segmentations is that if a customer wants a high core count processor with ECC memory, they have to buy a Xeon. Typically, Xeons will support a fixed memory speed depending on the number of DIMMs per channel populated (one DIMM per channel at DDR4-2666, two DIMMs per channel at DDR4-2400), as well as ECC and RDIMM technologies. The consumer HEDT platforms for Broadwell-E and Skylake-X, however, do not support these, and are limited to non-ECC UDIMMs.

AMD is supporting ECC on its Threadripper processors, giving customers sixteen cores with ECC. Support is limited to UDIMMs, but DRAM overclocking is allowed, which also boosts the speed of the internal Infinity Fabric. AMD has officially stated that Threadripper CPUs can support up to 1 TB of DRAM, although on closer inspection that would require 128 GB UDIMMs, whereas UDIMMs currently max out at 16 GB. Intel currently lists a 128 GB limit for Skylake-X, based on 16 GB UDIMMs.

Both processors run quad-channel memory at DDR4-2666 (1DPC) and DDR4-2400 (2DPC).
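The capacity figures follow directly from the slot arithmetic. A short sketch, assuming the standard quad-channel layout with two DIMMs per channel:

```python
# Quad-channel platform, two DIMMs per channel → eight DIMM slots.
channels = 4
dimms_per_channel = 2
slots = channels * dimms_per_channel  # 8

# Largest UDIMM shipping at the time of writing:
udimm_max_gb = 16
print(f"Practical max today: {slots * udimm_max_gb} GB")
# → Practical max today: 128 GB

# AMD's stated 1 TB ceiling implies far larger modules:
needed_per_dimm = 1024 // slots
print(f"1 TB would need {needed_per_dimm} GB per UDIMM")
# → 1 TB would need 128 GB per UDIMM
```

So both vendors' practical ceilings converge on 128 GB until much denser UDIMMs ship.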

Top Trumps: Cache

Both AMD and Intel use private L2 caches for each core, backed by a shared victim L3 cache before main memory. A victim cache only receives data as it is evicted from the cache level in front of it (here, the L2s), and cannot pre-fetch data. But the size of those caches, and how the cores interact with them, differs between the two.

AMD uses 512 KB of L2 cache per core, with an 8 MB L3 victim cache shared by each four-core core complex. In a 16-core Threadripper there are four core complexes, for a total of 32 MB of L3 cache; however, each core can only directly access the data in its local L3. Accessing the L3 of a different complex requires additional time and snooping, so latency differs depending on whether the data sits in the local L3 or a remote one.

Intel’s Skylake-X uses 1 MB of L2 cache per core, leading to a higher L2 hit rate, and 1.375 MB of L3 victim cache per core. This L3 cache has associated tags, and the mesh topology used to communicate between the cores means that, as with AMD, there is still time and latency associated with snooping other caches, although the latency is somewhat homogenized by the design. This is nonetheless different from the Broadwell-E cache structure, which had 256 KB of L2 and 2.5 MB of L3 per core, both inclusive caches.
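Tallying the per-core figures above gives the per-chip totals. A small sketch, comparing a 16-core Threadripper against the 10-core Core i9-7900X (core counts as discussed in this review):

```python
# Threadripper 1950X: 512 KB private L2 per core, 8 MB victim L3 per
# four-core CCX, four CCXes on the package.
tr_l2 = 16 * 512        # KB
tr_l3 = 4 * 8 * 1024    # KB

# Core i9-7900X (Skylake-X): 1 MB private L2 and 1.375 MB victim L3 per core.
skx_l2 = 10 * 1024      # KB
skx_l3 = 10 * 1408      # KB (1.375 MB = 1408 KB)

print(f"Threadripper 1950X: L2 {tr_l2 / 1024:.0f} MB, L3 {tr_l3 / 1024:.0f} MB")
print(f"Core i9-7900X:      L2 {skx_l2 / 1024:.0f} MB, L3 {skx_l3 / 1024:.2f} MB")
# → Threadripper 1950X: L2 8 MB, L3 32 MB
# → Core i9-7900X:      L2 10 MB, L3 13.75 MB
```

AMD’s headline 32 MB L3 is thus split into four 8 MB islands, while Intel trades a smaller, non-inclusive L3 for much larger private L2s.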

The AMD Ryzen Threadripper 1950X and 1920X Review: Silicon, Glue, & NUMA Too

  • Notmyusualid - Sunday, August 13, 2017 - link

    Yep, I'll get the door for him.
  • Jeff007245 - Friday, August 11, 2017 - link

    I don't comment much (if ever), but I have to say one thing... I miss Anand's reviews. What happened to AnandTech?

    Whatever happened to IPC testing on a clock-for-clock basis? I remember the days when IPC used to mean Instructions Per Clock, and this website and others would even downclock/overclock processors to a nominal clock rate to compare each processor's IPC. Hell, even Bulldozer, a high-clock architecture, was downclocked to compare its "relative IPC" at a nominal clock rate.

    And to add to what others are saying about the bias in the review... Honestly, I have been feeling the same way for some time now. Must be because AnandTech is at the "MERCY" of their parent company Purch Media... When you are at the mercy of your advertisers, you have no choice but to bend the knee, or even worse, bend over and do as they say "or else"...

    Thanks for taking the time to create this review, but AnandTech to me is no longer AnandTech... What others say is true: this place is only good for the Forums and the very technical community that is still sticking around.
  • fanofanand - Tuesday, August 15, 2017 - link

    Downclocking and overclocking processors to replicate a different processor within the same family can lead to inaccurate results, as IPC can and does rely (at least to a degree) on cache size and structure. I get what you are saying, but I think Ian's work is pretty damn good.
  • SloppyFloppy - Friday, August 11, 2017 - link

    Why did you leave out the i9s from the gaming tests?
    Why didn't you include the 7700K when you included the 1800X for gaming tests?

    People want to know, if they buy a $1k 7900X or 1950X, whether it's not only great for media creation/compiling but also for gaming.
  • silverblue - Friday, August 11, 2017 - link

    Stated why at the bottom of page 1. Also, he used the 7740X, so there is little to no point in including the 7700K.
  • Lolimaster - Friday, August 11, 2017 - link

    The 1950X is as good at gaming as the 1800X or an OCed 1700, with many more CPU resources to toy with.
  • Swp1996 - Friday, August 11, 2017 - link

    That's the best title I have ever seen... 😂😂😂😂🤣🤣🤣🤣🤣 Steroids 😂😂😂🤣🤣🤣🤣🤣🤣🤣
  • corinthos - Friday, August 11, 2017 - link

    in other words.. AMD Ryzen is still the best bet for most people, and the best value. 1700 OC'd all day!
  • BillBear - Friday, August 11, 2017 - link

    >Move on 10-15 years and we are now at the heart of the Core Wars: how many CPU cores with high IPC can you fit into a consumer processor? Up to today, the answer was 10, but now AMD is pushing the barrier to 16

    I don't personally think of Threadripper or parts like Broadwell-E as being consumer level parts.

    For me, the parts most consumers have been using for the last decade have been Intel parts with two cores, or four cores at the high end.

    It's been a long period of stagnation, with cutting power use on mobile parts being the area that saw the most attention and improvement.
  • James S - Friday, August 11, 2017 - link

    Agree, the HEDT platforms are not for the average consumer; they are for enthusiasts, professional workstation usage, and some other niche uses.

    When the frequency war stopped, the IPC war started. We should have had the core competition 5-8 years back, since IPC gains stagnated to a couple of percent year on year.
