Feeding the Beast

When frequency was all that mattered for CPUs, the main problems were efficiency, thermal performance, and yields: the higher the frequency was pushed, the more voltage was needed, the further outside its peak efficiency window the CPU operated, and the more power it consumed per unit of work. For the CPU that sat at the top of the product stack as the performance halo part, this didn't particularly matter, at least until the chip started hitting 90°C+ on a regular basis.

Now, with the Core Wars, the challenges are different. When there was only one core, making data available to that core through the caches and DRAM was a relatively easy task. With 6, 8, 10, 12, and 16 cores, a major bottleneck suddenly becomes the ability to make sure each core has enough data to work on continuously, rather than sitting idle waiting for data to arrive. This is not an easy task: each core now needs a fast way of communicating with every other core, and with main memory. This is known within the industry as feeding the beast.

Top Trumps: 60 PCIe Lanes vs 44 PCIe Lanes

After playing the underdog for so long, AMD has been pushing the specifications of its new processors as one of their big selling points. Whereas Ryzen 7 only had 16 PCIe lanes, competing in part against CPUs from Intel that had 28 or 44 PCIe lanes, Threadripper will have access to 60 lanes for PCIe add-in cards. In some places this might be referred to as 64 lanes, however four of those lanes are reserved for the X399 chipset. At $799 and $999, this competes against the 44 PCIe lanes on Intel's Core i9-7900X at $999.

The goal of having so many PCIe lanes is to support the sort of market these processors are addressing: high-performance prosumers. These are users that run multiple GPUs and multiple PCIe storage devices, and need high-end networking, high-end storage, and as many other features as you can fit through PCIe. The end result is that we are likely to see motherboards earmark 32 or 48 of these lanes for PCIe slots (x16/x16, x8/x8/x8/x8, x16/x16/x16, x16/x8/x16/x8), followed by two or three PCIe 3.0 x4 links for storage via U.2 or M.2 drives, and then faster Ethernet (5 Gbit, 10 Gbit). AMD allows each of the PCIe root complexes on the CPU, which are x16 each, to be bifurcated down to x1 as needed, for a maximum of seven devices. The four PCIe lanes going to the chipset will in turn support several PCIe 3.0 and PCIe 2.0 lanes for SATA or USB controllers.

Intel's strategy is different, allowing the 44 lanes to be split into x16/x16/x8 (40 lanes), x16/x8/x16/x8 (40 lanes), or x8/x8/x8/x8 (32 lanes), with 4-12 lanes left over for PCIe storage, faster Ethernet controllers, or Thunderbolt 3. The Skylake-X chipset then has an additional 24 PCIe lanes for SATA controllers, gigabit Ethernet controllers, and USB controllers.
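As a rough illustration of how those lane budgets get spent, the short Python sketch below tallies a hypothetical dual-GPU, dual-NVMe, 10 Gbit Ethernet build against the CPU lanes on each platform. The component list is an assumption for illustration, not a specific motherboard layout.

# Hypothetical CPU PCIe lane budget on each platform.
# The build below is an assumed example, not a real motherboard layout.

CPU_LANES = {"Threadripper (X399)": 60, "Core i9-7900X (X299)": 44}

build = [
    ("GPU #1", 16),
    ("GPU #2", 16),
    ("M.2 NVMe SSD", 4),
    ("U.2 NVMe SSD", 4),
    ("10 Gbit Ethernet", 4),
]

used = sum(lanes for _, lanes in build)
for platform, available in CPU_LANES.items():
    print(f"{platform}: {used}/{available} CPU lanes used, {available - used} spare")

With this particular loadout the Intel part is exactly full at 44 lanes, while Threadripper still has 16 lanes spare for a third x16 slot or further storage.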

Top Trumps: DRAM and ECC

One of Intel's common product segmentations is that if a customer wants a high-core-count processor with ECC memory, they have to buy a Xeon. Typically Xeons will support a fixed memory speed depending on the number of DIMMs populated per channel (one DIMM per channel at DDR4-2666, two DIMMs per channel at DDR4-2400), as well as ECC and RDIMM technologies. However, the consumer HEDT platforms for Broadwell-E and Skylake-X do not support these, and use non-ECC UDIMMs only.

AMD is supporting ECC on its Threadripper processors, giving customers sixteen cores with ECC. The platform is limited to UDIMMs, but it does support DRAM overclocking in order to boost the speed of the internal Infinity Fabric. AMD has officially stated that the Threadripper CPUs can support up to 1 TB of DRAM, although on closer inspection that would require 128 GB UDIMMs, and UDIMMs currently max out at 16 GB. Intel currently lists a 128 GB limit for Skylake-X, also based on 16 GB UDIMMs.

Both processors run quad-channel memory at DDR4-2666 with one DIMM per channel (1DPC) and DDR4-2400 with two DIMMs per channel (2DPC).
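To put rough numbers to those capacity and bandwidth limits, the sketch below works through the arithmetic, assuming an eight-slot (two DIMMs per channel) motherboard, which is typical for these platforms.

# Back-of-the-envelope DRAM capacity and bandwidth for a quad-channel platform.
# Assumes an eight-slot (2 DIMMs per channel) board, typical for X399/X299.

DIMM_SLOTS = 8
UDIMM_MAX_GB = 16          # largest UDIMM available at the time of writing
CHANNELS = 4
BYTES_PER_TRANSFER = 8     # each DDR4 channel is 64 bits wide

def peak_bandwidth_gb_s(megatransfers_per_s: int) -> float:
    # Theoretical peak across all channels combined, in GB/s
    return CHANNELS * megatransfers_per_s * BYTES_PER_TRANSFER / 1000

print(f"Max capacity with 16 GB UDIMMs: {DIMM_SLOTS * UDIMM_MAX_GB} GB")
print(f"UDIMM size needed for 1 TB:     {1024 // DIMM_SLOTS} GB per module")
print(f"Peak bandwidth, DDR4-2666 (1DPC): {peak_bandwidth_gb_s(2666):.1f} GB/s")
print(f"Peak bandwidth, DDR4-2400 (2DPC): {peak_bandwidth_gb_s(2400):.1f} GB/s")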

Top Trumps: Cache

Both AMD and Intel use private L2 caches for each core, backed by a victim L3 cache that sits in front of main memory. A victim cache only receives data when a line is evicted from the cache closer to the core (here, the L2); it cannot prefetch data on its own. But the size of those caches, and how AMD and Intel have the cores interact with them, is different.
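As a toy illustration of that fill policy, the Python sketch below models a small L2 with LRU eviction feeding a victim L3: the L3 is only ever filled by L2 evictions, never directly from memory. The capacities and the access pattern are made-up values purely for demonstration.

from collections import OrderedDict

# Toy model of a victim L3: it is filled only by L2 evictions, never by fetches.
# Capacities are tiny made-up numbers purely to show the fill/lookup behaviour.

class VictimCacheDemo:
    def __init__(self, l2_lines=4, l3_lines=8):
        self.l2 = OrderedDict()   # address -> True, ordered oldest-first (LRU)
        self.l3 = OrderedDict()
        self.l2_lines, self.l3_lines = l2_lines, l3_lines

    def access(self, addr):
        if addr in self.l2:                  # L2 hit: refresh recency
            self.l2.move_to_end(addr)
            return "L2 hit"
        if addr in self.l3:                  # L3 hit: promote back into the L2
            del self.l3[addr]
            source = "L3 hit"
        else:                                # miss everywhere: fetch from DRAM
            source = "memory"
        self.l2[addr] = True                 # new data always lands in the L2
        if len(self.l2) > self.l2_lines:     # an L2 eviction feeds the victim L3
            victim, _ = self.l2.popitem(last=False)
            self.l3[victim] = True
            if len(self.l3) > self.l3_lines:
                self.l3.popitem(last=False)
        return source

cache = VictimCacheDemo()
for addr in [0, 1, 2, 3, 4, 0]:   # address 0 is evicted by 4, then found in L3
    print(addr, cache.access(addr))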

AMD uses 512 KB of L2 cache per core, backed by an 8 MB L3 victim cache shared by each core complex of four cores. In a 16-core Threadripper there are four core complexes, for a total of 32 MB of L3 cache; however, each core can only directly access the data in its local L3. Accessing the L3 of a different complex requires additional time and snooping, so latency can differ depending on whether the data sits in the local L3 or in another complex's cache.

Intel's Skylake-X uses 1 MB of L2 cache per core, leading to a higher hit rate in the L2, and 1.375 MB of L3 victim cache per core. This L3 has associated tags, and with the mesh topology used to communicate between the cores there is, as with AMD, still time and latency associated with snooping other caches, although the latency is somewhat homogenized by the design. Nonetheless, this is different from the Broadwell-E cache structure, which had 256 KB of L2 and 2.5 MB of L3 per core, with an inclusive L3.
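To make the difference concrete, the quick tally below adds up the total cache on each chip from the per-core figures above; the 10-core count corresponds to the Core i9-7900X mentioned earlier.

# Total cache per chip from the per-core figures above.
# 16-core Threadripper: 512 KB L2 per core, 8 MB L3 per four-core complex.
# 10-core Skylake-X (Core i9-7900X): 1 MB L2 and 1.375 MB L3 per core.

tr_cores, skx_cores = 16, 10

tr_l2 = tr_cores * 0.5            # MB: 512 KB per core   -> 8 MB total
tr_l3 = (tr_cores // 4) * 8       # MB: 8 MB per complex  -> 32 MB total
skx_l2 = skx_cores * 1.0          # MB                    -> 10 MB total
skx_l3 = skx_cores * 1.375        # MB                    -> 13.75 MB total

print(f"Threadripper 1950X: {tr_l2:.0f} MB L2 + {tr_l3} MB L3 (4 x 8 MB slices)")
print(f"Core i9-7900X:      {skx_l2:.0f} MB L2 + {skx_l3} MB L3")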

Comments

  • Vorl - Thursday, August 10, 2017

    The answer to both of you is that this is a high-end PC processor, not a workstation CPU, and not a server CPU. That was clearly covered at the start of the article.

    If you want raw number-crunching info, other sites are going to have those reviews, and maybe AnandTech will review it in that light in another, server-focused article, since it really is such a powerful CPU.

    Also, there is a LOT of value in having a standardized set of tests. Even if a few tests here and there are no longer valuable, like PDF opening, using the same tests across the board is important for BENCH: you can't compare products if you aren't using the same tools.

    Unfortunately, AMD is ahead of the curve currently, with massive SMP now being given to normal consumers at a reasonable price. It will take a little time for devs to catch up and really make use of this amazing CPU.

    With the processing power in a CPU like this, imagine the game mechanics that can be created and used. For those of us more interested in making this a reasonably priced workstation/server build for VMs and the like, cool for us, but that isn't where this is being marketed, and it's not really fair to jump all over the reviewer for it.
  • Zstream - Thursday, August 10, 2017

    Utter rubbish. This CPU is designed for a workstation build. So a product labeled Xeon is a workstation CPU, but this isn't?
  • mapesdhs - Friday, August 11, 2017

    Yeah, TR doesn't really look like something that's massively aimed at gamers; it has too many capabilities and features that gamers wouldn't be interested in.
  • pm9819 - Friday, August 18, 2017

    AMD themselves call it a consumer CPU. Is Intel paying them as well?
  • Lolimaster - Friday, August 11, 2017

    It's a HEDT/workstation part. A year ago people called a dual 8-core Xeon setup a workstation, and a single 1950X replicates that.

    Intel draws a line by not supporting ECC; AMD supports ECC in all their main CPUs, server or not, all the way back to the Athlon 64.

    16 cores/32 threads, ECC, 64 PCIe lanes, and an upgrade path to 32 cores/64 threads with Zen 3. Smells like a workstation to me.

    Server CPUs, which is what EPYC is, are another thing, with features tailored to that: massive core counts with low clock speeds to maximize efficiency, and damn expensive motherboards without any gamerish gizmos, built to be set up and then left alone. TR can do a bit of that too, but it's optimized for all-around performance and a friendlier budget.
  • Ian Cutress - Thursday, August 10, 2017

    Dan sums it up. Some of these tests are simply checkboxes: is it adequate enough?

    Some people do say that an automated suite isn't the way to do things: unfortunately without spending over two months designing this script I wouldn't have time for nearly as much data or to test nearly as many CPUs. Automation is a key aspect to testing, and I've spent a good while making sure tests like our Chromium Compile can be process consistent across systems.

    There's always scope to add more tests (my scripts are modular now), if they can be repeatable and deterministic, but also easy to understand in how they are set up. Feel free to reach out via email if you have suggestions.
  • Johan Steyn - Thursday, August 10, 2017

    Ian, I understand that you see them as checkboxes, but this is not a normal CPU that John Doe is going to buy. It has a very specific audience, and I feel you are missing that audience badly. A guy who buys this for rendering or 3D Studio Max is not going to worry about games. Yes, it would be a great bonus to also be OK at them. Other sites even ran tests doing rendering and playing games at the same time, and TR shined like a star against Intel. This is actually something that might happen in real life: a guy could begin a render and then, while waiting, decide to play a game.

    I would not buy TR to open PDFs, would I?
  • Ian Cutress - Thursday, August 10, 2017

    No, but you open things like IDEs and Premiere. A PDF test is a gateway test in that regard with an abnormally large input. When a workstation is not crunching hard, it's being used to navigate through programs with perhaps the web and documents in tow where the UX is going to be indicative of something like PDF opening.
  • Lolimaster - Friday, August 11, 2017

    By including useless benchmarks you not only waste the target audience's time, you also have to write up and upload images from those useless benchmarks instead of making the article more interesting.

    How about a "The Destroyer" for HEDT/workstation: a typical productivity load plus some gaming. All of a sudden people get TWICE the CPU resources and can do things they couldn't do before on the same machine.

    They could get a dual-socket motherboard with two 10-core Xeons, paying a hefty premium for pathetic clock speeds, if they wanted to game a bit while doing work. TR fixes that, with mainstream-consumer gaming performance while cutting multicore costs by more than half (core count plus ECC support without paying the Intel tax).
  • Lolimaster - Friday, August 11, 2017

    And a few months ago that audience was limited to doing their productivity work with 6-8 cores, or 10 if they paid the huge Intel tax; they probably couldn't game without hurting other workloads, and had a secondary PC for killing time.

    With TR and its massive 16-core count they can finally do all of that from a single PC, or throw the entire workhorse at one job when they need to (leaving things to work while they sleep).
