Core-to-Core Latency

As the core counts of modern CPUs grow, we are reaching a point where the time to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies when accessing the nearest core versus the furthest core. This is especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer parts, can have variable access latency between cores. For example, the first-generation Threadripper CPUs had four chips on the package, each with eight cores, and core-to-core latency differed depending on whether the two cores were on the same die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our core-to-core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test, and while we know there are competing tests out there, we feel ours is the most accurate representation of how quickly an access between two cores can happen.
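
For the curious, the basic technique is straightforward to sketch: pin one thread to each of the two cores being measured, then bounce a flag in a shared cache line between them with atomic operations and time the round trips. The following minimal, Linux-specific C++ sketch illustrates the idea; it is not our in-house tool, and the default core numbers and iteration count are arbitrary.

// c2c_sketch.cpp: time cache-line ping-pong between two pinned cores.
// Build: g++ -O2 -pthread c2c_sketch.cpp -o c2c_sketch
// Run:   ./c2c_sketch <coreA> <coreB>
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // for pthread_setaffinity_np
#endif
#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>

static std::atomic<int> flag{0};   // the cache line the two cores bounce

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main(int argc, char **argv) {
    const int core_a = argc > 1 ? std::atoi(argv[1]) : 0;
    const int core_b = argc > 2 ? std::atoi(argv[2]) : 1;
    const int iters  = 1000000;

    std::thread responder([&] {
        pin_to_core(core_b);
        for (int i = 0; i < iters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) { }
            flag.store(0, std::memory_order_release);   // bounce it back
        }
    });

    pin_to_core(core_a);
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) {
        flag.store(1, std::memory_order_release);       // send to core B
        while (flag.load(std::memory_order_acquire) != 0) { }
    }
    const auto t1 = std::chrono::steady_clock::now();
    responder.join();

    // Each iteration is a full round trip (A -> B -> A), so halve it
    // to approximate a single one-way core-to-core hop.
    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
    std::printf("core %d <-> core %d: ~%.1f ns one-way\n",
                core_a, core_b, ns / iters / 2.0);
    return 0;
}

Swept across every pair of cores, numbers from this kind of loop produce latency matrices like the ones below, with same-CCX pairs clustering at the low end and cross-CCD pairs at the high end.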


AMD Ryzen 9 7950X Core-to-Core Latency results (lots of cores and threads = lots of core pairings)

Comparing core-to-core latencies on Zen 4 (7950X) and Zen 3 (5950X), both use a chiplet design with two 8-core CCDs, each housing a single CCX, a marked improvement over the four 4-core CCX layout of the Zen 2-based Ryzen 9 3950X. Inter-core latencies within a CCX range between 15 ns and 19 ns. Latencies between cores on different CCDs carry a much larger penalty of up to 79.5 ns, which is something AMD should keep working on going forward, but cross-CCX latencies are still an overall improvement compared to Zen 3. Any gain is still a gain.

AMD has opted for a newer and more efficient IOD, now built on TSMC's 6 nm node. It is around the same physical size as the previous Zen 3 IOD manufactured on GlobalFoundries' 12 nm node, but carries a much larger transistor count. Within the IOD are the newly integrated RDNA 2 graphics, although this isn't a typical iGPU in the sense that an APU's is. Much of the room on the IOD is taken up by the DDR5 memory controller (IMC) as well as the chip's PCIe 5.0 lanes, and the IOD connects to the rest of the logic through AMD's primary interconnect, Infinity Fabric. All of these variables play a part in power, latency, and operation.


AMD Ryzen 9 5950X Core-to-Core Latency results

It's actually astounding how similar the latency performance of the Ryzen 9 7950X (Zen 4) is when compared directly to the Ryzen 9 5950X (Zen 3), despite the move to TSMC's newer 5 nm manufacturing process. Even with a change of IOD, the interconnect remains the same: inter-core latencies within the Ryzen 9 7950X are great between cores in the same core complex, and while latency does degrade when pairing up with a core in the other chiplet, AMD's Ryzen 5000 series already proved that the overall penalty is negligible in practice.

Comments

  • Tomatotech - Friday, September 30, 2022 - link

    Nice idea but you’re swimming against the flow of history. The trend is always to more tightly integrate various components into smaller and smaller packages. Apple have moved to onboard RAM in the same package as the CPU, which has brought significant bandwidth advantages and seems to have boosted iGPU performance to the level of low-end dGPUs.

    The main takeaway from your metaphor of the 650w dGPU with a 55w mainboard and 100-200w CPU is that high-end dGPUs are now effectively separate computers in their own right - especially as a decent one can be well over 50% of the cost of the whole PC - and are being constrained by having to fit into the PC in terms of physical space, power supply capacity, and cooling capacity.

    It’s a shrinking market on both the low end and the high end for home use of dGPUs, given these innovations and constraints, and I don’t know where it’s going to go from here.

    Since I got optic fibre, I’ve started renting a cloud-based high-end dGPU, and it has been amazing, albeit the software interface has been frustrating at times. With symmetric gigabit service and a 1-3 ms ping, it’s like having it under my desk. I worked out that, for unlimited hours and given the cost of electricity, it would take 10 years for my cloud rental costs to match the cost of buying and running a high-end dGPU at home.

    Not everyone has optic fibre of course but globally it’s rolling out year by year so the trend is clear again.
  • Castillan - Wednesday, September 28, 2022 - link

    "

    clang version 10.0.0
    clang version 7.0.1 (ssh://git@github.com/flang-compiler/flang-driver.git 24bd54da5c41af04838bbe7b68f830840d47fc03)

    -Ofast -fomit-frame-pointer
    -march=x86-64
    -mtune=core-avx2
    -mfma -mavx -mavx2
    "

    ...and then later the article says:

    "The performance increase can be explained by a number of variables, including the switch from DDR4 to DDR5 memory, a large increase in clock speed, as well as the inclusion of the AVX-512 instruction set, albeit using two 256-bit pumps."

    The problem here is that those arguments to Clang will NOT enable AVX-512. Only AVX2 will be enabled. I verified this on an AVX-512 system.

    To enable AVX-512, at least at the most basic level, you'll want to use "-mavx512f". There's also a whole stack of other AVX-512 capabilities, which are enabled with "-mavx512dq -mavx512bw -mavx512vbmi -mavx512vbmi2 -mavx512vl", though some may not be supported. It won't hurt to include those on the command line until you try to compile something that makes use of those specific features, at which point you'll see a failure if the platform doesn't support those extensions. A quick way to verify is shown in the sketch below.
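
    One simple cross-check (a minimal sketch, relying on the __AVX2__ and __AVX512F__ feature macros that clang and gcc predefine when the corresponding ISA level is enabled):

    // avx_check.cpp: prints which AVX feature macros this compilation
    // defines. Compile once with the review's flags and once with
    // -mavx512f added, then compare the output.
    #include <cstdio>

    int main() {
    #ifdef __AVX2__
        std::puts("__AVX2__ defined");
    #endif
    #ifdef __AVX512F__
        std::puts("__AVX512F__ defined");
    #else
        std::puts("__AVX512F__ not defined -- no AVX-512 code will be emitted");
    #endif
        return 0;
    }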
  • Ryan Smith - Friday, September 30, 2022 - link

    Correct. AVX-512 is not in play here. That is an error in analysis on our part. Thanks!
  • pman6 - Thursday, September 29, 2022 - link

    Intel supports 8K60 AV1 decode.

    Does Ryzen 7000 support 8K60?
  • GeoffreyA - Monday, October 3, 2022 - link

    The Radeon Technologies Group is getting 16K ready.
  • yhselp - Thursday, September 29, 2022 - link

    I'd love to see you investigate memory scaling on the Zen 4 core.
  • Myrandex - Thursday, September 29, 2022 - link

    The table on page four mentions "Quad Channel (128-bit bus)" for memory support. Does that mean we could have a four-slot solution, with one memory module per channel, giving four-channel support, as a way to drastically increase memory bandwidth while maintaining those fast DDR5 frequencies?
  • Ryan Smith - Friday, September 30, 2022 - link

    No. That configuration would be no different than a 2-DIMM setup in terms of bandwidth or capacity. Slotted memory is always configured as DIMMs; as in, Dual Inline Memory Modules. The "quad channel (128-bit)" figure comes from DDR5 itself: each DIMM carries two independent 32-bit subchannels, so two DIMMs already provide four subchannels and the full 128-bit bus.
  • GeoffreyA - Friday, September 30, 2022 - link

    All in all, excellent work, AMD, on the 7950X. Undoubtedly shocking performance. Even in that dubious AVX-512 benchmark where Intel used to win, Zen 4 has taken command. However, lower your prices, AMD, and don't be so greedy. Little by little, you are becoming Intel. Don't be evil.

    Thanks, Ryan and Gavin, for the review and all the hard work. Much appreciated. Have a great week.
  • Footman36 - Friday, September 30, 2022 - link

    Yawn. I really don't see what the big fuss is about. I currently run a 5600X and was interested to see how the 7600X compared, and while it does look like a true uplift in performance over the 5600X, I would have to factor in the cost of a new motherboard and DDR5 RAM! On top of that, the comparison is not exactly apples to apples in the testing. The 7600X has a turbo speed of 5.3 GHz, the 5600X 4.6 GHz; the 7600X runs with DDR5-5200 and the 5600X with DDR4-3200; the 7600X has a 105 W TDP, the 5600X 65 W. If you take a look at the final page, where the 7950X is tested in Eco mode, which effectively supplies 65 W instead of 105 W, you lose 18% performance. If we try to do apples to apples and use Eco mode with the 7600X, to match the 65 W of the 5600X, then lower the boost to 4.6 GHz, the performance of the two CPUs looks very similar. Perhaps not the way I should be analyzing the results, but just my observation....
