CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core count of modern CPUs grows, we are reaching a point where the time to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could exhibit different latencies when accessing the nearest core compared to the furthest core. This rings especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, in the first generation Threadripper CPUs, we had four chips on the package, each with 8 cores, and each with a different core-to-core latency depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and we know there are competing tests out there, but we feel ours is the most representative of how quickly an access between two cores can happen.
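The exact harness is in-house, but the underlying idea can be sketched quite simply: pin two threads to two different cores, bounce a value in a shared cache line between them with atomic operations, and time the round trips. Below is a minimal illustrative sketch in C – not Andrei’s actual implementation, and the core IDs and iteration count are arbitrary placeholders:

    // Minimal sketch of a core-to-core latency test: two pinned threads
    // bounce a flag in a shared cache line (Linux, build with gcc -pthread).
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS 1000000

    static _Atomic int flag = 0;            // the contended cache line

    static void pin_to_core(int core) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(core, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    }

    static void *ponger(void *arg) {
        pin_to_core(*(int *)arg);
        for (int i = 0; i < ITERS; i++) {
            while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
                ;                           // spin until pinged
            atomic_store_explicit(&flag, 0, memory_order_release);
        }
        return NULL;
    }

    int main(void) {
        int core_a = 0, core_b = 1;         // arbitrary pair of cores
        pthread_t t;
        pthread_create(&t, NULL, ponger, &core_b);
        pin_to_core(core_a);

        struct timespec s, e;
        clock_gettime(CLOCK_MONOTONIC, &s);
        for (int i = 0; i < ITERS; i++) {
            atomic_store_explicit(&flag, 1, memory_order_release);
            while (atomic_load_explicit(&flag, memory_order_acquire) != 0)
                ;                           // spin until ponged back
        }
        clock_gettime(CLOCK_MONOTONIC, &e);
        pthread_join(t, NULL);

        double ns = (e.tv_sec - s.tv_sec) * 1e9 + (double)(e.tv_nsec - s.tv_nsec);
        printf("core %d <-> core %d: %.1f ns per hop\n",
               core_a, core_b, ns / ITERS / 2.0);   // round trip = two hops
        return 0;
    }

Running a loop like this across every pair of cores yields the kind of latency matrix shown in our charts; note that one iteration is a full round trip, i.e. two core-to-core hops.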

In terms of the core-to-core tests on the Tiger Lake-H 11980HK, it’s best to compare the results 1:1 alongside a 4-core Tiger Lake design such as the i7-1185G7:

What’s very interesting in these results is that although the new 8-core design features double the cores, representing a larger ring bus with more ring stops and cache slices, the core-to-core latencies are actually lower, in terms of both best-case and worst-case results, than on the 4-core Tiger Lake chip.

This is a bit perplexing at first; generally, the factors that would account for such a difference are either faster CPU frequencies, or a faster clock or lower cycle latency for the L3 and the ring bus. Given that TGL-H comes eight months after TGL-U, it is plausible that the newer chip has a more mature implementation and that Intel has been able to optimise access latencies.

Due to AMD’s recent shift to an 8-core core complex, Intel no longer has an advantage in core-to-core latencies this generation, and AMD’s more hierarchical cache structure and interconnect fabric showcases better performance.

Cache & DRAM Latency

This is another in-house test built by Andrei, which showcases the access latency at all points in the cache hierarchy for a single core. We start at 2 KiB and probe the latency all the way through to 256 MB, which for most CPUs sits inside the DRAM (before you start saying the 64-core Threadripper has 256 MB of L3, only 16 MB of it is accessible to any single core, so at a 20 MB test depth you are in DRAM).

Part of this test helps us understand the range of latencies for accessing a given level of cache, but also the transition between the cache levels gives insight into how different parts of the cache microarchitecture work, such as TLBs. As CPU microarchitects look at interesting and novel ways to design caches upon caches inside caches, this basic test proves to be very valuable.
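The fundamental mechanism behind this kind of latency probe can be sketched as a pointer chase: an array of pointers is shuffled so that each load’s address depends on the result of the previous load, which serialises the accesses and defeats the prefetchers. The following is a minimal sketch of the idea in C – not our in-house tool, which is considerably more elaborate:

    // Minimal sketch of a cache/DRAM latency probe: chase a randomly
    // shuffled ring of pointers so every load depends on the previous one,
    // then report the average load-to-use latency per working-set size.
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ACCESSES 10000000UL

    static void * volatile sink;            // defeats dead-code elimination

    static double probe_ns(size_t bytes) {
        size_t n = bytes / sizeof(void *);
        void **ring = malloc(n * sizeof(void *));
        size_t *order = malloc(n * sizeof(size_t));
        if (!ring || !order) return -1.0;

        // Random visiting order (Fisher-Yates shuffle): following the chain
        // touches every slot once, in an order the prefetchers cannot guess.
        for (size_t i = 0; i < n; i++) order[i] = i;
        for (size_t i = n - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
        for (size_t i = 0; i < n; i++)
            ring[order[i]] = &ring[order[(i + 1) % n]];

        void **p = ring;
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        for (size_t i = 0; i < ACCESSES; i++)
            p = (void **)*p;                // serialised dependent loads
        clock_gettime(CLOCK_MONOTONIC, &b);

        sink = p;
        free(order);
        free(ring);
        return ((b.tv_sec - a.tv_sec) * 1e9 + (double)(b.tv_nsec - a.tv_nsec)) / ACCESSES;
    }

    int main(void) {
        // Sweep test depths from inside the L1 out to DRAM-resident sizes.
        for (size_t kib = 2; kib <= 256 * 1024; kib *= 2)
            printf("%8zu KiB: %6.2f ns per load\n", kib, probe_ns(kib * 1024));
        return 0;
    }

The dependent-load chain is the key design point: since each address is produced by the previous load, out-of-order execution cannot overlap the accesses, so the measured time per load approximates the true load-to-use latency at that test depth.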

What’s of particular note for TGL-H is the fact that the new higher-end chip does not support LPDDR4, instead relying exclusively on DDR4-3200, as on this reference laptop configuration. This does favour the chip in terms of memory latency, which now falls in at a measured 101ns versus 108ns on the reference TGL-U platform we tested last year, but it comes at a cost in memory bandwidth, which now only reaches a theoretical peak of 51.2GB/s instead of 68.2GB/s – even with double the core count.
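Those theoretical peaks follow directly from transfer rate multiplied by total bus width; a quick back-of-the-envelope check confirms the numbers, assuming a 128-bit LPDDR4X-4266 interface for the TGL-U figure, which is what 68.2GB/s implies:

    // Peak DRAM bandwidth = transfer rate (MT/s) x total bus width (bytes).
    // The 128-bit LPDDR4X-4266 interface for TGL-U is an assumption
    // consistent with the 68.2GB/s figure quoted above.
    #include <stdio.h>

    int main(void) {
        double ddr4  = 3200e6 * (128 / 8);   // 2x64b DDR4-3200   -> 51.2 GB/s
        double lpddr = 4266e6 * (128 / 8);   // 128b LPDDR4X-4266 -> ~68.2 GB/s
        printf("DDR4-3200 (2x64b):   %.2f GB/s\n", ddr4 / 1e9);
        printf("LPDDR4X-4266 (128b): %.2f GB/s\n", lpddr / 1e9);
        return 0;
    }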

What’s in favour of the TGL-H system is the increased L3 cache, from 12MB to 24MB – this is still 3MB per core slice, as on TGL-U, and it retains the newer L3 design introduced with TGL-U. Nevertheless, we do see some differences in L3 behaviour: the TGL-H system has slightly higher access latencies at the same test depth than the TGL-U system, even accounting for the fact that the TGL-H CPUs are clocked slightly higher and have better L1 and L2 latencies. This is an interesting contradiction in the context of the improved core-to-core latency results we just saw, which suggests that Intel did make some changes to the fabric. Furthermore, we see flatter access latencies across the L3 depth, which isn’t how the TGL-U system behaved, meaning Intel has definitely made some changes to how the L3 is accessed.

Comments

  • Qasar - Monday, May 17, 2021

    " Name a single workload where the spec results line up with application performance"
    post a single link that shows you are right, and Andrei is wrong, as so far, it seems you are just typing FUD.
    personally, im going with Andrei.
  • Spunjji - Tuesday, May 18, 2021

    Why don't you name some where it doesn't, given that you're the one making the extraordinary claim here?
  • Andrei Frumusanu - Monday, May 17, 2021

    I've added in the text to those pages now, and I explain why they would end up like that.

    The TGL-H system has half the memory level parallelism with its 2x64b DDR4 channels versus the 4x16b LPDDR4 channels of the TGL system, and those two workloads are characterised by heavily parallelised memory bandwidth.

    We've seen a 66% performance difference on a 5950X between 2x SR and 4x SR DIMM memory in the MT test, it all depends on the DRAM configuration and what kind of parallelism it allows.

    Our testing is correct and we have the correct understanding of the microarchitectures and workloads.
  • vyor - Monday, May 17, 2021

    Thanks for finally actually giving reasons, should have been there before publishing.

    And no, no it isn't. You don't even publish your compiler settings.
  • Andrei Frumusanu - Monday, May 17, 2021

    The compiler settings are literally on the SPEC page and have been there the whole time, and have been set in stone on the Windows side for over a year now for every article.
  • vyor - Monday, May 17, 2021

    I do not believe those are the actual compiler settings. Because if they are, you fucked up hard.
  • mode_13h - Monday, May 17, 2021

    Outlier results should be investigated and understood. They might be very informative of edge cases. Or, they might indeed expose procedural errors in the testing. Either way, your attitude of dismissing them as erroneous and abusing the testers is not helpful.

    It's fine to call attention to anomalies and ask questions, but abuse is not called for and shouldn't be tolerated.
  • vyor - Monday, May 17, 2021

    Except that he's been getting called out for this for the last year+.
  • Andrei Frumusanu - Monday, May 17, 2021

    And every time I've demolished the unsubstantiated empty argument with data and facts. If you do not have any actual technical argument to make then don't make any.
  • ballsystemlord - Monday, May 17, 2021

    I have to back up Andrei here. You've only given us hyperbole so far.
    Which compiler settings do you have a problem with exactly?
    As a former Gentoo Linux user, I don't see a problem with them. Of course, -Ofast shouldn't be used in a production system -- but he is benchmarking here.
