CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core count of modern CPUs grows, we are reaching a point where the time it takes one core to access another is no longer constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies when accessing the nearest core versus the furthest core. This rings true especially in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, in the first-generation Threadripper CPUs, we had four chips on the package, each with eight cores, and the core-to-core latency differed depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and while we know there are competing tests out there, we feel ours is the most accurate measure of how quickly an access between two cores can happen.

Broadwell is a familiar design, with all four cores connected in a ring-bus topology.
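The basic mechanism behind a core-to-core latency test can be sketched as a ping-pong between two pinned processes. This is a hypothetical simplification, not the actual in-house tool: it assumes Linux (for `os.sched_setaffinity`), and Python's interpreter overhead inflates the absolute numbers enormously, so only the relative differences between core pairs are meaningful.

```python
# Sketch of a core-to-core latency probe: two processes pinned to
# different cores bounce a shared flag back and forth, and we time
# the round trips. Hypothetical illustration only.
import multiprocessing as mp
import os
import time

ROUND_TRIPS = 1_000

def responder(flag, core):
    os.sched_setaffinity(0, {core})        # pin this process to one core
    for _ in range(ROUND_TRIPS):
        while flag.value != 1:             # spin until the ping arrives...
            pass
        flag.value = 0                     # ...then pong back

def measure(core_a, core_b):
    flag = mp.Value('i', 0, lock=False)    # shared int, no lock overhead
    p = mp.Process(target=responder, args=(flag, core_b))
    p.start()
    os.sched_setaffinity(0, {core_a})      # pin ourselves to the other core
    start = time.perf_counter()
    for _ in range(ROUND_TRIPS):
        flag.value = 1                     # ping
        while flag.value != 0:             # spin until the pong arrives
            pass
    elapsed = time.perf_counter() - start
    p.join()
    return elapsed / ROUND_TRIPS / 2       # rough one-way latency estimate

if __name__ == '__main__':
    print(f"core 0 -> core 1: {measure(0, 1) * 1e9:.0f} ns")
```

Sweeping `measure` over every (core_a, core_b) pair produces the familiar latency matrix, which is exactly what makes ring, mesh, and multi-die layouts visible.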

Cache-to-DRAM Latency

This is another in-house test built by Andrei, which showcases the access latency at every point in the cache hierarchy for a single core. We start at 2 KiB and probe the latency all the way up to 256 MB, which for most CPUs sits in DRAM (before you point out that the 64-core Threadripper has 256 MB of L3, only 16 MB of it is accessible to any one core, so at 20 MB you are already in DRAM).

Part of this test helps us understand the range of latencies for accessing a given level of cache, but also the transition between the cache levels gives insight into how different parts of the cache microarchitecture work, such as TLBs. As CPU microarchitects look at interesting and novel ways to design caches upon caches inside caches, this basic test proves to be very valuable.
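The classic way to measure this kind of latency is pointer chasing. The sketch below is a hypothetical illustration in that spirit, not Andrei's actual tool: walking a randomly permuted cycle defeats the hardware prefetchers, so each step pays the full load latency of whichever level of the hierarchy the working set fits in. Python's interpreter overhead dominates the absolute numbers; a real probe would do this in C or assembly.

```python
# Minimal pointer-chasing latency probe (hypothetical sketch).
import random
import time

def chase_latency(n_slots, steps=200_000):
    """Average time per dependent load over a working set of n_slots entries."""
    perm = list(range(n_slots))
    random.shuffle(perm)
    chain = [0] * n_slots
    for i in range(n_slots):                       # build one random cycle:
        chain[perm[i]] = perm[(i + 1) % n_slots]   # each slot points to the next
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]                  # every load depends on the previous one
    elapsed = time.perf_counter() - start
    return elapsed / steps

for size in (1_000, 100_000, 2_000_000):  # ~a few KiB up to tens of MB
    print(f"{size:>9} slots: {chase_latency(size) * 1e9:.1f} ns/load")
```

Plotting the per-load time against working-set size reproduces the staircase you see in our graphs: flat plateaus inside each cache level, with the transitions between them revealing the cache and TLB boundaries.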

Our data shows a 4-cycle L1, a 12-cycle L2, a 26-50 cycle L3, while the eDRAM has a wide range from 50-150 cycles. This is still quicker than main memory, which goes to 200+ cycles.
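To put those cycle counts in wall-clock terms, a quick back-of-the-envelope conversion (assuming the chip runs at its 3.7 GHz boost clock) is just cycles divided by frequency:

```python
# Converting the cycle counts above into nanoseconds at 3.7 GHz:
# latency_ns = cycles / frequency_GHz
freq_ghz = 3.7
for level, cycles in [("L1", 4), ("L2", 12), ("L3", 50),
                      ("eDRAM", 150), ("DRAM", 200)]:
    print(f"{level:>5}: {cycles:>3} cycles = {cycles / freq_ghz:5.1f} ns")
```

So even at the slow end of its range, the eDRAM (~40 ns) still undercuts a 200+ cycle (~54 ns and up) trip to main memory.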

Frequency Ramping

Both AMD and Intel have, over the past few years, introduced features to their processors that shorten the time it takes a CPU to move from idle into a high-powered state. Users get peak performance sooner, but the biggest knock-on effect is battery life in mobile devices: if a system can turbo up quickly and turbo down quickly, it can stay in its lowest, most efficient power state for as long as possible.

Intel’s technology is called SpeedShift, although it was not enabled until Skylake; Broadwell predates it.

One of the issues with this technology, though, is that the adjustments in frequency can be so fast that software cannot detect them. If the frequency is changing on the order of microseconds but your software only probes it every few milliseconds (or seconds), quick changes will be missed. Worse, as an observer probing the frequency, you could be affecting the actual turbo performance: when the CPU changes frequency, it essentially has to pause all compute while the whole core aligns to the new clock.

We wrote an extensive review analysis piece on this, called ‘Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics’, due to an issue where users were not observing the peak turbo speeds for AMD’s processors.

We got around the issue by making the frequency probe itself the workload that causes the turbo. The software is able to detect frequency adjustments on a microsecond scale, so we can see how quickly a system reaches its boost frequencies. Our Frequency Ramp tool has already been in use in a number of reviews.
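The "probe is the workload" idea can be sketched as follows. This is a hypothetical reimplementation, not our actual tool: a busy loop counts how many iterations fit into each short sampling window. Because the loop itself is the load that triggers the turbo, iterations-per-window rises in step with core frequency, exposing the ramp at sub-millisecond resolution without a separate (and slow) frequency query.

```python
# Sketch of a frequency-ramp probe: the spin loop is both the load
# that triggers turbo and the instrument that measures it.
import time

def ramp_trace(duration_s=0.5, window_s=0.001):
    """Return (timestamp, iterations) samples over duration_s seconds."""
    samples = []
    t_end = time.perf_counter() + duration_s
    while time.perf_counter() < t_end:
        iters = 0
        w_end = time.perf_counter() + window_s
        while time.perf_counter() < w_end:
            iters += 1                     # the spin IS the workload
        samples.append((time.perf_counter(), iters))
    return samples

trace = ramp_trace()
base = trace[0][1]
for t, iters in trace[::50]:               # print every 50th window
    print(f"{t:.4f}s  {iters:>7} iters/window  ({iters / base:.2f}x)")
```

On a chip ramping from idle, the iterations-per-window figure climbs over the first few dozen windows and then flattens once the boost frequency is reached.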

From an idle frequency of 800 MHz, it takes ~32 ms for Intel to boost to 2.0 GHz, then another ~32 ms to get to 3.7 GHz. We’re essentially looking at 4 frames at 60 Hz to hit those high frequencies.

A y-Cruncher Sprint

The y-cruncher website has a large amount of benchmark data showing how quickly different CPUs calculate pi to a given number of digits. For a few CPUs, it also shows the compute time moving from 25 million digits to 50 million, 100 million, 250 million, and all the way up to 10 billion, showcasing how performance scales with digit count (assuming everything stays in memory). This range of results, from 25 million up to 10 billion digits, is something I’ve dubbed a ‘sprint’.

I have written some code to perform a sprint on every CPU we test. It detects the DRAM capacity, works out the largest digit count that can be calculated in that amount of memory, and works up from 25 million digits. For tests that go up to the ~2.5 billion digit range, it only adds an extra 15 minutes to the suite on an 8-core Ryzen CPU.

With this test, we can see the effect of increasing memory requirements on the workload and the scaling factor for a workload such as this.

  • MT   25m: 1.617s
  • MT   50m: 3.639s
  • MT  100m: 8.156s
  • MT  250m: 24.050s
  • MT  500m: 53.525s
  • MT 1000m: 118.651s
  • MT 2500m: 341.330s

The scaling here isn’t linear – moving from 25m to 2.5b digits, linear scaling would predict a 100x time increase, but instead we see 211x.
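We can turn that observation into an implied scaling exponent: if time grows as digits^k, then k = log(t2/t1) / log(d2/d1) across the sprint's endpoints.

```python
# Checking the scaling claim with the sprint results above.
import math

digits = [25, 50, 100, 250, 500, 1000, 2500]          # millions of digits
times  = [1.617, 3.639, 8.156, 24.050, 53.525, 118.651, 341.330]  # seconds

ratio = times[-1] / times[0]
k = math.log(ratio) / math.log(digits[-1] / digits[0])
print(f"25m -> 2500m: {ratio:.0f}x the time for 100x the digits")
print(f"implied scaling: ~digits^{k:.2f}")
```

The exponent of roughly 1.16 is what you would expect from y-cruncher's super-linear (roughly n·log-n-class) arithmetic, compounded by the growing memory footprint pushing more of the working set out of cache.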

Comments

  • bernstein - Monday, November 2, 2020 - link

    GDDR6 would be ideally suited as an L4 CPU cache... it has >500GB/s throughput and relatively low cost...
  • e36Jeff - Monday, November 2, 2020 - link

    Sure, if you build a 256-bit bus and somehow cram 8 GDDR6 chips onto the CPU package. You'd also be losing 30-40W of TDP to that.
    This is an application that HBM2 would be much better for. You can easily cram up to 4GB into the package with a much lower TDP impact and still get your 500+GB/s throughput. The biggest issue for this is going to be the impact of having to add in another memory controller and the associated die space and power that it eats up.
  • FreckledTrout - Monday, November 2, 2020 - link

    This is also how I see it playing out. Certainly by the time Intel/AMD switch to using GAAFET, maybe before. You just need a couple of die shrinks that bring densities up and power down.
  • bernstein - Monday, November 2, 2020 - link

    scratch that, GDDR6 has much too high latency...
  • stanleyipkiss - Monday, November 2, 2020 - link

    The 5775C was ahead of its time. Don't know why they didn't go down that rabbit hole (of increasing the size with each gen)
  • hecksagon - Monday, November 2, 2020 - link

    Adding an extra 84mm2 of die area is a recipe for margin erosion, especially when the benefit is situational.
  • CrispySilicon - Monday, November 2, 2020 - link

    Well, I use a 5775C for my main home PC (using it now) and it's more than that. Broadwell was designed for low power. It doesn't run well over 4 GHz and it's not made to.

    My rig idles at about 800 MHz, clocks up to 4 GHz on all cores, 2 GHz on the eDRAM, and 2 GHz on DDR3L (overclocked 1866 HyperX Fury), yes, 3L, because THAT'S where the magic happens. Low power performance.

    I've also used TridentX 2400CL10 modules in it, not worth the higher voltage.

    I'm going to upgrade finally next year. CXL and DDR5 will finally retire this diamond in the rough.

    Retest with nothing in the BIOS changed except the eDRAM multiplier to 20 and see what happens.
  • Notmyusualid - Wednesday, November 4, 2020 - link

    I usually run my Broadwell at 4.4GHz 24/7. However I have a failed bios battery so I'm using the m/b default 4.0GHz overclock settings today. I don't let mine idle at low speeds, it's High Performance mode only & I only boot the Desktop for gaming, or Software Defined Radio. Both of which want GHz.

    Memory is Vengeance LED 3200MHz (CL15 & only stable at 3000MHz, XMP is not stable either), and 32GB is currently installed.

    Given;
    C:\Windows\System32>winsat mem
    Windows System Assessment Tool
    > Running: Feature Enumeration ''
    > Run Time 00:00:00.00
    > Running: System memory performance assessment ''
    > Run Time 00:00:05.45
    > Memory Performance 54386.55 MB/s
    > Total Run Time 00:00:06.65

    I think that is why my Broadwell missed out on any eDRAM - it wasn't necessary.

    Dolphin runs about 35x seconds, as I remember it.

    6950X running cool in 2020...
  • MrCommunistGen - Monday, November 2, 2020 - link

    HA. Epic timing. Just starting to read this now, but I recently built a system with a Broadwell-based Xeon E3 chip I got for cheap on eBay. Mostly just because I wanted to play with a chip that had eDRAM and the price of entry for an i5 or i7 has remained pretty high.

    This will be a very interesting read!
  • alufan - Monday, November 2, 2020 - link

    News all day as long as it's about Intel, so it seems on here. Said it before and have seen nothing since to change my mind
