CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core counts of modern CPUs grow, the time to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes could have different latencies between the nearest and the furthest core. This rings true especially in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency between cores. In the first-generation Threadripper CPUs, for example, there were two active dies on the package, each with eight cores, and the core-to-core latency depended on whether the access stayed on-die or went off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our Core-to-Core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and we know there are competing tests out there, but we feel ours is the most representative of how quickly an access between two cores can actually happen.

All three CPUs exhibit the same behaviour: one core seems to be given high priority, while the rest are not.

Frequency Ramping

Over the past few years, both AMD and Intel have introduced features that reduce the time it takes a CPU to move from idle into a high-powered state. Users get peak performance sooner, but the biggest knock-on effect is battery life in mobile devices: if a system can turbo up and turbo back down quickly, it stays in its lowest and most efficient power state for as long as possible.

Intel’s technology is called Speed Shift, although it was not enabled until Skylake.

One issue with this technology, though, is that the frequency adjustments can sometimes be so fast that software cannot detect them. If the frequency is changing on the order of microseconds, but your software is only probing it in milliseconds (or seconds), then quick changes will be missed. Not only that: as an observer probing the frequency, you could be affecting the actual turbo performance, because when the CPU changes frequency, it essentially has to pause all compute while it aligns the frequency of the whole core.

We wrote an extensive review analysis piece on this, called ‘Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics’, due to an issue where users were not observing the peak turbo speeds for AMD’s processors.

We got around the issue by making the frequency probe itself the workload that causes the turbo. The software is able to detect frequency adjustments on a microsecond scale, so we can see how well a system reaches its boost frequencies. Our Frequency Ramp tool has already been in use in a number of reviews.

From an idle frequency of 800 MHz, it takes ~16 ms for both the i9 and the i5 to boost to their top frequency. The i7 was most of the way there by then, but took an additional 10 ms or so.


279 Comments


  • Oxford Guy - Saturday, April 3, 2021 - link

    Since you are interested in playing Mr. Censor I can give you some advice. Instead of campaigning to have this site degraded to be like Ars and Slashdot — echo chambers of post hiding and clique voting, there are more than enough sites like that where you can find that kind of entertainment.

    I doubt that this site is going to change the comments system into one of those echo chambers for you. But, I can’t stop you from continuing your peevish inept censorship campaign.
  • Qasar - Saturday, April 3, 2021 - link

    nor will it ever change, no matter how much you whine and complain about everything just to give yourself something to whine and complain about. what's your point :-)
  • marsdeat - Tuesday, March 30, 2021 - link

    Slight error on page 1: "it really has to go against the 12-core Ryzen 9 5900X, where it loses out by 50% on cores but has a chance to at least draw level on single thread performance."

    No, it loses out by 33% on cores, or the 5900X has 50% more cores. The 11900K doesn't lose by 50% on cores.
  • factual - Tuesday, March 30, 2021 - link

    At this point in time, the best bang-for-buck CPU is the 10700K (at least in Canada). There's no point in wasting money on 11th gen Intel CPUs; you are better off paying the premium for Ryzen 5000 instead of buying 11th gen Intel.
  • Fulljack - Wednesday, March 31, 2021 - link

    or the 10700F if you already have a GPU. It's much cheaper, even against the older 3700X, in my country.
  • JayNor - Tuesday, March 30, 2021 - link

    While ABT may be hard for Intel to explain, the description chart in this article indicates that a good cooler + enabled ABT might provide an interesting benchmark result, especially if this is effectively what AMD enables in their default configuration.
  • Byte - Tuesday, March 30, 2021 - link

    Rocket Lake is akin to Nvidia's Turing: a hard-pass generation of chips. Unless you really, really can't wait a year. But then again, maybe you can buy it.
  • jeremyshaw - Tuesday, March 30, 2021 - link

    Turing was at least consistently faster than their predecessors in the consumer space (and could still claim #1 gaming against its contemporaries). Just being #1, in and of itself, is reason enough to exist. Turing also didn't move the needle in price/perf, but it didn't regress, either. Rocket Lake does not have any real performance angle to hold onto, outside of the narrow AVX512 market (is it even a market?), and its price/perf is worse in many aspects (and that's just MSRP to MSRP, nevermind street prices).
  • AdamK47 - Tuesday, March 30, 2021 - link

    2080 Ti?

    What's going on over there at AnandTech?
  • Slash3 - Tuesday, March 30, 2021 - link

    That 2080 Ti is almost brand new to AT, too. Game tests until recently had been done with a GTX 1080. It's not really their wheelhouse at this point and I think they're more than aware. It's ok, the gap has long since been filled.
