CPU Tests: Microbenchmarks

Core-to-Core Latency

As the core counts of modern CPUs grow, we are reaching a point where the time to access one core from another is no longer a constant. Even before the advent of heterogeneous SoC designs, processors built on large rings or meshes can have different latencies to the nearest core compared to the furthest core. This rings especially true in multi-socket server environments.

But modern CPUs, even desktop and consumer CPUs, can have variable access latency to get to another core. For example, in the first generation Threadripper CPUs, we had four chips on the package, each with eight cores, and each with a different core-to-core latency depending on whether the access was on-die or off-die. This gets more complex with products like Lakefield, which has two different communication buses depending on which core is talking to which.

If you are a regular reader of AnandTech’s CPU reviews, you will recognize our core-to-core latency test. It’s a great way to show exactly how groups of cores are laid out on the silicon. This is a custom in-house test built by Andrei, and we know there are competing tests out there, but we feel ours is the most accurate measure of how quickly an access between two cores can happen.
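To illustrate the general idea (this is not Andrei’s in-house tool, just a minimal sketch of the common “ping-pong” approach), two threads can be pinned to specific cores and bounce a value through a shared atomic variable; half of the measured round-trip time approximates the one-way core-to-core latency. The core numbers, iteration count, and Linux-specific affinity call below are assumptions for the illustration.

// Minimal sketch of a core-to-core "ping-pong" latency probe (Linux, C++17).
// Two threads pinned to specific cores bounce a value through one std::atomic;
// half of the average round-trip time approximates the one-way latency.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <pthread.h>
#include <sched.h>

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    constexpr int kIters = 1'000'000;
    std::atomic<int> flag{0};

    std::thread responder([&] {
        pin_to_core(1);                                   // second core under test
        for (int i = 0; i < kIters; ++i) {
            while (flag.load(std::memory_order_acquire) != 1) { /* spin */ }
            flag.store(0, std::memory_order_release);     // answer the ping
        }
    });

    pin_to_core(0);                                       // first core under test
    const auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIters; ++i) {
        flag.store(1, std::memory_order_release);         // ping
        while (flag.load(std::memory_order_acquire) != 0) { /* spin */ }
    }
    const auto stop = std::chrono::steady_clock::now();
    responder.join();

    const double ns = std::chrono::duration<double, std::nano>(stop - start).count();
    std::printf("core 0 <-> core 1: ~%.1f ns one-way\n", ns / kIters / 2.0);
}

Sweeping the two core IDs over every pair of cores in a system produces the familiar latency matrix that these charts are built from.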

All three CPUs exhibit the same behaviour - one core seems to be given high priority, while the rest are not.

Frequency Ramping

Both AMD and Intel have, over the past few years, introduced features to their processors that speed up the transition from idle into a high-powered state. The effect of this is that users can reach peak performance sooner, but the biggest knock-on effect is on battery life in mobile devices, especially if a system can turbo up quickly and turbo down quickly, ensuring that it stays in the lowest and most efficient power state for as long as possible.

Intel’s technology is called SpeedShift, although SpeedShift was not enabled until Skylake.

One of the issues with this technology, though, is that sometimes the adjustments in frequency are so fast that software cannot detect them. If the frequency is changing on the order of microseconds, but your software only probes the frequency in milliseconds (or seconds), then quick changes will be missed. Not only that, but as an observer probing the frequency, you could be affecting the actual turbo performance. When the CPU changes frequency, it essentially has to pause all compute while it aligns the frequency of the whole core.

We wrote an extensive review analysis piece on this, called ‘Reaching for Turbo: Aligning Perception with AMD’s Frequency Metrics’, due to an issue where users were not observing the peak turbo speeds for AMD’s processors.

We got around the issue by making the frequency probing itself the workload that causes the turbo. The software is able to detect frequency adjustments on a microsecond scale, so we can see how well a system reaches its boost frequencies. Our Frequency Ramp tool has already been in use in a number of reviews.
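As a rough sketch of that idea (again, not the actual Frequency Ramp tool), the probe below times a short chain of dependent integer adds and infers an effective frequency from operations per unit of time, on the assumption that the chain retires at roughly one add per cycle. Because each sample only takes tens of microseconds, the loop itself is the load that triggers the ramp, and the ramp can be watched at a far finer granularity than OS frequency counters allow. The sample size, sample count, and the GCC/Clang-style asm barrier are assumptions of this illustration.

// Minimal sketch of a frequency-ramp probe: the measurement loop is the workload.
// Assumes the dependent add chain retires at roughly one add per cycle, so
// (adds / elapsed time) approximates the effective core frequency (C++17, g++/clang).
#include <chrono>
#include <cstdint>
#include <cstdio>

int main() {
    using clock = std::chrono::steady_clock;
    constexpr std::uint64_t kAddsPerSample = 200'000;      // ~50 us per sample at 4 GHz
    const auto t0 = clock::now();

    for (int sample = 0; sample < 2000; ++sample) {
        std::uint64_t x = 0;
        const auto a = clock::now();
        for (std::uint64_t i = 0; i < kAddsPerSample; ++i) {
            x += 1;
            asm volatile("" : "+r"(x));                    // keep the chain in a register
                                                           // and stop the compiler folding it
        }
        const auto b = clock::now();

        const double sec = std::chrono::duration<double>(b - a).count();
        const double ghz = kAddsPerSample / sec / 1e9;     // estimated core frequency
        const double ms  = std::chrono::duration<double, std::milli>(a - t0).count();
        std::printf("%8.3f ms  ~%.2f GHz\n", ms, ghz);     // elapsed time, inferred clock
    }
}

Plotting the inferred frequency against the elapsed-time column shows the ramp; an idle-to-turbo transition on the order of ~16 ms, as measured below, is easily resolved at this sample granularity.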

From an idle frequency of 800 MHz, it takes ~16 ms for Intel to boost to the top frequency for both the i9 and the i5. The i7 was most of the way there, but took an additional 10 ms or so.

Comments

  • Makste - Tuesday, April 6, 2021 - link

    I again have to agree with you on this. Especially with the cooler scenario, it is not easy to spot the detail, but you have managed to bring it to the surface. Rocket Lake is not a good upgrade option now that I look at it.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    (Sorry I messed up and forgot quotation marks in the previous post. 1st, 3rd, and 5th paragraphs are quotes from the article.)

    you wrote:
    ‘Rocket Lake on 14nm: The Best of a Bad Situation’

    I fixed it:
    ‘Rocket Lake on 14nm: Intel's Obsolete Node Produces Inferior CPU’

    ‘Intel is promoting that the new Cypress Cove core offers ‘up to a +19%’ instruction per clock (IPC) generational improvement over the cores used in Comet Lake, which are higher frequency variants of Skylake from 2015.’

    What is the performance per watt? What is the performance per decibel? How do those compare with AMD? Performance includes performance per watt and per decibel, whether Intel likes that or not.

    ‘Designing a mass-production silicon layout requires balancing overall die size with expected yields, expected retail costs, required profit margins, and final product performance. Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical, and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control. To that end, Intel finalised its design on eight cores.’

    Translation: Intel wanted to maximize margin by feeding us the ‘overclocked few cores’ design paradigm, the same thing AMD did with Radeon VII. It’s a cynical strategy when one has an inferior design. Just like Radeon VII, these run hot, loud, and underperform. AMD banked on enough people irrationally wanting to buy from ‘team red’ to sell those, while its real focus was on peddling Polaris forever™ + consoles in the GPU space. Plus, AMD sells to miners with designs like that one.

    ‘Intel has stated that in the future it will have cores designed for multiple process nodes at the same time, and so given Rocket Lake’s efficiency at the high frequencies, doesn’t this mean the experiment has failed? I say no, because it teaches Intel a lot in how it designs its silicon’

    This is bad spin. This is not an experimental project. This is a product being mass-produced to be sold to consumers.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    One thing many are missing, with all the debate about AVX-512, is the AVX2 performance per watt/decibel problem:

    'The rated TDP is 125 W, although we saw 160 W during a regular load, 225 W peaks with an AVX2 rendering load, and 292 W peak power with an AVX-512 compute load'

    Only 225 watts? How much power does AMD's stuff use with equivalent work completion speed?
  • Hifihedgehog - Thursday, April 1, 2021 - link

    "The spin also includes the testing, using a really loud high-CFM CPU cooler in the Intel and a different quieter one on the AMD."

    Keep whining... You'll eventually tire out.

    https://i.imgur.com/HZVC03T.png

    https://i.imgflip.com/53vqce.jpg
  • Makste - Tuesday, April 6, 2021 - link

    Isn't it too much for you to keep posting the same thing over and over?
  • Oxford Guy - Wednesday, March 31, 2021 - link

    The overclocking support page still doesn’t mention that Intel recently discontinued the overclocking warranty, something that had been available since Sandy Bridge or so. Why the continued silence on this?

    ‘On the Overclocking Enhancement side of things, this is perhaps where it gets a bit nuanced.’

    How is it an ‘enhancement’ when the chips are already system-melting hot? There isn't much that's nuanced about Intel’s sudden elimination of the overclocking warranty.

    ‘Overall, it’s a performance plus. It makes sense for the users that can also manage the thermals. AMD caught a wind with the feature when it moved to TSMC’s 7nm. I have a feeling that Intel will have to shift to a new manufacturing node to get the best out of ABT’

    It also helps when people use extremely loud, very high-CFM coolers for their tests. Intel pioneered the giant hidden fridge, but deafness-inducing air cooling is another option.

    How much performance will buyers find in the various hearing aids they'll be in the market for? There aren't any good treatments for tinnitus, btw. That's a benefit one gets for life.

    ‘Intel uses one published value for sustained performance, and an unpublished ‘recommended’ value for turbo performance, the latter of which is routinely ignored by motherboard manufacturers.’

    It’s also routinely ignored by Intel since it peddles its deceptive TDP.

    ‘This is showing the full test, and we can see that the higher performance Intel processors do get the job done quicker. However, the AMD Ryzen 7 processor is still the lowest power of them all, and finishes the quickest. By our estimates, the AMD processor is twice as efficient as the Core i9 in this test.’

    Is that with the super-loud very high CFM cooler on the Intel and the smaller weaker Noctua on the AMD? If so, how about a noise comparison? Performance per decibel?

    ‘The cooler we’re using on this test is arguably the best air cooling on the market – a 1.8 kilogram full copper ThermalRight Ultra Extreme, paired with a 170 CFM high static pressure fan from Silverstone.’

    The same publication that kneecapped AMD’s Zen 1 and Zen 2 by refusing to enable XMP for RAM, on the very dubious claim that most enthusiasts don’t enter the BIOS to switch it on. Most people are going to have that big, loud cooler? Does Intel bundle it? Does it provide a coupon? Does the manual say you need a cooler from a specific list?
  • BushLin - Wednesday, March 31, 2021 - link

    I won't argue with the rest of your assessment but given these CPUs are essentially factory overclocked close to their limits, the only people who'd benefit from an overclocking warranty are probably a handful of benchmark freaks doing suicide runs on LN2.
  • Oxford Guy - Thursday, April 1, 2021 - link

    That’s why I said the word ‘enhancement’ seems questionable.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    ‘Anyone wanting a new GPU has to actively pay attention to stock levels, or drive to a local store for when a delivery arrives.’

    You forgot the ‘pay the scalper price at retail’ part. MSI, for instance, was the first to raise its prices across the board to eBay scalper prices and is now threatening to raise them again.

    ‘In a time where we have limited GPUs available, I can very much see users going all out on the CPU/memory side of the equation, perhaps spending a bit extra on the CPU, while they wait for the graphics market to come back into play. After all, who really wants to pay $1300 for an RTX 3070 right now?’

    That is the worst possible way to deal with planned obsolescence.

    14nm is already obsolete. Now you’re adding in waiting a very long time to get a GPU, making your already obsolete CPU really obsolete by the time you can get one. If you’re waiting for reasonable prices on the GPUs you’re looking at, that’s what, more than a year of waiting?

    ‘Intel’s Rocket Lake as a backported processor design has worked’

    No. It’s a failure. The only reasons Intel will be able to sell it is because AMD is production-constrained and because there isn’t enough competition in the x86 space to force AMD to cut the pricing of the 5000 line.

    Intel also cynically hobbled the CPU by starving it of cores to increase profit for itself, banking that people will buy it anyway. It’s the desktop equivalent of Radeon VII. Small die + way too high clock to ‘compensate’ + too-high price = banking on consumer foolishness to sell them (or mining, in the case of AMD). AVX-512 isn’t really going to sell these like mining sold the Radeon VII.

    ‘However, with the GPU market being so terrible, users could jump an extra $100 and get 50% more AMD cores.’

    No mention of power consumption, heat, and noise. Just ‘cores’ and price tag.
  • Oxford Guy - Wednesday, March 31, 2021 - link

    'Intel could easily make a 20+ core processor with these Cypress Cove cores, however the die size would be too large to be economical'

    Citation needed.

    And, economical for Intel or the customer?

    Besides, going from 8 cores to 20+ is using hyperbole to distract from the facts.

    'and perhaps the power consumption when all the cores are loaded would necessitate a severe reduction in frequency to keep the power under control.'

    The few cores + excessive clocks to 'compensate' strategy is a purely cynical one. It always causes inferior performance per watt. It always causes more noise.

    So, Intel is not only trying to feed us its very obsolete 14nm node, it's trying to do it in the most cynical manner it can: by trying to use 8 cores as the equivalent of what it used to peddle exclusively for the desktop market: quads.

    It thinks it can keep its big margins up by segmenting this much, hoping people will be fooled into thinking the bad performance per watt from too-high clocks is just because of 14nm — not because it's cranking too few cores too high to save itself a few bucks.

    Intel could offer more cores and implement a turbo scheme with a gaming mode that would keep power under control for gaming while maximizing performance. The extra cores would presumably be able to do more work per watt by keeping clocks and voltage more within the optimal range.

    But no... it would rather give people the illusion of a gaming-optimized part ('8 cores ought to be enough for anyone') when it's only optimized for its margin.
