Last week, we published our AMD 2nd Gen Ryzen Deep Dive, covering our testing and analysis of the latest generation of processors to come out of AMD. Highlights of the new products included better cache latencies, faster memory support, an increase in IPC, an overall performance gain over the first-generation products, new power management methods for turbo frequencies, and very competitive pricing.

In our review, we made some changes to our testing. The differences in testing for this review were two-fold: the jump from Windows 10 Pro RS2 to Windows 10 Pro RS3, and the inclusion of the Spectre and Meltdown patches to mitigate the potential security issues. These patches are still being rolled out by motherboard manufacturers, with the latest platforms first in that queue. For our review, we tested the new processors with the latest OS updates and microcode updates, and also re-tested the Intel Coffee Lake processors. Due to time restrictions, the older Ryzen 1000-series results were reused.

Due to the tight deadline of our testing and results, we pushed both our CPU and gaming tests live without as much formal analysis as we typically like to do. All the parts were competitive; however, it quickly became clear that some of our results did not align with those from other media. Initially we were under the impression that this was a result of the Spectre and Meltdown (or 'Smeltdown') updates, as we were one of the few media outlets to go back and perform retesting under the new standard.

Nonetheless, we decided to conduct an extensive internal audit of our testing to ensure that our results were accurate and completely reproducible – or, failing that, to understand why our results differed. No stone was left unturned: hardware, software, firmware, tweaks, and code. As a result of that process we believe we have found the reason for our testing being so different from the results of others, and interestingly, it opened a sizable can of worms we were not expecting.


An extract from our Power testing script

What our testing identified is that the source of the issue actually comes down to timers. Windows uses timers for many things, such as synchronization and ensuring linearity, and there are sets of software relating to monitoring and overclocking that require the timer with the most granularity - specifically, they often require the High Precision Event Timer (HPET). HPET is very important, especially when it comes to determining if 'one second' of PC time is the equivalent of 'one second' of real-world time - the way that Windows 8 and Windows 10 implement their timing strategy, compared to Windows 7, means that in rare circumstances the system time can be liable to clock drift over time. This is often highly dependent on how the motherboard manufacturer implements certain settings. HPET is a motherboard-level timer that, as the name implies, offers a very high level of timer precision beyond what other PC timers can provide, and can mitigate this issue. This timer has been shipping in PCs for over a decade, and under normal circumstances it should be nothing but a boon to Windows.
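As a rough, platform-agnostic illustration of what "timer granularity" means (a sketch using Python's portable clock APIs, not the Windows internals themselves), you can ask the runtime what resolution each of its clocks reports. On Windows, `perf_counter` is backed by QueryPerformanceCounter, which the OS may service from the invariant TSC or from HPET depending on system settings:

```python
import time

# Inspect the clocks the runtime exposes and the resolution each reports.
# Different clocks trade off granularity, monotonicity, and query cost.
for name in ("time", "monotonic", "perf_counter"):
    info = time.get_clock_info(name)
    print(f"{name:12s} resolution={info.resolution:.2e}s "
          f"monotonic={info.monotonic}")
```

On most systems `perf_counter` reports sub-microsecond resolution, which is the class of precision HPET was designed to guarantee.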

However, it sadly appears that reality diverges from theory – sometimes extensively so – and that our CPU benchmarks for the Ryzen 2000-series review were caught in the middle. Instead of being a benefit to testing, what our investigation found is that when HPET is forced as the sole system timer, it can sometimes be a hindrance to system performance, particularly gaming performance. Worse, because HPET is implemented differently on different platforms, the actual impact of enabling it isn't even consistent across vendors, meaning that the effects of using HPET can vary from system to system as well as from implementation to implementation.
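The cost of a single timer query is easy to get a feel for. Below is a minimal sketch (in Python, so the absolute numbers are inflated by interpreter overhead) that times a tight loop of back-to-back clock reads to estimate the per-call cost. On a system forced onto HPET, each read must trap into the kernel and out to the chipset, so this figure rises sharply:

```python
import time

def timer_call_cost(n=1_000_000):
    # Average cost of a single clock read, estimated over n
    # back-to-back reads of the highest-resolution clock available.
    start = time.perf_counter()
    for _ in range(n):
        time.perf_counter()
    return (time.perf_counter() - start) / n

print(f"~{timer_call_cost() * 1e9:.0f} ns per timer query")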

And that brings us to the state of HPET, our Ryzen 2000-series review, and CPU benchmarking in general. As we'll cover in the next few pages, HPET plays a very necessary and often very beneficial role in system timer accuracy; a role important enough that it's not desirable to completely disable HPET – and indeed in many systems this isn't even possible – while certain classes of software, such as overclocking and monitoring tools, may even require it. However, for a few different reasons it can also be a drain on system performance, and as a result HPET shouldn't always be used. So let's dive into the subject of hardware timers, precision, Smeltdown, and how it all came together to make a perfect storm of volatility for our Ryzen 2000-series review.

A Timely Re-Discovery

242 Comments


  • bbertram - Thursday, April 26, 2018 - link

    I think you will see a lot of websites testing these combinations and re-validating their results. How do we trust any benchmarks now? Going to be some fun reading in the coming weeks.
  • Ryan Smith - Thursday, April 26, 2018 - link

    "Please take this comment into account when deciding if you're going to be flipping HPET switches with every game on both CPU brands."

    Thankfully, we have no need to flip any switches for HPET. The new testing protocol is that we're sticking with the default OS settings. Which means HPET is available to the OS, but the system isn't forced to use it over all other timers.

    "And hey, I didn't see it, but did you do any comparisons on if GPU maker makes a difference to the HPET impact on CPU maker?"

    We've done a couple of tests internally. Right now it doesn't look like it makes a difference. Not that we'd expect it to. The impact of HPET is on the CPU.
  • HeyYou,It'sMe - Thursday, April 26, 2018 - link

    Even before the patches, using the HPET timer causes severe system overhead. This is a known issue that is exacerbated slightly by the patches, but there isn't a massive increase in overhead. AnandTech should post HPET overhead before and after the patches. You will find the impact is much the same.
  • eva02langley - Thursday, April 26, 2018 - link

    Also, HPET seems to have a higher impact on old games. Maybe it was the way older engines were developed.

    Also, are we sure HPET is not just messing with the FPS data since the timing could be off?
  • peevee - Thursday, April 26, 2018 - link

    Great illustration of the phrase that "it's better not to know than know something which isn't so".

    A standard 1 kHz RTC is good enough for any real performance measurement where the measured tasks run for at least a second or two (otherwise such performance just does not matter in the PC context). Multiple measurements, plus eliminating the false precision that comes from averaging the results, would eliminate all errors significant for the task.

    When you have to change the default system configuration to run the tests, the tests reflect non-default configurations that nobody is running at home or at work, and as such are simply irrelevant.
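The repeated-measurement approach this comment describes is straightforward to sketch; the `bench` helper below is purely illustrative, not AnandTech's actual harness. For workloads lasting a second or more, even a coarse millisecond tick is a sub-0.1% error source, and repetition exposes run-to-run noise directly:

```python
import statistics
import time

def bench(fn, repeats=5):
    # Run fn several times and summarize the wall-clock samples;
    # the standard deviation shows run-to-run noise that a single
    # measurement (however precise its timer) would hide.
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - t0)
    return statistics.mean(samples), statistics.stdev(samples)

mean_s, stdev_s = bench(lambda: sum(range(1_000_000)))
print(f"mean={mean_s:.4f}s stdev={stdev_s:.4f}s")
```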
  • pogostick - Thursday, April 26, 2018 - link

    I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down? How? A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock. Is the only way to guarantee that you are using it to force it on for the whole system? What do these differences look like in other OSes? There are way too many questions unanswered here.

    Is it not more likely that using non-HPET timers allows the platform to essentially create its own definition of what constitutes "1 second"? Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?

    These systems need to be tested with a common clock. Whether that is some specialized pcie device, or a network clock, or a new motherboard standard that offers special pins to an external clock source, or whatever, is to be determined elsewhere. All boards need to be using the same clock for testing.
  • Ryan Smith - Thursday, April 26, 2018 - link

    "I don't understand how using HPET on Intel could have such a drastic effect. Just the fact that it is available slows the system down?"

    It's not the availability that's the problem. The issue is that the OS is forced to use it for all timer calls.

    "How?"

    Relative to the other timers, such as QPC, HPET is a very, very expensive timer to check. It requires going to the OS kernel and the kernel in turn going to the chipset, which is quite slow and time-consuming compared to any other timer check.

    "A benchmark only needs to access this timer once at the beginning and once at the end. There is no need for incessant polling of the clock."

    Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently.

    "Is the only way to guarantee that you are using it to force it on for the whole system?"

    As a user, generally speaking: yes. Otherwise a program will use the timer the developer has programmed it to use.

    "Wouldn't using a timer based on the core tend to stretch out the definition of "1 second" over a longer period if the core becomes heavily taxed or heated?"

    No. Modern invariant timers are very good about keeping accurate time, and are very cheap to access.
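To make the per-frame polling point above concrete, here is a toy fixed-timestep loop (a common game-physics pattern, sketched in Python; real engines use the same shape with native timer calls). The clock is read at least once per frame, so an expensive timer read is paid over and over:

```python
import time

def run_physics(sim_seconds=0.05, step=0.01):
    # Toy fixed-timestep simulation loop: the clock is read every frame,
    # so a slow timer (e.g. a forced-HPET read) taxes every single frame.
    acc = 0.0
    simulated = 0.0
    steps = 0
    prev = time.perf_counter()
    while simulated < sim_seconds:
        now = time.perf_counter()  # at least one timer query per frame
        acc += now - prev
        prev = now
        while acc >= step and simulated < sim_seconds:
            acc -= step
            simulated += step
            steps += 1
    return steps

print(run_physics())  # 5 fixed steps for 0.05s simulated at 0.01s/step
```

At 60+ frames per second, with physics, audio, and input code each querying the clock, the number of timer reads per second adds up quickly.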
  • pogostick - Friday, April 27, 2018 - link

    Thank you.
  • risa2000 - Monday, April 30, 2018 - link

    "Games have internal physics simulations and such. Which require a timer to see how much time has elapsed since the last step of the simulation. So the timer can actually end up being checked quite frequently."

    Would it be too difficult to set up a profiler session and count how many times HPET is called, and eventually even from where?

    To produce such an impact it must be a load of calls and I still cannot imagine why so many.

    Next, why does Intel suffer so much from forced HPET compared to the new AMD chips?
  • Kaihekoa - Friday, April 27, 2018 - link

    Excuses excuses.
