Forcing HPET On, Plus Spectre and Meltdown Patches

Based on my extreme overclocking roots back in the day, my automated benchmark scripts for the past year or so have forced HPET through the OS. Given that AMD’s guidance is now that it doesn’t matter for performance, and Intel hasn’t mentioned the issue in relation to CPU reviews, having HPET enabled was the immediate way to ensure that every benchmark result was consistent, and would not be interfered with by clock drift or by any motherboard manufacturer’s in-OS tweaks. This was a fundamental part of my overclocking roots – if I want to test a CPU, I want to make absolutely sure that the motherboard is not causing any issues. It really gets up my nose when, after a series of CPU testing, it turns out that the motherboard had an issue – keeping HPET on was designed to stop any timing issues should they arise.
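For context, forcing HPET through the OS on Windows is done with a standard boot-configuration flag. A minimal sketch of the relevant commands (run from an elevated command prompt; a reboot is required for the change to apply):

    :: Force Windows to use HPET as the only timer source
    bcdedit /set useplatformclock true

    :: Revert to the default timer selection (un-force HPET)
    bcdedit /deletevalue useplatformclock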

From our results over that time, if HPET was having any effect, it went unnoticed: our results were broadly similar to others, and each of the products fell in line with where it was expected. Over the several review cycles we had, a couple of issues cropped up that we couldn’t explain, such as our Skylake-X gaming numbers being low, or the first batch of Ryzen gaming tests, where the data was thrown out for being obviously wrong; we never managed to narrow down the issue.

Enter our Ryzen 2000 series numbers in the review last week, where what had changed was the order of results. The way that forcing HPET was affecting results seemingly shifted when we bundled in the Spectre and Meltdown patches, which come with their own performance decrease on some systems. Pulling one set of results down further than expected set off some alarm bells and warranted closer examination.

HPET, by the way it is invoked, is programmed through a memory-mapped IO window, exposed via ACPI, onto the circuit found on the chipset. Accessing it is very much an IO command, one of the types of commands that fall under the realm of those affected by the Spectre and Meltdown patches. This would imply that any software requiring HPET access (or all timing software, if HPET is forced) would see its performance reduced even further when these patches are applied, further compounding the issue.
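To put a number on that overhead, a minimal micro-benchmark sketch along the following lines (our illustration, not the exact harness we use) times the average cost of a QueryPerformanceCounter call. With the default timer this is typically tens of nanoseconds; with HPET forced, every call becomes a read of that memory-mapped register, and the Spectre/Meltdown patches add further cost on top:

    // qpc_cost.cpp - rough cost of one QueryPerformanceCounter call (Windows)
    // Build: cl /O2 qpc_cost.cpp
    #include <windows.h>
    #include <cstdio>

    int main() {
        LARGE_INTEGER freq, start, end, tmp;
        QueryPerformanceFrequency(&freq);      // ticks per second of the active timer

        const int kCalls = 1'000'000;
        QueryPerformanceCounter(&start);
        for (int i = 0; i < kCalls; ++i)
            QueryPerformanceCounter(&tmp);     // the call under test
        QueryPerformanceCounter(&end);

        double elapsed_ns = (end.QuadPart - start.QuadPart) * 1e9 / freq.QuadPart;
        printf("QPC frequency: %lld Hz\n", freq.QuadPart);
        printf("Approx. cost per call: %.1f ns\n", elapsed_ns / kCalls);
        return 0;
    }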

It Affects AMD and Intel Differently: Productivity

So far we have done some quick initial re-testing on the two key processors in this debate, the Ryzen 7 2700X and the Intel Core i7-8700K. These are the two most talked-about processors at this time, as they are closely matched in performance and price, with each having benefits over the other in certain areas. For our new tests, we have enabled the Spectre/Meltdown patches on both systems – HPET is ‘on’ in the BIOS, but left as ‘default’ in the operating system.

For our productivity tests, on the Intel system, there was an overall +3.3% gain when un-forcing HPET in the OS:

The biggest gains here were in the web tests, a couple of the renderers, WinRAR (memory bound), and PCMark 10. Everything else was pretty much identical. Our compile tests gave us three very odd consecutive numbers, so we are looking at those results separately.

On the AMD system, the productivity tests showed an overall +0.3% gain when un-forcing HPET in the OS:

This is a lower gain, with the biggest rise coming from PCMark10’s video conference test to the tune of +16%. The compile test results were identical, and a lot of tests were within 1-2%.

It Affects AMD and Intel Differently: Gaming

The bigger changes happen with the gaming results, which is the reason why we embarked on this audit to decipher our initial results. Games rely on timers to ensure that data, pacing, and tick rates are all sufficient for frames to be delivered in the correct manner – the balance here is between waiting on timers to make sure everything is correct, or merely processing the data and hoping it comes out in more or less the right order: having too fine a control can cause performance delays. In fact, this is what we observe.

With our GTX 1080 and AMD’s Ryzen 7 2700X, we saw minor gains across the board, and it was clear that 1080p was the main beneficiary over 4K. The 10%+ adjustments came only in Civilization 6 and Rise of the Tomb Raider.

Including the 99th percentile data, removing HPET gave an overall boost of around 4%, although the biggest gains were limited to specific titles at the smaller resolutions, which would be important for any user relying on fast frame rates at lower resolutions.

The Intel side of the equation is where it gets particularly messy. We rechecked these results several times, but the data was quite clear.

As with the AMD results, the biggest beneficiaries of disabling HPET were the 1080p tests. Civilization 6 and Rise of the Tomb Raider had substantial performance boosts (also in 4K testing), with Grand Theft Auto observing an additional +27%. By comparison, Shadow of Mordor was ‘only’ +6%.

Given that the difference between the two sets of data is related to the timer, one could postulate that the more granular the timer, the more effect it can have: on both of our systems, the QPC timer is set for 3.61 MHz as a baseline, but the HPET frequencies are quite different. The AMD system has an HPET timer at 14.32 MHz (~4x), while the Intel system has an HPET timer at 24.00 MHz (~6.6x). It is clear that the higher granularity of the Intel timer is causing substantially more pipeline delays – moving from a tick-to-tick delay of 277 nanoseconds to 70 nanoseconds to 41.7 nanoseconds crosses the boundary from being slower than a CPU-to-DRAM access to almost encroaching on a CPU-to-L3 cache access, which could be one of the reasons for the results we are seeing, along with the nature of how the HPET timer works.
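The tick periods quoted above follow directly from those frequencies. As a quick sketch of the arithmetic (a minimal example using the frequencies reported on our two systems; the labels are ours):

    // tick_period.cpp - tick-to-tick delay implied by each timer frequency
    #include <cstdio>

    int main() {
        // Frequencies as reported on our test systems (see text above)
        const double freqs_mhz[] = { 3.61, 14.32, 24.00 };
        const char*  labels[]    = { "QPC baseline", "AMD HPET", "Intel HPET" };
        for (int i = 0; i < 3; ++i)
            printf("%-12s %5.2f MHz -> %6.1f ns per tick\n",
                   labels[i], freqs_mhz[i], 1000.0 / freqs_mhz[i]);
        // Prints ~277.0 ns, ~69.8 ns, and ~41.7 ns respectively
        return 0;
    }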

There is also another aspect to gaming that does not appear with standard CPU tests: depending on how the engine is programmed, some game developers like to keep track of many of the functions in flight, in order to either adjust features on the fly or to gather internal metrics. For anyone who has worked extensively in a debug mode and had to churn through the output, it is basically that. If a title shipped with a number of those internal metrics still running in the background, this is exactly the sort of issue that having HPET enabled could stumble upon – if there is a timing mismatch (based on the way HPET works) and delays are introduced due to these mismatches, it could easily slow down the system and reduce the frame rate.
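As a back-of-the-envelope illustration of how this compounds (every figure below is an assumption for the sake of the example – call counts and per-call costs will vary by title and system):

    // timer_budget.cpp - share of a 60 FPS frame spent just reading the timer
    // (illustrative only; assumed call counts and per-call costs, not measurements)
    #include <cstdio>

    int main() {
        const double frame_ms        = 1000.0 / 60.0;  // 16.7 ms budget at 60 FPS
        const int    calls_per_frame = 500;            // hypothetical metric/pacing queries
        const double cost_ns[] = { 30.0, 800.0, 2000.0 };  // assumed per-call cost
        const char*  cases[]   = { "default timer", "HPET forced", "HPET + patches" };

        for (int i = 0; i < 3; ++i) {
            double spent_ms = calls_per_frame * cost_ns[i] / 1e6;
            printf("%-15s %6.3f ms per frame (%4.1f%% of budget)\n",
                   cases[i], spent_ms, 100.0 * spent_ms / frame_ms);
        }
        return 0;
    }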

242 Comments

  • Spunjji - Thursday, April 26, 2018 - link

    Is there any other kind? Either you're at the budget end where everything is GPU limited or at the high-end where not spending a decent amount on a monitor to go with your £500 GPU is a crying shame.

    There's a niche where Intel has a clear win, and that's people running 240Hz 1080p rigs. For most folks with the money to spend, 2560x1440 (or an ultra-wide equivalent) @ 144Hz is where it's at for the ideal compromise between picture quality, smoothness and cost. There are a lot of monitors hitting those specs right now.
  • eva02langley - Wednesday, April 25, 2018 - link

    I mentioned in the review that 1080p benchmarks need to go... now it is even more true with HPET.

    Kudos on this, guys; it is really interesting to read.
  • DanNeely - Wednesday, April 25, 2018 - link

    >93% of Steam gamers' main displays are at 1080p or lower.

    If the new review suite split which GPUs were run at which resolutions, dropping 1080p from the high-end card section might be reasonable. OTOH, with 240Hz 1080p screens being a thing, there's still an enthusiast market for 1080p combined with a flagship GPU.
  • IndianaKrom - Wednesday, April 25, 2018 - link

    * Raises hand, that's me, someone with a GTX 1080 and 240Hz 1920x1080 display.

    The industry seems obsessed with throwing higher and higher spatial resolution at gamers when what I really want is better temporal resolution.
  • eva02langley - Thursday, April 26, 2018 - link

    1080p @ 60Hz is a non-issue because we are talking about an RX 580/GTX 1060 or below. At that point the GPU is the bottleneck.

    It only affects 1080p @ 144Hz with a GTX 1080/Vega 64 minimum, which is really < 2%.

    You are really the exception, however the 1080p CPU bottleneck focuses on you entirely without even taking into consideration other use cases.
  • Holliday75 - Thursday, April 26, 2018 - link

    I am willing to bet that 95%+ of Steam users have no clue what we are talking about and don't care.
  • mapesdhs - Sunday, May 6, 2018 - link

    IndianaKrom, are you aware that using high(er) frequency monitors retrains your brain's vision system so that you become tuned to that higher refresh rate? New Scientist had an article about this recently; gamers who use high frequency monitors can't use normal monitors anymore, even if previously they would not have found 60Hz bothersome at all. In other words, you're chasing goalposts that will simply keep moving by virtue of using ever higher refresh rates. I mean blimey, 240Hz is higher than the typical "analogue" vision refresh of a bird. :D

    IMO these high frequency monitors are bad for gaming in general, because they're changing product review conclusions, with authors accepting that huge fps numbers are normal (even though the audience that would care is minimal). Meanwhile, game devs are not going to create significantly more complex worlds if it risks new titles showing more typical frame rates in the 30s to 80s, as authors would then refer to that as slow, perhaps criticise the 3D engine, moan that gamers with HF monitors will be disappointed, and I doubt GPU vendors would like it either. We're creating a marketing catch-22 with all this, doubly so as VR imposes some similar pressures.

    I don't mind FPS fans wanting HF monitors in order to be on the cutting edge of competitiveness, but it shouldn't mean reviews become biased towards that particular market in the way they discuss the data (especially at 1080p), and it's bad if it's having a detrimental effect on new game development (I could be wrong about the latter btw, but I strongly suspect it's true from all I've read and heard).

    We need a sanity check with frame rates in GPU reviews: if a game is doing more than 80 or 90fps at 1080p, then the conclusion emphasis should be that said GPU is more than enough for most users at that resolution; if it's well over 100fps then it's overkill. Just look at the way 8700K 1080p results are described in recent reviews, much is made of differences between various CPUs when the frame rates are already enormous. Competitive FPS gamers with HF monitors might care, but for the vast majority of gamers the differences are meaningless.
  • Luckz - Monday, May 14, 2018 - link

    So the real question is if someone first exposed to 100/120/144 Hz immediately squirms in delight, or if they only vomit in disgust months later when they see a 60 Hz screen again. That should be the decider.
  • Spunjji - Thursday, April 26, 2018 - link

    1080p is popular in the Steam survey where, incidentally, so is low-end GPU and CPU hardware. Most of those displays are 60Hz and an awful lot of them are in laptops. Pointing at the Steam surveys to indicate where high-end CPU reviews should focus their stats is misguided.

    I'm still not certain that testing CPUs in a way that artificially amplifies their differences in a non-CPU-reliant workload is really the way to go.
  • ElvenLemming - Wednesday, April 25, 2018 - link

    You can just ignore the 1080p benchmarks if you don't think they're meaningful. As DanNeely said, 93% of surveyed Steam users are 1080p or lower, so I'd be shocked if more than a handful of review sites get rid of it.
