The Intel Core i9-9900K at 95W: Fixing The Power for SFF
by Ian Cutress on November 29, 2018 8:00 AM EST

CPU Performance: Encoding Tests
With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only do more home users and gamers need to convert video files into something more manageable for streaming or archival purposes, but the servers that handle that output also shuttle data and log files around with compression and decompression. Our encoding tasks are focused on these important scenarios, with input from the community on the best implementation of real-world testing.
All of our benchmark results can also be found in our benchmark engine, Bench.
Handbrake 1.1.0: Streaming and Archival Video Transcoding
A popular open-source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger always lies in version numbers and optimization: the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.
We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording) and converting it into two types of streaming formats and one for archival. The output settings used are:
- 720p60 at 6000 kbps constant bit rate, fast setting, high profile
- 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
- 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile
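For readers who want to reproduce something close to these runs from the command line, the three output settings above map roughly onto HandBrakeCLI argument lists as sketched below. The flag names (`-e`, `-b`, `--encoder-preset`, `--encoder-profile`) come from HandBrakeCLI's documented interface, but the exact correspondence to our in-suite presets, and the file names used, are assumptions for illustration.

```python
# Hypothetical sketch: building HandBrakeCLI command lists for the three
# transcode targets listed above. File names and the precise preset
# mapping are assumptions, not the article's exact harness.
def handbrake_args(src, dst, encoder, height, kbps, preset, profile):
    """Return a HandBrakeCLI command list for one transcode pass."""
    return [
        "HandBrakeCLI",
        "-i", src, "-o", dst,
        "-e", encoder,              # "x264" for H.264, "x265" for HEVC
        "-b", str(kbps),            # average bitrate target, in kbps
        "--encoder-preset", preset,
        "--encoder-profile", profile,
        "--height", str(height),
        "-r", "60",                 # keep the webcam source's 60 fps
    ]

# The three test configurations described in the list above:
tests = [
    handbrake_args("c920.mp4", "720p60.mp4",  "x264", 720,  6000, "fast",   "high"),
    handbrake_args("c920.mp4", "1080p60.mp4", "x264", 1080, 3500, "faster", "main"),
    handbrake_args("c920.mp4", "hevc.mp4",    "x265", 1080, 3500, "fast",   "main"),
]
```

Each list can be handed to `subprocess.run()` to launch an actual transcode, assuming HandBrakeCLI is installed.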
Encoding is a good example where performance decreases by a noticeable margin (10%+), although perhaps not as much as you might think. In all of our tests, however, the 95W mode again pulls the 9900K down to the level of a 9700K. This pattern holds through all of our encoding tests.
7-zip v1805: Popular Open-Source Encoding Engine
Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.
It is noted in this benchmark that the latest multi-die processors have very bi-modal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows scheduler is placing each thread across the dies. As we get more results, it will be interesting to see how this plays out.
Please note, if you plan to share out the Compression graph, please include the Decompression one. Otherwise you’re only presenting half a picture.
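The 7-zip benchmark's core idea, timing compression and decompression separately and then combining them, can be illustrated with Python's standard-library `lzma` module, which uses the same LZMA algorithm family that 7-Zip defaults to. This is a minimal methodological sketch, not the 7-Zip benchmark itself, and the sample data is an assumption.

```python
import lzma
import time

def lzma_roundtrip_rates(data: bytes) -> tuple:
    """Return (compress MB/s, decompress MB/s) for one pass over `data`."""
    t0 = time.perf_counter()
    packed = lzma.compress(data)
    t1 = time.perf_counter()
    unpacked = lzma.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == data            # sanity check: lossless round trip
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)

# Highly compressible sample input, standing in for server log files.
sample = b"timestamp=000 level=INFO msg=ok\n" * 50_000
c_rate, d_rate = lzma_roundtrip_rates(sample)
print(f"compress {c_rate:.1f} MB/s, decompress {d_rate:.1f} MB/s")
```

As with the real benchmark, the two directions stress the CPU very differently, which is why both graphs belong together.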
WinRAR 5.60b3: Archiving Tool
My compression tool of choice is often WinRAR, having been one of the first tools a number of my generation used over two decades ago. The interface has not changed much, although the integration with Windows right-click commands is always a plus. It has no built-in test, so we run a compression over a set directory containing over thirty 60-second video files and 2000 small web-based files at a normal compression rate.
WinRAR is variable threaded but also susceptible to caching, so in our test we run it 10 times and take the average of the last five, leaving the test purely for raw CPU compute performance.
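The run-ten-keep-five discipline described above can be written as a small timing harness: the early passes warm the OS file cache, and only the later, cache-warm passes are averaged so the number reflects raw CPU compute. The sketch below assumes any callable `workload` standing in for the WinRAR compression job.

```python
import statistics
import time

def cached_average(workload, runs=10, keep=5):
    """Time `workload` `runs` times; return the mean of the last `keep`.

    Discarding the early runs lets file-cache warm-up settle before any
    timing that counts, leaving a CPU-bound measurement.
    """
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        workload()
        times.append(time.perf_counter() - t0)
    return statistics.mean(times[-keep:])

# Example with a trivial CPU-bound stand-in workload:
avg = cached_average(lambda: sum(range(100_000)))
```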
AES Encryption: File Security
A number of platforms, particularly mobile devices, now offer encryption by default with file systems in order to protect the contents. Windows-based devices have these options as well, often provided by BitLocker or third-party software. In our AES encryption test, we used the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.
The data we take for this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use AES instructions on processors that offer hardware AES acceleration, however not AVX-512.
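The shape of a TrueCrypt-style in-memory benchmark, transform a buffer and divide bytes by elapsed time, can be sketched with the standard library alone. Python's stdlib has no AES primitive, so the transform below is a toy XOR cipher used purely to show how a GB/s figure is derived; a real measurement would swap in an actual AES implementation, where AES-NI-capable CPUs pull far ahead.

```python
import time

def throughput_gbps(transform, data: bytes) -> float:
    """GB/s for one in-memory pass of `transform` over `data`."""
    t0 = time.perf_counter()
    transform(data)
    dt = time.perf_counter() - t0
    return len(data) / dt / 1e9

KEY = bytes(range(16))

def xor_cipher(buf: bytes) -> bytes:
    # Toy symmetric transform: XOR each byte with a repeating 16-byte key.
    # Applying it twice restores the original buffer, like encrypt/decrypt.
    return bytes(b ^ KEY[i % 16] for i, b in enumerate(buf))

buf = bytes(1 << 20)              # 1 MiB of zeros as the test buffer
rate = throughput_gbps(xor_cipher, buf)
```

The benchmark figure reported in the review is the same ratio, just taken over a hardware-accelerated AES pass instead of this stand-in.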
101 Comments
Hul8 - Friday, November 30, 2018 - link
The fact that all motherboard vendors do the exact same thing could lead one to draw the conclusion that the practice is actually mandated and suggested by Intel - unofficially of course.

Higher benchmark results will look good especially for casual readers (who only look at certain performance graphs and skip the power consumption numbers), all the while allowing Intel to market them as "95 W" parts.
Alexvrb - Friday, November 30, 2018 - link
If Intel didn't like this practice they could hardcode behavior in the CPU itself. Oh wait, they DO... and they allow this because it makes them bench better. Meanwhile look at their cheaper locked "95W" models, I bet you won't see them auto-overclocking to 150W+ even with the board defaulting to "unlimited" TDP.

Gastec - Wednesday, June 19, 2019 - link
It should be ILLEGAL for motherboard makers to go out of Intel's specifications by default. All overclocking should be entirely the responsibility of the user.

rsandru - Thursday, November 29, 2018 - link
It's not capping, it's running the CPU according to the Intel datasheet specification. Operating the component beyond specification is usually called overclocking, which is nice and all but doesn't allow an unbiased comparison of the different products.
LTC8K6 - Thursday, November 29, 2018 - link
Why not clamp it to the Intel spec?

TheinsanegamerN - Thursday, November 29, 2018 - link
Because motherboards don't do that; they are letting the 9900K run wild.

Alexvrb - Friday, November 30, 2018 - link
With Intel's blessing. If Intel wasn't onboard, they'd clamp the behavior on-chip, and you'd have to manually overclock to override TDP for any length of time (for unlocked chips, anyway).

Anyway my prediction is that if Intel continues this practice, AMD just starts following suit more and more as time goes on. We'll see.
djayjp - Thursday, November 29, 2018 - link
So many of these tests would run better (faster and with much greater efficiency) on a highly parallel GPU instead.

PeachNCream - Thursday, November 29, 2018 - link
You may have missed the point of the article.

melgross - Thursday, November 29, 2018 - link
What I find interesting about all of this is that with mobile ARM chips the exact same characteristics are called throttling instead. Possibly we should get these naming conventions together? Either x86 chips throttle, as mobile ARM chips do, or mobile ARM chips have turbo mode too.