Stock CPU Performance: Encoding Tests

With the rise of streaming, vlogs, and video content as a whole, encoding and transcoding tests are becoming ever more important. Not only do more home users and gamers need to convert video files into something more manageable, whether for streaming or archival purposes, but the servers that handle that output also juggle data and log files with constant compression and decompression. Our encoding tasks are focused around these important scenarios, with input from the community on the best implementation of real-world testing.

All of our benchmark results can also be found in our benchmark engine, Bench.

Handbrake 1.1.0: Streaming and Archival Video Transcoding

A popular open source tool, Handbrake is the anything-to-anything video conversion software that a number of people use as a reference point. The danger is always in version numbers and optimization: the latest versions of the software can take advantage of AVX-512 and OpenCL to accelerate certain types of transcoding and algorithms. The version we use here is a pure CPU play, with common transcoding variations.

We have split Handbrake up into several tests, using a Logitech C920 1080p60 native webcam recording (essentially a streamer recording) and converting it into two streaming formats and one archival format. The output settings used are below, with a scripted sketch of the conversions after the list:

  • 720p60 at 6000 kbps constant bit rate, fast setting, high profile
  • 1080p60 at 3500 kbps constant bit rate, faster setting, main profile
  • 1080p60 HEVC at 3500 kbps variable bit rate, fast setting, main profile
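
For readers who want a repeatable setup, here is a minimal sketch of the three jobs driven through HandBrakeCLI from Python. The flag names follow HandBrake's 1.x command-line help, but the source filename is a placeholder and HandBrake's -b flag sets an average bitrate (its closest analogue to the constant bitrate targets above), so treat this as an approximation of the methodology rather than our exact harness.

    # Hypothetical harness: three HandBrakeCLI transcodes of a webcam source.
    import subprocess
    import time

    SOURCE = "c920_recording.mp4"  # placeholder for the C920 1080p60 capture

    JOBS = [
        # (output file, encoder, height, bitrate kbps, preset, profile)
        ("out_720p60_x264.mp4",  "x264", 720,  6000, "fast",   "high"),
        ("out_1080p60_x264.mp4", "x264", 1080, 3500, "faster", "main"),
        ("out_1080p60_hevc.mp4", "x265", 1080, 3500, "fast",   "main"),
    ]

    for out, enc, height, kbps, preset, profile in JOBS:
        cmd = [
            "HandBrakeCLI", "-i", SOURCE, "-o", out,
            "-e", enc,                    # video encoder: x264 or x265
            "-b", str(kbps),              # average bitrate in kbps
            "-l", str(height),            # output height; width scales to fit
            "-r", "60", "--cfr",          # constant 60 fps output
            "--encoder-preset", preset,
            "--encoder-profile", profile,
        ]
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        print(f"{out}: {time.perf_counter() - start:.1f} s")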

[Graphs: Handbrake 1.1.0 - 720p60 x264 6000 kbps Fast; Handbrake 1.1.0 - 1080p60 x264 3500 kbps Faster; Handbrake 1.1.0 - 1080p60 HEVC 3500 kbps Fast]

7-zip v1805: Popular Open-Source Encoding Engine

Out of our compression/decompression tool tests, 7-zip is the most requested and comes with a built-in benchmark. For our test suite, we’ve pulled the latest version of the software and we run the benchmark from the command line, reporting the compression, decompression, and a combined score.
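
Since the benchmark is a documented command ("7z b"), scripting it is straightforward; the sketch below is ours, and the output parsing assumes the usual ratings table that 7-Zip prints, where the "Avr:" row carries the average compression and decompression ratings and "Tot:" the combined score.

    # Run 7-Zip's built-in benchmark and echo the rating lines.
    import subprocess

    result = subprocess.run(["7z", "b"], capture_output=True, text=True, check=True)

    # "Avr:" holds average compression (left) / decompression (right) ratings;
    # "Tot:" holds the combined totals. Layout assumed from typical 7z output.
    for line in result.stdout.splitlines():
        if line.lstrip().startswith(("Avr:", "Tot:")):
            print(line.strip())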

It is noted in this benchmark that the latest multi-die processors show distinctly bimodal performance between compression and decompression, performing well in one and badly in the other. There are also discussions around how the Windows scheduler places each thread on these designs. As we get more results, it will be interesting to see how this plays out.

Please note, if you plan to share the Compression graph, please include the Decompression one as well. Otherwise you're only presenting half the picture.

[Graphs: 7-Zip 1805 Compression; 7-Zip 1805 Decompression; 7-Zip 1805 Combined]

WinRAR 5.60b3: Archiving Tool

My compression tool of choice is often WinRAR: it was one of the first tools many of my generation used over two decades ago. The interface has not changed much, although the integration with Windows' right-click commands is always a plus. It has no built-in test, so we run a compression over a set directory containing over thirty 60-second video files and 2,000 small web-based files, at a normal compression rate.

WinRAR is variably threaded but also susceptible to caching, so in our test we run it ten times and take the average of the last five runs, leaving the result to reflect raw CPU compute performance.
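
A sketch of that run-ten, average-the-last-five approach is below, using WinRAR's documented command-line form "rar a -m3" (add to archive at normal compression). The directory name and iteration handling are assumptions to illustrate the method, since the exact harness is not published.

    # Hypothetical timing harness for the WinRAR compression test.
    import subprocess
    import time
    from pathlib import Path

    TEST_DIR = "test_set"     # ~30 x 60-second videos plus ~2000 small web files
    ARCHIVE = "bench_out.rar"
    timings = []

    for _ in range(10):
        Path(ARCHIVE).unlink(missing_ok=True)      # fresh archive every pass
        start = time.perf_counter()
        subprocess.run(["rar", "a", "-m3", ARCHIVE, TEST_DIR],
                       check=True, stdout=subprocess.DEVNULL)
        timings.append(time.perf_counter() - start)

    # Discard the first five passes as cache warm-up; average the rest.
    avg = sum(timings[5:]) / len(timings[5:])
    print(f"average of last five runs: {avg:.2f} s")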

[Graph: WinRAR 5.60b3]

AES Encryption: File Security

A number of platforms, particularly mobile devices, now offer file-system encryption by default in order to protect the contents. Windows-based devices have these options as well, often applied via BitLocker or third-party software. In our AES encryption test, we use the discontinued TrueCrypt for its built-in benchmark, which tests several encryption algorithms directly in memory.

The data we take from this test is the combined AES encrypt/decrypt performance, measured in gigabytes per second. The software does use the AES instructions on processors that offer hardware acceleration, but not AVX-512.
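
TrueCrypt's benchmark runs inside its own interface, so it cannot easily be scripted; as an illustration of the same idea, the sketch below measures in-memory AES encrypt/decrypt throughput with the third-party Python "cryptography" package, which likewise uses the hardware AES instructions where available. This mirrors the shape of the test, not TrueCrypt's own code path.

    # Analogous in-memory AES throughput test (not TrueCrypt's implementation).
    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                    # AES-256 key
    iv = os.urandom(16)
    buf = os.urandom(256 * 1024 * 1024)     # 256 MB buffer, all in memory

    cipher = Cipher(algorithms.AES(key), modes.CBC(iv))

    start = time.perf_counter()
    ct = cipher.encryptor().update(buf)     # one full-buffer encrypt pass
    enc_time = time.perf_counter() - start

    start = time.perf_counter()
    pt = cipher.decryptor().update(ct)      # one full-buffer decrypt pass
    dec_time = time.perf_counter() - start

    gb = len(buf) / 1e9
    print(f"encrypt: {gb / enc_time:.2f} GB/s, decrypt: {gb / dec_time:.2f} GB/s")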

[Graph: AES Encoding]

129 Comments

  • dgingeri - Saturday, January 26, 2019 - link

    With Intel recently releasing the "F" SKUs for processors that don't have integrated graphics, I would think this processor would be a Core i3-8121FU.
  • KOneJ - Sunday, January 27, 2019 - link

    ROFL, mate. Though a UF line-up honestly wouldn't surprise me with where MCMs, TSVs, yields, iGPUs, and core counts are seemingly headed.
  • Piotrek54321 - Saturday, January 26, 2019 - link

    I would love an article on how quantum mechanical effects have to be taken into account at such small nodes.
  • KOneJ - Sunday, January 27, 2019 - link

    I would love to see the mathematics of quantum mechanics cleaned up to be more elegant and less Newtonian in nature.
  • Rudde - Saturday, January 26, 2019 - link

    I looked into the transistor density of different nodes and particularly the claim that Intel 10nm will feature "100 million transistors per square millimeter."
    Intel seems to historically lag in transistor density. 22nm has ~8 million per mm², while the competing 28nm from GlobalFoundries has ~13 and TSMC has ~12.
    Moving on to 14nm, all foundries double their transistor density: Intel goes to 15M/mm², GF to 24 (on a node bought from Samsung), and TSMC's 16nm also to 24M/mm².
    TSMC's 7nm node has a density of ~40M/mm².
    Now Intel has made two statements (both found in the first page of the article):
    1. 100 million transistors per mm² or a 5.7x improvement.
    2. A 2.7x improvement in density over 14nm, which gives 55M/mm². 55M/mm² would be consistent with Intel's claim of beating TSMC's 7nm.
    Next, I'm assuming my calculations about Intel's transistor density are wrong, and that both of Intel's claims are true. In that case Intel's current 14nm would be 27M/mm². Of course, we can't assume my calculations for GF and TSMC are correct either, so we are left without any conclusion.
  • Rudde - Saturday, January 26, 2019 - link

    I jumped the gun and didn't proceed to page two, which explains a lot of the same things I tried to explain, but uses actual node data and not chip sizes.
  • smalM - Saturday, January 26, 2019 - link

    Page two doesn't use actual node data, it uses Intel propaganda ;-)
  • KOneJ - Sunday, January 27, 2019 - link

    Yep, they're not the only ones optimizing libraries. They're trying to muddle transistors with design compiling. While this is fair, it's not taking into account that others are working both halves of the problem as well. Clearly meant to be misleading.
  • sidm2k11 - Saturday, January 26, 2019 - link

    How is the fan noise on the PN60? Mine makes a pretty loud whine all the time, and temperatures regularly cross 80°C at full load... My 4010u Brix PC is whisper-quiet by comparison.
  • alacard - Saturday, January 26, 2019 - link

    Well that was a wonderfully intricate review. Thank you.
