CPU Encoding Tests

One of the interesting elements of modern processors is encoding performance. This covers encryption/decryption as well as video transcoding from one format to another. In the encrypt/decrypt scenario, the relevant use is on-the-fly encryption of sensitive data, something more and more modern devices are leaning on for software security. Video transcoding, as a tool to adjust the quality, file size, and resolution of a video file, has boomed in recent years, whether that is preparing the optimum video for a device before consumption or letting game streamers upload the output of their video camera in real time. As we move into live 3D video, this task will only get more strenuous, and it turns out that the performance of certain algorithms is a function of the content going in and coming out.

All of our benchmark results can also be found in our benchmark engine, Bench.

7-Zip 9.2

One of the freeware compression tools that offers good scaling performance between processors is 7-Zip. It runs under an open-source licence, and is a fast and easy-to-use tool for power users. We run the benchmark mode via the command line for four loops and take the output score.
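
For those wanting to replicate the run at home, a minimal sketch of the command-line loop is below, assuming the 7z executable is on the path; the parsing of the final 'Tot:' line for the combined MIPS rating is our assumption about a typical build's output, not our exact script.

import subprocess

# Minimal sketch: run 7-Zip's built-in benchmark ('7z b <loops>') and pull the
# combined rating. The executable name and the 'Tot:' output parsing are
# assumptions about a typical 7-Zip build, not the exact script used here.
def run_7zip_benchmark(loops: int = 4, exe: str = "7z") -> float:
    result = subprocess.run([exe, "b", str(loops)],
                            capture_output=True, text=True, check=True)
    tot_lines = [l for l in result.stdout.splitlines() if l.strip().startswith("Tot:")]
    return float(tot_lines[-1].split()[-1])  # last column of the summary line

if __name__ == "__main__":
    print("7-Zip combined score: {:.0f} MIPS".format(run_7zip_benchmark()))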

Encoding: 7-Zip Combined Score

Encoding: 7-Zip Compression

Encoding: 7-Zip Decompression

At the request of a few users, we've gone back through our saved benchmark data and pulled out the separate compression and decompression numbers for 7-Zip. AMD takes a clear win in decompression, and by a long way.

WinRAR 5.40

For the 2017 test suite, we move to the latest version of WinRAR for our compression test. WinRAR in some quarters is more user friendly than 7-Zip, hence its inclusion. Rather than use a benchmark mode as we did with 7-Zip, here we take a set of files representative of a generic stack (33 video files totaling 1.37 GB, and 2834 smaller website files in 370 folders totaling 150 MB) of compressible and incompressible formats. The results shown are the time taken to encode the files. Due to DRAM caching, we run the test 10 times and take the average of the last five runs, when the benchmark is in a steady state.
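
As a rough sketch of the methodology (the 'rar a' command line and the 'test_files' directory are placeholders rather than our exact setup), the timing loop looks something like this:

import subprocess
import time

# Sketch of the WinRAR timing methodology: compress the same file set ten times
# and average the last five runs, once DRAM caching has reached a steady state.
# The 'rar a' invocation and the source directory are placeholder assumptions.
def time_compression(src: str = "test_files", runs: int = 10) -> float:
    times = []
    for i in range(runs):
        start = time.perf_counter()
        subprocess.run(["rar", "a", "-ep1", "run_{}.rar".format(i), src],
                       capture_output=True, check=True)
        times.append(time.perf_counter() - start)
    steady = times[-5:]  # keep only the last five, steady-state runs
    return sum(steady) / len(steady)

if __name__ == "__main__":
    print("WinRAR steady-state average: {:.2f} s".format(time_compression()))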

Encoding: WinRAR 5.40

WinRAR encoding is another test that doesn't scale especially well with thread count: after only a few threads, most of its multi-threaded performance gains have already been achieved. That doesn't help Threadripper, and it is an outright hindrance in Creator Mode.

AES Encoding

Algorithms using AES coding have spread far and wide as a ubiquitous tool for encryption. Again, this is another CPU-limited test, and modern CPUs have special AES pathways to accelerate performance. We often see scaling in both frequency and core count with this benchmark. We use the latest version of TrueCrypt and run its benchmark mode over 1 GB of in-DRAM data. Results shown are the GB/s average of encryption and decryption.
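
TrueCrypt's benchmark runs from within the application itself; purely as an illustration of what the number represents, the sketch below times AES encryption and decryption over an in-memory buffer using Python's third-party cryptography package (our substitution, with CTR mode standing in for TrueCrypt's XTS) and reports the average throughput.

import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Illustrative stand-in for the TrueCrypt AES benchmark: time AES-256 encryption
# and decryption of in-memory data and report the average throughput in GB/s.
# The buffer size, chunking, and CTR mode are assumptions; TrueCrypt uses XTS.
def aes_throughput(total_bytes: int = 1 << 30, chunk: int = 1 << 24) -> float:
    key, nonce = os.urandom(32), os.urandom(16)
    data = os.urandom(chunk)
    rates = []
    for make_ctx in (lambda c: c.encryptor(), lambda c: c.decryptor()):
        ctx = make_ctx(Cipher(algorithms.AES(key), modes.CTR(nonce)))
        start = time.perf_counter()
        for _ in range(total_bytes // chunk):
            ctx.update(data)
        rates.append(total_bytes / (time.perf_counter() - start) / 1e9)
    return sum(rates) / 2  # average of encryption and decryption rates

if __name__ == "__main__":
    print("AES average throughput: {:.2f} GB/s".format(aes_throughput()))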

Encoding: AES

HandBrake v1.0.2 H264 and HEVC

As mentioned above, video transcoding (both encode and decode) is a hot topic in performance metrics as more and more content is being created. The first consideration is the standard in which the video is encoded: it can be lossless or lossy, trade performance for file size, trade quality for file size, or all of the above, and higher encoding effort can help accelerate decoding rates. Alongside Google's favorite codec, VP9, there are two others taking hold: H264, the older codec, is practically everywhere and is optimized for 1080p video, while HEVC (or H265) aims to provide the same quality as H264 at a lower file size (or better quality for the same size). HEVC is important as 4K is streamed over the air, meaning fewer bits need to be transferred for the same quality of content.

Handbrake is a favored tool for transcoding, and so our test regime covers three areas.

Low Quality/Resolution H264: Here we transcode a 640x266 H264 rip of a two-hour film, and change the encoding from the Main profile to the High profile, using the very-fast preset.

Encoding: Handbrake H264 (LQ)

High Quality/Resolution H264: A similar test, but this time we take a ten-minute double 4K (3840x4320) file running at 60 Hz and transcode from Main to High, using the very-fast preset.

Encoding: Handbrake H264 (HQ)

HEVC Test: Using the same video as in HQ, we keep the resolution and change the codec of the original video from 4K60 H264 to 4K60 HEVC.

Encoding: Handbrake HEVC (4K)
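
For anyone wanting to approximate the three runs above, the HandBrakeCLI command lines look roughly like the sketch below; the file names are placeholders and the exact flags of our automated suite may differ.

import subprocess

# Approximate HandBrakeCLI equivalents of the three transcode tests above.
# Input/output file names are placeholders and the flag set is an assumption;
# the automated suite's exact command lines may differ.
TESTS = {
    "H264_LQ": ["-i", "film_640x266.mp4", "-o", "lq.mp4",
                "-e", "x264", "--encoder-preset", "veryfast",
                "--encoder-profile", "high"],
    "H264_HQ": ["-i", "clip_3840x4320_60fps.mp4", "-o", "hq.mp4",
                "-e", "x264", "--encoder-preset", "veryfast",
                "--encoder-profile", "high"],
    "HEVC_4K": ["-i", "clip_3840x4320_60fps.mp4", "-o", "hevc.mp4",
                "-e", "x265"],
}

for name, args in TESTS.items():
    print("Running {}...".format(name))
    subprocess.run(["HandBrakeCLI"] + args, check=True)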

In the HQ H264 test, AMD pushes ahead with both of its processors, while disabling SMT severely limits the 1950X due to the loss of threads. As we move to HEVC, though, the 1950X and the 7900X end up trading blows.

347 Comments

  • Ian Cutress - Thursday, August 10, 2017 - link

    Anand hasn't worked at the website for a few years now. The author (me) is clearly stated at the top.

    Just think about what you're saying. If I was in Intel's pocket, we wouldn't be being sampled by AMD, period. If they were having major beef with how we were reporting, I'd either be blacklisted or consistently on a call every time there's been an AMD product launch (and there's been a fair few this year).

    I've always let the results do the talking, and steered clear of hype generated by others online. We've gone in-depth into how things are done the way they are, and the positives and negatives of the methods behind each action (rather than just ignoring the why). We've run the tests, and been honest about our results, and considered the market for the product being reviewed. My background is scientific, and the scientific method is applied rigorously and thoroughly to the product and the target market. If I see bullshit, I point it out and have done many times in the past.

    I'm not exactly sure what your problem is - you state that the review is 'slanted journalism', but fail to give examples. We've posted ALL of our review data that we have, and we have a benchmark database for anyone that wants to go through all the data at any time. That benchmark database is continually being updated with new CPUs and new tests. Feel free to draw your own conclusions if you don't agree with what is written.

    Just note that a couple of weeks ago I was being called a shill for AMD. A couple of weeks before that, a shill for Intel. A couple before that... Nonetheless both companies still keep us on their sampling lists, on their PR lists, they ask us questions, they answer our questions. Editorial is a mile away from anything ad related and the people I deal with at both companies are not the ones dealing with our ad teams anyway. I wouldn't have it any other way.
  • MajGenRelativity - Thursday, August 10, 2017 - link

    I personally always enjoy reading your reviews Ian. Even though they don't always reach the conclusions I hoped they would reach before reading, you have the evidence and benchmarks to back it up. Keep up the good work!
  • Diji1 - Thursday, August 10, 2017 - link

    Agreed!
  • Zstream - Thursday, August 10, 2017 - link

    For me, it isn't about "scientific benchmarking", it's about what benchmarks are used and what story is being told. I think I, along with many others, would never buy a Threadripper to open a single .pdf. I could be wrong, but I don't think that's the target audience Intel or AMD is aiming for.

    I mean, why not forgo the .pdf and other benchmarks that are really useless for this product and add multi-threaded use cases. For instance, why not test how many VM's and I/O is received, or launching a couple VM's, running a SQL DB benchmark, and gaming at the same time?

    It could just be me, but I'm not going to buy a 7900x or 1950x for opening up .pdf files, or test SunSpider/Kraken lol. Hopefully we didn't include those benchmarks to tell a story, as mentioned above.

    We're going to be compiling, 3D rendering with multiple GPUs, running multiple VMs, all while multi-tasking with other apps.

    My 2 cents.
  • DanNeely - Thursday, August 10, 2017 - link

    Single threaded use cases aren't why people buy really wide CPUs. But performing badly in them, since they represent a lot of ordinary basic usage, can be a reason not to buy one. Also running the same benches on all products allows for them all to be compared readily vs having to hunt for benches covering the specific pair you're interested in.

    VM type benchmarks are more Johan's area since that's a traditional server workload. OTOH there's a decent amount of overlap with developer workloads there too so adding it now that we've got a compile test might not be a bad idea. On the gripping hand, any new benchmarks need to be fully automated so Ian can push an easy button to collect data while he works on analysis of results. Also the value of any new benchmark needs to be weighed against how much it slows the entire benching run down, and how much time rerunning it on a large number of existing platforms will take to generate a comparison set.
  • iwod - Thursday, August 10, 2017 - link

    It really depends on use case. 20% slower on PDF opening? I don't care, because the time has reached diminishing returns and Intel needs to be MUCH faster for this to be a UX problem.

    But I think at $999 Intel has a strong case for its i9. But factoring in the MB AMD is still cheaper. Not sure if that is mentioned in the article.

    Also note Intel is on their third iteration of 14nm, against a new 14nm from GloFo for AMD.

    I am very excited for 7nm Zen 2 coming next year. I hope all the software and compiler as well as optimisation has time to catch up for Zen.
  • Zstream - Thursday, August 10, 2017 - link

    I won't get into an argument, but I, and many of my friends who are on the developer side of the house, have been waiting for this review, and it doesn't provide me with any useful information. I understand it might be Johan's wheelhouse, but come on... opening a damn .pdf file, and testing SunSpider/Kraken/gaming benchmarks? That won't provide anyone interested in either CPU any validation of purchase. I'm not trying to be salty, I just want some more damn details vs. trying to put both vendors in a good light.
  • Ian Cutress - Thursday, August 10, 2017 - link

    Rather than have 20 different tests for each set of different CPUs and very minimal overlap, we have a giant glove that has all the tests for every CPU in a single script. So 80 test points, rather than 4x20. The idea is that there are benchmarks for everyone, so you can ignore the ones that don't matter, rather than expect 100% of the benchmarks to matter (e.g. if you care about five tests, does it matter to you if the tests are published alongside 75 other tests, or do they have to be the only five tests in the review?). It's not a case of trying to put both vendors in a good light, it's a case of this is a universal test suite.
  • Zstream - Thursday, August 10, 2017 - link

    Well, show me a database benchmark, virtual machine benchmark, 3dmax benchmark, blender benchmark and I'll shutty ;)

    It's hard for me to look at this review outside of a gamer's perspective, which I'm not. Sorry, just the way I see it. I'll wait for more pro-consumer benchmarks?
  • Johan Steyn - Thursday, August 10, 2017 - link

    This is exactly my point as well. Why on earth so much focus on single threaded tests and games, since we all knew from way back TR was not going to be a winner here. Where are all the other benches as you mention. Oh, no, this will have Intel look bad!!!!!
