CPU Benchmark Performance: Encoding and Compression

One of the more interesting aspects of modern processors is encoding performance. This covers two main areas: encryption/decryption for secure data transfer, and video transcoding from one video format to another.

In the encrypt/decrypt scenario, how data is transferred and by what mechanism is pertinent to on-the-fly encryption of sensitive data - a process that modern devices increasingly rely on for software security.

Video transcoding as a tool to adjust the quality, file size, and resolution of a video file has boomed in recent years: providing the optimum video for a device before consumption, for example, or letting game streamers upload the output from their video camera in real time. As we move into live 3D video, this task will only become more demanding, and it turns out that the performance of certain algorithms is a function of the input/output formats of the content.
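A transcode like the ones benchmarked below is typically scripted around an encoder front-end. As a rough sketch of how the usual knobs (resolution, bitrate, codec) map to a command line - using ffmpeg's flags here, with illustrative values rather than Handbrake's actual presets:

```python
def ffmpeg_transcode_args(src, dst, height, video_bitrate_k, codec="libx264"):
    """Build an ffmpeg command line that rescales and re-encodes a clip.

    `scale=-2:H` keeps the aspect ratio while forcing an even width,
    which H.264/HEVC encoders require.
    """
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",    # rescale to the target height
        "-c:v", codec,                  # software video encoder
        "-b:v", f"{video_bitrate_k}k",  # target video bitrate
        "-c:a", "aac",                  # re-encode audio
        dst,
    ]
```

For example, a 720p/4 Mbps target would be `ffmpeg_transcode_args("in.mp4", "out.mp4", 720, 4000)`; the file names and bitrates are hypothetical.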

We are using DDR4 memory at the following settings:

  • DDR4-3200

Encoding

(5-1a) Handbrake 1.3.2, 1080p30 H264 to 480p Discord

(5-1b) Handbrake 1.3.2, 1080p30 H264 to 720p YouTube

(5-1c) Handbrake 1.3.2, 1080p30 H264 to 4K60 HEVC

(5-2a) 7-Zip 1900 Compression

(5-2b) 7-Zip 1900 Decompression

(5-2c) 7-Zip 1900 Combined Score

(5-3) AES Encoding

(5-4) WinRAR 5.90 Test, 3477 files, 1.96 GB
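For context on the compression tests, 7-Zip's benchmark exercises its LZMA codec. A minimal Python sketch of the same kind of measurement - single-threaded, and only a loose analogue of the real benchmark - might look like:

```python
import lzma
import time

def lzma_roundtrip_mbps(data: bytes) -> tuple:
    """Rough compress/decompress throughput in MB/s using LZMA,
    the same family of codec that 7-Zip's benchmark measures."""
    t0 = time.perf_counter()
    packed = lzma.compress(data, preset=6)   # default-ish compression level
    t1 = time.perf_counter()
    unpacked = lzma.decompress(packed)
    t2 = time.perf_counter()
    assert unpacked == data                  # sanity-check the round trip
    mb = len(data) / 1e6
    return mb / (t1 - t0), mb / (t2 - t1)
```

Real benchmark scores depend heavily on dictionary size, thread count, and the test corpus, so numbers from a sketch like this are not comparable to the charted results.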

Comments

  • Gavin Bonshor - Thursday, June 30, 2022 - link

We test at JEDEC to compare apples to apples with previous reviews. The recommendation is based on my personal experience and what AMD recommends.
  • HarryVoyager - Thursday, June 30, 2022 - link

Having done an upgrade from a 5800X to a 5800X3D, one of the interesting things about the 5800X3D is that it's largely RAM insensitive. You can get pretty much the same performance out of DDR4-2366 as you can out of DDR4-3600+.

    And it's not that it is under-performing. The things that it beats the 5800X at, it still beats it at, even when the 5800X is running very fast, low-latency RAM.

    The upshot is, if you're on an AM4 platform with stock RAM, you actually get a lot of improvement from the 5800X3D in its favored applications.
  • Lucky Stripes 99 - Saturday, July 2, 2022 - link

    This is why I hope to see this extra cache come to the APU series. My 4650G is very RAM speed sensitive on the GPU side. Problem is, if you start spending a bunch of cash on faster system memory to boost GPU speeds, it doesn't take long before a discrete video card becomes the better choice.
  • Oxford Guy - Saturday, July 2, 2022 - link

    The better way to test is to use both the optimal RAM and the slow JEDEC RAM.
  • sonofgodfrey - Thursday, June 30, 2022 - link

    Wow, it's been a while since I looked at these gaming benchmarks. These FPS figures are way past the point of "minimum" necessary. I submit two conclusions:
    1) At some point you just have to say the game is playable and just check that box.
    2) The benchmarks need to reflect this result.

    If I were doing these tests, I would probably just set a low limit for FPS and note how much (% wise) of the benchmark run was below that level. If it is 0%, then that CPU/GPU/whatever combination just gets a "pass", if not it gets a "fail" (and you could dig into the numbers to see how much it failed).

    Based on this criterion, if I had to buy one of these processors for gaming, I would go with the least costly processor here, the i5-12600K. It does the job just fine, and I can spend the extra $210 on a better GPU/memory/SSD.
    (Note: I'm not buying one of these processors, I don't like Alder Lake for other reasons, and this is not an endorsement of Alder Lake)
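The pass/fail scheme described in the comment above could be sketched as follows (the 60 FPS floor is an arbitrary placeholder, not a recommendation):

```python
def percent_below(fps_samples, floor_fps=60.0):
    """Percentage of sampled frames that fell below the chosen FPS floor."""
    below = sum(1 for f in fps_samples if f < floor_fps)
    return 100.0 * below / len(fps_samples)

def verdict(fps_samples, floor_fps=60.0):
    """'pass' if no frame dipped below the floor, otherwise 'fail'."""
    return "pass" if percent_below(fps_samples, floor_fps) == 0.0 else "fail"
```

A failing run could then be inspected via `percent_below` to see how badly it missed the floor.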
  • lmcd - Thursday, June 30, 2022 - link

    Part of the intrigue is that it can hit the minimums and 1% lows for smooth play with 120Hz/144Hz screens.
  • hfm - Friday, July 1, 2022 - link

    I agree. I'm using a 5600X + 3080 + 32GB dual channel dual rank and my 3080 is still the bottleneck most of the time at the resolution I play all my games in, 1440p@144Hz
  • mode_13h - Saturday, July 2, 2022 - link

    > These FPS times are way past the point of "minimum" necessary.

    You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    And the relevance of such testing is because future games will probably lean more heavily on the CPU than current games. So, even at higher resolutions, we should expect to see future game performance affected by one's choice of a CPU, today, to a greater extent than current games are.

    So, in essence, what you're seeing is somewhat artificial, but that doesn't make it irrelevant.

    > I would probably just set a low limit for FPS and
    > note how much (% wise) of the benchmark run was below that level.

    Good luck getting consensus on what represents a desirable framerate. I think the best bet is to measure mean + 99th percentile and then let people decide for themselves what's good enough.
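The mean plus 99th-percentile approach mentioned above can be computed directly from per-frame render times; a minimal sketch, assuming frame times are captured in milliseconds:

```python
import statistics

def frame_stats(frame_times_ms):
    """Summarize a benchmark run from per-frame render times (ms).

    Returns (mean_fps, p99_fps): the average framerate, and the framerate
    at the 99th-percentile frame time (a "1% low"-style metric).
    """
    mean_ms = statistics.fmean(frame_times_ms)
    ordered = sorted(frame_times_ms)
    # 99th-percentile frame time: the time below which 99% of frames fall.
    idx = min(len(ordered) - 1, round(0.99 * (len(ordered) - 1)))
    p99_ms = ordered[idx]
    return 1000.0 / mean_ms, 1000.0 / p99_ms
```

Readers can then apply whatever threshold they personally consider "good enough" to both numbers.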
  • sonofgodfrey - Tuesday, July 5, 2022 - link

    >Good luck getting consensus on what represents a desirable framerate.

    You would need to do some research (blind A-B testing) to see what people can actually detect.
    There are probably dozens of human factors PhD thesis about this in the last 20 years.
    I suspect that anything above 60 Hz is going to be the limit for most people (after all, a majority of movies are still shot at 24 FPS).

    >You're missing the point. They test at low resolutions because those tend to be CPU-bound. This exaggerates the difference between different CPUs.

    I can see your logic, but what I see is this:
    1) Low resolution test is CPU bound: At several hundred FPS on some of these tests they are not CPU bound, and the few percent difference is no real difference.
    2) Predictor of future performance: Probably not. Future games, if they are going to push the CPU, will a) use even more GPU offloading (e.g. ray tracing, physics modeling), b) use more CPUs in parallel, or c) use instruction set additions that don't exist or are not available yet (AVX-512, AI acceleration). IOW, your benchmark isn't measuring the right "thing", and you can't know what the right thing is until it happens.
  • mode_13h - Thursday, July 7, 2022 - link

    > You would need to do some research (blind A-B testing) to see what people can actually detect.

    Obviously not going to happen, on a site like this. Furthermore, readers have their own opinions of what framerates they want and trying to convince them otherwise is probably a thankless errand.

    > I suspect that anything above 60 Hz is going to be the limit for most people
    > (after all, a majority of movies are still shot at 24 FPS).

    I can tell you from personal experience this isn't true. But, it's also not an absolute. You can't divorce the refresh rate from other properties of the display, like whether the pixel illumination is fixed or strobed.

    BTW, 24 fps movies look horrible to me. 24 fps is something they settled on way back when film was heavy, bulky, and expensive. And digital cinema cameras are quite likely used at higher framerates, if only so they can avoid judder when re-targeting to 30 or 60 Hz targets.

    > At several hundred FPS on some of these tests they are not CPU bound,

    When different CPUs produce different framerates with the same GPU, then you know the CPU is a limiting factor to some degree.

    > and the few percent difference is no real difference.

    The point of benchmarking is to quantify performance. If the difference is only a few percent, then so be it. We need data in order to tell us that. Without actually testing, we wouldn't know.

    > Predictor of future performance: Probably not.

    That's a pretty bold prediction. I say: do the testing, report the data, and let people decide for themselves whether they think they'll need more CPU headroom for future games.

    > Future games if they are going to push the CPU will use
    > a) even more GPU offloading (e.g. ray-tracing, physics modeling),

    With the exception of ray-tracing, which can *only* be done on the GPU, then why do you think games aren't already offloading as much as possible to the GPU?

    > b) use more CPUs in parallel

    That starts to get a bit tricky, as you have increasing numbers of cores. The more you try to divide up the work involved in rendering a frame, the more overhead you incur. Contrast that to a CPU with faster single-thread performance, and you know all of that additional performance will end up reducing the CPU portion of frame preparation. So, as nice as parallelism is, there are practical challenges when trying to scale up realtime tasks to use ever increasing numbers of cores.
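The scaling ceiling described above is what Amdahl's law quantifies; a quick back-of-the-envelope sketch (the parallel fractions below are purely illustrative):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: the overall speedup when only part of the
    per-frame CPU work can be split across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)
```

If 80% of frame preparation parallelizes, eight cores deliver only about a 3.3x speedup, because the serial 20% dominates; faster single-thread performance, by contrast, shortens every part of the frame.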

    > c) use instruction set additions that don't exist or are not available yet (AVX 512, AI accelleration).

    Okay, but if you're buying a CPU today that you want to use for several years, you need to decide which is best from the available choices. Even if future CPUs have those features and future games can use them, that doesn't help me while I'm still using the CPU I bought today. And games will continue to work on "legacy" CPUs for a long time.

    > IOW, you're benchmark isn't measuring the right "thing",
    > and you can't know what the right thing is until it happens.

    Let's be clear: it's not *my* benchmark. I'm just a reader.

    Also, video games aren't new, and the gaming scene changes somewhat incrementally, especially given how many years it now takes to develop them. So, tests done today should have similar relevance over the next few years as tests from a few years ago have for gaming performance today.

    I'll grant you that it would be nice to have data to support this: if someone would re-benchmark modern games with older CPUs and compare the results from those benchmarks to ones taken when the CPUs first launched.
