CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different forms as well, from 3D rendering through rasterization, as in games, to ray tracing, and it tests the ability of the software to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standard generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate-based result is typically easier to compare visually than a time to complete.
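As a rough sketch of how a harness like this aggregates the output - the executable name and output format below are assumptions for illustration, not the actual tool the developer supplied:

```python
import re
import statistics
import subprocess

RUNS = 6

def run_once() -> float:
    """Run one pass of the (hypothetical) command-line benchmark and
    parse a rays-per-second figure from its stdout."""
    out = subprocess.run(
        ["corona-cli-benchmark"],  # assumed binary name
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"([\d,.]+)\s*rays/s", out)  # assumed output format
    if match is None:
        raise RuntimeError("no rays/s figure found in benchmark output")
    return float(match.group(1).replace(",", ""))

# Report the mean rays-per-second across six runs, as in the article.
results = [run_once() for _ in range(RUNS)]
print(f"average: {statistics.mean(results):,.0f} rays/s over {RUNS} runs")
```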

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Interestingly, both 9900KS settings performed slightly worse than the 9900K here, which you wouldn't expect given the higher all-core turbo. It would appear that something else is the bottleneck in this test.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down the Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU-only mode - and measure the time to complete the render.
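For reference, a minimal sketch of that invocation using Blender's documented background-render flags; the scene filename is taken from the benchmark package, and the timing wrapper is illustrative rather than our exact harness:

```python
import subprocess
import time

# Blender's documented CLI flags: -b renders without the GUI,
# -f 1 renders frame 1 of the scene.
cmd = ["blender", "-b", "bmw27_cpu.blend", "-f", "1"]

start = time.perf_counter()
subprocess.run(cmd, check=True)
elapsed = time.perf_counter() - start
print(f"bmw27 (CPU) render completed in {elapsed:.1f} s")
```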

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

All of the 9900 parts and settings perform roughly the same as one another; however, the PL2 255W setting on the 9900KS does give it a small ~5% advantage over the standard 9900K.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.
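In other words, the reported score is effectively cumulative rays traced divided by elapsed time; a toy illustration of the arithmetic with made-up numbers:

```python
# LuxMark's score is in effect a cumulative average: total rays traced
# over the two-minute window divided by elapsed time. Numbers here are
# invented purely to show the unit conversion.
total_rays_traced = 480_000_000  # hypothetical rays after the full run
elapsed_seconds = 120            # the two-minute benchmark window
score_krays_per_s = total_rays_traced / elapsed_seconds / 1_000
print(f"{score_krays_per_s:,.0f} krays/s")  # -> 4,000 krays/s
```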

LuxMark v3.1 C++

Both 9900KS settings perform equally well here, and show a sizeable jump over the standard 9900K.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool; it was in a state of relative hibernation until AMD released its Zen processors, at which point both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-core benchmark, called from the command line.
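A minimal sketch of that invocation, assuming a Unix build where the '-benchmark' switch runs the standard benchmark scene (the exact switch can vary by build and platform):

```python
import subprocess
import time

# POV-Ray's built-in benchmark renders a fixed standard scene;
# here the whole run is timed externally.
start = time.perf_counter()
subprocess.run(["povray", "-benchmark"], check=True)
print(f"POV-Ray benchmark finished in {time.perf_counter() - start:.1f} s")
```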

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

One of the biggest differences between the two power settings is in POV-Ray, with a marked frequency difference. In fact, the 159W setting on the 9900KS puts it below our standard settings for the 9900K, which likely benefited from a big default turbo budget on the board it was tested on at the time.

Comments

  • Agent Smith - Friday, November 1, 2019 - link

    Only a one year warranty with this CPU, reduced from 3yrs. So it’s marginally faster, uses more power, offers no gaming advantages, and its price hike doesn’t justify the performance gain and warranty disadvantage over the 9900K.

    ... and the 3950x is about to arrive. Mmm?
  • willis936 - Friday, November 1, 2019 - link

    Counter-Strike really needs to be added to the benchmarks. It’s just silly how useless these gaming benchmarks are; there is virtually nothing separating any of the processors. How can you recommend this chip for gaming when your data shows that a processor half the price is just as good? Test the real scenarios in which people would actually want to use this chip.
  • Xyler94 - Friday, November 1, 2019 - link

    It's more that you need a specific set of circumstances these days to see a difference in gaming that's bigger than margin of error.

    You need at least a 2080, but preferably a 2080ti
    You need absolutely nothing else running on the computer other than OS, Game and launcher
    You need the resolution to be set at 1080p
    You need the quality to be at medium to high.

    Then you can see differences. CS:GO shows nice differences... but there's no monitor in the world that can display 400 to 500FPS, so yeah... AnandTech still uses a GTX 1080, which bottlenecks well before any modern CPU does; that's why you see no differences.
  • willis936 - Friday, November 1, 2019 - link

    CS:GO is a proper use case. It isn’t graphically intense, and people regularly play at 1440p120. Shaving milliseconds off input-to-display latency matters. I won’t go into an in-depth analysis of why, but imagine human response time has a Gaussian distribution and whoever responds first wins. Even if the mean response time is 150 ms, if the standard deviation is 20 ms and your input-to-display latency is 50 ms, then there are gains to cutting 20, 10, even 5 ms off of it (a quick simulation below illustrates this).

    And yes, more fps does reduce input latency, even in cases where the monitor refresh rate is lower than the fps.

    https://youtu.be/hjWSRTYV8e0
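    A quick Monte Carlo sketch of that argument - the 150 ms mean and 20 ms standard deviation are the numbers above, and the "whoever responds first wins" duel is a deliberate simplification:

    ```python
    import random

    # Two players with identical Gaussian reaction times (mean 150 ms,
    # std dev 20 ms); player A carries extra fixed input-to-display
    # latency. Whoever's total response lands first wins the duel.
    def win_rate(extra_latency_ms: float, trials: int = 200_000) -> float:
        wins = 0
        for _ in range(trials):
            a = random.gauss(150, 20) + extra_latency_ms
            b = random.gauss(150, 20)
            if a < b:
                wins += 1
        return wins / trials

    for extra in (0, 5, 10, 20):
        print(f"+{extra:2d} ms latency -> win rate {win_rate(extra):.1%}")
    ```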
  • Xyler94 - Tuesday, November 5, 2019 - link

    If you visually can't react fast enough, it doesn't matter how quickly the game can take an input; you're still limited by the information presented to you. 240Hz is the fastest you can go, and 400FPS vs 450FPS isn't gonna win you tournaments.

    CS:GO is not a valid test, as there's more to gaming than FPS. Input lag is more about the drivers and peripherals, and there's even lag between your monitor and GPU to consider. But go on, pretend an extra 50FPS at 400+ makes that huge of a difference.
  • solnyshok - Friday, November 1, 2019 - link

    No matter what GHz, buying a 14nm/PCIE3 chip/mobo just before 10nm/PCIE4 comes to the market... Seriously? Wait another 6 months.
  • mattkiss - Friday, November 1, 2019 - link

    10nm/PCIe 4 isn't coming to desktop next year, where did you hear that?
  • eek2121 - Friday, November 1, 2019 - link

    The 3700X is totally trolling Intel right now.
  • RoboMurloc - Friday, November 1, 2019 - link

    I dunno if anyone has mentioned it yet, but the KS has additional security measures to mitigate exploits, which are probably what's causing the performance regressions.
  • PeachNCream - Friday, November 1, 2019 - link

    I expect I will never own an i9-9900KS or a Ryzen 7 3700X, but it is interesting to see how close AMD's 65W 8-core chip gets to Intel's 127+W special edition CPU in most of these benchmarks.
