CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself well to professional environments. It comes in several forms, from 3D rendering via rasterization, as in games, to ray tracing, and exercises the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), as well as to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standard generated scene using version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate (work done per unit time) is typically easier to interpret visually than a raw completion time.
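The averaging step is simple to reproduce. As a minimal sketch (the exact output format of the command-line build isn't public, so we assume it prints a line like `Rays/sec: 3,141,592` per run; the function names are ours):

```python
import re
from statistics import mean

def parse_rays_per_sec(output: str) -> float:
    """Extract a rays-per-second figure from one benchmark run's text output."""
    match = re.search(r"Rays/sec:\s*([\d,]+)", output)
    if match is None:
        raise ValueError("no rays/sec figure found in output")
    # Strip thousands separators before converting.
    return float(match.group(1).replace(",", ""))

def average_score(run_outputs: list[str]) -> float:
    """Average the rays/sec results across all runs (six, in our testing)."""
    return mean(parse_rays_per_sec(o) for o in run_outputs)
```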

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona is a fully multithreaded test, so the non-HT parts fall a little behind here. The Core i9-9900K blasts past the AMD 8-core parts with a 25% margin, and knocks on the door of the 12-core Threadripper.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released an official Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard ‘bmw27’ scene in CPU-only mode - and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/
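Measuring the time to complete comes down to timing the command-line render from launch to exit. A minimal wall-clock harness might look like the following (the blend-file path and exact Blender invocation shown in the comment are assumptions, not our exact script):

```python
import subprocess
import sys
import time

def time_render(cmd: list[str]) -> float:
    """Run a render command to completion and return wall-clock seconds."""
    start = time.perf_counter()
    # Discard the renderer's console output; we only care about elapsed time.
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.perf_counter() - start

# Hypothetical usage, rendering one frame of the bmw27 scene in background mode:
# elapsed = time_render(["blender", "--background", "bmw27_cpu.blend",
#                        "--render-frame", "1"])
```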

Blender 2.79b bmw27_cpu Benchmark

Blender has an eclectic mix of requirements, from memory bandwidth to raw performance, but as with Corona the processors without HT fall a bit behind here. The high frequency of the 9900K pushes it above the 10-core Skylake-X part and AMD's 2700X, but behind the 1920X.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result that is essentially an average ‘kilorays per second’.
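That final score is effectively the cumulative ray count divided by the elapsed time, expressed in thousands. A sketch of the conversion (the function name is ours, not LuxMark's):

```python
def kilorays_per_sec(total_rays: int, elapsed_s: float) -> float:
    """Convert a cumulative ray count over a run into a kilorays/sec score."""
    return total_rays / elapsed_s / 1_000.0
```

So a run that traces 240 million rays over the two-minute window scores 2000 kilorays per second.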

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, after which both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/
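POV-Ray prints per-render statistics when it finishes, including a trace-time line. Assuming output of the form `Trace Time: 0 hours 2 minutes 35 seconds` (the exact wording can vary between builds), extracting the result in seconds is a short parsing exercise:

```python
import re

def parse_trace_time(output: str) -> int:
    """Parse a POV-Ray 'Trace Time' statistics line into total seconds."""
    m = re.search(
        r"Trace Time:\s*(\d+) hours?\s+(\d+) minutes?\s+(\d+) seconds?",
        output,
    )
    if m is None:
        raise ValueError("no trace time found in output")
    hours, minutes, seconds = (int(g) for g in m.groups())
    return hours * 3600 + minutes * 60 + seconds
```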

POV-Ray 3.7.1 Benchmark


275 Comments


  • GreenReaper - Friday, October 19, 2018 - link

    The answer is "yes, with a but". Certain things scale really well with hyperthreading. Other things can see a severe regression, as it thrashes between one workload and another and/or overheats the CPU, reducing its ability to boost.

    Cache contention can be an issue: the i9-9900K has only 33% more cache than the i7-9700K, not 100% (and even if it did, it wouldn't have the same behaviour unless it was strictly partitioned). Memory bandwidth contention is a thing, too. And within the CPU, some parts cannot be partitioned - it just relies on them running fast enough to supply the parts which can.

    And clearly hyperthreading has an impact on overclocking ability. It might be interesting to see the gaming graphs with the i7-9700K@5.3Ghz vs. i9-9900K@5.0Ghz (or, if you want to save 50W, i7-9700K@5.0Ghz vs. i9-9900K@4.7Ghz - basically the i9-9900K's default all-core boost, but 400Mhz above the i7-9700K's 4.6Ghz all-core default, both for the same power).
  • NaterGator - Friday, October 19, 2018 - link

    Any chance y'all would be willing to run those HT-bound tests with the 9900K's HT disabled in the BIOS?
  • ekidhardt - Friday, October 19, 2018 - link

    Thanks for the review!

    I think far too much emphasis has been placed on 'value'. I simply want the fastest, most powerful CPU that isn't priced absurdly high.

    While the 9900k msrp is high, it's not in the realm of irrational spending, it's a few hundred dollars more. For a person that upgrades once every 5-6 years--a few hundred extra is not that important to me.

    I'd also like to argue against those protesting pre-order logic. I pre-ordered. And my logic is this: Intel has a CLEAR track record of great CPUs. There haven't been any surprisingly terrible CPUs released. They're consistently reliable.

    Anyway! I'm happy I pre-ordered and don't care that it costs a little bit extra; I've got a fast 8 core 16 thread CPU that should last quite a while.
  • Schmich - Friday, October 19, 2018 - link

    You have the numbers anyway. Not everyone buys the highest end and then waits many years to upgrade. That isn't the smartest choice, because you spend so much money and then after 2-3 years you're just a mid-ranger.

    For those who want high-end, they can still get a 2700X today, and then the 3700X next year with most likely better performance than your 9900K due to 7nm, PLUS have money left over, PLUS a spare 2700X they can sell.

    Same thing for GPU except for this gen. I never understood those who buy the xx80Ti version and then upgrade after 5 years. Your overall experience would be better only getting the xx70 but upgrading more often.
  • Spunjji - Monday, October 22, 2018 - link

    This is what actual logic looks like!
  • Gastec - Sunday, November 04, 2018 - link

    Basically "The more you buy, the more you save" :-\
  • shaolin95 - Friday, October 19, 2018 - link

    Exactly. I think the ones beating the value dead horse are mainly AMD fanboys defending their 2700x purchase
  • eva02langley - Friday, October 19, 2018 - link

    Sorry, value is a huge aspect. The reason why RTX is such an issue. Also, at this price point, I would go HEDT if compute was really that important for me.

    A 10-15% performance increase over a 2700X at 1080p with a damn 1080 Ti is not what I would call a justified purchase.
  • Arbie - Friday, October 19, 2018 - link

    Gratuitous trolling, drags down thread quality. Do you really still need to be told what AMD has done for this market? Do you even think this product would exist without them - except at maybe twice the already high price? Go pick on someone that deserves your scorn, such as ... Intel.
  • Great_Scott - Friday, October 19, 2018 - link

    What a mess. I guess gaming really doesn't depend on the CPU any more. Those Ryzen machines were running at a 1GHz+ speed deficit and still did decently.

    Intel needs a new core design and AMD needs a new fab.
