CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It comes in different formats as well: 3D rendering through rasterization, as in games, or via ray tracing, and it relies on the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), as well as to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standardized generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who provided us with a command-line version of the benchmark that outputs results directly. Rather than reporting a time, we report the average number of rays per second across six runs, as a rate is typically easier to interpret at a glance than a time to completion.
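For reference, the harness boils down to something like the Python sketch below. The binary name (‘corona-cli-benchmark’) and its output format are hypothetical stand-ins, since the command-line build we were given is not publicly distributed; adapt the invocation and the regex to whatever your build prints.

    import re
    import statistics
    import subprocess

    def corona_rays_per_sec(binary="corona-cli-benchmark", runs=6):
        """Run the CLI benchmark 'runs' times and average rays/second.

        The binary name and its '... rays/s' output line are assumed
        for illustration; the private tool differs in the details.
        """
        results = []
        for _ in range(runs):
            out = subprocess.run([binary], capture_output=True, text=True,
                                 check=True).stdout
            match = re.search(r"([\d,.]+)\s*rays/s", out)
            results.append(float(match.group(1).replace(",", "")))
        return statistics.mean(results)

    print(f"Average: {corona_rays_per_sec():,.0f} rays/s")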

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Intel's HEDT chips are quite good at Corona, but if we compare the 3900X to the 3950X, we still see some good scaling.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had settled on our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line, a standard ‘bmw27’ scene in CPU-only mode, and measure the time to complete the render.
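Timing the render from a script is straightforward; below is a minimal sketch, assuming the Blender 2.79b binary is on the PATH and that the bmw27 scene file (named ‘bmw27_cpu.blend’ here) is in the working directory. The --background flag suppresses the GUI and --render-frame renders a single frame.

    import subprocess
    import time

    def time_blender_render(scene="bmw27_cpu.blend", frame=1):
        """Render one frame of the scene in background (CPU) mode and
        return the wall-clock time in seconds."""
        start = time.monotonic()
        subprocess.run(
            ["blender", "--background", scene, "--render-frame", str(frame)],
            check=True, stdout=subprocess.DEVNULL,
        )
        return time.monotonic() - start

    print(f"bmw27 render: {time_blender_render():.1f} s")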

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

AMD takes the lead in our Blender test, with the 16-core chips easily beating Intel's latest 18-core hardware.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPUs, GPUs, accelerators, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

In our test, we run the simple ‘Ball’ scene using the C++ code path in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result that is essentially an average ‘kilorays per second’.
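To make that metric concrete: the benchmark accumulates rays over the two-minute window and divides by the elapsed time. A minimal sketch of the calculation follows, with a caller-supplied counter standing in for the statistics LuxMark keeps internally (the interface here is hypothetical; LuxMark reports the final figure itself).

    import time

    def average_krays_per_sec(read_total_rays, duration_s=120.0):
        """Average ray throughput over a fixed window, in kilorays/s.

        'read_total_rays' is a callable returning the cumulative number
        of rays traced so far; it stands in for LuxMark's internal
        counter.
        """
        rays0 = read_total_rays()
        t0 = time.monotonic()
        time.sleep(duration_s)
        elapsed = time.monotonic() - t0
        return (read_total_rays() - rays0) / elapsed / 1e3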

LuxMark v3.1 C++

Despite the renderer using Intel's Embree engine, AMD's 16-core chips again easily win out against Intel's 18-core parts, at under half the cost.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, and one that was in a state of relative hibernation until AMD released its Zen processors, after which both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-core benchmark, called from the command line.
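Invoking it and capturing the wall-clock time looks roughly like the sketch below, which assumes a Unix build of POV-Ray 3.7 where the --benchmark switch runs the standard benchmark scene; the switch and binary name can vary by platform, so check your build's documentation.

    import statistics
    import subprocess
    import time

    def povray_benchmark_time(runs=3):
        """Run POV-Ray's built-in benchmark 'runs' times and return the
        median wall-clock time in seconds. Assumes 'povray --benchmark'
        is supported by this build."""
        times = []
        for _ in range(runs):
            start = time.monotonic()
            subprocess.run(["povray", "--benchmark"], check=True,
                           stdout=subprocess.DEVNULL,
                           stderr=subprocess.DEVNULL)
            times.append(time.monotonic() - start)
        return statistics.median(times)

    print(f"POV-Ray benchmark: {povray_benchmark_time():.1f} s")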

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

POV-Ray ends up with AMD's 16-core chip splitting the two Intel 18-core parts, which means we're likely to see the Intel Core i9-10980XE at the top here. It would have been interesting to see where a 16-core Core-X part on Cascade Lake would land for a direct comparison, but Intel has no new 16-core chip planned.

206 Comments

  • eva02langley - Thursday, November 14, 2019

    Yeah, because the average Joe owns a 2080 Ti to play at 1080p...
  • blppt - Thursday, November 14, 2019

    Believe it or not, you need a 2080Ti to play 1080p at max settings smoothly in RDR2 at the moment.

    My oc'd 1080ti (FTW3) chokes on that game at 1080p/max settings.
  • itproflorida - Thursday, November 14, 2019

    Not quite: 78 fps average in the RDR2 benchmark and 72 fps average in-game at maxed settings @ 1440p, on a 2080 Ti and a 9700K @ 5 GHz.
  • blppt - Thursday, November 14, 2019

    The 2080 Ti and other 20-series cards do MUCH better in RDR2 than their equivalent 10-series cards. Look at the benchmarks: we have Vega 64s challenging 1080 Tis in this game. That should not happen.

    https://www.guru3d.com/articles_pages/red_dead_red...
  • Ian Cutress - Thursday, November 14, 2019

    I have 2080 Ti units standing by, but my current benchmark run is with 1080s until I do a full benchmark reset. Probably Q1 next year, when I'm back at home for longer than 5 days. Supercomputing, Tech Summit, IEDM, and CES are in my next few weeks.
  • Dusk_Star - Thursday, November 14, 2019

    > In our Ryzen 7 3700X review, with the 12-core processor

    Pretty sure the 3700X is 8 cores.
  • Lux88 - Thursday, November 14, 2019

    Not a single compilation benchmark...
  • Ian Cutress - Thursday, November 14, 2019

    Having issues getting the benchmark to work on Win 10 1909; didn't have time to debug. Hoping to fix it for the next benchmark suite update.
  • Lux88 - Thursday, November 14, 2019

    Thanks!
  • stux - Thursday, November 14, 2019

    Sad.

    Desperately want to know if the 3950X will make a good developer workstation with 64 GB of RAM and a fast NVMe drive, or whether it is going to be memory-bandwidth bottlenecked... and I'll need to step up to TR3.
