CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, and it lends itself well to a professional environment. It also comes in several forms: 3D rendering via rasterization, as in games, or via ray tracing, and it exercises the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standard generated scene under its 1.3 software version. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a 'time to complete'.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate-based result per unit time is typically easier to interpret visually.
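As an illustration, the per-run figures can be collected and averaged like the sketch below. The output line format and field name are assumptions for illustration, not the real tool's actual output:

```python
import re
import statistics

def parse_rays_per_second(output: str) -> float:
    """Pull the rays-per-second figure out of one run's output.

    Assumes a hypothetical line of the form 'Rays/s: <number>'; the
    real command-line tool's format may differ.
    """
    match = re.search(r"Rays/s:\s*([\d.]+)", output)
    if match is None:
        raise ValueError("no rays/s figure found in benchmark output")
    return float(match.group(1))

def average_rays_per_second(outputs: list[str]) -> float:
    """Average the rays/s figure across several runs (the article uses six)."""
    return statistics.mean(parse_rays_per_second(o) for o in outputs)
```

Reporting the mean of six runs smooths over run-to-run variance from turbo behaviour and background tasks.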

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

We can see the sizeable difference in performance between the 7700K and the 2600K, which comes from microarchitecture updates and frequency; however, even overclocking the 2600K only halves that gap.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for a massive amount of configurability, and it is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - a standard 'bmw27' scene in CPU-only mode - and measure the time to complete the render.
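A minimal timing harness for this kind of run might look like the following. The `-b` and `-f` flags follow Blender's documented command line, but the scene filename below is just the label used in this article and is assumed to exist locally:

```python
import shutil
import subprocess
import time

def blender_command(scene: str) -> list[str]:
    # -b: run in background (no GUI); -f 1: render frame 1 of the scene
    return ["blender", "-b", scene, "-f", "1"]

def time_blender_render(scene: str = "bmw27_cpu.blend") -> float:
    """Return the wall-clock seconds taken by a single-frame render."""
    start = time.perf_counter()
    subprocess.run(blender_command(scene), check=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    # Only attempt the render when Blender is actually installed.
    if shutil.which("blender"):
        print(f"bmw27 render took {time_blender_render():.1f} s")
```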

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

As with Corona, overclocking the 2600K only cuts its deficit to the stock 7700K in half. Add an overclock to the 7700K, and that gap gets wider still.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.

In our test, we run the simple 'Ball' scene on both the C++ and OpenCL code paths, but in CPU mode. The scene starts as a rough render and slowly improves in quality over two minutes, giving a final result that is essentially an average 'kilorays per second'.
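The 'average kilorays per second' figure is just total rays over total time. A sketch, assuming hypothetical per-interval ray counts sampled over the two-minute run (LuxMark itself reports the final averaged figure directly):

```python
def kilorays_per_second(samples: list[int], interval_s: float) -> float:
    """Convert per-interval ray counts into an average kilorays/second.

    `samples` is a hypothetical series of ray counts reported at fixed
    intervals of `interval_s` seconds across the whole run.
    """
    total_rays = sum(samples)
    total_time = interval_s * len(samples)
    return total_rays / total_time / 1000.0
```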

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool. It was in a state of relative hibernation until AMD released its Zen processors, after which both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-core benchmark, called from the command line.
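A sketch of invoking and scoring such a run is below. The `--benchmark` flag is the option documented for the Unix build (check your build's help output), and the 512x512 frame size and pixels-per-second scoring are assumptions for illustration:

```python
import shutil
import subprocess

def povray_command() -> list[str]:
    # --benchmark runs the built-in benchmark scene on all cores
    return ["povray", "--benchmark"]

def pixels_per_second(width: int, height: int, seconds: float) -> float:
    """Score a run as rendered pixels per second."""
    return width * height / seconds

if __name__ == "__main__":
    # Only attempt the benchmark when POV-Ray is actually installed.
    if shutil.which("povray"):
        subprocess.run(povray_command(), check=True)
```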

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

POV-Ray is a little different, because AVX2 plays a part in how well the newer processors perform here. POV-Ray also prefers cores over threads, so having eight physical cores gives the 9700K a nice big lead.

Comments

  • kgardas - Friday, May 10, 2019 - link

Indeed, it's sad that it took ~8 years to roughly double performance, while in the '90s we got that every 2-3 years. And look at the office tests: we're not there yet, and we probably never will be, as single-thread perf. increases are basically dead. The Chromium compile suggests that it makes sense to upgrade at all -- for developers; but for office users it's nonsense if you consider just the CPU itself.
  • chekk - Friday, May 10, 2019 - link

    Thanks for the article, Ian. I like your summation: impressive and depressing.
    I'll be waiting to see what Zen 2 offers before upgrading my 2500K.
  • AshlayW - Friday, May 10, 2019 - link

    Such great innovation and progress and cost-effectiveness advances from Intel between 2011 and 2017. /s

Yes, AMD didn't do much here either, but it wasn't for lack of trying. Intel deliberately stagnated the market to bleed consumers of every single cent, and then Ryzen turns up and you get the 6 and now 8 core mainstream CPUs.

Would have liked to see the 2600K versus Ryzen, honestly. Ryzen 1st gen is around Ivy/Haswell performance per core in most games, and second gen is Haswell/Broadwell. But as more games get more threaded, Ryzen's advantage will only increase.

I owned a 2600K and it was the last product from Intel that I truly felt was worth its price. Even now I just can't justify spending £350-400 on a hexa core, or an octa with HT disabled, when the competition has unlocked 16 threads for less money.
  • 29a - Friday, May 10, 2019 - link

    "Yes AMD didn't do much here either"

    I really don't understand that statement at all.
  • thesavvymage - Friday, May 10, 2019 - link

They're saying AMD didn't do much to push the price/performance envelope between 2011 and 2017. Which they didn't, since their architecture until Zen was terrible.
  • eva02langley - Friday, May 10, 2019 - link

Yeah, you are right... it is AMD's fault, and not Intel's, who wanted to make a dime on your back selling you quad cores for life.
  • wilsonkf - Friday, May 10, 2019 - link

It would be more interesting to add the 8150/8350 to the benchmark. I ran my 8350 at 4.7 GHz for five years. It's a great room heater.
  • MDD1963 - Saturday, May 11, 2019 - link

I don't think AMD would have sold as many of the 8350s and 9590s as they did had people known that i3's and i5's outperformed them in pretty much all games, and at lower clock speeds, no less. Many people probably bought the FX8350 because it 'sounded faster' at 4.7 GHz than the 2600K did at 'only' 3.8 GHz, or so I speculate, anyway... (sort of like the Florida Broward county votes in 2000!)
  • Targon - Tuesday, May 14, 2019 - link

Not everyone looks at games as the primary use of a computer. The AMD FX chips were not great when it came to IPC, in the same way that the Pentium 4 was terrible on an IPC basis. Still, the 8350 was a lot faster than the Phenom II processors, that's for sure.
  • artk2219 - Wednesday, May 15, 2019 - link

I got my FX 8320 because I preferred threads over single-core performance. I was much more likely to notice a lack of computing resources and multitasking ability than how long something took to open or run. The funny part is that even though people shit all over them, they were, and honestly still are, valid chips for certain use cases. They'll still game, they can be small cheap VM hosts, NAS servers, you name it. The biggest problem recently is finding a decent AM3+ board to put them in.
