Stock CPU Performance: Rendering Tests

Rendering is a key CPU workload, and one that lends itself well to professional environments. It comes in different forms as well, from 3D rendering through rasterization, as in games, to ray tracing, and it tests the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standardized generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a performance-per-unit-time result is typically easier to understand visually.
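The aggregation step amounts to parsing each run's output and taking the mean. A minimal sketch is below; the `Rays/sec:` output format and the helper names are hypothetical, not the actual tool's output:

```python
import re
import statistics

def parse_rays_per_sec(output: str) -> float:
    """Pull a rays-per-second figure out of one benchmark run's output.
    The 'Rays/sec: <number>' line format is an assumed example."""
    match = re.search(r"Rays/sec:\s*([\d.]+)", output)
    if match is None:
        raise ValueError("no rays/sec figure found in output")
    return float(match.group(1))

def average_result(run_outputs: list) -> float:
    """Average the rays-per-second score across all runs (we use six)."""
    return statistics.mean(parse_rays_per_sec(o) for o in run_outputs)

# Example with made-up numbers:
runs = ["Rays/sec: 1500000", "Rays/sec: 1520000", "Rays/sec: 1480000"]
print(average_result(runs))  # prints the mean of the three runs
```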

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona is an AVX2 benchmark, and it would appear that the Cannon Lake CPU can't take full advantage of the functionality. There's still a 10% difference at fixed frequency.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for a massive amount of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released an official Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - the standard ‘bmw27’ scene in CPU-only mode - and measure the time to complete the render.
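A headless, timed single-scene render of this kind can be sketched as follows. This is a minimal illustration, assuming a `blender` binary on the path and a local scene file named `bmw27_cpu.blend` (both names are placeholders, not our exact setup):

```python
import subprocess
import time

def build_render_cmd(scene_path: str, frame: int = 1) -> list:
    """Assemble a headless single-frame Blender render command.
    --background runs without the GUI; --render-frame renders one frame."""
    return ["blender", "--background", scene_path, "--render-frame", str(frame)]

def time_render(scene_path: str) -> float:
    """Run the render and return wall-clock time to complete, in seconds."""
    start = time.perf_counter()
    subprocess.run(build_render_cmd(scene_path), check=True)
    return time.perf_counter() - start

# time_render("bmw27_cpu.blend") would stream Blender's log and
# return the elapsed seconds once the frame finishes.
```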

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

Blender also uses an AVX2 code path, and we see that the CNL processor scored worse at stock settings than at fixed frequency settings. Again, this is likely due to a power or thermal issue.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, Accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. The scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.
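The final figure is just ray throughput over the fixed window. As a toy sketch (the ray count and timing below are made-up numbers for illustration):

```python
def kilorays_per_second(total_rays: int, elapsed_seconds: float) -> float:
    """Convert a ray count accumulated over the run into kilorays/second."""
    return total_rays / elapsed_seconds / 1000.0

# e.g. 240 million rays traced over the two-minute (120 s) run:
print(kilorays_per_second(240_000_000, 120.0))  # 2000.0 krays/s
```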

LuxMark v3.1 C++

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, at which point both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.
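Invoking and timing the built-in benchmark can be sketched like this. The `--benchmark` flag is an assumption here; check how your particular POV-Ray build exposes the benchmark before relying on it:

```python
import subprocess
import time

def benchmark_cmd(binary: str = "povray") -> list:
    """Command line for POV-Ray's built-in benchmark (flag assumed)."""
    return [binary, "--benchmark"]

def run_benchmark(binary: str = "povray") -> float:
    """Run the all-core benchmark and return wall-clock seconds to complete."""
    start = time.perf_counter()
    subprocess.run(benchmark_cmd(binary), check=True)
    return time.perf_counter() - start
```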

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

129 Comments

  • 0ldman79 - Friday, January 25, 2019 - link

    This whole situation raises the question: what could Intel have gotten out of 65nm, 32nm, 22nm, etc., had they run each for five generations?

    I wonder if they'll do similarly on the 10nm process, punt the first time or two then knock it out of the park. Skylake was a beautiful success. Maybe Sunny Cove will be the same for 10nm.
  • StrangerGuy - Friday, January 25, 2019 - link

    The point is Intel now needs better uarch designers a lot more than process designers. Yes, 10nm improvements are hard work and an interesting read...but users ultimately only care about end performance and perf/$, not die sizes, transistors/mm2 or manufacturing margins. If Zen 2 blows the doors off CFL, would anybody even care about Intel's process advantage? Hell no.
  • KOneJ - Sunday, January 27, 2019 - link

    Doubt this is even an "if" at this point. Curious to see if *Cove cores can keep Zen 4 and later from running away too much. Only time will tell, but Intel bringing in guys like Keller can't possibly be a bad thing. And in spite of their disastrous former attempts at building a dGPU, I fully expect Intel to make it happen this go around.
  • eva02langley - Sunday, January 27, 2019 - link

    The problem is, do you believe 7nm will be any different? Unless they implement EUV directly, I don't see it. Intel will be forced, like AMD, to go fabless because their node will not be better than the competition's. On top of that, it will most likely be behind schedule too.
  • zodiacfml - Saturday, January 26, 2019 - link

    Great job again, though this Intel junk doesn't warrant it. Looks like they're paying Lenovo just to use Cannon Lake, usable chips that came from tuning manufacturing.
    The performance is where I expected it to be.
    I still stand by my theory that Intel is reluctant to spend, leaving their engineers stressing over whether they can produce 10nm products without new equipment.
    Anyway, it's a dead horse. AMD will be all the rage for 2019.
  • KOneJ - Sunday, January 27, 2019 - link

    "Intel is reluctant to spend"
    To the contrary: throwing money at the problem is exactly what they're doing. Have you tracked their CAPEX these past few years?
    "AMD will be all the rage for 2019."
    I think that's basically a given.
  • zodiacfml - Sunday, January 27, 2019 - link

    The reports were pretty vague, and I don't remember them spending substantial money except the news that they're spending on more 14nm capacity.
    AMD was pretty lukewarm for me last year. I'm certain this year will be a lot stronger for AMD, until Intel and Nvidia start taking their customers more seriously.
  • KOneJ - Sunday, January 27, 2019 - link

    Even for a company Intel's size, spending north of $12B a year isn't penny-pinching. I know their revenue and margins are massive, but their failings haven't been a lack of spending since SB. They've been progressively spending more than ever.
  • YoloPascual - Saturday, January 26, 2019 - link

    bUt 10nm iNtEL iS bEtTeR tHaN 7nm TSMC riGhT?
  • KOneJ - Sunday, January 27, 2019 - link

    Shouldn't your alias be yOlOpAsCuAl, wannabe troll?
