HEDT Performance: Rendering Tests

Rendering is often a key target for processor workloads, lending itself to a professional environment. It also comes in different forms: 3D rendering through rasterization, as in games, or through ray tracing, and it tests the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standardized generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a rate per unit time is typically easier to compare visually.
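The averaging step above can be sketched in a small harness. This is illustrative only: the ‘Rays/sec:’ output label is an assumption, as the command-line build we used is not publicly distributed and its exact output format is not documented here.

```python
import re
import statistics

def parse_rays_per_sec(output: str) -> float:
    """Extract a rays-per-second figure from one benchmark run's output.

    The 'Rays/sec:' label is a hypothetical format; a real command-line
    build may label its result differently.
    """
    match = re.search(r"Rays/sec:\s*([\d,]+)", output)
    if match is None:
        raise ValueError("no rays/sec figure found in output")
    return float(match.group(1).replace(",", ""))

def average_runs(outputs: list[str]) -> float:
    """Average the rays/sec result across several runs (the article uses six)."""
    return statistics.mean(parse_rays_per_sec(o) for o in outputs)

# Fabricated outputs from three runs, for illustration:
runs = ["Rays/sec: 4,000,000", "Rays/sec: 4,100,000", "Rays/sec: 4,200,000"]
print(average_runs(runs))  # 4100000.0
```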

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona sees an improvement in line with the frequency gain; however, the higher core count AMD parts win out here.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had narrowed down our Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line, a standard ‘bmw27’ scene in CPU-only mode, and measure the time to complete the render.
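A run like the one described above can be automated with Blender's standard background-render switches. The sketch below times a single-frame render externally; the scene filename is an assumption (the bmw27 .blend file ships with the Blender benchmark files), and only the command construction is shown as definitive.

```python
import subprocess
import time

def blender_command(blender: str, scene: str, frame: int = 1) -> list[str]:
    """Build a background, single-frame render command.

    --background and --render-frame are standard Blender CLI switches.
    """
    return [blender, "--background", scene, "--render-frame", str(frame)]

def time_render(blender: str = "blender", scene: str = "bmw27_cpu.blend") -> float:
    """Return wall-clock seconds to complete the render, as the article measures.

    The binary path and scene name are placeholders for a real setup.
    """
    start = time.perf_counter()
    subprocess.run(blender_command(blender, scene), check=True, capture_output=True)
    return time.perf_counter() - start

print(blender_command("blender", "bmw27_cpu.blend"))
```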

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

Blender tells a similar story to Corona: the new Intel Core i9-9980XE performs better than the previous generation Core i9-7980XE, but sits behind the higher core count AMD parts.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, accelerator, and others. On top of that, there are many frameworks and APIs with which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs. *It has been mentioned that LuxMark, since the Spectre/Meltdown patches, is not a great representation of the LuxRender engine. We still use the test as a good example of how different code paths perform.

In our test, we run the simple ‘Ball’ scene on both the C++ and OpenCL code paths, but in CPU mode. This scene starts with a rough render and slowly improves the quality over two minutes, giving a final result in what is essentially an average ‘kilorays per second’.
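The final score described above is simple arithmetic: total rays traced over the two-minute run, per second, in thousands. A minimal sketch (the function name and the sample figures are our own, for illustration):

```python
def kilorays_per_sec(total_rays: int, elapsed_sec: float) -> float:
    """LuxMark-style score: average rays traced per second, in thousands."""
    return (total_rays / elapsed_sec) / 1000.0

# A hypothetical run tracing 240 million rays over the two-minute window:
print(kilorays_per_sec(240_000_000, 120.0))  # 2000.0
```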

LuxMark v3.1 C++

Our test here seems to put processors into buckets of performance. In this case, the Core i9-9980XE goes up a bucket.

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool. It was in a state of relative hibernation until AMD released its Zen processors, at which point both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.
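A result like this can be extracted by parsing the render time out of POV-Ray's statistics output. The exact wording of the statistics block below is an assumption; check the real output of your POV-Ray build before relying on this pattern.

```python
import re

def parse_render_seconds(stats: str) -> float:
    """Convert an 'H hours M minutes S seconds' render-time line to seconds.

    The 'Render Time:' phrasing is a hypothetical example of POV-Ray's
    statistics text, not a documented guarantee.
    """
    m = re.search(r"(\d+) hours?\s+(\d+) minutes?\s+([\d.]+) seconds?", stats)
    if m is None:
        raise ValueError("no render time found in statistics output")
    h, mi, s = m.groups()
    return int(h) * 3600 + int(mi) * 60 + float(s)

print(parse_render_seconds("Render Time: 0 hours 2 minutes 30.5 seconds"))  # 150.5
```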

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark

POV-Ray behaves as expected: a performance improvement over the previous generation, but still behind the higher core count AMD parts.

143 Comments

  • Cellar Door - Tuesday, November 13, 2018 - link

The best part is that an i7 part (9800X) is more expensive than an i9 part (9900K). Intel smoking some good stuff.
  • DigitalFreak - Tuesday, November 13, 2018 - link

    You're paying more for those extra 28 PCI-E lanes
  • Hixbot - Tuesday, November 13, 2018 - link

    And much more L3. It's also interesting that HEDT is no longer behind in process node.
  • Hixbot - Tuesday, November 13, 2018 - link

    And AVX512
  • eastcoast_pete - Tuesday, November 13, 2018 - link

    @Ian: Thanks, good overview and review!
Agree on the "iteration when an evolutionary upgrade was needed"; it seems that Intel's development was a lot more affected by its blocked/constipated transition to 10 nm (now scrapped), and the company's attention was also diverted by its forays into mobile (didn't work out so great) and looking for progress elsewhere (Altera acquisition). This current "upgrade" is mainly good for extra PCI-e lanes (nice to have more), but its performance is no better than the previous generation. If the new generation chips from AMD are halfway as good as they promise, Intel will lose a lot more profitable ground in the server and HEDT space to AMD.
@Ian, and all: While Intel goes on about their improved FinFET 14 nm being the reason for better performance/Wh, I wonder how big the influence of better heat removal through the (finally again) soldered heat spreader is? Yes, most of us like to improve cooling to be able to overclock more aggressively, but shouldn't better cooling also improve the overall efficiency of the processor? After all, semiconductors conduct more current as they get hotter, leading to ever more heat and eventual "gate crashing". Have you or anybody else looked at performance/Wh between, for example, an i7 8700 with stock cooler and paste-glued heat spreader vs. the same processor with proper delidding, liquid metal replacement and a great aftermarket cooler, both at stock frequencies? I'd expect the better cooled setup to have more performance/Wh, but is that the case?
  • Arbie - Tuesday, November 13, 2018 - link

    The "Competition" chart is already ghastly for Intel. Imagine how much worse it will be when AMD moves to 7 nm with Zen 2.
  • zepi - Tuesday, November 13, 2018 - link

    How about including some kind of DB test?

I think quite a few people are looking at these workstation class CPUs to develop BI things and it might be quite helpful to actually measure results with some SQL / NoSQL / BI suites. Assuming a bit more complex parallel SQL executions with locking could show some interesting differences between the NUMA Threadrippers and Intel's parts.
  • GreenReaper - Wednesday, November 14, 2018 - link

    It's a good idea, Phoronix does them so in the short term you could probably look there.
  • jospoortvliet - Friday, November 16, 2018 - link

But then make sure it is realistic, not running in cache or such... A real db suitable for these chips is terabytes, merely keeping the index in RAM... rule of thumb: if your index fits in cache your database doesn't need this CPU ;-)
  • FunBunny2 - Tuesday, November 13, 2018 - link

    I guess I can run my weather simulation in Excel on my personal machine now. neato.
