CPU Performance: Rendering Tests

Rendering is often a key target for processor workloads, and it lends itself well to a professional environment. It also comes in different forms: 3D rendering through rasterization, as used in games, or through ray tracing, and it exercises the software's ability to manage meshes, textures, collisions, anti-aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, while a few use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark renders a standard generated scene under version 1.3 of the software. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a ‘time to complete’.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time, we report the average number of rays per second across six runs, as a result expressed per unit time is typically easier to compare visually.

The Corona benchmark website can be found at https://corona-renderer.com/benchmark
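Since the command-line build is not publicly distributed, the invocation and output format below are assumptions; this is a minimal sketch of our run-six-times-and-average methodology in Python, with the binary name and the rays/s output line both hypothetical.

```python
import re
import statistics
import subprocess

# Hypothetical binary name and output format: the command-line build we
# used is not publicly distributed, so both are assumptions here.
CORONA_CMD = ["corona-benchmark-cli"]
RAYS_RE = re.compile(r"([\d,]+)\s*rays/s")

def run_once() -> float:
    """Run the benchmark once and parse the rays-per-second figure."""
    out = subprocess.run(CORONA_CMD, capture_output=True, text=True,
                         check=True).stdout
    match = RAYS_RE.search(out)
    if match is None:
        raise RuntimeError("could not find a rays/s figure in the output")
    return float(match.group(1).replace(",", ""))

if __name__ == "__main__":
    # Average six runs, mirroring the methodology described above.
    results = [run_once() for _ in range(6)]
    print(f"Average: {statistics.mean(results):,.0f} rays/s")
```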

Corona 1.3 Benchmark

Corona is a fully multithreaded test, so the non-HT parts fall a little behind here. The Core i9-9900K blasts through the AMD 8-core parts with a 25% margin, and taps on the door of the 12-core Threadripper.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, which allows for a massive amount of configurability, and it is used by a number of animation studios worldwide. The organization recently released an official Blender benchmark package, a couple of weeks after we had settled on the Blender test for our new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line (the standard ‘bmw27’ scene in CPU-only mode) and measure the time to complete the render.

Blender can be downloaded at https://www.blender.org/download/
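For readers who want to replicate the timing method, a minimal sketch is below. Blender's --background and --render-frame flags run a render without the GUI; the path to the bmw27 scene file is an assumption and will depend on where the benchmark bundle is unpacked.

```python
import subprocess
import time

# Path to the bmw27 CPU scene from the Blender benchmark bundle; adjust
# to wherever the file lives on your system (the path is an assumption).
SCENE = "bmw27_cpu.blend"

# --background renders without the GUI; --render-frame 1 renders frame 1.
cmd = ["blender", "--background", SCENE, "--render-frame", "1"]

start = time.perf_counter()
subprocess.run(cmd, check=True)
print(f"bmw27 render completed in {time.perf_counter() - start:.1f} s")
```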

Blender 2.79b bmw27_cpu Benchmark

Blender has an eclectic mix of requirements, from memory bandwidth to raw performance, but as in Corona, the processors without HT fall a bit behind here. The high frequency of the 9900K pushes it above the 10-core Skylake-X part and AMD's 2700X, but it lands behind the 1920X.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple ‘Ball’ scene in CPU mode on both the C++ and OpenCL code paths. The scene starts with a rough render and slowly improves in quality over two minutes, giving a final result that is essentially an average ‘kilorays per second’.
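To make the metric concrete, here is a minimal sketch of how a progressive score like this reduces to a single figure: total rays traced over the run divided by its duration. The sample values are illustrative only, not real benchmark output.

```python
# Illustrative samples of rays traced per second, taken at 30 s intervals
# over the two-minute run (these numbers are made up for the example).
samples = [150_000, 180_000, 210_000, 220_000]
interval_s = 30  # 4 samples x 30 s = 2 minutes

total_rays = sum(rate * interval_s for rate in samples)
duration_s = interval_s * len(samples)

# The final score is the average rate over the whole run, in kilorays/s.
print(f"Score: {total_rays / duration_s / 1_000:.1f} kilorays/s")
```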

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, at which point both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/
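A minimal sketch of how we time such a run from a script is below. POV-Ray's Unix builds expose a built-in benchmark mode, though the exact flag can vary by platform and build, so treat the invocation as an assumption.

```python
import subprocess
import time

# POV-Ray's Unix builds include a built-in benchmark mode; the exact flag
# can vary by platform and build, so "--benchmark" is an assumption here.
cmd = ["povray", "--benchmark"]

start = time.perf_counter()
# Some builds prompt before starting; feeding a newline keeps the run
# unattended.
subprocess.run(cmd, input="\n", text=True, check=True)
print(f"All-core benchmark finished in {time.perf_counter() - start:.1f} s")
```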

POV-Ray 3.7.1 Benchmark

Comments

  • eastcoast_pete - Sunday, October 21, 2018 - link

    Yes; unfortunately, that's a major exception, and annoying to somebody like me who'd actually recommend AMD otherwise. I really hope that AMD improves its AVX/AVX2 implementation and makes it truly 256-bit wide. If I remember correctly, the lag of Ryzen chips in 256-bit AVX vs. Intel is because AMD uses a 2 x 128-bit implementation (a workaround, really), which is nowhere near as fast as real 256-bit AVX. So, I hope that AMD gives its next Ryzen generation full 256-bit AVX, not the 2 x 128-bit workaround.
  • mapesdhs - Sunday, October 21, 2018 - link

    It's actually worse than that with pro apps. Even if AMD hugely improved their AVX, it won't help as much as it could so long as apps like Premiere remain so poorly coded. AE even has plugins that are still single-threaded from more than a decade ago. There are also several CAD apps that only use a single core. I once sold a 5GHz 2700K system to an engineering company for use with Majix (another largely single-threaded app, though not entirely IIRC); it absolutely blew the socks off their far more expensive Xeon system.

    Makes me wonder what they're teaching software engineering students these days; parallel coding and design concepts (hw and sw) were a large part of the comp sci material I did 25 years ago. Has it fallen out of favour because there aren't skilled lecturers to teach it? Or do students not like tackling the hard stuff? A bit of both? Some of it was certainly difficult to grasp at first, but even back then there was a lot of emphasis on multi-threaded systems, or systems that consisted of multiple separate functional units governed by some kind of management engine (not unlike a modern game, I suppose), with the coding emphasis at the time being on derivatives of C++.

    It's bizarre that after so long, Premiere in particular is still so inefficient, ditto AE. One wonders if companies like Adobe simply rely on improving hardware trends to provide customers with performance gains, instead of improving the code, though this would fly in the face of their claim a couple of years ago that they would spend a whole year focusing on improving performance, since that's what users wanted more than anything else (I remember the survey results being discussed on Creative COW).
  • eastcoast_pete - Sunday, October 21, 2018 - link

    Fully agree! Part of the problem is that re-coding single-threaded routines that could really benefit from parallel/multi-threaded execution costs the Adobes of this world money, especially if one wants it done right. However, I believe that the biggest reason why so many programs, in full or in part, are solidly stuck in the last century is that their customers simply don't know what they are missing out on. Once volume licensees start asking their software suppliers' sales engineers (i.e. sales people) "Yes, nice new interface. But does this version now fully support multithreaded execution, and, if not, why not?", Adobe and others will give this the priority it should have had all along.
  • repoman27 - Friday, October 19, 2018 - link

    USB Type-C ports don't necessarily require a re-timer or re-driver (especially if they’re only using Gen 1 5 Gbit/s signaling), but they do require a USB Type-C Port Controller.

    The function of that chip is rather different though. Its job is to utilize the CC pins to perform device attach / detach detection, plug orientation detection, establish the initial power and data roles, and advertise available USB Type-C current levels. The port controller also generally includes a high-speed mux to steer the SuperSpeed signals to whichever pins are being used depending on the plug orientation. Referring to a USB Type-C Port Controller as a re-driver is both inaccurate and confusing to readers.
  • willis936 - Friday, October 19, 2018 - link

    Holy damn, that's a lot of juice. 220W? That's 60 watts more than a 14-core 3GHz IVB E5.

    They had better top the charts with that kind of power draw. I have serious reservations about believing two DDR4 memory channels are enough to feed eight 5GHz cores. I would be interested in a study of memory scaling on this chip specifically, since it's the corner case for the question "Is two memory channels enough in 2018?".
  • DominionSeraph - Friday, October 19, 2018 - link

    This chip would be faster in everything than a 14-core IVB E5, while being over 50% faster in single-threaded tasks.
    Also, Intel is VERY generous with voltage in turbo. Note that the 9700K at stock takes 156W in Blender for a time of 305, but when they dialed it in at 1.025V at 4.6GHz it took 87W for an improved time of 301, and they don't hit the stock wattage until they've hit 5.2GHz. When they get the 9900K scores up, I expect that 220W number to be cut nearly in half by a proper voltage setting.
  • 3dGfx - Friday, October 19, 2018 - link

    How can you claim the 9900K is the best when you never tested the HEDT parts in gaming? Making such claims really makes AnandTech look bad. I hope you fix this oversight so Skylake-X can be compared properly to the 9900K and the Skylake-X refresh parts!!! There was supposed to be a part 2 to the i9-7980XE review and it never happened, so gaming benchmarks were never done, and the i9-7940X and i9-7920X weren't tested either. HEDT is a gaming platform, since it has no ECC support and isn't marketed as a workstation platform. Curious that Intel says the 8-core part is now "the best" and you just go along with that without testing their flagship HEDT part in games.
  • DannyH246 - Friday, October 19, 2018 - link

    If you want an unbiased review go here...

    https://www.extremetech.com/computing/279165-intel...

    Anandtech is a joke. Has been for years. Everyone knows it.
  • TEAMSWITCHER - Friday, October 19, 2018 - link

    Thanks... but no thanks. Why did you even come here? Just to post this? WEAK!
  • Arbie - Friday, October 19, 2018 - link

    What a stupid remark. And BTW Extremetech's conclusion is practically the same as AT's. The bias here is yours.
