CPU Performance: Rendering Tests

Rendering is a key processor workload, particularly in professional environments. It comes in several forms, from 3D rendering via rasterization (as in games) to ray tracing, and it exercises the software's ability to manage meshes, textures, collisions, aliasing, and physics (in animations), and to discard unnecessary work. Most renderers offer CPU code paths, some use GPUs, and select environments use FPGAs or dedicated ASICs. For big studios, however, CPUs are still the hardware of choice.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: Performance Render

An advanced performance-based renderer for software such as 3ds Max and Cinema 4D, the Corona benchmark uses version 1.3 of the software to render a generated standard scene. Normally the GUI implementation of the benchmark shows the scene being built, and allows the user to upload the result as a 'time to complete'.

We got in contact with the developer, who gave us a command-line version of the benchmark that outputs results directly. Rather than reporting time to completion, we report the average number of rays per second across six runs, as a rate per unit time is typically easier to compare visually.
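As a sketch of how such an average might be computed, the snippet below averages per-run rays-per-second figures. The run values and the helper name are illustrative assumptions, not output from the actual Corona tool:

```python
# Hypothetical post-processing of per-run Corona results.
# The values below are invented for illustration only.

def average_rays_per_sec(run_results):
    """Average rays-per-second figures across benchmark runs."""
    if not run_results:
        raise ValueError("no benchmark runs supplied")
    return sum(run_results) / len(run_results)

# Six hypothetical per-run results, in rays per second:
runs = [4_112_000, 4_098_500, 4_120_300, 4_105_900, 4_117_200, 4_101_600]
print(f"Average: {average_rays_per_sec(runs):,.0f} rays/s")
```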

The Corona benchmark website can be found at https://corona-renderer.com/benchmark

Corona 1.3 Benchmark

Corona is a fully multithreaded test, so the non-HT parts fall a little behind here. The Core i9-9900K blasts past the AMD 8-core parts with a 25% margin, and knocks on the door of the 12-core Threadripper.

Blender 2.79b: 3D Creation Suite

A high-profile rendering tool, Blender is open source, allowing for massive amounts of configurability, and is used by a number of high-profile animation studios worldwide. The organization recently released a Blender benchmark package, a couple of weeks after we had settled on our Blender test for the new suite; however, their test can take over an hour. For our results, we run one of the sub-tests in that suite through the command line - the standard 'bmw27' scene in CPU-only mode - and measure the time to complete the render.
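As a rough sketch of this methodology, the following times a render command by wall clock. The Blender flags shown in the comment are assumptions rather than a verified invocation - consult `blender --help` for the exact interface on your build:

```python
import subprocess
import time

def time_render(cmd):
    """Run a render command to completion and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical Blender invocation -- flag names are assumptions:
# elapsed = time_render([
#     "blender", "--background", "bmw27_cpu.blend",
#     "--render-frame", "1",
# ])
# print(f"Render completed in {elapsed:.1f} s")
```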

Blender can be downloaded at https://www.blender.org/download/

Blender 2.79b bmw27_cpu Benchmark

Blender has an eclectic mix of requirements, from memory bandwidth to raw compute, but as with Corona the processors without HT fall a bit behind here. The high frequency of the 9900K pushes it above the 10-core Skylake-X part and AMD's 2700X, but it remains behind the 1920X.

LuxMark v3.1: LuxRender via Different Code Paths

As stated at the top, there are many different ways to process rendering data: CPU, GPU, accelerator, and others. On top of that, there are many frameworks and APIs in which to program, depending on how the software will be used. LuxMark, a benchmark developed using the LuxRender engine, offers several different scenes and APIs.


Taken from the Linux Version of LuxMark

In our test, we run the simple 'Ball' scene on both the C++ and OpenCL code paths, but in CPU mode. The scene starts with a rough render and slowly improves in quality over two minutes, giving a final result that is essentially an average 'kilorays per second'.
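The final figure reduces to a simple rate calculation; a minimal sketch below, with illustrative numbers rather than real LuxMark output:

```python
def kilorays_per_sec(total_rays, elapsed_sec):
    """Convert a total ray count over an elapsed time into kilorays per second."""
    return total_rays / elapsed_sec / 1_000.0

# Illustrative only: 24 billion rays traced over the two-minute (120 s) run.
print(kilorays_per_sec(24_000_000_000, 120.0))  # → 200000.0
```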

LuxMark v3.1 C++
LuxMark v3.1 OpenCL

POV-Ray 3.7.1: Ray Tracing

The Persistence of Vision ray tracing engine is another well-known benchmarking tool, one that was in a state of relative hibernation until AMD released its Zen processors, after which both Intel and AMD suddenly began submitting code to the main branch of the open-source project. For our test, we use the built-in all-cores benchmark, called from the command line.

POV-Ray can be downloaded from http://www.povray.org/

POV-Ray 3.7.1 Benchmark


275 Comments


  • Targon - Friday, October 19, 2018 - link

    TSMC will do the job for AMD, and in March/April we should be seeing AMD release the 3700X and/or 3800X, which will hit the same clock speeds as the 9900K, but with better IPC.
  • BurntMyBacon - Friday, October 19, 2018 - link

    I am certainly happy that AMD regained competitiveness. I grabbed an R7 1700X early on for thread heavy tasks while retaining use of my i7-6700K in a gaming PC. That said, I can't credit them with everything good that comes out of Intel. To say that Intel would not have released an 8 core processor without AMD is probably inaccurate. They haven't released a new architecture since Skylake and they are still on a 14nm class process. They had to come up with some reason for customers to buy new processors rather than sit on older models. Clock speeds kinda worked for Kaby Lake, but they need more for Coffee Lake. Small, fixed function add-ons that only affect a small portion of the market probably weren't enough. A six core chip on the mainstream platform may have been inevitable. Going yet another round without a major architecture update or new process node, it is entirely possible that the 8-core processor on the mainstream platform was also inevitable. I give AMD credit for speeding up the release schedule, though.

    As to claims that the GF manufacturing is responsible for the entire 1GHz+ frequency deficit, that is only partially true. It is very likely that some inferior characteristics of the node are reducing the potential maximum frequency achievable. However, much of the limitations on frequency also depends on how AMD laid out the nodes. More capacitance on a node makes switching slower. More logic between flip-flops requires more switches to resolve before the final result is presented to the flip-flops. There is a trade-off between the number of buffers you can put on a transmission line as reducing input to output capacitance ratios will speed up individual switch speeds, but they will also increase the number of switches that need to occur. Adding more flip-flops increases the depth of the pipeline (think Pentium 4) and increases the penalty for branch misses as well as making clock distribution more complicated. These are just a few of the most basic design considerations that can affect maximum attainable frequency that AMD can control.

    Consequently, there is no guarantee that AMD will be able to match Intel's clock speeds even on TSMC's 7nm process. Also, given that AMD's current IPC is more similar to Haswell and still behind Skylake, it is not certain that their next processors will have better IPC than Intel either. I very much hope one or the other ends up true, but unrealistic expectations won't help the situation. I'd rather be pleasantly surprised than disappointed. As such, I expect that AMD will remain competitive. I expect that they will close the gaming performance gap until Intel releases a new architecture. Regardless of how AMD's 7nm processors stack up against Intel's best performance-wise, I expect that AMD will likely bring better value at least until Intel gets their 10nm node fully online.
  • Spunjji - Monday, October 22, 2018 - link

    "To say that Intel would not have released an 8 core processor without AMD is probably inaccurate."
    It's technically inaccurate to say they would have never made any kind of 8-core processor, sure, but nobody's saying that. That's a straw man. What they are saying is that Intel showed no signs whatsoever of being willing to do it until Ryzen landed at their doorstep.

    To be clear, the evidence is years of Intel making physically smaller and smaller quad-core chips for the mainstream market and pocketing the profit margins, followed by a sudden and hastily-rescheduled grab for the "HEDT" desktop market the second Ryzen came out, followed by a rapid succession of "new" CPU lines with ever-increasing core counts.

    You're also wrong about AMD's IPC, which is very clearly ahead of Haswell. The evidence is here in this very article where you can see the difference in performance between AMD and Intel is mostly a function of the clock speeds they attain. Ryzen was already above Haswell for the 1000 series (more like Broadwell) and the 2000 series brought surprisingly significant steps.
  • khanikun - Tuesday, October 23, 2018 - link

    " What they are saying is that Intel showed no signs whatsoever of being willing to do it until Ryzen landed at their doorstep."

    Intel released an 8 core what? 3 years before Ryzen. Sure, it was one of their super expensive Extreme procs, but they still did it. They were slowly ramping up cores for the HEDT market, while slowly bringing them to more normal consumer prices. 3 years before Ryzen, you could get a 6 core i7 for $400 or less. A year before that it was like $550-600. A 1-2 years before that, a 6 core would be $1000+. 8 cores were slowly coming.

    What Ryzen did was speed up Intel's timeframe. They would have come eventually, and at a price point that normal consumers would be purchasing them at. If I had to guess, we're probably 2-3 years ahead of what Intel probably wanted to do.

    Now would Ryzen exist, if not for Intel? Core for core, AMD has nothing that can compete with Intel. So...ramp up the core count. We really don't see Intel going away from a unified die design, so that's the best way AMD has to fight Intel. I'm personally surprised AMD didn't push their MCM design years ago. Maybe they didn't want to cannibalize Opteron sales, bad yields, I don't know. Must have been some reason.
  • Cooe - Friday, October 19, 2018 - link

    Rofl, delusional poster is delusional. And anyone who bought a 2700X sure as shit doesn't need to do anything to "defend their purchase" to themselves hahaha.
  • evernessince - Saturday, October 20, 2018 - link

    Got on my level newb. The 9900K is a pittance compared to my Xeon 8176. I hope you realized that was sarcasm and how stupid it is to put people down for wanting value.
  • JoeyJoJo123 - Friday, October 19, 2018 - link

    >I think far too much emphasis has been placed on 'value'.

    Then buy the most expensive thing. There's no real need to read reviews at that point either. You just want the best, money is no object to you, and you don't care, cool. Just go down the line and put the most expensive part for each part of the PC build as you browse through Newegg/Amazon/whatever, and you'll have the best of the best.

    For everyone else, where money is a fixed and limited resource, reading reviews MATTERS because we can't afford to buy into something that doesn't perform adequately for the cost investment.

    So yes, Anandtech, keep making reviews to be value-oriented. The fools will be departed with their money either way, value-oriented review or not.
  • Arbie - Friday, October 19, 2018 - link

    They'll be parted, yes - and we can hope for departed.
  • GreenReaper - Saturday, October 20, 2018 - link

    Don't be *too* harsh. They're paying the premium to cover lower-level chips which may be barely making back the cost of manufacturing, thus making them a good deal. (Of course, that also helps preserve the monopoly/duopoly by making it harder for others to break in...)
  • Spunjji - Monday, October 22, 2018 - link

    Yeah, to be honest the negatives of idiots buying overpriced "prestige" products tend to outweigh the "trickle down" positives for everyone else. See the product history of nVidia for the past 5 years for reference :/
