Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out of the hardware. Rendering programs can also end up being heavily memory dependent - when you have that many threads in flight with a ton of data, low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
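
To see why a higher-is-better score is easier to reason about, here is a minimal sketch of the time-to-throughput conversion; the total ray count is a hypothetical figure for illustration, not the benchmark's actual number.

```python
# Convert a Corona render time into a rays-per-second score.
# TOTAL_RAYS is hypothetical; the real benchmark reports its own
# ray count for the fixed scene rendered six times.
TOTAL_RAYS = 12_000_000_000

def rays_per_second(render_time_s: float) -> float:
    """Higher is better: the same rays cast in less time score more."""
    return TOTAL_RAYS / render_time_s

print(f"{rays_per_second(100.0):,.0f} rays/s")  # 120,000,000 rays/s
print(f"{rays_per_second(200.0):,.0f} rays/s")  # 60,000,000 rays/s
```

Expressed as time, halving the render duration can look like a small shift on a chart dominated by slow outliers; expressed as rays per second, it reads directly as "twice as fast".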

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, both AMD and Intel work actively to help improve its codebase, for better or worse on their own (and each other's) microarchitectures.
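
In essence, the measurement amounts to timing a headless render of frame one. A minimal sketch of that, assuming a Blender binary on the PATH and a hypothetical scene file name:

```python
import subprocess
import time

BLENDER = "blender"        # assumes the nightly build is on the PATH
SCENE = "benchmark.blend"  # hypothetical name for the wrapped workload

start = time.perf_counter()
# -b runs Blender headless (no GUI); -f 1 renders only frame 1.
subprocess.run([BLENDER, "-b", SCENE, "-f", "1"], check=True)
print(f"First frame rendered in {time.perf_counter() - start:.1f} s")
```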

Rendering: Blender 2.78

LuxMark v3.1: Link

As a synthetic benchmark, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each mode for cores and IPC, we also get to see the difference in performance moving from a C++-based code stack to an OpenCL one with a CPU as the main host.
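
LuxMark handles device selection itself, but for readers curious how a CPU shows up as an OpenCL compute target, here is a short sketch using the pyopencl package (our choice for illustration; it is not part of LuxMark) to list CPU devices:

```python
import pyopencl as cl  # assumes pyopencl and an OpenCL CPU runtime installed

# In LuxMark's OpenCL mode the CPU is both the host and the compute
# device. This lists every CPU device the installed runtimes expose:
for platform in cl.get_platforms():
    for device in platform.get_devices():
        if device.type & cl.device_type.CPU:
            print(f"{platform.name}: {device.name} "
                  f"({device.max_compute_units} compute units)")
```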

Rendering: LuxMark CPU C++

POV-Ray 3.7.1b4: link

Another regular in most benchmark suites, POV-Ray is a ray tracer that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the codebase became active again, with developers making changes and pushing out updates. Our version and benchmarking were locked in just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.
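
For anyone wanting to run a similar test at home, POV-Ray's Unix builds include a built-in benchmark mode; a rough sketch of timing it from Python (assuming a povray binary on the PATH) might look like this:

```python
import subprocess
import time

# POV-Ray 3.7's -benchmark switch renders its standard built-in
# scene at fixed settings. (Assumes the Unix-style "povray" binary;
# Windows builds are packaged and invoked differently.)
start = time.perf_counter()
subprocess.run(["povray", "-benchmark"], check=True)
print(f"Benchmark finished in {time.perf_counter() - start:.1f} s")
```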

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of CineBench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.
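
The split between the two tests is essentially Amdahl's law at work. As a quick illustration (the parallel fractions below are made-up numbers, not measurements of CineBench itself):

```python
# Amdahl's law: the speedup from n cores when a fraction p of the
# work parallelizes. The serial remainder (1 - p) caps MT scaling,
# which is why IPC and frequency still decide the ST test.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

for p in (0.99, 0.90):  # illustrative parallel fractions only
    print(f"p = {p}: 18 cores -> {amdahl_speedup(p, 18):.1f}x")
# p = 0.99: 18 cores -> 15.4x
# p = 0.9: 18 cores -> 6.7x
```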

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Comments

  • ddriver - Monday, September 25, 2017 - link

    You are living in a world of mainstream TV functional BS.

    Quantum computing will never replace computers as we know and use them. QC is very good at a very few tasks that classical computers are notoriously bad at. The same goes vice versa - QC sucks at regular computing tasks.

    Which is OK, because we already have enough single-thread performance. And all the truly demanding tasks that need more performance due to their time-consuming nature scale very well, often perfectly, with the addition of cores, or even nodes in a cluster.

    There might be some wiggle room in terms of process and material, but I am not overly optimistic seeing how we are already hitting the limits on silicon and there is no actual progress made on superior alternatives. Are they like gonna wait until they hit the wall to make something happen?

    At any rate, in 30 years, we'd be far more concerned with surviving war, drought and starvation than with computing. A problem that "solves itself" ;)
  • SharpEars - Monday, September 25, 2017 - link

    You are absolutely correct regarding quantum computing and it is photonic computing that we should be looking towards.
  • Notmyusualid - Monday, September 25, 2017 - link

    @ SharpEars

    Yes, as alluded to by IEEE. But I've not looked at it in a couple of years or so, and I think they were still struggling with an optical DRAM of sorts.
  • Gothmoth - Monday, September 25, 2017 - link

    And what have they done for the past 6 years?

    I am glad that I get more cores instead of 5-10% more performance per generation.
  • Krysto - Monday, September 25, 2017 - link

    They would if they could. Improvements in IPC have been negligible since Ivy Bridge.
  • kuruk - Monday, September 25, 2017 - link

    Can you add Monero (CryptoNight) performance? Since CryptoNight requires at least 2 MB of L3 cache per core for best performance, it would be nice to see how these compare to Threadripper.
  • evilpaul666 - Monday, September 25, 2017 - link

    I'd really like it if Enthusiast ECC RAM were a thing.

    I used to always run ECC on Athlons back in the Pentium III/4 days. Now, with 32-128x more memory that's running 30x faster, it doesn't seem like it would be a bad thing to have...
  • someonesomewherelse - Saturday, October 14, 2017 - link

    It is. Buy AMD.
  • IGTrading - Monday, September 25, 2017 - link

    I think we're being too kind to Intel.

    Despite the article clearly mentioning it in a proper and professional way, the calm tone of the conclusion seems to legitimize, and make acceptable, the fact that Intel basically deceives its customers and ships a CPU that consumes almost 16% more power than its stated TDP.

    THIS IS UNACCEPTABLE and UNPROFESSIONAL from Intel.

    I'm not "shouting" this :) , but I'm trying to underline this fact by putting it in caps.

    People could burn out their systems if they design workstations around cooling solutions rated for a 165 W TDP.

    If AMD had done anything remotely similar, we would have seen titles like "AMD's CPU can fry eggs / system killer / motherboard breaker" and so on ...

    On the other hand, when Intel does this, it is silently, calmly and professionally deemed acceptable.

    It is my view that such a thing is not acceptable and these products should be banned from the market UNTIL Intel corrects its documentation or the power consumption.

    The i9-7960X fits perfectly within its 165 W TDP; how come the i9-7980XE is allowed to run wild and consume 16% more?!

    This is similar to the way people accepted every crappy design and driver fail from nVIDIA, even DEAD GPUs, while complaining about AMD's "bad drivers" that never destroyed a video card like nVIDIA did. See link: https://www.youtube.com/watch?v=dE-YM_3YBm0

    This is not cutting Intel "some slack"; this is accepting shit, lies and mockery, and paying 2000 USD for it.

    For $2000 I expect the CPU to run like a Bentley for life, not like a modded Mustang that will blow up if you expect it to work as reliably as a stock model.
  • whatevs - Monday, September 25, 2017 - link

    What a load of ignorance. Intel TDP is *average* power at *base* clocks; the chip uses more power at all-core turbo clocks here. Disable turbo if that's too much power for you.
