Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out of the hardware. Rendering programs can also end up heavily memory dependent - with that many threads in flight, each working through a ton of data, low-latency memory can be key to overall throughput. Here we run a few of the usual rendering packages under Windows 10, as well as a few newer, more interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of both time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better are easier to explain anyway). Corona likes to pile on the threads, so the results end up clearly stratified by thread count.
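
As a back-of-the-envelope illustration of why a rays-per-second score is easier to reason about than a raw completion time, here is a minimal sketch of the conversion; the total ray count below is a made-up placeholder, not a figure from the Corona benchmark itself.

    # Convert a benchmark completion time into a rays-per-second score.
    # TOTAL_RAYS is a hypothetical placeholder; the real Corona benchmark
    # reports its own ray count for the fixed scene.
    TOTAL_RAYS = 5_000_000_000  # assumed ray count for the fixed scene

    def rays_per_second(render_time_s: float) -> float:
        """Higher is better, and scales intuitively with thread count."""
        return TOTAL_RAYS / render_time_s

    # A CPU that finishes in half the time posts double the score.
    print(rays_per_second(100.0))  # 50,000,000 rays/s
    print(rays_per_second(50.0))   # 100,000,000 rays/s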

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We wrapped a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. As one of the bigger open source tools out there, it has both AMD and Intel actively working to improve the codebase, for better or worse for their own (and each other's) microarchitectures.
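
For reference, a first-frame render like this can be timed from the command line in a headless Blender session; the sketch below assumes a blender binary on the PATH and uses a hypothetical scene file name, not our exact workload.

    # Time a single-frame render in a background (headless) Blender session.
    # 'scene.blend' is a hypothetical stand-in for the actual workload file.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(
        ["blender", "--background", "scene.blend", "--render-frame", "1"],
        check=True,
    )
    print(f"First frame rendered in {time.perf_counter() - start:.1f} s")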

Rendering: Blender 2.78

LuxMark v3.1: link

As a synthetic, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it is mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each mode for cores and IPC, we also get to see the difference in performance moving from a C++-based code stack to an OpenCL one with a CPU as the main host.
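
As a rough illustration of what "OpenCL with a CPU as the main host" involves, the sketch below enumerates CPU OpenCL devices; it assumes the pyopencl package and a vendor CPU OpenCL runtime are installed, neither of which LuxMark itself asks you to set up.

    # List OpenCL platforms and any CPU devices they expose.
    # Requires the pyopencl package and a CPU OpenCL runtime.
    import pyopencl as cl

    for platform in cl.get_platforms():
        try:
            cpus = platform.get_devices(device_type=cl.device_type.CPU)
        except cl.RuntimeError:
            continue  # this platform exposes no CPU devices
        for dev in cpus:
            print(f"{platform.name}: {dev.name}, "
                  f"{dev.max_compute_units} compute units")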

Rendering: LuxMark CPU C++

POV-Ray 3.7.1b4: link

A regular fixture in most benchmark suites, POV-Ray is another ray tracer, and one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmarking were settled just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.
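
POV-Ray ships with a standard internal benchmark scene, and a run can be timed along these lines; this is a sketch assuming a Unix build on the PATH, where the --benchmark switch is documented (the Windows build we test under is normally driven through its GUI instead).

    # Run POV-Ray's built-in benchmark scene and time the whole run.
    # Assumes a Unix build of POV-Ray that supports the --benchmark switch.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["povray", "--benchmark"], check=True)
    print(f"POV-Ray benchmark completed in {time.perf_counter() - start:.1f} s")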

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.
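
One way to relate the two results is an MT/ST scaling ratio, which shows how close a chip gets to ideal scaling across its cores; the sketch below uses invented scores purely to show the arithmetic.

    # Compute the multi-thread scaling ratio from Cinebench-style scores.
    # The scores and core counts below are invented placeholders,
    # not measured results.
    st_score = 190.0   # single-thread score (hypothetical)
    mt_score = 3300.0  # multi-thread score (hypothetical)
    cores, threads = 18, 36

    scaling = mt_score / st_score
    print(f"MT/ST scaling: {scaling:.1f}x across {threads} threads")
    print(f"Per-core efficiency vs. ideal: {scaling / cores:.0%}")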

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded
