Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out of the hardware. Rendering programs can also end up being heavily memory dependent: with that many threads in flight, each chewing through a lot of data, low-latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few newer, interesting benchmarks.

Corona 1.3

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and reports results in terms of both time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better are easier to explain anyway). Corona likes to pile on the threads, so the results end up heavily staggered by thread count.
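As a quick illustration of why a rays-per-second figure is easier to compare than a raw time, here is a minimal sketch of the conversion; the total-ray count below is an assumed placeholder rather than the benchmark's actual internal number.

# Minimal sketch: turning a render time into a rays-per-second figure.
# 'workload_rays' is an assumed placeholder, not Corona's real internal count;
# the point is only that time and rays/sec are reciprocal views of one result.

def rays_per_second(total_rays: float, render_time_s: float) -> float:
    """Higher is better, which makes chart comparisons easier to read."""
    return total_rays / render_time_s

workload_rays = 5_000_000_000  # hypothetical total rays for the fixed scene
for cpu, seconds in [("CPU A", 120.0), ("CPU B", 95.0)]:
    print(f"{cpu}: {rays_per_second(workload_rays, seconds) / 1e6:.1f} Mrays/s")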

Rendering: Corona Photorealism

Blender 2.78

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We wrapped a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, it means both AMD and Intel actively work to improve the codebase, for better or worse for their own (and each other's) microarchitectures.
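For readers who want to replicate this style of test at home, the sketch below shows one way to time a single-frame headless render; the scene filename is a placeholder for whatever workload you use, and it assumes the blender executable is on the PATH.

# Minimal sketch of timing a single-frame, headless Blender render.
# Assumes "blender" is on the PATH; "scene.blend" is a placeholder workload,
# not the exact scene used in this review.
import subprocess
import time

scene = "scene.blend"
start = time.perf_counter()
subprocess.run(
    ["blender", "--background", scene, "--render-frame", "1"],
    check=True,  # raise if the render fails
)
print(f"First frame rendered in {time.perf_counter() - start:.1f} s")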

Rendering: Blender 2.78

LuxMark

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it is mainly used to test GPUs, but it offers both an OpenCL mode and a standard C++ mode. In this instance, aside from comparing cores and IPC in each mode, we also get to see the difference in performance when moving from a C++ code path to an OpenCL one with the CPU as the main host.
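As an illustration of what "OpenCL with the CPU as the host device" means in practice, here is a short sketch (not LuxMark's own code) that enumerates OpenCL CPU devices and builds a context on one; it assumes the pyopencl package and an OpenCL runtime with a CPU driver are installed.

# Illustrative sketch (not LuxMark code): selecting the CPU as an OpenCL device,
# which is what an OpenCL-on-CPU render mode relies on.
import pyopencl as cl

cpu_devices = []
for platform in cl.get_platforms():
    try:
        cpu_devices += platform.get_devices(device_type=cl.device_type.CPU)
    except cl.RuntimeError:
        pass  # this platform exposes no CPU device

if cpu_devices:
    ctx = cl.Context(devices=[cpu_devices[0]])
    print("OpenCL CPU device:", cpu_devices[0].name)
else:
    print("No OpenCL CPU device found; only the C++ path would be usable.")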

Rendering: LuxMark CPU C++

POV-Ray 3.7b3

A regular in most benchmark suites, POV-Ray is another ray tracer, and one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the codebase became active again, with developers making changes and pushing out updates. We settled on our version and benchmark just before that activity picked up, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15

The latest version of Cinebench has become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency deliver performance in the single-threaded test, whereas good scaling across many cores is what wins the multi-threaded test.
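As a back-of-envelope way to think about how the two results relate, the multi-threaded score is roughly the single-threaded score multiplied by core count and a scaling-efficiency factor; the numbers in this sketch are illustrative placeholders, not measured Cinebench results.

# Rough model of ST vs MT scaling. All numbers are illustrative placeholders,
# not measured Cinebench scores.

def estimated_mt_score(st_score: float, cores: int, efficiency: float) -> float:
    """efficiency < 1.0 captures shared-cache, memory, and turbo-bin losses."""
    return st_score * cores * efficiency

# A high-frequency quad-core vs a lower-clocked many-core part.
print(estimated_mt_score(st_score=190, cores=4, efficiency=0.95))   # ~722
print(estimated_mt_score(st_score=160, cores=16, efficiency=0.85))  # ~2176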

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

 


264 Comments


  • Flunk - Monday, June 19, 2017 - link

    I'm surprised by how well the $249 Ryzen 5 1600X holds up in those benchmarks. Seems like the processor to go for, for the majority of people. It should keep up in games for years to come. Yes, the top-end stuff is great and all, but it's a < 1% product.
  • prisonerX - Monday, June 19, 2017 - link

    Value for money seems to take a back seat to bragging rights for some people. Makes them look silly I think, but they seem to think it makes them look good.
  • asendra - Monday, June 19, 2017 - link

    ?? In a professional setting, being 20-30% or whatever faster is well worth the $500-1000 extra. Sure, it may only make that render 5-10 min faster, but those gains sure add up over the course of a year.
    Gaining tens of hours of productivity over the course of a year sure is worth the extra $.
  • Sarah Terra - Monday, June 19, 2017 - link

    So does the power bill. You'll note the "superior" Intel parts have a much higher thermal rating.
  • ScottSoapbox - Monday, June 19, 2017 - link

    People spending $999 on a CPU alone aren't worried about an extra few dollars on their power bill.
  • Lolimaster - Tuesday, June 20, 2017 - link

    The thing is, AMD's Threadripper offers much more power for the same price or probably less; Intel is not an option for a workstation :D
  • Timoo - Saturday, July 1, 2017 - link

    ThreadRipper is not available yet, so it's not an option. Yes, Intel rushed the X299 platform to beat AMD, which makes it a "bad bet" in my opinion. But we simply cannot compare it to TR as of yet. Intel in a workstation is very much an option. Just not one I would take :-)
  • Integr8d - Tuesday, June 20, 2017 - link

    People spending $999 on a CPU to fill 1,000s of blades in a datacenter are definitely worried about a few dollars on their power bill...
  • jospoortvliet - Thursday, June 22, 2017 - link

    Sure, but this CPU is for workstations, not blades. Epyc and Xeon compete in that market.
  • melgross - Monday, June 19, 2017 - link

    Well, since one might expect to make at least tens of thousands on a single machine in a quarter, or more likely a month, for a real business, and considering depreciation, the extra costs are well worth it. In fact, they're negligible.
