Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out. Rendering programs can also end up being heavily memory-dependent: with that many threads in flight working on a ton of data, low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and reports results in terms of both time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is the better metric (in general, scores where higher is better tend to be easier to explain anyway). Corona likes to pile on the threads, so the results end up heavily staggered by thread count.
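The time-to-score relationship is simple enough to sketch: for a fixed amount of ray-tracing work, the rays-per-second score is just the workload divided by the render time. A minimal sketch, where the total ray count is an illustrative assumption rather than Corona's actual figure:

```python
# Convert a fixed-workload render time into a higher-is-better score.
# The total ray count is an illustrative assumption, not Corona's real figure.

def rays_per_second(total_rays: int, render_time_s: float) -> float:
    """Score a fixed amount of ray-tracing work: more rays per second is better."""
    if render_time_s <= 0:
        raise ValueError("render time must be positive")
    return total_rays / render_time_s

# Halving the render time doubles the score:
fast = rays_per_second(3_000_000_000, 60.0)
slow = rays_per_second(3_000_000_000, 120.0)
print(fast, slow)
```

This is also part of why higher-is-better suits a benchmark like this: doubling performance doubles the score, rather than halving a time.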

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We wrapped a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. As one of the bigger open-source tools out there, Blender sees both AMD and Intel actively working to improve the codebase, for better or worse on their own (and each other's) microarchitectures.
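A first-frame timing like this can be harnessed with a simple wall-clock wrapper. The sketch below times an arbitrary command; the Blender invocation shown in the comment uses Blender's background (`-b`) and render-frame (`-f`) flags with a placeholder scene file, not our actual workload:

```python
import subprocess
import sys
import time

def time_command(cmd: list[str]) -> float:
    """Run a command to completion and return wall-clock seconds elapsed."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# The real benchmark would time Blender rendering frame 1 in background mode,
# e.g. time_command(["blender", "-b", "scene.blend", "-f", "1"]),
# where 'scene.blend' stands in for the workload file.
# Demonstrate the harness on a trivial command instead:
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"elapsed: {elapsed:.3f}s")
```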

Rendering: Blender 2.78

LuxMark v3.1: link

As a synthetic, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it is mainly used to test GPUs, but it offers both an OpenCL mode and a standard C++ mode. In this instance, aside from comparing cores and IPC in each mode, we also get to see the performance difference in moving from a C++-based code stack to an OpenCL one with the CPU as the main host.

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4: link

A regular in most benchmark suites, POV-Ray is another ray tracer, one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the codebase became active again, with developers making changes and pushing out updates. Our version and benchmarking started just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency deliver performance in the ST test, whereas good scaling and many cores are what win out in the MT test.

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded
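One way to read the two Cinebench numbers together is as a scaling ratio: MT divided by ST gives how many 'single-thread equivalents' a chip delivers, and dividing again by the thread count gives a scaling efficiency. A minimal sketch, using made-up scores rather than measured results:

```python
# Estimate multi-thread scaling from Cinebench-style ST and MT scores.
# The scores below are made-up illustrative numbers, not measured results.

def scaling_ratio(mt_score: float, st_score: float) -> float:
    """MT/ST: how many 'single-thread equivalents' the chip delivers."""
    return mt_score / st_score

def scaling_efficiency(mt_score: float, st_score: float, threads: int) -> float:
    """Fraction of ideal linear scaling achieved across all threads."""
    return scaling_ratio(mt_score, st_score) / threads

# Hypothetical 8-thread part scoring 800 MT against 160 ST:
ratio = scaling_ratio(800, 160)        # 5.0x one thread
eff = scaling_efficiency(800, 160, 8)  # 62.5% of ideal
print(ratio, eff)
```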

Conclusions on Rendering: It is clear from these graphs that most rendering tools want full physical cores, rather than extra SMT threads, to get the best performance. The exception is Cinebench.
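The cores-versus-threads observation can be framed with Amdahl's law: if an extra SMT thread contributes only a fraction of a physical core's throughput, it moves the speedup curve far less than an extra core does. A sketch, where the 95% parallel fraction and the ~0.3-core SMT yield are illustrative assumptions, not measured values:

```python
# Amdahl's-law sketch: why extra SMT threads add less than extra cores.
# The parallel fraction and per-thread SMT yield are illustrative assumptions.

def amdahl_speedup(parallel_fraction: float, n: float) -> float:
    """Ideal speedup over one core when a fraction of the work
    parallelizes across n core-equivalents."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# A renderer that is 95% parallel on 4 physical cores:
cores4 = amdahl_speedup(0.95, 4.0)
# The same chip with SMT, modeling each extra thread as ~0.3 of a core:
cores4_smt = amdahl_speedup(0.95, 4.0 + 4 * 0.3)
# Four more physical cores instead:
cores8 = amdahl_speedup(0.95, 8.0)
assert cores4 < cores4_smt < cores8
print(cores4, cores4_smt, cores8)
```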


178 Comments


  • coolhardware - Monday, February 12, 2018 - link

    Thank you for the recommendation!!! :-)
  • kaidenshi - Tuesday, February 13, 2018 - link

    I'm using the ASRock AB350M Pro4 with a Ryzen 3 1300X, 16GB Crucial Ballistix 2400MHz DDR4 memory, and a GTX 1060 SC. It's been a rock solid board so far, and it has two PCI-E storage slots (one is NVMe, the other is SATA) so you can use it comfortably in a case with limited storage options.

    I was nervous about it after I read some reviews on Newegg talking about stability issues, but it turned out pretty much all of those people were trying to overclock it far beyond its rated capabilities. It's perfectly stable if you don't try to burn it up on purpose.
  • Samus - Monday, February 12, 2018 - link

    Seriously. It's now obvious why Intel is using AMD graphics. Considering that it's mostly on par (sometimes faster, sometimes slower) with a GT 1030, a $100 GPU that alone uses 30 watts, Intel made the right choice using Vega.
  • Flunk - Monday, February 12, 2018 - link

    Wow, those are some impressive numbers for the price point (either of them). I think the R5 2400G would cover the vast majority of users' CPU and GPU needs to the point where they wouldn't notice a difference from anything more expensive. Anyone short of a power user or hardcore gamer could buy one of these and feel like they'd bought a real high-end system, with a $169.99 CPU. That's real value. I kinda want one to play around with; I don't know how I'll justify that to myself... Maybe I'll give it to my father next Christmas.
  • jjj - Monday, February 12, 2018 - link

    Was hoping to see GPU OC performance and power; it won't scale great unless the memory controller can take faster sticks (than Summit Ridge), but we still need to figure it all out.
  • iter - Monday, February 12, 2018 - link

    Most other sites' reviews feature overclocking and power.
  • Ian Cutress - Monday, February 12, 2018 - link

    I started an initial run with higher-speed memory, but nothing substantial enough to put in the article just yet. I'm planning some follow-ups.
  • jjj - Monday, February 12, 2018 - link

    Looking forward to all of that.

    Anyway, they do deliver here for folks that can't afford discrete or got other reasons to go with integrated. Even the 2400G is ok if one needs 8 threads.
  • Kamgusta - Monday, February 12, 2018 - link

    Where is the i5-8400 that has the same price as the 2400G?
    Oh, yeah, they totally left it out from the benchmarks since it would have proved an absolute supremacy of the Intel offering.
    Ops.
  • speely - Monday, February 12, 2018 - link

    "Where is the i5-8400 that has the same price as the 2400G?
    Oh, yeah, they totally left it out from the benchmarks since it would have proved an absolute supremacy of the Intel offering.
    Ops."

    In which benchmarks do you expect to see the i5-8400 prove its "absolute supremacy" where the i5-7400 didn't? Seriously, I'd like to know.

    Because what I see is either the i5-7400 beating the 2400G or going punch to punch with it, or being thoroughly decimated by it.

    If the i5-7400 beats or competes with the 2400G, the i5-8400 refresh chip will do the same. If the i5-7400 gets trounced by the 2400G, the i5-8400 refresh chip isn't suddenly and magically going to beat it.

    I fail to see anything in the article to indicate a pro-AMD bias on AT's part, either intentional or unintentional.

    What I do see is a fanboy who's upset to see his team losing some benchmarks.
