Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out. Rendering programs can also end up being heavily memory-dependent: with that many threads in flight, each working on its own chunk of data, low-latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
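Since Corona reports both a render time and a rays-per-second score, the two are related by a simple division over the fixed scene. A minimal sketch of the conversion, using a made-up total ray count (Corona's real figure isn't stated here):

```python
# Hypothetical illustration: turning a Corona render time into a
# rays-per-second score. TOTAL_RAYS is an assumed constant, not
# Corona's actual ray count for the benchmark scene.
TOTAL_RAYS = 5_000_000_000

def rays_per_second(render_time_s: float) -> float:
    """Higher is better: a faster render yields more rays per second."""
    return TOTAL_RAYS / render_time_s

# A chip finishing in 100 s scores exactly twice a chip taking 200 s.
fast = rays_per_second(100.0)
slow = rays_per_second(200.0)
print(fast / slow)  # → 2.0
```

This is why a rays-per-second chart reads more naturally than a time chart: doubling the score means doubling the throughput, with no mental inversion needed.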

Rendering: Corona Photorealism

More threads win the day, although the Core i7 does knock at the door of the Ryzen 5 (presumably with $110 in hand as well). It is worth noting that the Core i5-7640X and the older Core i7-2600K are on equal terms.

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, it means both AMD and Intel actively work to help improve the codebase, for better or worse on their own (and each other's) microarchitectures.
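Measuring "time to render the first frame" can be done by timing Blender's headless mode from a small wrapper script. A minimal sketch, assuming `blender` is on the PATH; the .blend file name is a placeholder, not the actual workload used here:

```python
import subprocess
import time

def timed_run(cmd: list[str]) -> float:
    """Run a command to completion and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)  # raises CalledProcessError if the run fails
    return time.perf_counter() - start

# For Blender, '-b' runs without the GUI and '-f 1' renders frame 1 of the scene.
# The file name below is a placeholder:
# seconds = timed_run(["blender", "-b", "scene.blend", "-f", "1"])
```

Wall-clock time is the right measure here, since it captures the end-to-end render regardless of how many threads Blender spreads the tiles across.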

Rendering: Blender 2.78

Similar to Corona, more threads means a faster time.

LuxMark v3.1: Link

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++ based code-stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

LuxMark is more thread- and cache-dependent, and so the Core i7 nips at the heels of AMD parts that have double its thread count. The Core i5 sits behind the Ryzen 5 parts though, due to the 1:3 thread difference.

POV-Ray 3.7.1b4: link

Another regular benchmark in most suites, POV-Ray is a ray tracer that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmarking started just before that was happening, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Mirror Mirror on the wall...

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.
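The ST/MT split can be sketched with Amdahl's law, which bounds multi-threaded speedup by the fraction of the workload that actually parallelizes. The parallel fraction below is an illustrative assumption, not a measured property of Cinebench:

```python
def amdahl_speedup(parallel_fraction: float, threads: int) -> float:
    """Upper bound on multi-thread speedup per Amdahl's law:
    the serial part never gets faster, only the parallel part divides."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / threads)

# A renderer that is 95% parallel on a hypothetical 16-thread chip:
print(round(amdahl_speedup(0.95, 16), 2))  # → 9.14, well short of 16x
```

Even a nearly ideal renderer leaves some speedup on the table, which is why MT results compress at high thread counts while ST results track IPC and frequency directly.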

Rendering: CineBench 15 SingleThreaded

Rendering: CineBench 15 MultiThreaded

Cinebench gives us single-threaded numbers, and it is clear who rules the roost, almost scoring 200. The Core i7-2600K, due to its lack of support for newer instructions, sits in the corner.

Comments

  • mapesdhs - Monday, July 24, 2017 - link

    2700K, +1.5GHz every time.
  • shabby - Monday, July 24, 2017 - link

    So much for upgrading from a KBL-X to SKL-X when the motherboard could fry the CPU, nice going Intel.
  • Nashiii - Monday, July 24, 2017 - link

    Nice article Ian. What I will say is I am a little confused around this comment:

    "Intel wins for the IO and chipset, offering 24 PCIe 3.0 lanes for USB 3.1/SATA/Ethernet/storage, while AMD is limited on that front, having 8 PCIe 2.0 from the chipset."

    You forgot to mention the AMD total PCI-E IO. It has 24 PCI-E 3.0 lanes, with 4x PCI-E 3.0 going to the chipset, which can be set to 8x PCI-E 2.0 if 5 Gbps is enough per lane, i.e. in the case of USB 3.0.

    I have read that Kabylake-X only has 16 PCI-E 3.0 lanes native. Not sure about PCH support though...
  • KAlmquist - Monday, July 24, 2017 - link

    With Kabylake-X, the only I/O that doesn't go through the chipset is the 16 PCI-E 3.0 lanes you mention. With Ryzen, in addition to what is provided by the chipset, the CPU provides

    1) Four USB 3.1 connections
    2) Two SATA connections
    3) 18 PCI-E 3.0 lanes, or 20 lanes if you don't use the SATA connections

    So if you just look at the CPU, Ryzen has more connectivity than Kabylake-X, but the X299 chipset used with Kabylake-X is much more capable (and expensive) than anything in the AMD lineup. Also, the X299 doesn't provide any USB 3.1 ports (or more precisely, 10 Gbps ports), so those are typically provided by a separate chip, adding to the cost of X299 motherboards.
  • Allan_Hundeboll - Monday, July 24, 2017 - link

    Interesting review with great benchmarks. (I don't understand why so many reviews only report average frames per second.)
    The Ryzen R5 1600 seems to offer great value for money, but I'm a bit puzzled why the slowest-clocked R5 beats the higher-clocked R7 in a lot of the 99th-percentile benchmarks. I'm guessing it's because the latency penalty when moving data from one core to another hits the higher-core-count R7 harder?
  • BenSkywalker - Monday, July 24, 2017 - link

    The gaming benchmarks are, uhm..... pretty useless.

    Third tier graphics cards as a starting point, why bother?

    Seems like an awful lot of wasted time. As a note you may want to consider- when testing a new graphics card you get the fastest CPU you can so we can see what the card is capable of, when testing a new CPU you get the fastest GPU you can so we can see what the CPU is capable of. The way the benches are constructed, pretty useless for those of us that want to know gaming performance.
  • Tetsuo1221 - Monday, July 24, 2017 - link

    Benchmarking at 1080p... enough said.. Completely and utterly redundant
  • Qasar - Tuesday, July 25, 2017 - link

    why is benchmarking @ 1080p Completely and utterly redundant ?????
  • meacupla - Tuesday, July 25, 2017 - link

    I don't know that guy's particulars, but, to me, using X299 to game at 1080p seems like a waste.
    If I was going to throw down that kind of money, I would want to game at 1440p or 4K
  • silverblue - Tuesday, July 25, 2017 - link

    Yes, but 1080p shifts the bottleneck towards the CPU.
