Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze out every last bit of performance. Sometimes rendering programs end up being heavily memory-dependent as well - when you have that many threads flying about with a ton of data, having low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
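
To illustrate why a higher-is-better figure is easier to compare, here is a minimal sketch in Python of the time-to-rays-per-second conversion, assuming the benchmark traces a fixed total number of rays; the ray count and the timings below are illustrative placeholders, not official Corona figures.

    # Minimal sketch: convert a fixed-workload render time into rays/sec.
    # TOTAL_RAYS is a hypothetical placeholder, not an official figure.
    TOTAL_RAYS = 6_000_000_000

    def rays_per_second(render_time_s: float) -> float:
        """Higher is better: a fixed workload divided by elapsed time."""
        return TOTAL_RAYS / render_time_s

    # A faster run (lower time) yields a higher, easier-to-compare score.
    for t in (120.0, 95.0, 60.0):
        print(f"{t:6.1f} s  ->  {rays_per_second(t) / 1e6:8.1f} Mrays/s")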

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, both AMD and Intel work actively to help improve the codebase, for better or for worse for their own (and each other's) microarchitectures.
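
For readers who want to reproduce a similar single-frame measurement, a hedged sketch follows; "scene.blend" is a placeholder for the workload file, and the blender binary is assumed to be on the PATH.

    # Hedged sketch: timing a single-frame render from the command line.
    # "scene.blend" is a placeholder for the standard workload file. -b
    # runs Blender in background mode; -f 1 renders frame 1.
    import subprocess
    import time

    SCENE = "scene.blend"  # placeholder path to the benchmark scene

    start = time.perf_counter()
    subprocess.run(["blender", "-b", SCENE, "-f", "1"], check=True)
    elapsed = time.perf_counter() - start
    print(f"First frame rendered in {elapsed:.1f} s (lower is better)")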

Rendering: Blender 2.78

LuxMark v3.1: Link

As a synthetic test, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance when moving from a C++-based code stack to an OpenCL one with a CPU as the main host.
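
As a minimal sketch of what running with a CPU as an OpenCL host device means in practice, the snippet below (not part of LuxMark; it assumes the pyopencl package and an OpenCL runtime with a CPU driver are installed) simply enumerates the CPUs exposed as OpenCL devices - the same silicon the C++ mode drives directly.

    # Minimal sketch: list every CPU exposed as an OpenCL device.
    # Assumes pyopencl plus a CPU-capable OpenCL runtime are installed.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            if dev.type & cl.device_type.CPU:
                print(f"{platform.name}: {dev.name} "
                      f"({dev.max_compute_units} compute units)")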

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4: link

A regular in most benchmark suites, POV-Ray is another ray-tracer, one that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmarking started just before that was happening, but given time we will see where the POV-Ray code ends up and adjust in due course.
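
For reference, POV-Ray ships with a standard built-in benchmark scene; a hedged sketch of timing it from Python follows, assuming a povray binary on the PATH that accepts the -benchmark switch (as the Unix builds of 3.7 do).

    # Hedged sketch: timing POV-Ray's standard built-in benchmark scene.
    # Assumes "povray" is on the PATH and supports -benchmark.
    import subprocess
    import time

    start = time.perf_counter()
    subprocess.run(["povray", "-benchmark"], check=True)
    print(f"Benchmark completed in {time.perf_counter() - start:.1f} s")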

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.
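
One quick way to read the two Cinebench scores together is the MT/ST ratio; the sketch below uses made-up placeholder scores purely to illustrate the arithmetic.

    # Illustrative arithmetic only - the scores are made-up placeholders,
    # not measured results. The MT/ST ratio shows how close a chip gets
    # to ideal linear scaling across its threads.
    st_score = 160.0  # hypothetical single-threaded score
    mt_score = 820.0  # hypothetical multi-threaded score
    threads = 8

    ratio = mt_score / st_score
    print(f"MT/ST scaling: {ratio:.2f}x across {threads} threads "
          f"({100 * ratio / threads:.0f}% of ideal linear scaling)")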

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Conclusions on Rendering: It is clear from these graphs that most rendering tools get their best performance from full physical cores, rather than from extra threads alone. The exception is Cinebench.


177 Comments


  • sonicmerlin - Tuesday, February 13, 2018 - link

    Now if only AMD had a competent GPU arch. The APU performance could be given a huge boost with Nvidia’s tech
  • dr.denton - Thursday, February 15, 2018 - link

    They do. It's called Vega. Very efficient in the mid- to low-range and in compute, and if I'm not mistaken that's where the money is. High-end gaming is just wi**ie waving for us geeks.
  • HStewart - Tuesday, February 13, 2018 - link

    Check out the performance of the up-and-coming i8809G with Vega graphics compared to Ryzen 7

    http://cpu.userbenchmark.com/Compare/Intel-Core-i7...

    Keep in mind this is a mobile chip - this new mobile chip is quite powerful - I'm thinking of actually getting one - only big concern is compatibility with the Vega chip.
  • haplo602 - Wednesday, February 14, 2018 - link

    the i8809G is a desktop chip, 100W TDP ....
  • hansmuff - Tuesday, February 13, 2018 - link

    Any idea where I could buy the MSI B350I Pro AC? I have searched every retailer I've ever bought from and cannot find the damn thing. I'm hoping it can run a 2400G out of the box, or at least let me update to the newest BIOS.
  • Dragonstongue - Tuesday, February 13, 2018 - link

    they REALLY should not have cut back the L3 cache SO MUCH...beyond that, they truly are amazing for what they are...they should have also made a higher TDP version such as 125-160W so they could cram in more CPU cores or at the very least a more substantial graphics portion, and not limit dGPU access to 8x PCIe (from what I have read)

    Graphics cards and memory are anything but low cost.

    2200 IMO is "fine" for what it is, the 2400 should have had at least 4MB L3 cache (or more), then there should have been an "enthusiast end" with the higher TDP versions so they could more or less ensure someone trying to do it "on a budget" really would not have to worry about getting anything less than (current) RX 570-580 or 1060-1070 level.

    many CPUs over the years (especially when overclocked) had a 140+W TDP, they could have and should have made many steps for their Raven Ridge and not limit them so much..IMO...they could have even had a Frankenstein-like one that has a 6-pin PCIe connector on it to feed more direct power to the chip instead of relying on the socket alone to provide all the power needed (at least more stable power)

    the AM4 socket has already been up to 8 cores 16 threads, and TR what, 16 cores 32 threads, which says to me the "chip size" has much more room available internally to have a bigger CPU portion and/or a far larger GPU portion. Now, if they go TR4 size, TR as it is already has 1/2 of it "not used", meaning they could "double up" the Vega cores in it to be a very much "enthusiast grade" APU. By skimping cost on the HBM memory and relying on the system memory, IMO there is a vast amount of potential performance they can capture, not to mention, properly designed, the cooling does not really become an issue (it has not in the past with massive TDP CPUs after all)

    anyways..really is very amazing how much potency they managed to stuff into Raven Ridge, they IMO should not have "purposefully limited it" especially on the L3 cache amount, 2MB is very limiting as far as I am concerned, especially when trying to feed 4 cores 8 threads at 65W TDP along with the GPU portion.

    Either they are asking a bit much for the 2400G, or they are asking enough that they just need to "tweak" a bit more quickly to make sure it is not bottlenecking itself for the $ they want for it ^.^

    either way, very well done....basically above Phenom II and into Core i7 level performance with 6870+ level graphics grunt using much less power...amazing job AMD...Keep it up.
  • SaturnusDK - Wednesday, February 14, 2018 - link

    Well done AMD. Well done.

    Both these APUs are extremely attractive. The R5 just screams upgradable. You get a very capable 4 core / 8 thread CPU packaged with an entry-level dGPU for less than the competition charges for the CPU (with abysmal iGPU) alone. In the current market with astronomical, even comical, dGPU prices this is a clear winner for anyone wanting to build a powerful mid-tier system who doesn't have the means to fork out ridiculous cash for a higher tier dGPU right now.

    The R3 screams HTPC or small gaming box. A good low-end CPU paired with a bare-bones but still decently performing iGPU. Add MB, RAM, PSU, and HDD/SSD and you're good to go. I imagine these will sell like hot cakes in markets with less overall GDP and in the brick'n'mortar retail market.

    The question now is: is Intel ever going to produce a decent iGPU for the low-end market? They've had plenty of time to do so, but before Ryzen, AMD APUs just weren't that attractive. Now though, you really have to think hard for a reason to justify buying a low-end Intel CPU at all.
  • yhselp - Wednesday, February 14, 2018 - link

    "Now with the new Ryzen APUs, AMD has risen that low-end bar again."

    You had to do it. I understand. And thank you.
  • dr.denton - Thursday, February 15, 2018 - link

    <3
  • Hifihedgehog - Wednesday, February 14, 2018 - link

    I have been doing some digging and found that although current-generation AM4 motherboards lack formal HDMI 2.0 certification, many of them may still pass an HDMI 2.0 signal without a hitch - just as many HDMI 1.4 cables do - since the boards' HDMI traces and connectors may be agnostic to the differences, if any. Could you do a quick test to see if HDMI 2.0 signals work for the Raven Ridge APUs on the AM4 motherboards you have access to? For further reference on the topic, see the forum thread "Raven Ridge HDMI 2.0 Compatibility — AM4 Motherboard Test Request Megathread" at SmallFormFactor.
