Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out. Rendering programs can end up being heavily memory-dependent as well - when you have that many threads flying about with a ton of data, having low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better are easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
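The conversion between the two metrics is a simple division, which is why a higher-is-better rate is easy to reason about. A minimal sketch, where the total ray count is a made-up placeholder and not Corona's actual workload size:

```python
# Convert a Corona-style completion time into a rays-per-second rate.
# TOTAL_RAYS is a hypothetical figure, not Corona's real ray count.
TOTAL_RAYS = 2_000_000_000  # assumed rays traced per benchmark run

def rays_per_second(render_time_s: float) -> float:
    """Higher is better: the rate scales intuitively with performance."""
    return TOTAL_RAYS / render_time_s

# A chip finishing in 100 s scores exactly twice a chip finishing in 200 s.
fast = rays_per_second(100.0)
slow = rays_per_second(200.0)
print(fast / slow)  # 2.0
```

Note that with the time metric, "half the time" and "twice the score" are the same statement, but the rate version makes the comparison direct.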

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, both AMD and Intel work actively to help improve the codebase, for better or worse on their own and each other's microarchitectures.
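Timing a single-frame render usually means scraping the console output of a headless run (something like `blender -b scene.blend -f 1`). A small sketch of that parsing step - the `Time: MM:SS.ss` line format is an assumption based on typical Blender render logs, so treat the regex as illustrative:

```python
import re

# Assumed log line format: "... | Time: 01:23.45 | ..." as printed by
# Blender's renderer in background mode. This is a hypothetical sketch,
# not an official parsing API.
TIME_RE = re.compile(r"Time:\s*(\d+):(\d+\.\d+)")

def parse_render_seconds(log: str) -> float:
    """Return the first reported render time, converted to seconds."""
    m = TIME_RE.search(log)
    if not m:
        raise ValueError("no render time found in log")
    minutes, seconds = int(m.group(1)), float(m.group(2))
    return minutes * 60 + seconds

sample = "Fra:1 Mem:210.5M | Time: 01:23.45 | Rendering done"
print(parse_render_seconds(sample))  # 83.45
```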

Rendering: Blender 2.78

LuxMark v3.1: Link

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance when moving from a C++-based code stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4: link

Another regular benchmark in most suites, POV-Ray is another ray tracer, but one that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmark were settled just before that started happening, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of CineBench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single thread performance. High IPC and high frequency give performance in ST, whereas good scaling and many cores are where the MT test wins out.
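The relationship between the two tests can be expressed as a scaling efficiency: how close the MT score comes to the ST score multiplied by the thread count. A short sketch using made-up scores (not real Cinebench results):

```python
# Made-up illustration of ST/MT scaling, not actual Cinebench data.
def scaling_efficiency(st_score: float, mt_score: float, threads: int) -> float:
    """Fraction of ideal linear scaling achieved by the MT run."""
    return (mt_score / st_score) / threads

# A hypothetical 8-thread chip scoring 160 in ST and 1000 in MT:
eff = scaling_efficiency(160, 1000, 8)
print(round(eff, 3))  # 0.781
```

A chip with perfect scaling would hit 1.0; SMT threads sharing a core typically pull the figure well below that, which is why core counts and thread counts tell different stories.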

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Conclusions on Rendering: It is clear from these graphs that most rendering tools require full physical cores, rather than additional SMT threads, to get the best performance. The exception is Cinebench.


  • Fritzkier - Tuesday, February 13, 2018 - link

Well, not really. While some use a Pentium G with a GT 730 or lower, many use AMD A-series APUs too (since they don't need a low-end discrete GPU to be on par).

And the Ryzen 2200G is also priced the same as a Pentium G with a GT 730, though. The exception is RAM prices...
  • watzupken - Tuesday, February 13, 2018 - link

If AMD used a beefier Vega iGPU, the question is whether you'd be willing to pay for it. I feel an iGPU only makes sense if the price is low, or if the power consumption is low. The case where Intel is using AMD graphics is likely for a fruity client. Outside of that, you won't see many manufacturers using it because of the cost. For the same amount of money Intel is asking for the chip alone, there are many possible configurations with dedicated graphics that you can think of. Also, the supposedly beefier AMD graphics is only about as fast as a GTX 1050 class. You are better off buying a GTX 1050 Ti.
  • iwod - Tuesday, February 13, 2018 - link

Well, unless we solve the GPU crypto problem in the near future (which we won't), I think having better Vega graphics combined with the CPU is a good deal.
  • Gadgety - Monday, February 12, 2018 - link

Will these APUs do HDR UHD 4K Blu-ray playback (yes, I know it's a tiny niche), or is that still Intel only?
  • GreenReaper - Wednesday, February 14, 2018 - link

    Probably best to just get an Xbox One S for it. As a bonus you could play a few games on it, too!
  • watzupken - Tuesday, February 13, 2018 - link

I feel the R3 2200G is still a better deal than the R5 2400G. The price gap is too big relative to the difference in performance. And because these chips are overclocking-friendly, despite the R3 being a cut-down chip, there could be some performance catch-up with some overclocking. Overall, I feel both are great chips, especially for some light/casual gaming. If gaming is the mainstay, then there is no substitute for a dedicated graphics solution.
  • serendip - Tuesday, February 13, 2018 - link

The 2200G is a sweet deal because it offers most of the 2400G's performance at a sub-$100 price point. For most business and home desktops, it's more than enough for both CPU and GPU performance. And with discrete GPUs being so hard to get now, good-enough APU graphics will do for the majority of home users. Hopefully AMD can translate all this into actual shipping machines.

    I'm going to sound like a broken record but AMD could send another boot up Intel's behind by making an Atom competitor. A dual-core Zen with SMT and cut-down Vega graphics would still be enough to blow Atom out of the water.
  • msroadkill612 - Tuesday, February 13, 2018 - link

It's a pity they don't get HBCC.
  • msroadkill612 - Tuesday, February 13, 2018 - link

Simply put, AMD now owns the entry level up through most 1080p gaming, and it's a daunting jump in cost to improve by much.

It's polite and nice of this review to pretend Intel has competitive products, and to include them for old times' sake.
  • serendip - Tuesday, February 13, 2018 - link

    Looks like AMD owns the good-enough category. As I said previously, let's hope this translates into actual machines being shipped, seeing as OEMs previously made some terrible AMD-based systems at the low end.
