Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out. Sometimes rendering programs end up being heavily memory-dependent as well: when that many threads are flying about with a ton of data, low-latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

Corona 1.3

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and reports results in terms of both time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better tend to be easier to explain anyway). Corona likes to pile on the threads, so the results end up strongly separated by thread count.
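
As a rough illustration of why the throughput metric is easier to work with, here is a minimal sketch of the conversion; the ray count and timings are made-up numbers for illustration, not Corona's actual workload:

    # Convert a fixed-workload render time into a 'higher is better' score.
    # The ray count and timings below are hypothetical, illustration only.
    def rays_per_second(total_rays, elapsed_seconds):
        return total_rays / elapsed_seconds

    workload_rays = 6 * 2.5e9  # assumed total rays across the six passes
    for name, seconds in [("CPU A", 98.0), ("CPU B", 124.0)]:
        score = rays_per_second(workload_rays, seconds) / 1e6
        print(f"{name}: {score:.1f} Mrays/s")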

Rendering: Corona Photorealism

Blender 2.78

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We wrapped a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there means that both AMD and Intel work actively to improve the codebase, for better or worse on their own (and each other's) microarchitectures.
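
For readers who want to replicate the general approach, a minimal sketch of a harness along these lines is below; the scene file name is a placeholder, though the command-line flags are standard Blender options:

    import subprocess
    import time

    # Render the first frame of a fixed scene headlessly and time it.
    # "scene.blend" is a placeholder for the actual workload file.
    start = time.perf_counter()
    subprocess.run(
        ["blender", "--background", "scene.blend", "--render-frame", "1"],
        check=True,
    )
    print(f"First frame rendered in {time.perf_counter() - start:.2f} s")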

Rendering: Blender 2.78

LuxMark

As a synthetic benchmark, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it is mainly used to test GPUs, but it offers both an OpenCL mode and a standard C++ mode. In this instance, aside from seeing the comparison of cores and IPC in each mode, we also get to see the difference in performance when moving from a C++-based code path to an OpenCL one with the CPU as the host device.
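
To make the "CPU as OpenCL host" idea concrete, a minimal sketch is below (assuming the pyopencl package and a CPU OpenCL runtime are installed): the same CPU that runs the C++ path can also be enumerated as an OpenCL device.

    import pyopencl as cl

    # List every OpenCL platform/device; with a CPU runtime installed,
    # the CPU itself appears here alongside any GPUs.
    for platform in cl.get_platforms():
        for device in platform.get_devices():
            kind = cl.device_type.to_string(device.type)
            print(f"{platform.name}: {device.name} ({kind}), "
                  f"{device.max_compute_units} compute units")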

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

POV-Ray 3.7

A regular fixture in most benchmark suites, POV-Ray is another ray tracer, one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the codebase became active again, with developers making changes and pushing out updates. Our version and benchmarking were locked in just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.
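
For those who want a comparable number at home, a minimal sketch of scripting a timed run is below; Unix builds of POV-Ray 3.7 ship a built-in standard benchmark scene behind the --benchmark switch, though availability varies by build, and our own harness pins its own version and scene as noted above:

    import subprocess
    import time

    # Time POV-Ray's built-in standard benchmark render (Unix builds);
    # availability of the switch varies by build and platform.
    t0 = time.perf_counter()
    subprocess.run(["povray", "--benchmark"], check=True)
    print(f"POV-Ray benchmark: {time.perf_counter() - t0:.1f} s")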

Rendering: POV-Ray 3.7

Cinebench R15

The latest version of Cinebench has become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency deliver performance in the single-threaded test, whereas good scaling across many cores is where the multi-threaded test is won.
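
One way to reason about that trade-off is to back out an effective parallel fraction from the single-threaded and multi-threaded scores via Amdahl's law; the sketch below uses made-up scores for illustration, not measured results:

    # Estimate the Amdahl's-law parallel fraction p from observed scaling:
    # speedup = 1 / ((1 - p) + p / n)  =>  p = (1 - 1/speedup) / (1 - 1/n)
    def parallel_fraction(st_score, mt_score, threads):
        speedup = mt_score / st_score
        return (1 - 1 / speedup) / (1 - 1 / threads)

    st, mt, n = 160.0, 1600.0, 16  # hypothetical CB15 scores, 16 threads
    print(f"scaling: {mt / st:.1f}x on {n} threads, "
          f"parallel fraction ~{parallel_fraction(st, mt, n):.2f}")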

Rendering: CineBench 15 SingleThreaded

Rendering: CineBench 15 MultiThreaded

 

Comments

  • BurntMyBacon - Friday, March 3, 2017 - link

    @ShieTar: "Well, the point of low-resolution testing is, that at normal resolutions you will always be GPU-restricted."

    If this statement is accepted as true, then by deduction, for people playing at normal (or high) resolutions, gaming is not a differentiator and is therefore unimportant to the CPU selection process. If gaming is your only criterion for CPU selection, that means you can buy the cheapest CPU possible, right up to the point where you are no longer GPU-restricted.

    @ShieTar: "The most interesting question will be how Ryzen performs on those few modern games which manage to be CPU-restricted even in relevant resolutions, e.g. Battlefield 1 Multiplayer."

    I fully agree here. Use CPU-heavy titles to tease out the differences between CPUs; artificially low resolutions are academic at best. That said, according to the Steam survey, just over half of respondents are playing at resolutions below 1080p, and over a third at 1366x768 or less. Though I suspect the overlap between people playing at those resolutions and people using high-end processors is pretty small.

    Average frame rate is fairly uninteresting in most games for high-end CPUs, due to being GPU-bound or using unrealistic settings. Some more interesting metrics are minimum frame rate, frame time distribution (or simply graphing it), frame time consistency, and the like. These metrics do more to show how different CPUs change the experience for the player in a configuration the player is actually likely to use.
  • Lord-Bryan - Thursday, March 2, 2017 - link

    Who buys a $500 CPU to play games at 720p? All that talk is just BS.
  • JMB1897 - Friday, March 3, 2017 - link

    That test is not done for real-world testing reasons. At that low resolution you're not GPU-bound, you're CPU-bound. That's why the test exists.

    Now advance a few years into the future, when you still have your $500 Ryzen 7 CPU and a brand-new GPU: you may suddenly become CPU-bound even at QHD or 4K, whereas a 7700K might not quite be CPU-bound just yet.
  • MAC001010 - Saturday, March 4, 2017 - link

    Or a few years in the future (when you get your new GPU) you find that games have become more demanding but better multi-threaded, in which case your Ryzen 7 CPU works fine and the 7700k has become a bottleneck despite its high single-threaded performance.

    This illustrates the inherent difficulty of comparing high-frequency CPUs to high-core-count CPUs with regard to future potential performance.
  • cmdrdredd - Saturday, March 4, 2017 - link

    "Or a few years in the future (when you get your new GPU) you find that games have become more demanding but better multi-threaded, in which case your Ryzen 7 CPU works fine and the 7700k has become a bottleneck despite its high single-threaded performance."

    Maybe. The overclocking scenario is also important. Most gamers will overclock to get a bit of a boost. I have yet to replace my 4.5GHz 3570K; even though new CPUs offer more raw performance, the need hasn't been there yet.

    One other interesting thing is how Microsoft's PlayReady 3.0 will be supported for 4K HDR video content protection. So far I know Kaby Lake supports it, but I haven't heard about any of AMD's offerings, unless I missed it somewhere.
  • Cooe - Sunday, February 28, 2021 - link

    Lol, except here in reality the EXACT OPPOSITE thing happened. A 6-core/12-thread Ryzen 5 1600 still holds up GREAT in modern titles/game engines thanks to the massive advantage in extra CPU threads. A 4c/4t i5-7600K otoh? Nowadays it performs absolutely freaking TERRIBLY!!!
  • basha - Thursday, March 2, 2017 - link

    All the reviews I read use an Nvidia GTX 1080 graphics card. My understanding is that AMD graphics have a better implementation of DX12, with the ability to use multiple cores. I would like to see benchmarks with something like RX 480 CrossFire with a 1700X; that would be in a similar budget to an i7-7700 + GTX 1080.
  • Notmyusualid - Friday, March 3, 2017 - link

    http://www.gamersnexus.net/hwreviews/2822-amd-ryze...
  • cmdrdredd - Saturday, March 4, 2017 - link

    Overclocking will be interesting. I don't use my PC for much besides gaming, and lately it hasn't been a lot of that either, due to a lack of compelling titles. However, I would still be interested in seeing what Ryzen can offer here too, for whenever I finally break down and decide I need to replace my 3570K @ 4.5GHz.
  • Midwayman - Thursday, March 2, 2017 - link

    Here's hoping the 1600X hits the same gaming benches as the 1800X when overclocked. $500 for the 1800X is fine; it's just not the best value for gaming, much like the i5s were the better-value gaming chips in the past.
