Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out. Rendering programs can end up being heavily memory-dependent as well: with that many threads in flight and a ton of data moving around, low-latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.

Rendering: Corona Photorealism
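The relationship between Corona's two reported metrics is a straightforward conversion; a quick sketch, using placeholder numbers rather than measured results:

```python
# Corona reports both total render time and rays per second for the fixed
# six-pass scene; the two are related by rays/s = total_rays / seconds.
# Both values below are illustrative assumptions, not benchmark data.
total_rays = 48_000_000_000   # assumed total rays cast across the scene
seconds = 120.0               # assumed total render time

rays_per_second = total_rays / seconds
print(f"{rays_per_second:,.0f} rays/s")  # higher is better
```

This is why rays per second is the friendlier metric: halving the render time doubles the score, so faster hardware simply scores higher.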

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, both AMD and Intel work actively to help improve the codebase, for better or for worse on their own/each other's microarchitecture.

Rendering: Blender 2.78
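A minimal sketch of that timing methodology, assuming Blender is on the PATH and using a hypothetical scene file name:

```python
import subprocess
import time

def render_cmd(scene: str, frame: int = 1) -> list[str]:
    # "-b" runs Blender headless (no GUI); "-f N" renders only frame N.
    return ["blender", "-b", scene, "-f", str(frame)]

def timed_render(scene: str, frame: int = 1) -> float:
    # Wall-clock seconds to render one frame, similar to how the
    # workload here is measured.
    start = time.perf_counter()
    subprocess.run(render_cmd(scene, frame), check=True)
    return time.perf_counter() - start

# e.g. timed_render("workload.blend")  # scene file name is an assumption
```

Wall-clock timing of the whole process includes scene load and setup, which is deliberate: it reflects what a user actually waits for.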

LuxMark v3.1: link

As a synthetic benchmark, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++ based code-stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++
Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4: link

A regular in most benchmark suites, POV-Ray is another ray tracer, but one that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmarking were settled just before that was happening, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.

Rendering: CineBench 15 Single Threaded
Rendering: CineBench 15 Multi Threaded
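The ST/MT distinction can be reduced to a single scaling figure: the multi-thread score divided by the single-thread score approximates how well a chip spreads work across its threads. The numbers below are placeholders, not measured results:

```python
# Illustrative Cinebench R15 scores (assumptions, not benchmark data).
st_score = 160.0   # single-thread score: driven by IPC and frequency
mt_score = 820.0   # multi-thread score: driven by core/thread count and scaling

# Ideal scaling would approach the thread count; memory contention and
# turbo behavior usually pull it lower.
scaling = mt_score / st_score
print(f"MT/ST scaling: {scaling:.2f}x")
```

A chip with high scaling but a modest ST score suits rendering farms; the reverse suits lightly threaded desktop work.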


140 Comments

View All Comments

  • Oxford Guy - Thursday, July 27, 2017 - link

    "The Ryzen 3 1200 brings up the rear of the stack, being the lowest CPU in the stack, having the lowest frequency at 3.1G base, 3.4G turbo, 3.1G all-core turbo, no hyperthreading and the lowest amount of L3 cache."

    That bit about the L3 is incorrect unless the chart on page 1 is incorrect. It shows the same L3 size for 1400, 1300X, and 1200.
  • Oxford Guy - Thursday, July 27, 2017 - link

    And this:

    "Number 3 leads to a lop-sided silicon die, and obviously wasn’t chosen."

    Obviously?
  • Oxford Guy - Thursday, July 27, 2017 - link

    "DDR4-2400 C15"

    2400, really — even though it is, obviously, known that Zen needs faster RAM to perform efficiently?

    Joel Hruska managed to test Ryzen with 3200 speed RAM on his day 1 review. I bought 16 GB of 3200 RAM from Microcenter last Christmastime for $80. Just because RAM prices are nuts right now doesn't mean we should gut Ryzen's performance by sticking it with low-speed RAM.
  • Oxford Guy - Thursday, July 27, 2017 - link

    "This is also typically run at JEDEC subtimings where possible. It is noted that some users are not keen on this policy"

    Maybe you guys should rethink your logic.

    1) You have claimed, when overclocking, that it's not necessary to do full stability testing, like with Prime. Just passing some lower-grade stress testing is enough to make an overclock "stable enough".

    2) Your overclocking reviews have pushed unwise levels of voltage into CPUs to go along with this "stable enough" overclock.

    So... you argue against proof of true stability, both in the final overclock settings being satisfactorily tested and in safe voltages being decided upon.

    And — simultaneously — kneecap Zen processors by using silly JEDEC standards, trying to look conservative?

    Please.

    Everyone knows the JEDEC standard applies to enterprise. Patriot is just one manufacturer of RAM that tested and certified far better RAM performance on B350 and A320 Zen boards. You had that very article on your site just a short time ago.

    Your logic doesn't add up. It is not a significant enough cost savings for system builders to go with slow RAM for Zen. The only argument you can use, at all, is that OEMs are likely to kneecap Zen with slow RAM. That is not a given, though. OEMs can use faster RAM, like, at least, 2666, if they choose to. If they're marketing toward gamers they likely will.
  • Oxford Guy - Thursday, July 27, 2017 - link

    "Truth be told I never actually played the first version, but every edition from the second to the sixth, including the fifth as voiced by the late Leonard Nimoy"

    You mean Civ IV.
  • Oxford Guy - Thursday, July 27, 2017 - link

    And, yeah, we can afford to test with an Nvidia 1080 but we can't afford to use decent speed RAM.

    Yeah... makes sense.
  • Hixbot - Thursday, July 27, 2017 - link

    Are you having a conversation with yourself? Try to condense your points into a single post.
  • Oxford Guy - Friday, July 28, 2017 - link

    I don't live in a static universe where all of the things I'm capable of thinking of are immediately apparent, but thanks for the whine.
  • Manch - Friday, July 28, 2017 - link

    Really, snowflake? You're saying he is whining? How many rants have you posted? LOL The difference between 2400 and 3200 shows up more on the higher-end processors because of the bigger L3 and HT, err, SMT. The difference in CPU-bound gaming is 5-10% at most with the Ryzen 7s, smaller with the 5s, and even more so with the 3s. Small enough that it would not change the outlook on the CPUs. Also consider that if Ian changed the parameters of his tests constantly, it would skew the numbers and render Bench unreliable. Test the Ryzen 7s with 2133, then the 5s with 2400, then the 3s with 3200? Obviously AnandTech's tests are not the definitive performance benchmark for the world. What they are is a reliably consistent benchmark allowing you to compare different CPUs with as little changed as possible, so as not to skew performance. Think EPA gas mileage stickers on cars. Will you get that rating? Maybe. What it does is give you comparative results. From there it's fairly easy to extrapolate the difference. Now I'm sure they will, as they have in the past, update their baseline specs for testing. You're running off the rails about how much the memory affects things. Look at all the YouTube vids and other reviews out there. A difference, yes. A lot? Meh. I also believe AnandTech has mentioned doing a write-up on the latest AGESA update, since it's had a significant impact (including on memory) on the series.
  • Oxford Guy - Friday, July 28, 2017 - link

    "You're saying he is whining? How many rants have you posted?"

    Pot kettle fallacy.
