Benchmarking Performance: CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every last bit of performance out of the hardware. Rendering programs can end up being heavily memory-dependent as well: with that many threads in flight, each carrying a ton of data, low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of both time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being heavily staggered by thread count.
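Because the benchmark renders a fixed workload, converting between the two metrics is simple division. A minimal sketch of the conversion, with an entirely hypothetical ray count (not Corona's actual figure):

```python
def rays_per_second(total_rays: float, render_time_s: float) -> float:
    """Convert a fixed-workload render time into a higher-is-better rate."""
    return total_rays / render_time_s

# Hypothetical workload: 4.38 billion rays traced across the six passes.
TOTAL_RAYS = 4.38e9

# A faster chip finishes the same workload sooner, so its rate is higher:
# halving the render time doubles the rays-per-second score.
print(rays_per_second(TOTAL_RAYS, 120.0))
print(rays_per_second(TOTAL_RAYS, 60.0))
```

This is also why a higher-is-better rate is easier to chart: bar lengths scale directly with performance rather than inversely.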

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, both AMD and Intel work actively to help improve its codebase, for better or worse on their own and each other's microarchitectures.
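Timing a headless render like this is easy to reproduce at home. A rough sketch of such a harness, assuming Blender is on the PATH (the scene filename here is hypothetical; `-b` runs Blender without its UI and `-f 1` renders frame 1):

```python
import subprocess
import time

def time_command(cmd) -> float:
    """Run a command to completion and return elapsed wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

# Hypothetical invocation: render the first frame of a saved scene headlessly.
# elapsed = time_command(["blender", "-b", "workload.blend", "-f", "1"])
# print(f"First frame rendered in {elapsed:.1f} s")
```

Wall-clock time around the whole process includes scene load and setup, so for strict render-only timing one would parse Blender's own render-time output instead.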

Rendering: Blender 2.78

LuxMark v3.1: Link

As a synthetic test, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it is mainly used to benchmark GPUs, but it offers both an OpenCL and a standard C++ mode. In this instance, aside from seeing how cores and IPC compare in each mode, we also get to see the difference in performance when moving from a C++ code path to an OpenCL one with the CPU as the main host.

Rendering: LuxMark CPU C++

POV-Ray 3.7.1b4: link

Another regular in most benchmark suites, POV-Ray is another ray tracer, but one that has been around for many years. It just so happens that in the run-up to AMD's Ryzen launch, the codebase became active again, with developers making changes and pushing out updates. Our version and benchmarking were locked in just before that happened, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency drive the single-threaded (ST) score, whereas good scaling across many cores is where the multi-threaded (MT) test wins out.
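The gap between the two scores is itself informative: the MT/ST ratio shows how much parallel throughput a chip extracts beyond one core. A minimal sketch with hypothetical scores (not measured results):

```python
def mt_scaling(mt_score: float, st_score: float) -> float:
    """Ratio of multi-threaded to single-threaded score: roughly how many
    'single-thread equivalents' of throughput the chip delivers under load."""
    return mt_score / st_score

# Hypothetical Cinebench R15 scores for a many-core part.
st = 185.0   # single-threaded score
mt = 2180.0  # multi-threaded score
print(f"MT/ST scaling: {mt_scaling(mt, st):.1f}x")
```

Note the ratio can exceed the physical core count when SMT helps, or fall short of it when all-core frequencies drop well below the single-core turbo.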

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Comments

  • Gothmoth - Monday, September 25, 2017 - link

    well i did not notice as much bias and other stuff when anand was still here.
  • Spunjji - Monday, September 25, 2017 - link

    Seriously..? Ever read any of the Apple product reviews? :D
  • andrewaggb - Monday, September 25, 2017 - link

    lol, I was going to say that too. Anand had (in my opinion) a clear apple bias at the end and then went to work for them. That's not to say apple wasn't making good products or not doing interesting things - they were one of the few tech companies doing anything interesting.
  • Notmyusualid - Tuesday, September 26, 2017 - link

    +1
  • tipoo - Tuesday, September 26, 2017 - link

    I mean, imo he was pretty fair about them, he liked them and didn't say they were utter garbage because they tend not to make utter garbage. He did point out flaws fairly.
  • flyingpants1 - Tuesday, September 26, 2017 - link

    Yes that is the general consensus around here.

    Some of the podcasts with Anand and Brian Klug were embarrassing, they had a third guy but they would just talk over him. Brian was this really obnoxious guy who made fun of people who want removable batteries and microSD cards, he said "You got what you got!"

    lmao... industry shills.. wants to save the companies 10 cents for a microSD slot, and force people to overpay for 12GB space plus data usage.. How are you supposed to shoot 4k video and keep a movie/TV database with that. 128gb microSD card is perfect. Meanwhile they add ridiculous nonsense like taptic engine and face scanning instead of making the battery a bit thicker
  • FreckledTrout - Monday, September 25, 2017 - link

    I do, because they know a disproportionate amount of their user base is tech-savvy and could, with one click, block ads en masse. Keep the ads clean and we leave the blockers off... we help each other, but it is a give and take.
  • damianrobertjones - Saturday, September 30, 2017 - link

    Did you know that capitals can be your friend!
  • ddriver - Monday, September 25, 2017 - link

    Workstation without ECC... that's a bad joke right there. Or at best, some very casual workstation. But hey, if you like losing data, time and money - be my guest. Twice the memory channels, and usually all DIMMs would be populated in a workstation scenario - that's plenty of RAM to go faulty and ruin tons of potentially important data.

    Also, what ads? Haven't you heard of uBlock :)

    "Explaining the Jump to Using HCC Silicon" - basically the only way for intel to avoid embarrassment. Which they did in a truly embarrassing way - by gutting the ECC support out of silicon that already has it.

    AVX512 - all good, but it will take a lot of time before software catches up. Kudos to intel for doing the early pioneering for once.

    At that price - thanks but no thanks. At that price point, you might as well skip TR and go EPYC. Performance advantages, where intel has them, are hardly worth the price premium. You also get more IO on top of not supporting a vile, greedy, anticompetitive monopoly that has held progress back for decades so it can milk it. But hey, as AT seems to hint it, you have got to buy intel not to be considered a poor peasant who can't afford it. I guess being dumb enough to not value your money is a good thing if it sends your money in intel's pocket.
  • nowayandnohow - Monday, September 25, 2017 - link

    "Haven't you heard of uBlock :)"

    Haven't you heard that this site isn't free to run, and some of us support anandtech by letting them display ads?
