CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze out every last bit of performance. Sometimes rendering programs end up being heavily memory dependent as well: when you have that many threads in flight working on a ton of data, low-latency memory can be key to everything. Here we run a few of the usual rendering packages under Windows 10, as well as a few newer, more interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being heavily staggered by thread count.
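To make the metric concrete, here is a minimal sketch of the time-to-rate conversion; the total ray count below is a made-up placeholder for illustration, not Corona's actual figure.

```python
# Toy conversion from a fixed-workload render time to a rays-per-second score.
TOTAL_RAYS = 5_000_000_000  # hypothetical ray count for the fixed scene, not Corona's real number

def rays_per_second(render_time_s):
    """Higher is better: more rays traced per second of wall time."""
    return TOTAL_RAYS / render_time_s

# A chip that finishes in 100 seconds scores twice as high as one that takes 200 seconds,
# which is easier to read at a glance than comparing the two times directly.
print(rays_per_second(100.0))  # 50,000,000.0
print(rays_per_second(200.0))  # 25,000,000.0
```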

Rendering: Corona Photorealism

Corona loves threads.

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We wrapped a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, it has both AMD and Intel working actively to help improve the codebase, for better or worse on their own (and each other's) microarchitectures.
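For reference, a first-frame timing of this sort can be captured with a headless Blender run along these lines; the scene filename is a placeholder rather than our actual workload, and this assumes the blender binary is on the PATH.

```python
# Rough sketch: time a single-frame render from a headless Blender invocation.
import subprocess
import time

scene = "workload.blend"  # placeholder for the standard workload described above
start = time.perf_counter()
# -b runs Blender without the GUI; -f 1 renders only frame 1 of the scene
subprocess.run(["blender", "-b", scene, "-f", "1"], check=True)
elapsed = time.perf_counter() - start
print(f"First frame rendered in {elapsed:.1f} s")
```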

Rendering: Blender 2.78

Blender loves threads and memory bandwidth.

LuxMark v3.1: link

As a synthetic, LuxMark might come across as a somewhat arbitrary choice of renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++-based code stack to an OpenCL one with the CPU as the main host.
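For the curious, a quick way to confirm that the CPU is exposed as an OpenCL compute device (which is what the OpenCL-on-CPU mode depends on) is a short enumeration like the sketch below; this assumes pyopencl and a CPU OpenCL runtime are installed, and it is not part of LuxMark itself.

```python
# List OpenCL devices and flag the ones backed by the CPU.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        if dev.type & cl.device_type.CPU:
            print(f"{platform.name}: {dev.name} ({dev.max_compute_units} compute units)")
```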

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

Like Blender, LuxMark is all about thread count; ray tracing is very nearly a textbook case for easy multi-threaded scaling. That said, it's interesting just how close the 10-core Core i9-7900X gets in the CPU (C++) test despite a significant core count disadvantage, likely due to a combination of higher IPC and clock speeds.
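To show why the scaling is so clean, here is a toy sketch of the pattern rather than LuxMark's actual code: every scanline can be shaded independently, so the work divides across workers with essentially no shared state, and the per-pixel 'shading' below is just a trivial stand-in.

```python
# Embarrassingly parallel rendering: each row of the image is computed independently.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 640, 480

def shade_row(y):
    # Stand-in for real per-pixel ray tracing work.
    return [((x * y) % 255) / 255.0 for x in range(WIDTH)]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:  # one worker per core by default
        image = list(pool.map(shade_row, range(HEIGHT)))
    print(f"Rendered {len(image)} rows of {len(image[0])} pixels")
```

Because each row's cost dwarfs the scheduling overhead, adding cores buys close to linear speedup, which is why these charts track thread count so closely.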

POV-Ray 3.7.1b4: link

Another regular in most benchmark suites, POV-Ray is also a ray tracer, and one that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmark were locked in just before that activity started, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Similar to LuxMark, POV-Ray also comes down to thread count.

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-threaded performance. High IPC and high frequency drive the single-threaded (ST) result, whereas good scaling across many cores is what wins the multi-threaded (MT) test.

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost.
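Working backwards from just those two quoted figures, here is a quick back-of-the-envelope check (the numbers below are derived from the sentence above, not new measurements):

```python
# If 3200 is 6.7% higher than the 1950X's score, the 1950X lands at roughly 3200 / 1.067.
intel_score = 3200
implied_1950x_score = intel_score / 1.067
print(round(implied_1950x_score))  # ~2999, i.e. roughly 3000
print(f"{intel_score / implied_1950x_score:.1%} of the 1950X's score, for ~2x the price")  # 106.7%
```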

Comments

  • imaheadcase - Thursday, August 10, 2017 - link

    So you lost respect for a website based on how they word the titles of articles? I think you don't understand advertising at all. lol

    If you want to know a website that lost respect, look at HardOCP; people don't like them for obvious reasons.
  • Alexey291 - Thursday, August 10, 2017 - link

    No offence, but HardOCP is far more respectable than what we have in ATech these days.

    But that's not hard. The AT website is pretty much a shell for the forum, which is where most of the traffic is. I'm sure they only do the reviews because 'it was something we have always done'.
  • Johan Steyn - Thursday, August 10, 2017 - link

    You may not understand how wording is used to convey sentiments in a different way. That is what politicians thrive on. You could, for instance, say "I am sorry that you misunderstood me." It gives the impression that you are sorry, but you are not. People also ask for forgiveness like this: "If I have hurt you, please forgive me." It sounds sincere, but it is a hidden lie: it does not acknowledge that you have actually hurt anybody, and in effect says that you do not think you did.

    Well, this is a science and I cannot explain it all here. If you miss it, then it does not mean it is not there.
  • mikato - Monday, August 14, 2017 - link

    I thought I'd just comment to say I understand what you're saying and agree. Even if a sentence gives facts, it can sound more positive one way or the other based on how it is stated. The author has to do some reflection sometimes to catch this. I believe him whenever he says he doesn't have much time, and maybe that plays into it. But articles at different sites may not have this bias effect, and it can be an important component of a review article.

    "Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost."

    These two sentences give facts, but sound favorable to Intel until just the very end. It's a subtle perception thing, but it's real. The facts in the sentences, however, are massively favorable to AMD: Threadripper delivers only 6.7% less performance than an announced (not yet released) Intel CPU for half the cost!

    Here is another version:

    "Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. So Threadripper, for half the cost of Intel's as-yet unreleased chip, performs only 6.7% slower in Cinebench."

    There, that one leads with Threadripper and "half the cost" in the second sentence, and sounds much different.
  • Johan Steyn - Thursday, August 10, 2017 - link

    HardOCP and PCPer are more respected in my opinion. Wccftech is unpredictable; sometimes they shine and sometimes they are really odd.
  • mapesdhs - Thursday, August 10, 2017 - link

    I've kinda taken to GamersNexus recently, but I still always read AT and toms to compare.

    Ian.
  • fanofanand - Tuesday, August 15, 2017 - link

    WCCFtech is a joke, it's nothing but rumors and trolling. If you are seriously going to put WCCFtech above Anandtech then everyone here can immediately disregard all of your comments.
  • Drumsticks - Thursday, August 10, 2017 - link

    Fantastic review In. I was curious exactly how AMD would handle the NUMA problem with Threadripper. It seems that anybody buying Threadripper for real work is going to have to continue being very aware of exactly what configuration gets them the best performance.

    One minor correction, at the bottom of the CPU Rendering tests page:

    "Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost." - this score is for the 16 core i9-7960X, not the 7980XE.
  • Drumsticks - Thursday, August 10, 2017 - link

    Ian*. Can't wait for the edit button one day!
  • launchcodemexico - Thursday, August 10, 2017 - link

    Why did you end all the gaming review sections with something like "Switching it to Game mode would have made better numbers..."? Why didn't you run the benchmarks in Gaming mode in the first place?
