CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code used by rendering packages is usually highly optimized to squeeze every little bit of performance out. Sometimes rendering programs end up being heavily memory dependent as well - when you have that many threads flying about with a ton of data, having low latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple: shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
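
To make the time-versus-rate point concrete, here is a minimal sketch of the conversion; the ray count and times are hypothetical placeholders rather than Corona's actual figures:

    # Convert a "time to finish" result into a rays-per-second rate.
    # TOTAL_RAYS and the example times are made-up placeholders, not measured data.
    TOTAL_RAYS = 5_000_000_000  # rays cast for the fixed scene (hypothetical)

    def rays_per_second(render_time_s: float) -> float:
        # Higher is better, and halving the time doubles the rate.
        return TOTAL_RAYS / render_time_s

    for t in (100.0, 50.0, 25.0):  # example render times in seconds
        print(f"{t:6.1f} s  ->  {rays_per_second(t) / 1e6:8.1f} Mrays/s")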

Rendering: Corona Photorealism

Corona loves threads.

Blender 2.78

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap up a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open-source tools out there, it means both AMD and Intel work actively to help improve the codebase, for better or worse on their own (and each other's) microarchitectures.
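
For reference, a minimal sketch of how a first-frame render can be timed against a headless Blender build; the scene file name is a hypothetical placeholder, while -b (background) and -f (render frame) are Blender's standard command-line flags:

    # Time rendering the first frame of a scene with Blender in background mode.
    # "workload.blend" is a placeholder; point it at the actual benchmark scene.
    import subprocess
    import time

    cmd = ["blender", "-b", "workload.blend", "-f", "1"]

    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    print(f"First frame rendered in {elapsed:.1f} s")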

Rendering: Blender 2.78

Blender loves threads and memory bandwidth.

LuxMark v3.1

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++-based code stack to an OpenCL one with a CPU as the main host.
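
For readers wondering what "a CPU as the main host" looks like on the OpenCL side, the sketch below simply enumerates any CPU OpenCL devices on the system; it uses the pyopencl package purely as an illustration, is not part of LuxMark itself, and assumes a vendor OpenCL CPU runtime is installed:

    # List OpenCL platforms and the CPU devices they expose.
    # Requires pyopencl plus an OpenCL runtime that supports CPU devices.
    import pyopencl as cl

    for platform in cl.get_platforms():
        for dev in platform.get_devices():
            if dev.type & cl.device_type.CPU:
                print(f"{platform.name}: {dev.name} "
                      f"({dev.max_compute_units} compute units)")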

Rendering: LuxMark CPU C++

Rendering: LuxMark CPU OpenCL

Like Blender, LuxMark is all about the thread count. Ray tracing is very nearly a textbook case for easy multi-threaded scaling. It is interesting, though, just how close the 10-core Core i9-7900X gets in the CPU (C++) test despite a significant core count disadvantage, likely due to a combination of higher IPC and clock speeds.
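
As a toy illustration of why ray tracing scales so cleanly (this is not LuxMark's code), the sketch below treats every row of the image as an independent work item and spreads the rows across a process pool; since no pixel depends on its neighbours, adding cores adds throughput almost linearly:

    # Toy embarrassingly-parallel "renderer": one work item per image row.
    # trace_pixel() is a stand-in for a real ray-intersection routine.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 1920, 1080

    def trace_pixel(x, y):
        return ((x * 31 + y * 17) % 255) / 255.0  # placeholder shading value

    def render_row(y):
        return [trace_pixel(x, y) for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to one worker per logical CPU
            image = pool.map(render_row, range(HEIGHT))
        print(f"Rendered {len(image)} rows")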

POV-Ray 3.7.1b4

A regular benchmark in most suites, POV-Ray is another ray tracer, and one that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmark were locked in just before that activity picked up, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Similar to LuxMark, POV-Ray comes down to thread count, and the chips with the most threads win out.

Cinebench R15

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give performance in the ST test, whereas good scaling and many cores are where the MT test wins out.
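
One way to see both effects in a single number is the MP ratio that Cinebench can report when both tests are run, i.e. the MT score divided by the ST score; below is a minimal sketch with hypothetical scores purely for illustration:

    # MP ratio = multi-threaded score / single-threaded score.
    # With SMT the ratio can exceed the physical core count, though higher
    # single-thread turbo clocks usually keep it below the thread count.
    def mp_ratio(mt_score, st_score):
        return mt_score / st_score

    # Hypothetical example scores, not measured results:
    print(f"MP ratio: {mp_ratio(mt_score=3000.0, st_score=160.0):.1f}")  # ~18.8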

Rendering: CineBench 15 MultiThreaded

Rendering: CineBench 15 SingleThreaded

Intel recently announced that its new 18-core chip scores 3200 on Cinebench R15. That would be an extra 6.7% performance over the Threadripper 1950X for 2x the cost.
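
A quick back-of-the-envelope check of those figures; the 1950X score below is the roughly 3,000 points implied by the 6.7% claim, and the 2x price ratio is taken from the sentence above rather than exact street pricing:

    # Sanity-check the +6.7% / 2x-cost comparison stated above.
    intel_18c_score = 3200            # Intel's announced Cinebench R15 score
    tr_1950x_score = 3200 / 1.067     # ~3,000 points implied by the +6.7% figure
    price_ratio = 2.0                 # "2x the cost", per the text

    perf_gain = intel_18c_score / tr_1950x_score - 1
    perf_per_dollar = (intel_18c_score / tr_1950x_score) / price_ratio

    print(f"Performance gain: {perf_gain:+.1%}")                # ~ +6.7%
    print(f"Relative perf per dollar: {perf_per_dollar:.2f}x")  # ~0.53x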

Comments

  • BOBOSTRUMF - Friday, August 11, 2017 - link

    Actually, Intel's 140W part can consume more than 210W if you want the top unrestricted performance. Read the Tom's Hardware review.
  • Filiprino - Thursday, August 10, 2017 - link

    How come WinRAR is faster with the 10-core Broadwell than with the 10-core Skylake?
    What did they change in Cinebench going from 10 to 11.5? Threadripper is the faster CPU in Cinebench 10, but in the newer one it is not. Then again, Cinebench 15 shows TR as the faster CPU. Is this benchmark reliable?

    How come Chromium compilation is so slow? Others have pointed out that they get much better scaling (linear speedup). That makes sense, because compilation basically consists of launching isolated processes (compiler instances). Is this related to the segfaulting problem under GNU/Linux systems?

    For encoding, I would start using FFmpeg when benchmarking this many cores. I recall FFmpeg being faster than Handbrake for the same number of cores. Maybe the GUI loop interrupts the process in a performance-unfriendly way; too much overhead. HPC workloads can suffer even from the network driver generating too many interrupts (hence the Linux tickless configuration).

    I have read the SYSmark results and I find it strange that TR's media results are slower than its data results, with TR being slower than Intel in media and faster than Intel in data. Isn't SYSmark from BAPCo (http://www.pcworld.com/article/3023373/hardware/am... You already point it out in the article, sorry.

    How come the R9 Fury in Shadow of Mordor has AMD and Intel CPUs running consistently at two different frame rates (~95 vs ~103)?

    The same happens with the GTX 1080. Both cases occur regardless of the Intel architecture (Haswell, Broadwell and Skylake all have the same FPS value).

    What happens with the NVIDIA driver in Rocket League? A bad caching algorithm (TR has more cores/threads -> more cache available to store GPU command data)? You say you had issues, but what are your thoughts?
    How come GTA V has those under-60 and under-30 FPS graphs, given that the game is available for the PS4 and Xbox One (it has already been optimized for a two-CCX CPU, or at least there is a version for that case)? Nevertheless, with NVIDIA cards, 2 seconds out of 90 is not that much.

    What I can think is that all these benchmarks are programmed using threading libraries from the "good old times", given the poor scaling. In some cases there is architecture-specific targeted code as well. I would also point to the small datasets being used. And I would not make a case out of a benchmark programmed with code that has false sharing (¡:O!)

    Currently, for gaming, it seems the easiest approach is a virtual machine with PCIe passthrough, pinned to one of the MCM dies.

    As a suggestion to AnandTech, I would like to see more free (libre) software being used to measure CPU performance, compiling the benchmarks from source against the target CPU architecture. Something like Phoronix does. Maybe you could use PTS (the Phoronix Test Suite).
  • Filiprino - Thursday, August 10, 2017 - link

    Positive things: Threadripper stays under its TDP in power consumption. Intel is more power hungry. The Intel 16-core might go through the roof in power consumption.
    Good gaming performance. Intel is generally better, but TR still offers a beefy CPU for that too, losing only a few frames.
    Strong rendering performance.
    Strong video encoding performance.

    When you talk about IPC, it would be useful to measure it with profiling tools, not just by reporting "points", "milliseconds" and "seconds".
    Seeing how these benchmarks do not scale much beyond 10 cores, you might realize the software has to get better.
  • Chad - Thursday, August 10, 2017 - link

    Second ffmpeg test (pretty please!)
  • mapesdhs - Thursday, August 10, 2017 - link


    Ian, a query about the CPU Legacy Tests: why do you reckon the 1920X beats both the 1950X and the 1950X-G in CB 11.5 MT, yet the latter two win out in CB 10 MT? Is there a max-thread limit in v11.5? Filiprino asked much the same above.

    "...and so losing half the threads in Game Mode might actually be a detriment to a workstation implementation."

    Isn't that the whole point though? For most workstation tasks, don't use Game Mode. There will be exceptions of course, but in general...

    Btw, where's C-ray? ;)

    Ian.
  • Da W - Thursday, August 10, 2017 - link

    ALL OF YOU COMPLAINERS: START A TECH REVIEW WEBSITE YOURSELVES AND STFU!
  • hansmuff - Thursday, August 10, 2017 - link

    Don't read the comments. Also, a lot of the "complaints" are read by Ryan, and he actually addresses them; his articles improve as a result of criticism. He's never been bad, but you can see a steady rise in quality over time, along with his engagement in critical commentary.
    IOW, we don't really need a referee.
  • hansmuff - Thursday, August 10, 2017 - link

    And of course I mean Ian, not Ryan.
  • mapesdhs - Friday, August 11, 2017 - link

    It is great that he replies at all, and does so to quite a lot of the posts too.
  • Kepe - Thursday, August 10, 2017 - link

    Wait a second, according to AMD and all the other articles about the 1950X and Game Mode, game mode disables all the physical cores of one of the CPU clusters and leaves SMT on, so you get 8 cores and 16 threads. It doesn't just turn off SMT for a 16 core / 16 thread setup.

    AMD's info here: https://community.amd.com/community/gaming/blog/20...
