CPU Rendering Tests

Rendering tests are a long-time favorite of reviewers and benchmarkers, as the code in rendering packages is usually highly optimized to squeeze out every last bit of performance. Rendering programs can also end up heavily memory-dependent: with that many threads in flight, each carrying a lot of data, low-latency memory can be key to everything. Here we take a few of the usual rendering packages under Windows 10, as well as a few new and interesting benchmarks.

All of our benchmark results can also be found in our benchmark engine, Bench.

Corona 1.3: link

Corona is a standalone package designed to assist software like 3ds Max and Maya with photorealism via ray tracing. It's simple - shoot rays, get pixels. OK, it's more complicated than that, but the benchmark renders a fixed scene six times and offers results in terms of time and rays per second. The official benchmark tables list user-submitted results in terms of time; however, I feel rays per second is a better metric (in general, scores where higher is better seem to be easier to explain anyway). Corona likes to pile on the threads, so the results end up being very staggered based on thread count.
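To illustrate why rays per second is the easier metric to compare, here is a quick sketch of the conversion. The total ray count and render times below are invented for illustration; they are not official Corona figures.

```python
# Convert hypothetical Corona-style results from "time to finish"
# into "rays per second", so that a higher score is better.
TOTAL_RAYS = 8_000_000_000  # rays traced per full run (hypothetical)

def rays_per_second(total_rays, seconds):
    """Higher is better: total rays traced divided by wall-clock time."""
    return total_rays / seconds

# Two hypothetical CPUs: the ranking is the same either way, but the
# rays/s numbers scale intuitively ("twice the score = twice as fast").
fast_cpu = rays_per_second(TOTAL_RAYS, 100.0)
slow_cpu = rays_per_second(TOTAL_RAYS, 160.0)
print(fast_cpu, slow_cpu)
```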

Rendering: Corona Photorealism

Blender 2.78: link

For a renderer that has been around for what seems like ages, Blender is still a highly popular tool. We managed to wrap a standard workload into the February 5 nightly build of Blender and measure the time it takes to render the first frame of the scene. Being one of the bigger open source tools out there, both AMD and Intel work actively to help improve the codebase, each tuning for its own (and sometimes the other's) microarchitecture.
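The metric here is wall-clock time to render the first frame. A minimal timing harness for this kind of test might look like the sketch below; the Blender command line shown in the comment is an assumption about a typical invocation, not our exact test setup.

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run a command and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.perf_counter() - start

# In practice the command would be something like:
#   ["blender", "-b", "scene.blend", "-f", "1"]
# (render frame 1 in background mode). Here we time a trivial
# command so the sketch runs anywhere:
elapsed = time_command([sys.executable, "-c", "pass"])
print(f"render took {elapsed:.2f} s")
```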

Rendering: Blender 2.78

This is one multi-threaded test where the 8-core Skylake-based Intel processor wins against the new AMD Ryzen 7 2700X; the variable-threaded nature of Blender means that the mesh architecture and memory bandwidth work well here. On a price-parity comparison, however, the Ryzen 7 2700X easily takes the win from the top performers. Users with the Core i7-6700K will find it easily beaten by the Ryzen 5 2600.

LuxMark v3.1: Link

As a synthetic, LuxMark might come across as somewhat arbitrary as a renderer, given that it's mainly used to test GPUs, but it does offer both an OpenCL and a standard C++ mode. In this instance, aside from seeing the comparison in each coding mode for cores and IPC, we also get to see the difference in performance moving from a C++ based code-stack to an OpenCL one with a CPU as the main host.

Rendering: LuxMark CPU C++
Rendering: LuxMark CPU OpenCL

POV-Ray 3.7.1b4: link

Another regular benchmark in most suites, POV-Ray is a ray tracer that has been around for many years. It just so happens that during the run-up to AMD's Ryzen launch, the code base became active again, with developers making changes and pushing out updates. Our version and benchmarking started just before that was happening, but given time we will see where the POV-Ray code ends up and adjust in due course.

Rendering: POV-Ray 3.7

Cinebench R15: link

The latest version of Cinebench has also become one of those 'used everywhere' benchmarks, particularly as an indicator of single-thread performance. High IPC and high frequency give good results in the ST test, whereas good scaling across many cores is what wins the MT test.
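The relationship between the two scores can be summarized as a scaling ratio: how close the MT score comes to the ST score multiplied by the thread count. A quick sketch with invented scores (not actual Cinebench results):

```python
def mt_scaling(st_score, mt_score, threads):
    """Fraction of ideal scaling achieved: 1.0 means the MT score is
    exactly (threads x ST score); real chips land well below that
    because SMT threads share execution resources and clocks drop
    under all-core load."""
    return mt_score / (st_score * threads)

# Hypothetical 8-core/16-thread chip: ST 175, MT 1750.
efficiency = mt_scaling(175.0, 1750.0, 16)
print(f"{efficiency:.1%} of ideal scaling")
```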

Rendering: CineBench 15 SingleThreaded
Rendering: CineBench 15 MultiThreaded

Intel is still the single thread champion in benchmarks like CineBench, but it would appear that the Ryzen 7 2700X is now taking the lead in the multithreaded test.


  • jjj - Thursday, April 19, 2018 - link

    I was wondering about gaming, so there is no mistake there as Ryzen 2 seems to top Intel.
    As of right now, I don't seem to find the memory specs in the review yet; safe to assume you did as always, highest non-OC, so Ryzen is using faster DRAM?
    Also yet to spot memory latency, any chance you have some numbers at 3600MHz vs Intel? Thanks.
  • jjj - Thursday, April 19, 2018 - link

    And just between us, it would be nice to have some Vega gaming results under DX12.
  • aliquis - Thursday, April 19, 2018 - link

    Would be nice if any reviewer actually benchmarked storage devices, maybe even virtualization, because then we'd see Meltdown and Spectre mitigation performance. Then again, does AMD have any for Spectre v2 yet? If not, who knows what that will do.
  • HStewart - Thursday, April 19, 2018 - link

    I notice that the systems had higher memory, but for me I believe single-threaded performance is more important than more cores. It would be biased if one platform were OC'd more than another, though. Personally I don't overclock, except for what is provided with the CPU, like Turbo mode.

    One thing that I foresee in the future is Intel coming out with an 8-core Coffee Lake.

    But at least it appears one thing is over: this Meltdown/Spectre stuff.
  • Lolimaster - Thursday, April 19, 2018 - link

    An Intel 8-core CL won't stop the bleeding; they'd lose more profit making them "cheap" vs. a new 7nm Ryzen with at least 10% more clocks and 10% more IPC. RIP.
  • HStewart - Thursday, April 19, 2018 - link

    I just have to agree to disagree on that statement, especially the "cheap" part.
  • ACE76 - Thursday, April 19, 2018 - link

    CL can't scale to 8 cores...not without some serious changes to its architecture...Intel is in some trouble with this Ryzen refresh...also worth noting is that 7nm Ryzen 2 will likely bring a considerable performance jump, while Intel isn't sitting on anything worthwhile at the moment.
  • Alphasoldier - Friday, April 20, 2018 - link

    All of Intel's HEDT 8-cores except Skylake-X are based on their year-older architecture with a bigger cache and quad-channel memory.

    So if Intel has the need, they will simply make an 8-core CL. The 2700X is pretty hungry when OC'd, so Intel doesn't have to worry at all about its power consumption.
  • moozooh - Sunday, April 22, 2018 - link

    > 2700X is pretty hungry when OC'd
    And Intel chips aren't? If Zen+ is already on Intel's heels in both performance per watt and raw frequency, a 7nm chip with improved IPC and/or cache is very likely going to pull ahead by a significant margin. And even if it doesn't, it's still going to eat into Intel's profit, as their next tech is 10nm vs. AMD's 7nm, meaning better wafer real estate utilization for the latter.

    AMD has really climbed back to the top of their game; I've been in the Intel camp for the last 12 years or so, but the recent developments throw me way back to the K7 and A64 days. Almost makes me sad that I won't have any reason to move to a different mobo in the next 6–8 years or so.
  • mapesdhs - Friday, March 29, 2019 - link

    Amusing to look back given how things panned out. So yes, Intel released the 9900K, but it was 100% more expensive than the 2700X. :D A complete joke. And meanwhile tech reviewers raved about a measly 5 to 5.2 GHz OC on a chip that already has a 4.7 GHz max turbo (major yawn fest), focusing on specific 1080p gaming tests that gave silly high fps numbers favoured by a market segment that is a tiny minority. Then what happens? RTX comes out and pushes the PR focus right back down to 60Hz. :D

    I wish people would stop drinking the Intel/NVIDIA Kool-Aid. AMD does it as well sometimes, but it's bizarre how uncritical tech reviewers often are about these things. The 9900K dragged mainstream CPU pricing up to HEDT levels; epic fail. Some said oh, but it's great for poorly optimised apps like Premiere, completely ignoring the "poorly optimised" part (i.e. why the lack of pressure on Adobe to write better code? It's weird to justify an overpriced CPU on the back of a pro app that ought to run a lot better on far cheaper products).
