CPU Tests: Rendering

Rendering tests are often a little simpler to digest and automate than others. All the tests put out some sort of score or time, usually in a way that makes it fairly easy to extract. These tests are some of the most strenuous in our list, due to the highly threaded nature of rendering and ray-tracing, and they can draw a lot of power. If a system is not properly configured to deal with the thermal requirements of the processor, the rendering benchmarks are where it will show most easily as the frequency drops over a sustained period of time. Most benchmarks of this type are re-run several times, and the key is having an appropriate idle/wait time between runs to allow temperatures to normalize after the last test.

Blender 2.83 LTS: Link

One of the most popular tools for rendering is Blender, a public open-source project that anyone in the animation industry can get involved in. Its reach extends to conferences, use in films and VR, a dedicated Blender Institute, and everything you might expect from a professional software package (except perhaps a professional-grade support package). Because it is open source, studios can customize it in as many ways as they need to get the results they require, and it ends up being a big optimization target for both Intel and AMD as a result.

For benchmarking purposes, we fall back to rendering a single frame from a detailed project. Most reviews, as we have done in the past, focus on one of the classic Blender renders, known as BMW_27, which can take anywhere from a few minutes to almost an hour on a regular system. However, now that Blender has moved to a Long Term Support (LTS) model with the latest 2.83 release, we decided to go for something different.

We use a scene called PartyTug at 6AM by Ian Hubert, which is the official image of Blender 2.83. It is 44.3 MB in size and uses some of the more modern compute properties of Blender. It is more complex than the BMW scene but exercises different aspects of the compute model, so the time to process is roughly similar to before. We loop the scene for at least 10 minutes, taking the average time of the completed renders. Blender offers a command-line tool for batch commands, and we redirect the output into a text file.
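For illustration, the harness amounts to something like the sketch below, using Blender's -b (background) and -f (render frame) switches; the scene filename is illustrative, not the actual project file.

```python
# Minimal sketch of the Blender loop: render one frame headlessly, repeat
# for at least 10 minutes, and average the completion times.
import subprocess, time

times = []
start = time.perf_counter()
while time.perf_counter() - start < 600:        # loop for at least 10 minutes
    t0 = time.perf_counter()
    with open("blender_log.txt", "a") as log:   # redirect output to a text file
        # -b runs headless, -f 1 renders frame 1 (filename here is illustrative)
        subprocess.run(["blender", "-b", "partytug.blend", "-f", "1"],
                       stdout=log, check=True)
    times.append(time.perf_counter() - t0)

print(f"{len(times)} renders completed, average {sum(times)/len(times):.1f} s")
```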

(4-1) Blender 2.83 Custom Render Test

Intel loses out here due to core count, but AMD shows a small but not inconsequential uplift in performance generation-on-generation.

Corona 1.3: Link

Corona is billed as a popular high-performance photorealistic rendering engine for 3ds Max, with development for Cinema 4D support as well. In order to promote the software, the developers produced a downloadable benchmark on the 1.3 version of the software, with a ray-traced scene involving a military vehicle and a lot of foliage. The software does multiple passes, calculating the scene, geometry, preconditioning and rendering, with performance measured in the time to finish the benchmark (the official metric used on their website) or in rays per second (the metric we use to offer a more linear scale).

The standard benchmark provided by Corona is interface-driven: the scene is calculated and displayed in front of the user, with the ability to upload the result to their online database. We got in contact with the developers, who provided us with a non-interface version that allows for command-line entry and easy retrieval of the results. We loop the benchmark five times, waiting 60 seconds between each run, and take an overall average. The benchmark can take around 10 minutes on a Core i9, up to over an hour on a quad-core 2014 AMD processor or a dual-core Pentium.
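In sketch form, the loop looks like the following; the executable name is a placeholder (the non-interface build is not publicly distributed), and the ray count used for the rays-per-second conversion is hypothetical.

```python
# Sketch of the Corona loop: five passes with a 60-second cooldown between
# them so temperatures can normalize before the next run.
import subprocess, statistics, time

times = []
for i in range(5):
    t0 = time.perf_counter()
    subprocess.run(["corona_benchmark_cli.exe"], check=True)  # placeholder binary name
    times.append(time.perf_counter() - t0)
    if i < 4:
        time.sleep(60)  # idle so the CPU returns to normal temperatures

mean_t = statistics.mean(times)
SCENE_RAYS = 2_000_000_000  # hypothetical fixed ray count for the scene
print(f"average {mean_t:.1f} s  ->  {SCENE_RAYS / mean_t / 1e6:.0f} Mrays/s")
```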

(4-2) Corona 1.3 Benchmark

Corona shows a big uplift for Cezanne compared to Renoir.

Crysis CPU-Only Gameplay

One of the most oft-used memes in computer gaming is ‘Can it run Crysis?’. The original 2007 game, built by Crytek on its CryEngine, was heralded as a computationally complex title for the hardware of the time and for several years afterwards, suggesting that a user needed graphics hardware from the future in order to run it. Fast forward over a decade, and the game runs fairly easily on modern GPUs.

But can we also apply the same concept to pure CPU rendering? Can a CPU, on its own, render Crysis? Since 64-core processors entered the market, one can dream. So we built a benchmark to see whether the hardware can.

For this test, we’re running Crysis’ own GPU benchmark, but in CPU render mode. 

(4-3c) Crysis CPU Render at 1080p Medium

At 1080p Medium we see a small uplift for Cezanne. We did spot a performance issue in our 320x200 test, where Cezanne scores relatively low (20 FPS vs. 30 FPS for Renoir), and we are investigating it.

POV-Ray 3.7.1: Link

A long-time benchmark staple, POV-Ray is another rendering program that is well known for loading up every single thread in a system, regardless of cache and memory levels. After a long period with POV-Ray 3.7 as the latest official release, the codebase suddenly saw a flurry of activity from both AMD and Intel when AMD launched Ryzen, with both knowing that the software (with its built-in benchmark) would be an optimization target for the hardware.

We had to stick a flag in the sand when it came to selecting a version that was fair to both AMD and Intel and still relevant to end-users. Version 3.7.1 fixes a significant bug in the early 2017 code, a write-after-read hazard that both Intel's and AMD's optimization manuals advise against, leading to a nice performance boost.

The benchmark can take over 20 minutes on a slow system with few cores, around a minute or two on a fast system, or seconds on a dual-socket high-core-count EPYC. Because POV-Ray draws a large amount of power and current, it is important to make sure the cooling is sufficient and that the system stays in its high-power state. A motherboard with poor power delivery and low airflow could skew the CPU standings in ways that are not obvious, even if the power limit only causes a 100 MHz drop as the CPU changes P-states.
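A timing wrapper for a run might look like the sketch below; we are assuming the Unix build's --benchmark switch here, so treat the exact flag as an assumption for your platform.

```python
# Sketch: time POV-Ray's built-in benchmark and capture its output.
import subprocess, time

t0 = time.perf_counter()
with open("povray_log.txt", "w") as log:
    # --benchmark runs the built-in benchmark scene on the Unix build
    # (an assumption; Windows builds expose the test through the GUI instead)
    subprocess.run(["povray", "--benchmark"], stdout=log, stderr=log, check=True)
print(f"POV-Ray benchmark finished in {time.perf_counter() - t0:.1f} s")
```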

(4-4) POV-Ray 3.7.1

V-Ray: Link

We have a couple of renderers and ray tracers in our suite already, but V-Ray's benchmark was requested often enough for us to roll it into our suite. Built by Chaos Group, V-Ray is a 3D rendering package compatible with a number of popular commercial imaging applications, such as 3ds Max, Maya, Unreal, Cinema 4D, and Blender.

We run the standard standalone benchmark application, but in an automated fashion to pull out the result in the form of kilosamples/second. We run the test six times and take an average of the valid results.
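The aggregation step amounts to something like the following sketch; the log filenames and the regex are illustrative rather than V-Ray's documented output format.

```python
# Sketch: pull a ksamples/s figure out of each run's log and average only
# the runs that produced a parseable score.
import re

def parse_score(text):
    # look for a "NNN ksamples" figure; the format is illustrative
    m = re.search(r"([\d.]+)\s*ksamples", text)
    return float(m.group(1)) if m else None

scores = []
for i in range(6):
    with open(f"vray_run_{i}.log") as f:
        scores.append(parse_score(f.read()))

valid = [s for s in scores if s is not None]   # keep only valid results
print(f"{len(valid)} valid runs, average {sum(valid)/len(valid):.0f} ksamples/s")
```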

(4-5) V-Ray Renderer

Another good bump in performance here for Cezanne.

Cinebench R20: Link

Another common staple of a benchmark suite is Cinebench. Based on Cinema 4D, Cinebench is a purpose-built benchmark that renders a scene with both single-threaded and multi-threaded options; the scene is identical in both cases. The R20 version targets Cinema 4D R20, a slightly older version of the software, which is currently on R21. Cinebench R20 was launched because the R15 version had been out for a long time, and despite the difference between the benchmark and the latest version of the software on which it is based, Cinebench results are often quoted in marketing materials.

Results for Cinebench R20 are not comparable to R15 or older, because both the scene being used and the code base have changed. The software outputs a score, which is inversely proportional to the time taken. Using the benchmark flags for single-CPU and multi-CPU workloads, we run the software from the command line, which opens the test, runs it, and dumps the result into the console, which is redirected to a text file. The test is repeated for a minimum of 10 minutes for both ST and MT, and the runs are then averaged.
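Scripted, the two runs look roughly like the sketch below; the g_Cinebench* switches are the commonly cited scripting flags for R20, though treat their exact spelling as an assumption for your build.

```python
# Sketch of scripted Cinebench R20 runs: single-thread and multi-thread,
# each redirected to its own text file for later parsing.
import subprocess

for flag, outfile in [("g_CinebenchCpu1Test=true", "cb20_st.txt"),
                      ("g_CinebenchCpuXTest=true", "cb20_mt.txt")]:
    with open(outfile, "w") as log:
        subprocess.run(["Cinebench.exe", flag], stdout=log, check=True)
```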

(4-6a) CineBench R20 Single Thread
(4-6b) CineBench R20 Multi-Thread

We didn't quite hit AMD's promoted performance of 600 pts here in single thread, and Intel's Tiger Lake is not far behind. In fact, our MSI Prestige 14 Evo, despite being listed as a 35W sustained processor, doesn't seem to hit the same single-core power levels that our reference design did, and as a result Intel's reference design is actually beating both MSI and ASUS in single thread. This disappears in multi-thread, but it's important to note that different laptops will have different single core power modes.

Comments

  • Meteor2 - Thursday, February 4, 2021 - link

    Great point.
  • ikjadoon - Tuesday, January 26, 2021 - link

    It's great to see AMD kicking Intel's butt in a much larger market (i.e., laptops vastly outsell desktops): AMD really should be alongside, or simply replacing, Intel in most premium notebooks. Gaming notebooks are not my cup of tea, but I'm glad to see what this means for the upcoming 15W Zen3 parts.

    Will we see actual, high-end Zen3 notebooks? Lenovo, HP, ASUS, Dell: for shame if you keep ramming toasty Tiger Lake down customers' throats. Lenovo's done some great offerings with both AMD & Intel; that means some compromises with notebook design (just go all AMD, man; if/when Intel is on top, switch back!), but beefier cooling for Intel will also help AMD.

    Still, overall, I don't see anything convincing me that x86 is really right for notebooks, either. So much waste heat...for what? The M1 has rightly rejiggered expectations: 20 hours on 150 nits should be ordinary, not miraculous. Limited to no fan spin-up and max CPU load should yield a chassis maximum of 40C (slightly warmer than body temperature). And, all the while with class-leading 1T performance.

    As this is a gaming laptop, it's not too relevant to compare web benchmarks (what most laptops do), but this is peak Zen3 mobile and it still falls quite short:

    Speedometer 2.0
    35W Ryzen 5980HS: 102 points (-57%)
    125W i9-10900K: 119 points (-49%)
    35W i7-1185G7: 128 points (-46%)
    105W Ryzen 5950X: 140 points (-40%)
    30W Apple M1: 234 points

    You can double / triple x86 wattage and still be miles behind M1. I almost feel silly buying an x86 laptop again: just kilowatts of waste heat over time. Why? Electrons that never get used, just exhausted and thrown out as soon as possible because it'll throttle even worse otherwise.
  • undervolted_dc - Tuesday, January 26, 2021 - link

    Because here you are benchmarking a JavaScript engine in the browser.
    Worse, you are comparing single-threaded results, so you are comparing 1/16 of the 5980HS vs 1/4 of the M1.
    A 128-core EPYC or a 64-core Threadripper would probably do even worse in this single-threaded benchmark (because those chips lean on threads and are less efficient in single-threaded apps).
    If you like wrong calculations: one core of the 15W version uses less than 1W for what result, ~100 points? So who is wasting electrons here?
    (BTW, one core doesn't use 1/16 of the power because of boost, but that's still less wrong than your comparison.)
  • ZoZo - Tuesday, January 26, 2021 - link

    128-core EPYC? Where?
    His comparison is indeed misleading in terms of energy efficiency, but it's sad that no x86 is able to come even close to that single-threaded performance.
  • WaltC - Tuesday, January 26, 2021 - link

    Doubly sad for the M1 that we are living in the multicore/multithread era...;)
  • ikjadoon - Tuesday, January 26, 2021 - link

    The energy efficiency comparisons are pretty clear: the best x86 (Zen3) has stunningly lower IPC than the M1, which barely cracks 3 GHz. The only way to make up for such a gulf in IPC is faster clocks, and faster clocks require the 100+W TDPs so common in high-performance desktop CPUs. It's why Zen3 mobile clocks so much lower than Zen3 desktop (3-4 GHz instead of 4-5 GHz).

    A CPU that needs 3x power to do the same work (and do it slower in most cases) must exhaust an enormous amount of heat, when considering nT or 1T benchmarks (Zen3 requires ~20W for 5 GHz boost on a *single* core). Look at those boost power consumption measurements.

    Specifically in desktops (noted in my comparison about tripling TDP...), the CPU *alone* eats up an extra 60 to 90 watts during peak usage. Call it +20W average continuously, so we can do the math.

    20W x 8 hours x 7 days a week = +1.1 kWh excess exhaust heat per week. x86 had two corporate giants to do better. It's been severely litigated, but that's Intel's comeuppance. If Intel can't put out high-perf, high-efficiency x86 architectures, then people will start to feel less attached to x86 as an ISA. x86 had billions and billions and billions of R&D.

    I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from? Even if Apple *had flat 1T* for the next three years, I'd still feel more optimistic about M1-based CPUs in the long-term than x86.
  • Dug - Tuesday, January 26, 2021 - link

    "I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from?"

    Software, and getting work done. M1 is great and all, but just need to convince the boss that Apple or 3rd party has software available for our company....... Nope, oh well.
    Other negatives-
    For personal use, people aren't going to spend thousands of dollars to get new software on new platform.
    They can't play games (or should I say they can't play a majority), which is probably the largest market.
    They can't change anything about their software
    They can't customize anything.
    They can't upgrade any piece of their hardware.
    They don't have options for same accessories.

    So I'll go ahead and spend the extra $15 a year on energy to keep Windows.
  • Spunjji - Thursday, January 28, 2021 - link

    "A CPU that needs 3x power to do the same work"
    It doesn't. It's been demonstrated a few times now that if you scale back Zen 3 cores to similar performance levels to M1, M1's perf/watt advantage drops to about 30%. It's still better than the node advantage alone, but it's not crippling, and M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads.

    They're different core designs matched to different purposes (ultra-mobile first vs. server first) and show different strengths as a result.

    M1 is a significant achievement - no doubt about it - but you're *massively* overstating the case in its favour.
  • GeoffreyA - Friday, January 29, 2021 - link

    Thank you for this.
  • Meteor2 - Thursday, February 4, 2021 - link

    "M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads" ...Yet. In a couple of years x86 will be behind ARM across the board.

    Fastest HPC in the world is ARM *right now*. Only the fifth fastest is x86.
